| text | heading1 | source_page_url | source_page_title |
|---|---|---|---|
Start by instantiating a `Client` instance and connecting it to a Gradio app that is running on Hugging Face Spaces or anywhere else on the web.
| Connecting to a running Gradio App | https://gradio.app/guides/getting-started-with-the-js-client | Gradio Clients And Lite - Getting Started With The Js Client Guide |
```js
import { Client } from "@gradio/client";
const app = await Client.connect("abidlabs/en2fr"); // a Space that translates from English to French
```
You can also connect to private Spaces by passing in your HF token with the `token` property of the options parameter. You can get your HF token here: https://huggingface.co/settings/tokens
```js
import { Client } from "@gradio/client";
const app = await Client.connect("abidlabs/my-private-space", { token: "hf_..." })
```
| Connecting to a Hugging Face Space | https://gradio.app/guides/getting-started-with-the-js-client | Gradio Clients And Lite - Getting Started With The Js Client Guide |
While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space, and then use it to make as many requests as you'd like! You'll need to pass in your [Hugging Face token](https://huggingface.co/settings/tokens).
`Client.duplicate` is almost identical to `Client.connect`; the only difference is what happens under the hood:
```js
import { Client, handle_file } from "@gradio/client";
const response = await fetch(
"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
);
const audio_file = await response.blob();
const app = await Client.duplicate("abidlabs/whisper", { token: "hf_..." });
const transcription = await app.predict("/predict", [handle_file(audio_file)]);
```
If you have previously duplicated a Space, re-running `Client.duplicate` will _not_ create a new Space. Instead, the client will attach to the previously-created Space. So it is safe to re-run the `Client.duplicate` method multiple times with the same Space.
**Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 5 minutes of inactivity. You can also set the hardware using the `hardware` and `timeout` properties of `duplicate`'s options object like this:
```js
import { Client } from "@gradio/client";
const app = await Client.duplicate("abidlabs/whisper", {
token: "hf_...",
timeout: 60,
hardware: "a10g-small"
});
```
| Duplicating a Space for private use | https://gradio.app/guides/getting-started-with-the-js-client | Gradio Clients And Lite - Getting Started With The Js Client Guide |
If your app is running somewhere else, just provide the full URL instead, including the "http://" or "https://". Here's an example of making predictions to a Gradio app that is running on a share URL:
```js
import { Client } from "@gradio/client";
const app = await Client.connect("https://bec81a83-5b5c-471e.gradio.live");
```
| Connecting a general Gradio app | https://gradio.app/guides/getting-started-with-the-js-client | Gradio Clients And Lite - Getting Started With The Js Client Guide |
If the Gradio application you are connecting to [requires a username and password](/guides/sharing-your-app#authentication), then provide them as an array in the `auth` property of the options parameter:
```js
import { Client } from "@gradio/client";
const app = await Client.connect(
  space_name,
  { auth: [username, password] }
);
```
| Connecting to a Gradio app with auth | https://gradio.app/guides/getting-started-with-the-js-client | Gradio Clients And Lite - Getting Started With The Js Client Guide |
Once you have connected to a Gradio app, you can view the APIs that are available to you by calling the `Client`'s `view_api` method.
For the Whisper Space, we can do this:
```js
import { Client } from "@gradio/client";
const app = await Client.connect("abidlabs/whisper");
const app_info = await app.view_api();
console.log(app_info);
```
And we will see the following:
```json
{
"named_endpoints": {
"/predict": {
"parameters": [
{
"label": "text",
"component": "Textbox",
"type": "string"
}
],
"returns": [
{
"label": "output",
"component": "Textbox",
"type": "string"
}
]
}
},
"unnamed_endpoints": {}
}
```
This shows us that we have 1 API endpoint in this space, and shows us how to use the API endpoint to make a prediction: we should call the `.predict()` method (which we will explore below), providing a parameter `input_audio` of type `string`, which is a url to a file.
We should also provide the `api_name='/predict'` argument to the `predict()` method. Although this isn't necessary if a Gradio app has only 1 named endpoint, it does allow us to call different endpoints in a single app if they are available. If an app has unnamed API endpoints, these can also be displayed by running `.view_api(all_endpoints=True)`.
| Inspecting the API endpoints | https://gradio.app/guides/getting-started-with-the-js-client | Gradio Clients And Lite - Getting Started With The Js Client Guide |
As an alternative to running the `.view_api()` method, you can click on the "Use via API" link in the footer of the Gradio app, which shows us the same information, along with example usage.

The View API page also includes an "API Recorder" that lets you interact with the Gradio UI normally and converts your interactions into the corresponding code to run with the JS Client.
| The "View API" Page | https://gradio.app/guides/getting-started-with-the-js-client | Gradio Clients And Lite - Getting Started With The Js Client Guide |
The simplest way to make a prediction is to call the `.predict()` method with the appropriate arguments:
```js
import { Client } from "@gradio/client";
const app = await Client.connect("abidlabs/en2fr");
const result = await app.predict("/predict", ["Hello"]);
```
If there are multiple parameters, then you should pass them as an array to `.predict()`, like this:
```js
import { Client } from "@gradio/client";
const app = await Client.connect("gradio/calculator");
const result = await app.predict("/predict", [4, "add", 5]);
```
For certain inputs, such as images, you should pass in a `Buffer`, `Blob` or `File` depending on what is most convenient. In Node, this would be a `Buffer` or `Blob`; in a browser environment, a `Blob` or `File`.
```js
import { Client, handle_file } from "@gradio/client";
const response = await fetch(
"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
);
const audio_file = await response.blob();
const app = await Client.connect("abidlabs/whisper");
const result = await app.predict("/predict", [handle_file(audio_file)]);
```
| Making a prediction | https://gradio.app/guides/getting-started-with-the-js-client | Gradio Clients And Lite - Getting Started With The Js Client Guide |
If the API you are working with can return results over time, or you wish to access information about the status of a job, you can use the iterable interface for more flexibility. This is especially useful for iterative endpoints or generator endpoints that will produce a series of values over time as discrete responses.
```js
import { Client } from "@gradio/client";
function log_result(payload) {
const {
data: [translation]
} = payload;
console.log(`The translated result is: ${translation}`);
}
const app = await Client.connect("abidlabs/en2fr");
const job = app.submit("/predict", ["Hello"]);
for await (const message of job) {
log_result(message);
}
```
| Using events | https://gradio.app/guides/getting-started-with-the-js-client | Gradio Clients And Lite - Getting Started With The Js Client Guide |
The event interface also allows you to get the status of the running job by instantiating the client with the `events` option, passing `status` and `data` as an array:
```ts
import { Client } from "@gradio/client";
const app = await Client.connect("abidlabs/en2fr", {
events: ["status", "data"]
});
```
This ensures that status messages are also reported to the client.
`status`es are returned as an object with the following attributes: `status` (a human readable status of the current job, `"pending" | "generating" | "complete" | "error"`), `code` (the detailed Gradio code for the job), `position` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (the estimated time at which this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` (a `Date` object detailing the time that the status was generated).
```js
import { Client } from "@gradio/client";
function log_status(status) {
console.log(
`The current status for this job is: ${JSON.stringify(status, null, 2)}.`
);
}
const app = await Client.connect("abidlabs/en2fr", {
events: ["status", "data"]
});
const job = app.submit("/predict", ["Hello"]);
for await (const message of job) {
if (message.type === "status") {
log_status(message);
}
}
```
| Status | https://gradio.app/guides/getting-started-with-the-js-client | Gradio Clients And Lite - Getting Started With The Js Client Guide |
The job instance also has a `.cancel()` method that cancels jobs that have been queued but not started. For example, if you run:
```js
import { Client } from "@gradio/client";
const app = await Client.connect("abidlabs/en2fr");
const job_one = app.submit("/predict", ["Hello"]);
const job_two = app.submit("/predict", ["Friends"]);
job_one.cancel();
job_two.cancel();
```
If the first job has started processing, then it will not be canceled but the client will no longer listen for updates (throwing away the job). If the second job has not yet started, it will be successfully canceled and removed from the queue.
| Cancelling Jobs | https://gradio.app/guides/getting-started-with-the-js-client | Gradio Clients And Lite - Getting Started With The Js Client Guide |
Some Gradio API endpoints do not return a single value; rather, they return a series of values. You can listen for these values in real time using the iterable interface:
```js
import { Client } from "@gradio/client";
const app = await Client.connect("gradio/count_generator");
const job = app.submit(0, [9]);
for await (const message of job) {
console.log(message.data);
}
```
This will log out the values as they are generated by the endpoint.
You can also cancel jobs that have iterative outputs, in which case the job will finish immediately.
```js
import { Client } from "@gradio/client";
const app = await Client.connect("gradio/count_generator");
const job = app.submit(0, [9]);
for await (const message of job) {
console.log(message.data);
}
setTimeout(() => {
job.cancel();
}, 3000);
```
| Generator Endpoints | https://gradio.app/guides/getting-started-with-the-js-client | Gradio Clients And Lite - Getting Started With The Js Client Guide |
You generally don't need to install cURL, as it comes pre-installed on many operating systems. Run:
```bash
curl --version
```
to confirm that `curl` is installed. If it is not already installed, you can install it by visiting https://curl.se/download.html.
| Installation | https://gradio.app/guides/querying-gradio-apps-with-curl | Gradio Clients And Lite - Querying Gradio Apps With Curl Guide |
To query a Gradio app, you'll need its full URL. This is usually just the URL that the Gradio app is hosted on, for example: https://bec81a83-5b5c-471e.gradio.live
**Hugging Face Spaces**
However, if you are querying a Gradio app on Hugging Face Spaces, you will need to use the URL of the embedded Gradio app, not the URL of the Space webpage. For example:
```bash
❌ Space URL: https://huggingface.co/spaces/abidlabs/en2fr
✅ Gradio app URL: https://abidlabs-en2fr.hf.space/
```
You can get the Gradio app URL by clicking the "view API" link at the bottom of the page. Or, you can right-click on the page and then click on "View Frame Source" or the equivalent in your browser to view the URL of the embedded Gradio app.
While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space,
and then use it to make as many requests as you'd like!
Note: to query private Spaces, you will need to pass in your Hugging Face (HF) token. You can get your HF token here: https://huggingface.co/settings/tokens. In this case, you will need to include an additional header in both of your `curl` calls that we'll discuss below:
```bash
-H "Authorization: Bearer $HF_TOKEN"
```
Now, we are ready to make the two `curl` requests.
| Step 0: Get the URL for your Gradio App | https://gradio.app/guides/querying-gradio-apps-with-curl | Gradio Clients And Lite - Querying Gradio Apps With Curl Guide |
The first of the two `curl` requests is a `POST` request that submits the input payload to the Gradio app.
The syntax of the `POST` request is as follows:
```bash
$ curl -X POST $URL/call/$API_NAME -H "Content-Type: application/json" -d '{
"data": $PAYLOAD
}'
```
Here:
* `$URL` is the URL of the Gradio app as obtained in Step 0
* `$API_NAME` is the name of the API endpoint for the event that you are running. You can get the API endpoint names by clicking the "view API" link at the bottom of the page.
* `$PAYLOAD` is a valid JSON data list containing the input payload, one element for each input component.
When you make this `POST` request successfully, you will get an event id that is printed to the terminal in this format:
```bash
>> {"event_id": $EVENT_ID}
```
This `EVENT_ID` will be needed in the subsequent `curl` request to fetch the results of the prediction.
Here are some examples of how to make the `POST` request.
**Basic Example**
Revisiting the example at the beginning of the page, here is how to make the `POST` request for a simple Gradio application that takes in a single input text component:
```bash
$ curl -X POST https://abidlabs-en2fr.hf.space/call/predict -H "Content-Type: application/json" -d '{
"data": ["Hello, my friend."]
}'
```
**Multiple Input Components**
This [Gradio demo](https://huggingface.co/spaces/gradio/hello_world_3) accepts three inputs: a string corresponding to the `gr.Textbox`, a boolean value corresponding to the `gr.Checkbox`, and a numerical value corresponding to the `gr.Slider`. Here is the `POST` request:
```bash
curl -X POST https://gradio-hello-world-3.hf.space/call/predict -H "Content-Type: application/json" -d '{
"data": ["Hello", true, 5]
}'
```
**Private Spaces**
As mentioned earlier, if you are making a request to a private Space, you will need to pass in a [Hugging Face token](https://huggingface.co/settings/tokens) that has read access to the Space. The request will look like this:
```bash
$ curl -X POST https://private-space.hf.space/call/predict -H "Content-Type: application/json" -H "Authorization: Bearer $HF_TOKEN" -d '{
"data": ["Hello, my friend."]
}'
```
**Files**
If you are using `curl` to query a Gradio application that requires file inputs, the files *need* to be provided as URLs, and the URL needs to be enclosed in a dictionary in this format:
```bash
{"path": $URL}
```
Here is an example `POST` request:
```bash
$ curl -X POST https://gradio-image-mod.hf.space/call/predict -H "Content-Type: application/json" -d '{
"data": [{"path": "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png"}]
}'
```
**Stateful Demos**
If your Gradio demo [persists user state](/guides/interface-state) across multiple interactions (e.g. is a chatbot), you can pass in a `session_hash` alongside the `data`. Requests with the same `session_hash` are assumed to be part of the same user session. Here's how that might look:
```bash
# These two requests will share a session
curl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H "Content-Type: application/json" -d '{
"data": ["Are you sentient?"],
"session_hash": "randomsequence1234"
}'
curl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H "Content-Type: application/json" -d '{
"data": ["Really?"],
"session_hash": "randomsequence1234"
}'
# This request will be treated as a new session
curl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H "Content-Type: application/json" -d '{
"data": ["Are you sentient?"],
"session_hash": "newsequence5678"
}'
```
| Step 1: Make a Prediction (POST) | https://gradio.app/guides/querying-gradio-apps-with-curl | Gradio Clients And Lite - Querying Gradio Apps With Curl Guide |
ient?"],
"session_hash": "newsequence5678"
}'
```
| Step 1: Make a Prediction (POST) | https://gradio.app/guides/querying-gradio-apps-with-curl | Gradio Clients And Lite - Querying Gradio Apps With Curl Guide |
Once you have received the `EVENT_ID` corresponding to your prediction, you can stream the results. Gradio stores these results in a least-recently-used cache in the Gradio app. By default, the cache can store 2,000 results (across all users and endpoints of the app).
To stream the results for your prediction, make a `GET` request with the following syntax:
```bash
$ curl -N $URL/call/$API_NAME/$EVENT_ID
```
Tip: If you are fetching results from a private Space, include a header with your HF token like this: `-H "Authorization: Bearer $HF_TOKEN"` in the `GET` request.
This should produce a stream of responses in this format:
```bash
event: ...
data: ...
event: ...
data: ...
...
```
Here, `event` can be one of the following:
* `generating`: indicating an intermediate result
* `complete`: indicating that the prediction is complete and the final result is being sent
* `error`: indicating that the prediction was not completed successfully
* `heartbeat`: sent every 15 seconds to keep the request alive
The `data` is in the same format as the input payload: a valid JSON list containing the output result, one element for each output component.
Here are some examples of what results you should expect if a request is completed successfully:
**Basic Example**
Revisiting the example at the beginning of the page, we would expect the result to look like this:
```bash
event: complete
data: ["Bonjour, mon ami."]
```
**Multiple Outputs**
If your endpoint returns multiple values, they will appear as elements of the `data` list:
```bash
event: complete
data: ["Good morning Hello. It is 5 degrees today", -15.0]
```
**Streaming Example**
If your Gradio app [streams a sequence of values](/guides/streaming-outputs), then they will be streamed directly to your terminal, like this:
```bash
event: generating
data: ["Hello, w!"]
event: generating
data: ["Hello, wo!"]
event: generating
data: ["Hello, wor!"]
event: generating
data: ["Hello, worl!"]
event: generating
data: ["Hello, world!"]
event: complete
data: ["Hello, world!"]
```
**File Example**
If your Gradio app returns a file, the file will be represented as a dictionary in this format (including potentially some additional keys):
```python
{
"orig_name": "example.jpg",
"path": "/path/in/server.jpg",
"url": "https:/example.com/example.jpg",
"meta": {"_type": "gradio.FileData"}
}
```
In your terminal, it may appear like this:
```bash
event: complete
data: [{"path": "/tmp/gradio/359933dc8d6cfe1b022f35e2c639e6e42c97a003/image.webp", "url": "https://gradio-image-mod.hf.space/c/file=/tmp/gradio/359933dc8d6cfe1b022f35e2c639e6e42c97a003/image.webp", "size": null, "orig_name": "image.webp", "mime_type": null, "is_stream": false, "meta": {"_type": "gradio.FileData"}}]
```
| Step 2: GET the result | https://gradio.app/guides/querying-gradio-apps-with-curl | Gradio Clients And Lite - Querying Gradio Apps With Curl Guide |
What if your Gradio application has [authentication enabled](/guides/sharing-your-app#authentication)? In that case, you'll need to make an additional `POST` request with cURL to authenticate yourself before you make any queries. Here are the complete steps:
First, login with a `POST` request supplying a valid username and password:
```bash
curl -X POST $URL/login \
-d "username=$USERNAME&password=$PASSWORD" \
-c cookies.txt
```
If the credentials are correct, you'll get `{"success":true}` in response and the cookies will be saved in `cookies.txt`.
Next, you'll need to include these cookies when you make the original `POST` request, like this:
```bash
$ curl -X POST $URL/call/$API_NAME -b cookies.txt -H "Content-Type: application/json" -d '{
"data": $PAYLOAD
}'
```
Finally, you'll need to `GET` the results, again supplying the cookies from the file:
```bash
curl -N $URL/call/$API_NAME/$EVENT_ID -b cookies.txt
```
| Authentication | https://gradio.app/guides/querying-gradio-apps-with-curl | Gradio Clients And Lite - Querying Gradio Apps With Curl Guide |
Let's start with what seems like the most complex bit -- using machine learning to remove the music from a video.
Luckily for us, there's an existing Space we can use to make this process easier: [https://huggingface.co/spaces/abidlabs/music-separation](https://huggingface.co/spaces/abidlabs/music-separation). This Space takes an audio file and produces two separate audio files: one with the instrumental music and one with all other sounds in the original clip. Perfect to use with our client!
Open a new Python file, say `main.py`, and start by importing the `Client` class from `gradio_client` and connecting it to this Space:
```py
from gradio_client import Client, handle_file
client = Client("abidlabs/music-separation")
def acapellify(audio_path):
result = client.predict(handle_file(audio_path), api_name="/predict")
return result[0]
```
That's all the code that's needed -- notice that the API endpoint returns two audio files (one without the music, and one with just the music) in a list, and so we just return the first element of the list.
---
**Note**: since this is a public Space, there might be other users using this Space as well, which might result in a slow experience. You can duplicate this Space with your own [Hugging Face token](https://huggingface.co/settings/tokens) to create a private Space that only you will have access to, bypassing the queue. To do that, simply replace the first two lines above with:
```py
from gradio_client import Client, handle_file
client = Client.duplicate("abidlabs/music-separation", hf_token=YOUR_HF_TOKEN)
```
Everything else remains the same!
---
Now, of course, we are working with video files, so we first need to extract the audio from the video files. For this, we will be using the `ffmpeg` library, which does a lot of heavy lifting when it comes to working with audio and video files. The most common way to use `ffmpeg` is through the command line, which we'll call via Python's `subprocess` module:
Our video processing workflow will consist of three steps:
1. First, we start by taking in a video filepath and extracting the audio using `ffmpeg`.
2. Then, we pass in the audio file through the `acapellify()` function above.
3. Finally, we combine the new audio with the original video to produce a final acapellified video.
Here's the complete code in Python, which you can add to your `main.py` file:
```python
import os
import subprocess
def process_video(video_path):
old_audio = os.path.basename(video_path).split(".")[0] + ".m4a"
subprocess.run(['ffmpeg', '-y', '-i', video_path, '-vn', '-acodec', 'copy', old_audio])
new_audio = acapellify(old_audio)
new_video = f"acap_{video_path}"
subprocess.call(['ffmpeg', '-y', '-i', video_path, '-i', new_audio, '-map', '0:v', '-map', '1:a', '-c:v', 'copy', '-c:a', 'aac', '-strict', 'experimental', f"static/{new_video}"])
return new_video
```
You can read up on [ffmpeg documentation](https://ffmpeg.org/ffmpeg.html) if you'd like to understand all of the command line parameters, as they are beyond the scope of this tutorial.
| Step 1: Write the Video Processing Function | https://gradio.app/guides/fastapi-app-with-the-gradio-client | Gradio Clients And Lite - Fastapi App With The Gradio Client Guide |
Next up, we'll create a simple FastAPI app. If you haven't used FastAPI before, check out [the great FastAPI docs](https://fastapi.tiangolo.com/). Otherwise, this basic template, which we add to `main.py`, will look pretty familiar:
```python
import os
from fastapi import FastAPI, File, UploadFile, Request
from fastapi.responses import HTMLResponse, RedirectResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
app = FastAPI()
os.makedirs("static", exist_ok=True)
app.mount("/static", StaticFiles(directory="static"), name="static")
templates = Jinja2Templates(directory="templates")
videos = []
@app.get("/", response_class=HTMLResponse)
async def home(request: Request):
return templates.TemplateResponse(
"home.html", {"request": request, "videos": videos})
@app.post("/uploadvideo/")
async def upload_video(video: UploadFile = File(...)):
video_path = video.filename
with open(video_path, "wb+") as fp:
fp.write(video.file.read())
new_video = process_video(video.filename)
videos.append(new_video)
return RedirectResponse(url='/', status_code=303)
```
In this example, the FastAPI app has two routes: `/` and `/uploadvideo/`.
The `/` route returns an HTML template that displays a gallery of all uploaded videos.
The `/uploadvideo/` route accepts a `POST` request with an `UploadFile` object, which represents the uploaded video file. The video file is "acapellified" via the `process_video()` method, and the output video is stored in a list which stores all of the uploaded videos in memory.
Note that this is a very basic example; if this were a production app, you would need to add more logic to handle file storage, user authentication, and security considerations.
| Step 2: Create a FastAPI app (Backend Routes) | https://gradio.app/guides/fastapi-app-with-the-gradio-client | Gradio Clients And Lite - Fastapi App With The Gradio Client Guide |
Finally, we create the frontend of our web application. First, we create a folder called `templates` in the same directory as `main.py`. We then create a template, `home.html` inside the `templates` folder. Here is the resulting file structure:
```csv
├── main.py
├── templates
│ └── home.html
```
Write the following as the contents of `home.html`:
```html
<!DOCTYPE html>
<html>
<head>
  <title>Video Gallery</title>
  <style>
    body { font-family: sans-serif; margin: 0; padding: 0; background-color: #f5f5f5; }
    h1 { text-align: center; margin-top: 30px; margin-bottom: 20px; }
    .gallery { display: flex; flex-wrap: wrap; justify-content: center; gap: 20px; padding: 20px; }
    .video { border: 2px solid #ccc; box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.2); border-radius: 5px; overflow: hidden; width: 300px; margin-bottom: 20px; }
    .video video { width: 100%; height: 200px; }
    .video p { text-align: center; margin: 10px 0; }
    form { margin-top: 20px; text-align: center; }
    input[type="file"] { display: none; }
    .upload-btn { display: inline-block; background-color: #3498db; color: #fff; padding: 10px 20px; font-size: 16px; border: none; border-radius: 5px; cursor: pointer; }
    .upload-btn:hover { background-color: #2980b9; }
    .file-name { margin-left: 10px; }
  </style>
</head>
<body>
  <h1>Video Gallery</h1>
  {% if videos %}
    <div class="gallery">
      {% for video in videos %}
        <div class="video">
          <video controls>
            <source src="{{ url_for('static', path=video) }}" type="video/mp4">
            Your browser does not support the video tag.
          </video>
          <p>{{ video }}</p>
        </div>
      {% endfor %}
    </div>
  {% else %}
    <p>No videos uploaded yet.</p>
  {% endif %}
  <form action="/uploadvideo/" method="post" enctype="multipart/form-data">
    <label for="video-upload" class="upload-btn">Choose video file</label>
    <input type="file" name="video" id="video-upload">
    <span class="file-name"></span>
    <button type="submit" class="upload-btn">Upload</button>
  </form>
  <script>
    // Display selected file name in the form
    const fileUpload = document.getElementById("video-upload");
    const fileName = document.querySelector(".file-name");
    fileUpload.addEventListener("change", (e) => {
      fileName.textContent = e.target.files[0].name;
    });
  </script>
</body>
</html>
```
| Step 3: Create a FastAPI app (Frontend Template) | https://gradio.app/guides/fastapi-app-with-the-gradio-client | Gradio Clients And Lite - Fastapi App With The Gradio Client Guide |
Finally, we are ready to run our FastAPI app, powered by the Gradio Python Client!
Open up a terminal and navigate to the directory containing `main.py`. Then run the following command in the terminal:
```bash
$ uvicorn main:app
```
You should see an output that looks like this:
```csv
Loaded as API: https://abidlabs-music-separation.hf.space ✔
INFO: Started server process [1360]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
```
And that's it! Start uploading videos and you'll get some "acapellified" videos in response (might take seconds to minutes to process depending on the length of your videos). Here's how the UI looks after uploading two videos:

If you'd like to learn more about how to use the Gradio Python Client in your projects, [read the dedicated Guide](/guides/getting-started-with-the-python-client/).
| Step 4: Run your FastAPI app | https://gradio.app/guides/fastapi-app-with-the-gradio-client | Gradio Clients And Lite - Fastapi App With The Gradio Client Guide |
**What are agents?**
A [LangChain agent](https://docs.langchain.com/docs/components/agents/agent) is a Large Language Model (LLM) that takes user input and reports an output based on using one of many tools at its disposal.
**What is Gradio?**
[Gradio](https://github.com/gradio-app/gradio) is the de facto standard framework for building Machine Learning Web Applications and sharing them with the world - all with just Python! 🐍
| Some background | https://gradio.app/guides/gradio-and-llm-agents | Gradio Clients And Lite - Gradio And Llm Agents Guide |
To get started with `gradio_tools`, all you need to do is import and initialize your tools and pass them to the langchain agent!
In the following example, we import the `StableDiffusionPromptGeneratorTool` to create a good prompt for stable diffusion, the
`StableDiffusionTool` to create an image with our improved prompt, the `ImageCaptioningTool` to caption the generated image, and
the `TextToVideoTool` to create a video from a prompt.
We then tell our agent to create an image of a dog riding a skateboard, but to please improve our prompt ahead of time. We also ask
it to caption the generated image and create a video for it. The agent can decide which tool to use without us explicitly telling it.
```python
import os
if not os.getenv("OPENAI_API_KEY"):
raise ValueError("OPENAI_API_KEY must be set")
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from gradio_tools import (StableDiffusionTool, ImageCaptioningTool, StableDiffusionPromptGeneratorTool,
TextToVideoTool)
from langchain.memory import ConversationBufferMemory
llm = OpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history")
tools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,
StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]
agent = initialize_agent(tools, llm, memory=memory, agent="conversational-react-description", verbose=True)
output = agent.run(input=("Please create a photo of a dog riding a skateboard "
"but improve my prompt prior to using an image generator."
"Please caption the generated image and create a video for it using the improved prompt."))
```
You'll note that we are using some pre-built tools that come with `gradio_tools`. Please see this [doc](https://github.com/freddyaboulton/gradio-tools#gradio-tools-gradio--llm-agents) for a complete list of the tools that come with `gradio_tools`.
If you would like to use a tool that's not currently in `gradio_tools`, it is very easy to add your own. That's what the next section will cover.
| gradio_tools - An end-to-end example | https://gradio.app/guides/gradio-and-llm-agents | Gradio Clients And Lite - Gradio And Llm Agents Guide |
The core abstraction is the `GradioTool`, which lets you define a new tool for your LLM as long as you implement a standard interface:
```python
class GradioTool(BaseTool):
    def __init__(self, name: str, description: str, src: str) -> None:
        ...

    @abstractmethod
    def create_job(self, query: str) -> Job:
        pass

    @abstractmethod
    def postprocess(self, output: Tuple[Any] | Any) -> str:
        pass
```
The requirements are:
1. The name for your tool
2. The description for your tool. This is crucial! Agents decide which tool to use based on their description. Be precise and be sure to include an example of what the input and the output of the tool should look like.
3. The url or space id, e.g. `freddyaboulton/calculator`, of the Gradio application. Based on this value, `gradio_tool` will create a [gradio client](https://github.com/gradio-app/gradio/blob/main/client/python/README.md) instance to query the upstream application via API. Be sure to click the link and learn more about the gradio client library if you are not familiar with it.
4. create_job - Given a string, this method should parse that string and return a job from the client. Most times, this is as simple as passing the string to the `submit` function of the client. More info on creating jobs [here](https://github.com/gradio-app/gradio/blob/main/client/python/README.md#making-a-prediction)
5. postprocess - Given the result of the job, convert it to a string the LLM can display to the user.
6. _Optional_ - Some libraries, e.g. [MiniChain](https://github.com/srush/MiniChain/tree/main), may need some info about the underlying gradio input and output types used by the tool. By default, this will return gr.Textbox() but
if you'd like to provide more accurate info, implement the `_block_input(self, gr)` and `_block_output(self, gr)` methods of the tool. The `gr` variable is the gradio module (the result of `import gradio as gr`). It will be
automatically imported by the `GradioTool` parent class and passed to the `_block_input` and `_block_output` methods.
And that's it!
Once you have created your tool, open a pull request to the `gradio_tools` repo! We welcome all contributions.
| gradio_tools - creating your own tool | https://gradio.app/guides/gradio-and-llm-agents | Gradio Clients And Lite - Gradio And Llm Agents Guide |
Here is the code for the StableDiffusion tool as an example:
```python
from gradio_tools import GradioTool
from gradio_client.client import Job  # assumed import path for the Job type used in the annotations below
import os
class StableDiffusionTool(GradioTool):
"""Tool for calling stable diffusion from llm"""
def __init__(
self,
name="StableDiffusion",
description=(
"An image generator. Use this to generate images based on "
"text input. Input should be a description of what the image should "
"look like. The output will be a path to an image file."
),
src="gradio-client-demos/stable-diffusion",
token=None,
) -> None:
super().__init__(name, description, src, token)
def create_job(self, query: str) -> Job:
return self.client.submit(query, "", 9, fn_index=1)
def postprocess(self, output: str) -> str:
return [os.path.join(output, i) for i in os.listdir(output) if not i.endswith("json")][0]
def _block_input(self, gr) -> "gr.components.Component":
return gr.Textbox()
def _block_output(self, gr) -> "gr.components.Component":
return gr.Image()
```
Some notes on this implementation:
1. All instances of `GradioTool` have an attribute called `client` that is a pointer to the underlying [gradio client](https://github.com/gradio-app/gradio/tree/main/client/python#gradio_client-use-a-gradio-app-as-an-api----in-3-lines-of-python). That is what you should use
in the `create_job` method.
2. `create_job` just passes the query string to the `submit` function of the client with some other parameters hardcoded, i.e. the negative prompt string and the guidance scale. We could modify our tool to also accept these values from the input string in a subsequent version, as sketched after this list.
3. The `postprocess` method simply returns the first image from the gallery of images created by the stable diffusion space. We use the `os` module to get the full path of the image.
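As a hedged sketch of the idea in note 2, here is one way `create_job` could also accept a negative prompt (the `"prompt | negative prompt"` input format is an assumption made for illustration, not part of the actual tool):
```python
# Hypothetical variant of create_job: split an assumed "prompt | negative prompt"
# input string instead of hardcoding the negative prompt.
def create_job(self, query: str) -> Job:
    prompt, _, negative = query.partition("|")
    return self.client.submit(prompt.strip(), negative.strip(), 9, fn_index=1)
```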
| Example tool - Stable Diffusion | https://gradio.app/guides/gradio-and-llm-agents | Gradio Clients And Lite - Gradio And Llm Agents Guide |
You now know how to extend the abilities of your LLM with the 1000s of gradio spaces running in the wild!
Again, we welcome any contributions to the [gradio_tools](https://github.com/freddyaboulton/gradio-tools) library.
We're excited to see the tools you all build!
| Conclusion | https://gradio.app/guides/gradio-and-llm-agents | Gradio Clients And Lite - Gradio And Llm Agents Guide |
Gradio themes are the easiest way to customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `launch()` method of your app. For example:
```python
with gr.Blocks() as demo:
    ...  # your code here

demo.launch(theme=gr.themes.Glass())
```
Gradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. You can extend these themes or create your own themes from scratch - see the [Theming guide](/guides/theming-guide) for more details.
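For example, here is a minimal sketch of extending a prebuilt theme by overriding a couple of its arguments (the specific hue and font values are only illustrative):
```python
import gradio as gr

# Extend the prebuilt Soft theme with a different primary hue and font.
theme = gr.themes.Soft(
    primary_hue="emerald",
    font=gr.themes.GoogleFont("Inter"),
)

with gr.Blocks() as demo:
    gr.Textbox(label="Name")

demo.launch(theme=theme)
```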
For additional styling ability, you can pass any CSS to your app as a string using the `css=` kwarg in the `launch()` method. You can also pass a pathlib.Path to a css file or a list of such paths to the `css_paths=` kwarg in the `launch()` method.
**Warning**: The use of query selectors in custom JS and CSS is _not_ guaranteed to work across Gradio versions that bind to Gradio's own HTML elements as the Gradio HTML DOM may change. We recommend using query selectors sparingly.
The base class for the Gradio app is `gradio-container`, so here's an example that changes the background color of the Gradio app:
```python
with gr.Blocks() as demo:
    ...  # your code here

demo.launch(css=".gradio-container {background-color: red}")
```
If you'd like to reference external files in your css, preface the file path (which can be a relative or absolute path) with `"/gradio_api/file="`, for example:
```python
with gr.Blocks() as demo:
    ...  # your code here

demo.launch(css=".gradio-container {background: url('/gradio_api/file=clouds.jpg')}")
```
Note: By default, most files in the host machine are not accessible to users running the Gradio app. As a result, you should make sure that any referenced files (such as `clouds.jpg` here) are either URLs or [allowed paths, as described here](/main/guides/file-access).
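As a sketch, assuming `clouds.jpg` sits in your working directory, you can allow it explicitly with the `allowed_paths` argument of `launch()`:
```python
import gradio as gr

with gr.Blocks() as demo:
    ...  # your code here

# allowed_paths lets Gradio serve the locally referenced background image.
demo.launch(
    css=".gradio-container {background: url('/gradio_api/file=clouds.jpg')}",
    allowed_paths=["clouds.jpg"],
)
```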
| Adding custom CSS to your demo | https://gradio.app/guides/custom-CSS-and-JS | Building With Blocks - Custom Css And Js Guide |
You can use `elem_id` to add an HTML element `id` to any component, and `elem_classes` to add a class or list of classes. This will allow you to select elements more easily with CSS. This approach is also more likely to be stable across Gradio versions as built-in class names or ids may change (however, as mentioned in the warning above, we cannot guarantee complete compatibility between Gradio versions if you use custom CSS as the DOM elements may themselves change).
```python
css = """
#warning {background-color: #FFCCCB}
.feedback textarea {font-size: 24px !important}
"""
with gr.Blocks() as demo:
box1 = gr.Textbox(value="Good Job", elem_classes="feedback")
box2 = gr.Textbox(value="Failure", elem_id="warning", elem_classes="feedback")
demo.launch(css=css)
```
The CSS `#warning` ruleset will only target the second Textbox, while the `.feedback` ruleset will target both. Note that when targeting classes, you might need to add `!important` to override the default Gradio styles.
| The `elem_id` and `elem_classes` Arguments | https://gradio.app/guides/custom-CSS-and-JS | Building With Blocks - Custom Css And Js Guide |
There are 3 ways to add javascript code to your Gradio demo:
1. You can add JavaScript code as a string to the `js` parameter of the `Blocks` or `Interface` initializer. This will run the JavaScript code when the demo is first loaded.
Below is an example of adding custom js to show an animated welcome message when the demo first loads.
$code_blocks_js_load
$demo_blocks_js_load
2. When using `Blocks` and event listeners, events have a `js` argument that can take a JavaScript function as a string and treat it just like a Python event listener function. You can pass both a JavaScript function and a Python function (in which case the JavaScript function is run first) or only Javascript (and set the Python `fn` to `None`). Take a look at the code below:
$code_blocks_js_methods
$demo_blocks_js_methods
3. Lastly, you can add JavaScript code to the `head` param of the `Blocks` initializer. This will add the code to the head of the HTML document. For example, you can add Google Analytics to your demo like so:
```python
head = f"""
<script async src="https://www.googletagmanager.com/gtag/js?id={google_analytics_tracking_id}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){{dataLayer.push(arguments);}}
gtag('js', new Date());
gtag('config', '{google_analytics_tracking_id}');
</script>
"""
with gr.Blocks() as demo:
gr.HTML("<h1>My App</h1>")
demo.launch(head=head)
```
The `head` parameter accepts any HTML tags you would normally insert into the `<head>` of a page. For example, you can also include `<meta>` tags to `head` in order to update the social sharing preview for your Gradio app like this:
```py
import gradio as gr
custom_head = """
<!-- HTML Meta Tags -->
<title>Sample App</title>
<meta name="description" content="An open-source web application showcasing various features and capabilities.">
<!-- Facebook Meta Tags -->
<meta property="og:url" content="https://example.com">
<meta property="og:type" content="webs | Adding custom JavaScript to your demo | https://gradio.app/guides/custom-CSS-and-JS | Building With Blocks - Custom Css And Js Guide |
n open-source web application showcasing various features and capabilities.">
<!-- Facebook Meta Tags -->
<meta property="og:url" content="https://example.com">
<meta property="og:type" content="website">
<meta property="og:title" content="Sample App">
<meta property="og:description" content="An open-source web application showcasing various features and capabilities.">
<meta property="og:image" content="https://cdn.britannica.com/98/152298-050-8E45510A/Cheetah.jpg">
<!-- Twitter Meta Tags -->
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:creator" content="@example_user">
<meta name="twitter:title" content="Sample App">
<meta name="twitter:description" content="An open-source web application showcasing various features and capabilities.">
<meta name="twitter:image" content="https://cdn.britannica.com/98/152298-050-8E45510A/Cheetah.jpg">
<meta property="twitter:domain" content="example.com">
<meta property="twitter:url" content="https://example.com">
"""
with gr.Blocks(title="My App") as demo:
gr.HTML("<h1>My App</h1>")
demo.launch(head=custom_head)
```
Note that injecting custom JS can affect browser behavior and accessibility (e.g. keyboard shortcuts may lead to unexpected behavior if your Gradio app is embedded in another webpage). You should test your interface across different browsers and be mindful of how scripts may interact with browser defaults. Here's an example where pressing `Shift + s` triggers the `click` event of a specific `Button` component if the browser focus is _not_ on an input component (e.g. `Textbox` component):
```python
import gradio as gr
shortcut_js = """
<script>
function shortcuts(e) {
var event = document.all ? window.event : e;
switch (e.target.tagName.toLowerCase()) {
case "input":
case "textarea":
break;
default:
if (e.key.toLowerCase() == "s" && e.shiftKey) {
document.getElementById("my_btn").click();
}
}
}
document.addEventListener('keypress', shortcuts, false);
</script>
"""
with gr.Blocks() as demo:
action_button = gr.Button(value="Name", elem_id="my_btn")
textbox = gr.Textbox()
action_button.click(lambda : "button pressed", None, textbox)
demo.launch(head=shortcut_js)
```
| Adding custom JavaScript to your demo | https://gradio.app/guides/custom-CSS-and-JS | Building With Blocks - Custom Css And Js Guide |
Elements within a `with gr.Row` clause will all be displayed horizontally. For example, to display two Buttons side by side:
```python
with gr.Blocks() as demo:
with gr.Row():
btn1 = gr.Button("Button 1")
btn2 = gr.Button("Button 2")
```
You can set every element in a Row to have the same height. Configure this with the `equal_height` argument.
```python
with gr.Blocks() as demo:
with gr.Row(equal_height=True):
textbox = gr.Textbox()
btn2 = gr.Button("Button 2")
```
The widths of elements in a Row can be controlled via a combination of `scale` and `min_width` arguments that are present in every Component.
- `scale` is an integer that defines how an element will take up space in a Row. If scale is set to `0`, the element will not expand to take up space. If scale is set to `1` or greater, the element will expand. Multiple elements in a row will expand proportional to their scale. Below, `btn2` will expand twice as much as `btn1`, while `btn0` will not expand at all:
```python
with gr.Blocks() as demo:
with gr.Row():
btn0 = gr.Button("Button 0", scale=0)
btn1 = gr.Button("Button 1", scale=1)
btn2 = gr.Button("Button 2", scale=2)
```
- `min_width` will set the minimum width the element will take. The Row will wrap if there isn't sufficient space to satisfy all `min_width` values.
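For example, in this minimal sketch each Button requests at least 300 pixels, so the Row will wrap on narrow screens:
```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        btn1 = gr.Button("Button 1", min_width=300)
        btn2 = gr.Button("Button 2", min_width=300)
        btn3 = gr.Button("Button 3", min_width=300)
```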
Learn more about Rows in the [docs](https://gradio.app/docs/row).
| Rows | https://gradio.app/guides/controlling-layout | Building With Blocks - Controlling Layout Guide |
Components within a Column will be placed vertically atop each other. Since the vertical layout is the default layout for Blocks apps anyway, to be useful, Columns are usually nested within Rows. For example:
$code_rows_and_columns
$demo_rows_and_columns
See how the first column has two Textboxes arranged vertically. The second column has an Image and Button arranged vertically. Notice how the relative widths of the two columns are set by the `scale` parameter. The column with twice the `scale` value takes up twice the width.
Learn more about Columns in the [docs](https://gradio.app/docs/column).
**Fill Browser Height / Width**
To make an app take the full width of the browser by removing the side padding, use `gr.Blocks(fill_width=True)`.
To make top level Components expand to take the full height of the browser, use `fill_height` and apply scale to the expanding Components.
```python
import gradio as gr
with gr.Blocks(fill_height=True) as demo:
gr.Chatbot(scale=1)
gr.Textbox(scale=0)
```
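And here is a minimal sketch of `fill_width`:
```python
import gradio as gr

with gr.Blocks(fill_width=True) as demo:
    gr.Textbox(label="This app spans the full browser width")
```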
| Columns and Nesting | https://gradio.app/guides/controlling-layout | Building With Blocks - Controlling Layout Guide |
Some components support setting height and width. These parameters accept either a number (interpreted as pixels) or a string. Using a string allows the direct application of any CSS unit to the encapsulating Block element.
Below is an example illustrating the use of viewport width (vw):
```python
import gradio as gr
with gr.Blocks() as demo:
im = gr.ImageEditor(width="50vw")
demo.launch()
```
| Dimensions | https://gradio.app/guides/controlling-layout | Building With Blocks - Controlling Layout Guide |
You can also create Tabs using the `with gr.Tab('tab_name'):` clause. Any component created inside of a `with gr.Tab('tab_name'):` context appears in that tab. Consecutive Tab clauses are grouped together so that a single tab can be selected at one time, and only the components within that Tab's context are shown.
For example:
$code_blocks_flipper
$demo_blocks_flipper
Also note the `gr.Accordion('label')` in this example. The Accordion is a layout that can be toggled open or closed. Like `Tabs`, it is a layout element that can selectively hide or show content. Any components that are defined inside of a `with gr.Accordion('label'):` will be hidden or shown when the accordion's toggle icon is clicked.
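As a minimal inline sketch combining both layouts (the components shown are only illustrative):
```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Tab("Flip Text"):
        gr.Textbox(label="Text Input")
    with gr.Tab("Flip Image"):
        gr.Image(label="Image Input")
    with gr.Accordion("Open for More!", open=False):
        gr.Markdown("Extra options go here.")
```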
Learn more about [Tabs](https://gradio.app/docs/tab) and [Accordions](https://gradio.app/docs/accordion) in the docs.
| Tabs and Accordions | https://gradio.app/guides/controlling-layout | Building With Blocks - Controlling Layout Guide |
The sidebar is a collapsible panel that renders child components on the left side of the screen and can be expanded or collapsed.
For example:
$code_blocks_sidebar
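A minimal sketch of the same idea (the components inside the sidebar are only illustrative):
```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Sidebar():
        gr.Markdown("## Settings")
        temperature = gr.Slider(0, 1, label="Temperature")
    chatbot = gr.Chatbot()
```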
Learn more about [Sidebar](https://gradio.app/docs/gradio/sidebar) in the docs.
| Sidebar | https://gradio.app/guides/controlling-layout | Building With Blocks - Controlling Layout Guide |
In order to provide a guided set of ordered steps (a controlled workflow), you can use the `Walkthrough` component with accompanying `Step` components.
The `Walkthrough` component has a visual style and user experience tailored for this use case.
Authoring this component is very similar to `Tab`, except it is the app developer's responsibility to progress through each step by setting the appropriate ID on the parent `Walkthrough`, which should correspond to an ID provided to an individual `Step`.
$demo_walkthrough
Learn more about [Walkthrough](https://gradio.app/docs/gradio/walkthrough) in the docs.
| Multi-step walkthroughs | https://gradio.app/guides/controlling-layout | Building With Blocks - Controlling Layout Guide |
Both Components and Layout elements have a `visible` argument that can be set initially and also updated. Setting `gr.Column(visible=...)` on a Column can be used to show or hide a set of Components.
$code_blocks_form
$demo_blocks_form
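Here is a minimal sketch of the pattern, where clicking a Button reveals a hidden Column:
```python
import gradio as gr

with gr.Blocks() as demo:
    show_btn = gr.Button("Show details")
    with gr.Column(visible=False) as details:
        gr.Textbox(label="More info")
    # Returning a Column with visible=True updates the existing Column's visibility.
    show_btn.click(lambda: gr.Column(visible=True), None, details)
```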
| Visibility | https://gradio.app/guides/controlling-layout | Building With Blocks - Controlling Layout Guide |
In some cases, you might want to define components before you actually render them in your UI. For instance, you might want to show an examples section using `gr.Examples` above the corresponding `gr.Textbox` input. Since `gr.Examples` requires as a parameter the input component object, you will need to first define the input component, but then render it later, after you have defined the `gr.Examples` object.
The solution to this is to define the `gr.Textbox` outside of the `gr.Blocks()` scope and use the component's `.render()` method wherever you'd like it placed in the UI.
Here's a full code example:
```python
input_textbox = gr.Textbox()
with gr.Blocks() as demo:
gr.Examples(["hello", "bonjour", "merhaba"], input_textbox)
input_textbox.render()
```
Similarly, if you have already defined a component in a Gradio app, but wish to unrender it so that you can render it in a different part of your application, you can call the `.unrender()` method. In the following example, the `Textbox` will appear in the third column:
```py
import gradio as gr
with gr.Blocks() as demo:
with gr.Row():
with gr.Column():
gr.Markdown("Row 1")
textbox = gr.Textbox()
with gr.Column():
gr.Markdown("Row 2")
textbox.unrender()
with gr.Column():
gr.Markdown("Row 3")
textbox.render()
demo.launch()
```
| Defining and Rendering Components Separately | https://gradio.app/guides/controlling-layout | Building With Blocks - Controlling Layout Guide |
The `gr.HTML` component can also be used to create custom input components by triggering events. You will provide `js_on_load`, javascript code that runs when the component loads. The code has access to the `trigger` function to trigger events that Gradio can listen to, and the object `props` which has access to all the props of the component, including `value`.
$code_star_rating_events
$demo_star_rating_events
Take a look at the `js_on_load` code above. We add click event listeners to each star image to update the value via `props.value` when a star is clicked. This also re-renders the template to show the updated value. We also add a click event listener to the submit button that triggers the `submit` event. In our app, we listen to this trigger to run a function that outputs the `value` of the star rating.
The `js_on_load` scope also includes an `upload` async function that lets you upload a JavaScript `File` object directly to the Gradio server. It returns a dictionary with `path` (the server-side file path) and `url` (the public URL to access the file).
```js
const { path, url } = await upload(file);
```
Here is an example of a custom file-upload widget built with `gr.HTML`:
$code_html_upload
$demo_html_upload
You can update any other props of the component via `props.<prop_name>`, and trigger events via `trigger('<event_name>')`. The trigger function can also send event data, e.g.
```js
trigger('event_name', { key: value, count: 123 });
```
This event data will be accessible to the Python event listener functions via `gr.EventData`.
```python
def handle_event(evt: gr.EventData):
print(evt.key)
print(evt.count)
star_rating.event(fn=handle_event, inputs=[], outputs=[])
```
Keep in mind that event listeners attached in `js_on_load` are only attached once when the component is first rendered. If your component creates new elements dynamically that need event listeners, attach the event listener to a parent element that exists when the component loads, and check for the target. For example:
```js
element.addEventListener('click', (e) => {
  if (e.target && e.target.matches('.child-element')) {
    props.value = e.target.dataset.value;
  }
});
```
You can trigger an event with any name. As long as the event name appears enclosed in quotes in your `js_on_load` string, you can attach a Python listener using `component.do_something(fn, ...)`. If it is not one of the standard Gradio event names, your IDE might not recognize it as an event, but it will still work as long as the event name matches in both the JS and Python code.
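For instance, assuming the `js_on_load` code calls `trigger('rate', { stars: 4 })`, a minimal Python listener might look like this:
```python
import gradio as gr

def handle_rate(evt: gr.EventData):
    print(evt.stars)  # "stars" mirrors the key sent from trigger('rate', { stars: 4 })

star_rating.rate(fn=handle_rate, inputs=[], outputs=[])
```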
| Triggering Events and Custom Input Components | https://gradio.app/guides/custom-HTML-components | Building With Blocks - Custom Html Components Guide |
You can call Python functions directly from your `js_on_load` code using the `server_functions` parameter. Pass a list of Python functions to `server_functions`, and they become available as async methods on a `server` object inside `js_on_load`.
$code_html_server_functions
$demo_html_server_functions
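To illustrate the shape of this API, here is a minimal hedged sketch. The function name, markup, and element classes are illustrative only; see the full example above for the exact usage.
```python
import gradio as gr

def lookup_price(item: str) -> str:
    # A plain Python function, exposed to the frontend via server_functions
    prices = {"apple": "$1.00", "banana": "$0.50"}
    return prices.get(item, "unknown")

with gr.Blocks() as demo:
    widget = gr.HTML(
        html_template="<input class='item'/> <span class='price'></span>",
        js_on_load="""
        element.querySelector('.item').addEventListener('change', async (e) => {
            // Server functions are available as async methods on the `server` object
            const price = await server.lookup_price(e.target.value);
            element.querySelector('.price').textContent = price;
        });
        """,
        server_functions=[lookup_price],
    )

demo.launch()
```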
| Server Functions | https://gradio.app/guides/custom-HTML-components | Building With Blocks - Custom Html Components Guide |
If you are reusing the same HTML component in multiple places, you can create a custom component class by subclassing `gr.HTML` and setting default values for the templates and other arguments. Here's an example of creating a reusable StarRating component.
$code_star_rating_component
$demo_star_rating_component
Note: Gradio requires all components to accept certain arguments, such as `render`. You do not need
to handle these arguments, but you do need to accept them in your component constructor and pass
them to the parent `gr.HTML` class. Otherwise, your component may not behave correctly. The easiest
way is to add `**kwargs` to your `__init__` method and pass it to `super().__init__()`, just like in the code example above.
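As a minimal sketch of this pattern (templates heavily abbreviated; the full StarRating example above is more complete):
```python
import gradio as gr

class StarRating(gr.HTML):
    def __init__(self, value: int = 0, **kwargs):
        # Forward standard component arguments (render, visible, etc.) to gr.HTML
        super().__init__(
            value=value,
            html_template="<div class='stars'><!-- star markup goes here --></div>",
            js_on_load="/* click handlers that set props.value and call trigger(...) */",
            **kwargs,
        )
```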
We've created several custom HTML components as reusable components as examples you can reference in [this directory](https://github.com/gradio-app/gradio/tree/main/gradio/components/custom_html_components).
| Component Classes | https://gradio.app/guides/custom-HTML-components | Building With Blocks - Custom Html Components Guide |
The `gr.HTML` component can also be used as a container for other Gradio components using the `@children` placeholder. This allows you to create custom layouts with HTML/CSS.
The `@children` must be at the top-level of the `html_template`. Since children cannot be nested inside the template, target the parent element directly with your CSS and JavaScript if you need to style or interact with the container of the children.
Here's a basic example:
$code_html_children
$demo_html_children
In this example, the `@children` placeholder marks where the child components (the Name and Email textboxes) will be rendered. Notice how in the `css_template` we target the parent element to style the container div that wraps the children.
**API / MCP support**
To make your custom HTML component work with Gradio's built-in support for API and MCP (Model Context Protocol) usage, you need to define how its data should be serialized. There are two ways to do this:
**Option 1: Define an `api_info()` method**
Add an `api_info()` method that returns a JSON schema dictionary describing your component's data format. This is what we do in the StarRating class above.
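For instance, a component whose value is an integer rating might define something like this (a sketch, assuming the value is always an integer from 0 to 5):
```python
import gradio as gr

class StarRating(gr.HTML):
    def api_info(self):
        # JSON schema describing the value this component exchanges over the API
        return {"type": "integer", "description": "Star rating between 0 and 5"}
```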
**Option 2: Define a Pydantic data model**
For more complex data structures, you can define a Pydantic model that inherits from `GradioModel` or `GradioRootModel`:
```python
from typing import List

import gradio as gr
from gradio.data_classes import GradioModel, GradioRootModel

class MyComponentData(GradioModel):
    items: List[str]
    count: int

class MyComponent(gr.HTML):
    data_model = MyComponentData
```
Use `GradioModel` when your data is a dictionary with named fields, or `GradioRootModel` when your data is a simple type (string, list, etc.) that doesn't need to be wrapped in a dictionary. By defining a `data_model`, your component automatically implements API methods.
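For example, a component whose value is just a list of strings might use a root model like this (a sketch; the `root` field follows the usual Pydantic root-model convention):
```python
from typing import List

from gradio.data_classes import GradioRootModel

class TagListData(GradioRootModel):
    # The value itself is a plain list of strings, not wrapped in a dictionary
    root: List[str]
```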
| Embedding Components in HTML | https://gradio.app/guides/custom-HTML-components | Building With Blocks - Custom Html Components Guide |
Once you've built a custom HTML component, you can share it with the community by pushing it to the [HTML Components Gallery](https://www.gradio.app/custom-components/html-gallery). The gallery lets anyone browse, interact with, and copy the Python code for community-contributed components.
Call `push_to_hub` on any `gr.HTML` instance or subclass:
```python
star_rating = StarRating()
star_rating.push_to_hub(
name="Star Rating",
description="Interactive 5-star rating with click-to-rate",
author="your-hf-username",
tags=["input", "rating"],
repo_url="https://github.com/your-username/your-repo",
)
```
This opens a pull request on the gallery's HuggingFace dataset repo. Once approved, your component will appear in the gallery for others to discover and use.
Tip: The `push_to_hub` method has a `head` parameter that deserves special attention. If your component uses an external library loaded via the `head` parameter of `launch` (e.g. `head='<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>'`), pass the same `head` string to `push_to_hub` so that the gallery can load those scripts when rendering your component.
**Authentication**
You need a HuggingFace **write token** to push components. Either pass it directly:
```python
star_rating.push_to_hub(..., token="hf_xxxxx")
```
Or log in beforehand with the HuggingFace CLI, and the cached token will be used automatically:
```bash
huggingface-cli login
```
| Sharing Components with `push_to_hub` | https://gradio.app/guides/custom-HTML-components | Building With Blocks - Custom Html Components Guide |
Keep in mind that using `gr.HTML` to create custom components involves injecting raw HTML and JavaScript into your Gradio app. Be cautious about passing untrusted user input into `html_template` or `js_on_load`, as this could lead to cross-site scripting (XSS) vulnerabilities.
You should also expect that any Python event listeners that take your `gr.HTML` component as input could have any arbitrary value passed to them, not just the values you expect the frontend to be able to set for `value`. Sanitize and validate user input appropriately in public applications.
| Security Considerations | https://gradio.app/guides/custom-HTML-components | Building With Blocks - Custom Html Components Guide |
- Browse the [HTML Components Gallery](https://www.gradio.app/custom-components/html-gallery) to see what the community has built and copy components into your own apps.
- Check out more examples in [this directory](https://github.com/gradio-app/gradio/tree/main/gradio/components/custom_html_components).
- Share your own components with `push_to_hub` to help others! | Next Steps | https://gradio.app/guides/custom-HTML-components | Building With Blocks - Custom Html Components Guide |
In the example below, we will create a variable number of Textboxes. When the user edits the input Textbox, we create a Textbox for each letter in the input. Try it out below:
$code_render_split_simple
$demo_render_split_simple
See how we can now create a variable number of Textboxes using our custom logic - in this case, a simple `for` loop. The `@gr.render` decorator enables this with the following steps:
1. Create a function and attach the @gr.render decorator to it.
2. Add the input components to the `inputs=` argument of @gr.render, and create a corresponding argument in your function for each component. This function will automatically re-run on any change to a component.
3. Add all components inside the function that you want to render based on the inputs.
Now whenever the inputs change, the function re-runs, and replaces the components created from the previous function run with the latest run. Pretty straightforward! Let's add a little more complexity to this app:
$code_render_split
$demo_render_split
By default, `@gr.render` re-runs are triggered by the `.load` listener to the app and the `.change` listener to any input component provided. We can override this by explicitly setting the triggers in the decorator, as we have in this app to only trigger on `input_text.submit` instead.
If you are setting custom triggers, and you also want an automatic render at the start of the app, make sure to add `demo.load` to your list of triggers.
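Here is a minimal sketch of setting explicit triggers (component names are illustrative):
```python
import gradio as gr

with gr.Blocks() as demo:
    input_text = gr.Textbox(label="Letters")

    # Re-render only on submit, plus once automatically when the app loads
    @gr.render(inputs=input_text, triggers=[input_text.submit, demo.load])
    def show_letters(text):
        for letter in (text or ""):
            gr.Textbox(value=letter, label="Letter")

demo.launch()
```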
| Dynamic Number of Components | https://gradio.app/guides/dynamic-apps-with-render-decorator | Building With Blocks - Dynamic Apps With Render Decorator Guide |
If you're creating components, you probably want to attach event listeners to them as well. Let's take a look at an example that takes in a variable number of Textbox as input, and merges all the text into a single box.
$code_render_merge_simple
$demo_render_merge_simple
Let's take a look at what's happening here:
1. The state variable `text_count` is keeping track of the number of Textboxes to create. By clicking on the Add button, we increase `text_count` which triggers the render decorator.
2. Note that in every single Textbox we create in the render function, we explicitly set a `key=` argument. This key allows us to preserve the value of this Component between re-renders. If you type a value into a textbox and then click the Add button, all the Textboxes re-render, but their values aren't cleared because the `key=` maintains the value of a Component across re-renders.
3. We've stored the Textboxes created in a list, and provide this list as input to the merge button event listener. Note that **all event listeners that use Components created inside a render function must also be defined inside that render function**. The event listener can still reference Components outside the render function, as we do here by referencing `merge_btn` and `output` which are both defined outside the render function.
Just as with Components, whenever a function re-renders, the event listeners created from the previous render are cleared and the new event listeners from the latest run are attached.
This allows us to create highly customizable and complex interactions!
| Dynamic Event Listeners | https://gradio.app/guides/dynamic-apps-with-render-decorator | Building With Blocks - Dynamic Apps With Render Decorator Guide |
The `key=` argument is used to let Gradio know that the same component is being generated when your render function re-runs. This does two things:
1. The same element in the browser is re-used from the previous render for this Component. This gives browser performance gains - as there's no need to destroy and rebuild a component on a render - and preserves any browser attributes that the Component may have had. If your Component is nested within layout items like `gr.Row`, make sure they are keyed as well because the keys of the parents must also match.
2. Properties that may be changed by the user or by other event listeners are preserved. By default, only the "value" of Component is preserved, but you can specify any list of properties to preserve using the `preserved_by_key=` kwarg.
See the example below:
$code_render_preserve_key
$demo_render_preserve_key
You'll see in this example that when you change the `number_of_boxes` slider, there's a new re-render to update the number of box rows. If you click the "Change Label" buttons, they change the `label` and `info` properties of the corresponding textbox. You can also enter text in any textbox to change its value. If you change the number of boxes after this, the re-renders "reset" the `info`, but the `label` and any entered `value` are still preserved.
Note you can also key any event listener, e.g. `button.click(key=...)` if the same listener is being recreated with the same inputs and outputs across renders. This gives performance benefits, and also prevents errors from occurring if an event was triggered in a previous render, then a re-render occurs, and then the previous event finishes processing. By keying your listener, Gradio knows where to send the data properly.
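Here is a rough sketch of keying both components and an event listener inside a render function (names are illustrative):
```python
import gradio as gr

with gr.Blocks() as demo:
    num_boxes = gr.Slider(1, 5, step=1, value=2, label="Number of boxes")
    merged = gr.Textbox(label="Merged")

    @gr.render(inputs=[num_boxes])
    def render_boxes(count):
        boxes = [gr.Textbox(key=f"box-{i}", label=f"Box {i}") for i in range(int(count))]
        btn = gr.Button("Merge", key="merge-btn")
        # Keying the listener lets Gradio reuse it across re-renders
        btn.click(
            lambda *vals: " ".join(v or "" for v in vals),
            inputs=boxes,
            outputs=merged,
            key="merge-click",
        )

demo.launch()
```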
| Closer Look at `keys=` parameter | https://gradio.app/guides/dynamic-apps-with-render-decorator | Building With Blocks - Dynamic Apps With Render Decorator Guide |
Let's look at two examples that use all the features above. First, try out the to-do list app below:
$code_todo_list
$demo_todo_list
Note that almost the entire app is inside a single `gr.render` that reacts to the tasks `gr.State` variable. This variable is a nested list, which presents some complexity. If you design a `gr.render` to react to a list or dict structure, ensure you do the following:
1. Any event listener that modifies a state variable in a manner that should trigger a re-render must set the state variable as an output. This lets Gradio know to check if the variable has changed behind the scenes.
2. In a `gr.render`, if a variable in a loop is used inside an event listener function, that variable should be "frozen" via setting it to itself as a default argument in the function header. See how we have `task=task` in both `mark_done` and `delete`. This freezes the variable to its "loop-time" value.
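Here is a minimal sketch of both rules (structure simplified from the to-do example above; names are illustrative):
```python
import gradio as gr

with gr.Blocks() as demo:
    tasks = gr.State(["buy milk", "walk dog"])

    @gr.render(inputs=tasks)
    def render_tasks(task_list):
        for task in task_list:
            with gr.Row():
                gr.Textbox(value=task, show_label=False)
                done_btn = gr.Button("Done")

                # Rule 2: freeze `task` at its loop-time value via a default argument
                def remove_task(current_tasks, task=task):
                    return [t for t in current_tasks if t != task]

                # Rule 1: the state variable is an output, so changing it triggers a re-render
                done_btn.click(remove_task, inputs=tasks, outputs=tasks)

demo.launch()
```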
Let's take a look at one last example that uses everything we learned. Below is an audio mixer. Provide multiple audio tracks and mix them together.
$code_audio_mixer
$demo_audio_mixer
Two things to note in this app:
1. Here we provide `key=` to all the components! We need to do this so that if we add another track after setting the values for an existing track, our input values to the existing track do not get reset on re-render.
2. When there are lots of components of different types and arbitrary counts passed to an event listener, it is easier to use the set and dictionary notation for inputs rather than list notation. Above, we make one large set of all the input `gr.Audio` and `gr.Slider` components when we pass the inputs to the `merge` function. In the function body we query the component values as a dict.
The `gr.render` decorator expands Gradio's capabilities extensively - see what you can make out of it!
| Putting it Together | https://gradio.app/guides/dynamic-apps-with-render-decorator | Building With Blocks - Dynamic Apps With Render Decorator Guide |
Take a look at the demo below.
$code_hello_blocks
$demo_hello_blocks
- First, note the `with gr.Blocks() as demo:` clause. The Blocks app code will be contained within this clause.
- Next come the Components. These are the same Components used in `Interface`. However, instead of being passed to some constructor, Components are automatically added to the Blocks as they are created within the `with` clause.
- Finally, the `click()` event listener. Event listeners define the data flow within the app. In the example above, the listener ties the two Textboxes together. The Textbox `name` acts as the input and Textbox `output` acts as the output to the `greet` method. This dataflow is triggered when the Button `greet_btn` is clicked. Like an Interface, an event listener can take multiple inputs or outputs.
You can also attach event listeners using decorators - skip the `fn` argument and assign `inputs` and `outputs` directly:
$code_hello_blocks_decorator
| Blocks Structure | https://gradio.app/guides/blocks-and-event-listeners | Building With Blocks - Blocks And Event Listeners Guide |
In the example above, you'll notice that you are able to edit Textbox `name`, but not Textbox `output`. This is because any Component that acts as an input to an event listener is made interactive. However, since Textbox `output` acts only as an output, Gradio determines that it should not be made interactive. You can override the default behavior and directly configure the interactivity of a Component with the boolean `interactive` keyword argument, e.g. `gr.Textbox(interactive=True)`.
```python
output = gr.Textbox(label="Output", interactive=True)
```
_Note_: What happens if a Gradio component is neither an input nor an output? If a component is constructed with a default value, then it is presumed to be displaying content and is rendered non-interactive. Otherwise, it is rendered interactive. Again, this behavior can be overridden by specifying a value for the `interactive` argument.
| Event Listeners and Interactivity | https://gradio.app/guides/blocks-and-event-listeners | Building With Blocks - Blocks And Event Listeners Guide |
Take a look at the demo below:
$code_blocks_hello
$demo_blocks_hello
Instead of being triggered by a click, the `welcome` function is triggered by typing in the Textbox `inp`. This is due to the `change()` event listener. Different Components support different event listeners. For example, the `Video` Component supports a `play()` event listener, triggered when a user presses play. See the [Docs](http://gradio.app/docs#components) for the event listeners for each Component.
| Types of Event Listeners | https://gradio.app/guides/blocks-and-event-listeners | Building With Blocks - Blocks And Event Listeners Guide |
A Blocks app is not limited to a single data flow the way Interfaces are. Take a look at the demo below:
$code_reversible_flow
$demo_reversible_flow
Note that `num1` can act as input to `num2`, and also vice-versa! As your apps get more complex, you will have many data flows connecting various Components.
Here's an example of a "multi-step" demo, where the output of one model (a speech-to-text model) gets fed into the next model (a sentiment classifier).
$code_blocks_speech_text_sentiment
$demo_blocks_speech_text_sentiment
| Multiple Data Flows | https://gradio.app/guides/blocks-and-event-listeners | Building With Blocks - Blocks And Event Listeners Guide |
The event listeners you've seen so far have a single input component. If you'd like to have multiple input components pass data to the function, you have two options on how the function can accept input component values:
1. as a list of arguments, or
2. as a single dictionary of values, keyed by the component
Let's see an example of each:
$code_calculator_list_and_dict
Both `add()` and `sub()` take `a` and `b` as inputs. However, the syntax is different between these listeners.
1. To the `add_btn` listener, we pass the inputs as a list. The function `add()` takes each of these inputs as arguments. The value of `a` maps to the argument `num1`, and the value of `b` maps to the argument `num2`.
2. To the `sub_btn` listener, we pass the inputs as a set (note the curly brackets!). When you pass a set, the function `sub()` receives a single dictionary argument `data`, where the keys are the input components and the values are the values of those components.
It is a matter of preference which syntax you prefer! For functions with many input components, option 2 may be easier to manage.
$demo_calculator_list_and_dict
| Function Input List vs Set | https://gradio.app/guides/blocks-and-event-listeners | Building With Blocks - Blocks And Event Listeners Guide |
Similarly, you may return values for multiple output components either as:
1. a list of values, or
2. a dictionary keyed by the component
Let's first see an example of (1), where we set the values of two output components by returning two values:
```python
with gr.Blocks() as demo:
food_box = gr.Number(value=10, label="Food Count")
status_box = gr.Textbox()
def eat(food):
if food > 0:
return food - 1, "full"
else:
return 0, "hungry"
gr.Button("Eat").click(
fn=eat,
inputs=food_box,
outputs=[food_box, status_box]
)
```
Above, each return statement returns two values corresponding to `food_box` and `status_box`, respectively.
**Note:** if your event listener has a single output component, you should **not** return it as a single-item list. This will not work, since Gradio does not know whether to interpret that outer list as part of your return value. You should instead just return that value directly.
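For example (a sketch):
```python
def wrong(text):
    return [text]  # Don't do this for a single output component

def right(text):
    return text    # Return the value directly
```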
Now, let's see option (2). Instead of returning a list of values corresponding to each output component in order, you can also return a dictionary, with the key corresponding to the output component and the value as the new value. This also allows you to skip updating some output components.
```python
with gr.Blocks() as demo:
food_box = gr.Number(value=10, label="Food Count")
status_box = gr.Textbox()
def eat(food):
if food > 0:
return {food_box: food - 1, status_box: "full"}
else:
return {status_box: "hungry"}
gr.Button("Eat").click(
fn=eat,
inputs=food_box,
outputs=[food_box, status_box]
)
```
Notice how when there is no food, we only update the `status_box` element. We skipped updating the `food_box` component.
Dictionary returns are helpful when an event listener affects many components on return, or conditionally affects outputs and not others.
Keep in mind that with dictionary returns, we still need to specify the possible outputs in the event listener.
| Function Return List vs Dict | https://gradio.app/guides/blocks-and-event-listeners | Building With Blocks - Blocks And Event Listeners Guide |
The return value of an event listener function is usually the updated value of the corresponding output Component. Sometimes we want to update the configuration of the Component as well, such as the visibility. In this case, we return a new Component, setting the properties we want to change.
$code_blocks_essay_simple
$demo_blocks_essay_simple
See how we can configure the Textbox itself by returning a new `gr.Textbox()` instance. The `value=` argument can still be used to update the value along with Component configuration. Any arguments we do not set will preserve their previous values.
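Here is a minimal sketch of this pattern (component names are illustrative):
```python
import gradio as gr

def set_length(choice):
    # Return a new gr.Textbox to update configuration; unset arguments keep their previous values
    if choice == "short":
        return gr.Textbox(lines=2, visible=True)
    elif choice == "long":
        return gr.Textbox(lines=8, visible=True)
    else:
        return gr.Textbox(visible=False)

with gr.Blocks() as demo:
    radio = gr.Radio(["short", "long", "none"], label="Essay length")
    essay = gr.Textbox(lines=2, label="Essay", visible=False)
    radio.change(set_length, inputs=radio, outputs=essay)

demo.launch()
```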
| Updating Component Configurations | https://gradio.app/guides/blocks-and-event-listeners | Building With Blocks - Blocks And Event Listeners Guide |
In some cases, you may want to leave a component's value unchanged. Gradio includes a special function, `gr.skip()`, which can be returned from your function. Returning `gr.skip()` keeps the output component's (or components') values as they are. Let us illustrate with an example:
$code_skip
$demo_skip
Note the difference between returning `None` (which generally resets a component's value to an empty state) versus returning `gr.skip()`, which leaves the component value unchanged.
Tip: if you have multiple output components, and you want to leave all of their values unchanged, you can just return a single `gr.skip()` instead of returning a tuple of skips, one for each element.
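Here is a minimal sketch (names are illustrative):
```python
import gradio as gr

def process(text):
    if not text:
        # Leave every output unchanged; a single gr.skip() covers all outputs
        return gr.skip()
    return text.upper(), len(text)

with gr.Blocks() as demo:
    inp = gr.Textbox(label="Input")
    upper = gr.Textbox(label="Uppercased")
    length = gr.Number(label="Length")
    inp.change(process, inputs=inp, outputs=[upper, length])

demo.launch()
```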
| Not Changing a Component's Value | https://gradio.app/guides/blocks-and-event-listeners | Building With Blocks - Blocks And Event Listeners Guide |
You can also run events consecutively by using the `then` method of an event listener. This will run an event after the previous event has finished running. This is useful for running events that update components in multiple steps.
For example, in the chatbot example below, we first update the chatbot with the user message immediately, and then update the chatbot with the computer response after a simulated delay.
$code_chatbot_consecutive
$demo_chatbot_consecutive
The `.then()` method of an event listener executes the subsequent event regardless of whether the previous event raised any errors. If you'd like to only run subsequent events if the previous event executed successfully, use the `.success()` method, which takes the same arguments as `.then()`. Conversely, if you'd like to only run subsequent events if the previous event failed (i.e., raised an error), use the `.failure()` method. This is particularly useful for error handling workflows, such as displaying error messages or restoring previous states when an operation fails.
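Here is a minimal sketch of chaining `.success()` and `.failure()` (names are illustrative):
```python
import gradio as gr

def square_root(x):
    if x < 0:
        raise gr.Error("Cannot take the square root of a negative number")
    return x ** 0.5

with gr.Blocks() as demo:
    num = gr.Number(label="Input")
    result = gr.Number(label="Result")
    status = gr.Textbox(label="Status")
    btn = gr.Button("Compute")

    click_event = btn.click(square_root, inputs=num, outputs=result)
    # Runs only if square_root completed without raising an error
    click_event.success(lambda: "OK", inputs=None, outputs=status)
    # Runs only if square_root raised an error
    click_event.failure(lambda: "Something went wrong", inputs=None, outputs=status)

demo.launch()
```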
| Running Events Consecutively | https://gradio.app/guides/blocks-and-event-listeners | Building With Blocks - Blocks And Event Listeners Guide |
Oftentimes, you may want to bind multiple triggers to the same function. For example, you may want to allow a user to click a submit button, or press enter, to submit a form. You can do this using the `gr.on` method and passing a list of triggers to the `triggers` argument.
$code_on_listener_basic
$demo_on_listener_basic
You can use decorator syntax as well:
$code_on_listener_decorator
You can use `gr.on` to create "live" events by binding to the `change` event of components that implement it. If you do not specify any triggers, the function will automatically bind to the `change` event of every input component that has one (for example, `gr.Textbox` has a `change` event whereas `gr.Button` does not).
$code_on_listener_live
$demo_on_listener_live
You can follow `gr.on` with `.then`, just like any regular event listener. This handy method should save you from having to write a lot of repetitive code!
| Binding Multiple Triggers to a Function | https://gradio.app/guides/blocks-and-event-listeners | Building With Blocks - Blocks And Event Listeners Guide |
If you want to set a Component's value to always be a function of the value of other Components, you can use the following shorthand:
```python
with gr.Blocks() as demo:
num1 = gr.Number()
num2 = gr.Number()
product = gr.Number(lambda a, b: a * b, inputs=[num1, num2])
```
This is functionally the same as:
```python
with gr.Blocks() as demo:
num1 = gr.Number()
num2 = gr.Number()
product = gr.Number()
gr.on(
[num1.change, num2.change, demo.load],
lambda a, b: a * b,
inputs=[num1, num2],
outputs=product
)
```
| Binding a Component Value Directly to a Function of Other Components | https://gradio.app/guides/blocks-and-event-listeners | Building With Blocks - Blocks And Event Listeners Guide |
Global state in Gradio apps is very simple: any variable created outside of a function is shared globally between all users.
This makes managing global state simple, with no need for external services. For example, in the application below, the `visitor_count` variable is shared between all users:
```py
import gradio as gr
# Shared between all users
visitor_count = 0
def increment_counter():
global visitor_count
visitor_count += 1
return visitor_count
with gr.Blocks() as demo:
number = gr.Textbox(label="Total Visitors", value="Counting...")
demo.load(increment_counter, inputs=None, outputs=number)
demo.launch()
```
This means that any time you do _not_ want to share a value between users, you should declare it _within_ a function. But what if you need to share values between function calls, e.g. a chat history? In that case, you should use one of the subsequent approaches to manage state.
| Global State | https://gradio.app/guides/state-in-blocks | Building With Blocks - State In Blocks Guide |
Gradio supports session state, where data persists across multiple submits within a page session. To reiterate, session data is _not_ shared between different users of your model, and does _not_ persist if a user refreshes the page to reload the Gradio app. To store data in a session state, you need to do three things:
1. Create a `gr.State()` object. If there is a default value to this stateful object, pass that into the constructor. Note that `gr.State` objects must be [deepcopy-able](https://docs.python.org/3/library/copy.html), otherwise you will need to use a different approach as described below.
2. In the event listener, put the `State` object as an input and output as needed.
3. In the event listener function, add the variable to the input parameters and the return value.
Let's take a look at a simple example. We have a simple checkout app below where you add items to a cart. You can also see the size of the cart.
$code_simple_state
Notice how we do this with state:
1. We store the cart items in a `gr.State()` object, initialized here to be an empty list.
2. When adding items to the cart, the event listener uses the cart as both input and output - it returns the updated cart with all the items inside.
3. We can attach a `.change` listener to cart, that uses the state variable as input as well.
You can think of `gr.State` as an invisible Gradio component that can store any kind of value. Here, `cart` is not visible in the frontend but is used for calculations.
The `.change` listener for a state variable triggers after any event listener changes the value of a state variable. If the state variable holds a sequence (like a `list`, `set`, or `dict`), a change is triggered if any of the elements inside change. If it holds an object or primitive, a change is triggered if the **hash** of the value changes. So if you define a custom class and create a `gr.State` variable that is an instance of that class, make sure the class includes a sensible `__hash__` implementation.
The value of a session State variable is cleared when the user refreshes the page. The value is stored in the app backend for 60 minutes after the user closes the tab (this can be configured by the `delete_cache` parameter in `gr.Blocks`).
Learn more about `State` in the [docs](https://gradio.app/docs/gradio/state).
**What about objects that cannot be deepcopied?**
As mentioned earlier, the value stored in `gr.State` must be [deepcopy-able](https://docs.python.org/3/library/copy.html). If you are working with a complex object that cannot be deepcopied, you can take a different approach to manually read the user's `session_hash` and store a global `dictionary` with instances of your object for each user. Here's how you would do that:
```py
import gradio as gr
class NonDeepCopyable:
def __init__(self):
from threading import Lock
self.counter = 0
        self.lock = Lock()  # Lock objects cannot be deepcopied
def increment(self):
with self.lock:
self.counter += 1
return self.counter
# Global dictionary to store user-specific instances
instances = {}
def initialize_instance(request: gr.Request):
instances[request.session_hash] = NonDeepCopyable()
return "Session initialized!"
def cleanup_instance(request: gr.Request):
if request.session_hash in instances:
del instances[request.session_hash]
def increment_counter(request: gr.Request):
if request.session_hash in instances:
instance = instances[request.session_hash]
return instance.increment()
return "Error: Session not initialized"
with gr.Blocks() as demo:
output = gr.Textbox(label="Status")
counter = gr.Number(label="Counter Value")
increment_btn = gr.Button("Increment Co | Session State | https://gradio.app/guides/state-in-blocks | Building With Blocks - State In Blocks Guide |
return "Error: Session not initialized"
with gr.Blocks() as demo:
output = gr.Textbox(label="Status")
counter = gr.Number(label="Counter Value")
increment_btn = gr.Button("Increment Counter")
increment_btn.click(increment_counter, inputs=None, outputs=counter)
    # Initialize instance when page loads
demo.load(initialize_instance, inputs=None, outputs=output)
    # Clean up instance when page is closed/refreshed
demo.unload(cleanup_instance)
demo.launch()
```
| Session State | https://gradio.app/guides/state-in-blocks | Building With Blocks - State In Blocks Guide |
Gradio also supports browser state, where data persists in the browser's localStorage even after the page is refreshed or closed. This is useful for storing user preferences, settings, API keys, or other data that should persist across sessions. To use local state:
1. Create a `gr.BrowserState` object. You can optionally provide an initial default value and a key to identify the data in the browser's localStorage.
2. Use it like a regular `gr.State` component in event listeners as inputs and outputs.
Here's a simple example that saves a user's username and password across sessions:
$code_browserstate
Note: The value stored in `gr.BrowserState` does not persist if the Gradio app is restarted. To persist it, you can hardcode specific values of `storage_key` and `secret` in the `gr.BrowserState` component and restart the Gradio app on the same server name and server port. However, this should only be done if you are running trusted Gradio apps, as in principle, this can allow one Gradio app to access localStorage data that was created by a different Gradio app.
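Here is a minimal sketch of pinning `storage_key` and `secret` (the values are placeholders):
```python
import gradio as gr

with gr.Blocks() as demo:
    # Hardcoded storage_key and secret let the stored value survive app restarts
    saved_key = gr.BrowserState(
        "", storage_key="my_app_api_key", secret="replace-with-a-random-secret"
    )
    api_key_box = gr.Textbox(label="API key")

    # Restore the persisted value on page load, and save it whenever it changes
    demo.load(lambda v: v, inputs=saved_key, outputs=api_key_box)
    api_key_box.change(lambda v: v, inputs=api_key_box, outputs=saved_key)

demo.launch()
```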
| Browser State | https://gradio.app/guides/state-in-blocks | Building With Blocks - State In Blocks Guide |
Start by installing all the dependencies. Add the following lines to a `requirements.txt` file and run `pip install -r requirements.txt`:
```bash
opencv-python
fastrtc
onnxruntime-gpu
```
We'll use the ONNX runtime to speed up YOLOv10 inference. This guide assumes you have access to a GPU. If you don't, change `onnxruntime-gpu` to `onnxruntime`. Without a GPU, the model will run slower, resulting in a laggy demo.
We'll use OpenCV for image manipulation and the [WebRTC](https://webrtc.org/) protocol to achieve near-zero latency.
**Note**: If you want to deploy this app on any cloud provider, you'll need to use your Hugging Face token to connect to a TURN server. Learn more in this [guide](https://fastrtc.org/deployment/). If you're not familiar with TURN servers, consult this [guide](https://www.twilio.com/docs/stun-turn/faq#faq-what-is-nat).
| Setting up | https://gradio.app/guides/object-detection-from-webcam-with-webrtc | Streaming - Object Detection From Webcam With Webrtc Guide |
We'll download the YOLOv10 model from the Hugging Face hub and instantiate a custom inference class to use this model.
The implementation of the inference class isn't covered in this guide, but you can find the source code [here](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n/blob/main/inference.py#L9) if you're interested. This implementation borrows heavily from this [GitHub repository](https://github.com/ibaiGorordo/ONNX-YOLOv8-Object-Detection).
We're using the `yolov10-n` variant because it has the lowest latency. See the [Performance](https://github.com/THU-MIG/yolov10?tab=readme-ov-file#performance) section of the README in the YOLOv10 GitHub repository.
```python
from huggingface_hub import hf_hub_download
from inference import YOLOv10
model_file = hf_hub_download(
repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
)
model = YOLOv10(model_file)
def detection(image, conf_threshold=0.3):
image = cv2.resize(image, (model.input_width, model.input_height))
new_image = model.detect_objects(image, conf_threshold)
return new_image
```
Our inference function, `detection`, accepts a numpy array from the webcam and a desired confidence threshold. Object detection models like YOLO identify many objects and assign a confidence score to each. The lower the confidence, the higher the chance of a false positive. We'll let users adjust the confidence threshold.
The function returns a numpy array corresponding to the same input image with all detected objects in bounding boxes.
| The Inference Function | https://gradio.app/guides/object-detection-from-webcam-with-webrtc | Streaming - Object Detection From Webcam With Webrtc Guide |
The Gradio demo is straightforward, but we'll implement a few specific features:
1. Use the `WebRTC` custom component to ensure input and output are sent to/from the server with WebRTC.
2. The [WebRTC](https://github.com/freddyaboulton/gradio-webrtc) component will serve as both an input and output component.
3. Utilize the `time_limit` parameter of the `stream` event. This parameter sets a processing time for each user's stream. In a multi-user setting, such as on Spaces, we'll stop processing the current user's stream after this period and move on to the next.
We'll also apply custom CSS to center the webcam and slider on the page.
```python
import gradio as gr
from fastrtc import WebRTC
css = """.my-group {max-width: 600px !important; max-height: 600px !important;}
.my-column {display: flex !important; justify-content: center !important; align-items: center !important;}"""
with gr.Blocks(css=css) as demo:
gr.HTML(
"""
<h1 style='text-align: center'>
YOLOv10 Webcam Stream (Powered by WebRTC ⚡️)
</h1>
"""
)
with gr.Column(elem_classes=["my-column"]):
with gr.Group(elem_classes=["my-group"]):
image = WebRTC(label="Stream", rtc_configuration=rtc_configuration)
conf_threshold = gr.Slider(
label="Confidence Threshold",
minimum=0.0,
maximum=1.0,
step=0.05,
value=0.30,
)
image.stream(
fn=detection, inputs=[image, conf_threshold], outputs=[image], time_limit=10
)
if __name__ == "__main__":
demo.launch()
```
| The Gradio Demo | https://gradio.app/guides/object-detection-from-webcam-with-webrtc | Streaming - Object Detection From Webcam With Webrtc Guide |
Our app is hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n).
You can use this app as a starting point to build real-time image applications with Gradio. Don't hesitate to open issues in the space or in the [FastRTC GitHub repo](https://github.com/gradio-app/fastrtc) if you have any questions or encounter problems. | Conclusion | https://gradio.app/guides/object-detection-from-webcam-with-webrtc | Streaming - Object Detection From Webcam With Webrtc Guide |
The next generation of AI user interfaces is moving towards audio-native experiences. Users will be able to speak to chatbots and receive spoken responses in return. Several models have been built under this paradigm, including GPT-4o and [mini omni](https://github.com/gpt-omni/mini-omni).
In this guide, we'll walk you through building your own conversational chat application using mini omni as an example. You can see a demo of the finished app below:
<video src="https://github.com/user-attachments/assets/db36f4db-7535-49f1-a2dd-bd36c487ebdf" controls
height="600" width="600" style="display: block; margin: auto;" autoplay="true" loop="true">
</video>
| Introduction | https://gradio.app/guides/conversational-chatbot | Streaming - Conversational Chatbot Guide |
Our application will enable the following user experience:
1. Users click a button to start recording their message
2. The app detects when the user has finished speaking and stops recording
3. The user's audio is passed to the omni model, which streams back a response
4. After omni mini finishes speaking, the user's microphone is reactivated
5. All previous spoken audio, from both the user and omni, is displayed in a chatbot component
Let's dive into the implementation details.
| Application Overview | https://gradio.app/guides/conversational-chatbot | Streaming - Conversational Chatbot Guide |
We'll stream the user's audio from their microphone to the server and determine if the user has stopped speaking on each new chunk of audio.
Here's our `process_audio` function:
```python
import numpy as np
from utils import determine_pause
def process_audio(audio: tuple, state: AppState):
if state.stream is None:
state.stream = audio[1]
state.sampling_rate = audio[0]
else:
state.stream = np.concatenate((state.stream, audio[1]))
pause_detected = determine_pause(state.stream, state.sampling_rate, state)
state.pause_detected = pause_detected
if state.pause_detected and state.started_talking:
return gr.Audio(recording=False), state
return None, state
```
This function takes two inputs:
1. The current audio chunk (a tuple of `(sampling_rate, numpy array of audio)`)
2. The current application state
We'll use the following `AppState` dataclass to manage our application state:
```python
from dataclasses import dataclass, field
@dataclass
class AppState:
stream: np.ndarray | None = None
sampling_rate: int = 0
pause_detected: bool = False
stopped: bool = False
    started_talking: bool = False
    conversation: list = field(default_factory=list)
```
The function concatenates new audio chunks to the existing stream and checks if the user has stopped speaking. If a pause is detected, it returns an update to stop recording. Otherwise, it returns `None` to indicate no changes.
The implementation of the `determine_pause` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/eb027808c7bfe5179b46d9352e3fa1813a45f7c3/app.py#L98).
| Processing User Audio | https://gradio.app/guides/conversational-chatbot | Streaming - Conversational Chatbot Guide |
After processing the user's audio, we need to generate and stream the chatbot's response. Here's our `response` function:
```python
import io
import tempfile
from pydub import AudioSegment
def response(state: AppState):
if not state.pause_detected and not state.started_talking:
return None, AppState()
audio_buffer = io.BytesIO()
segment = AudioSegment(
state.stream.tobytes(),
frame_rate=state.sampling_rate,
sample_width=state.stream.dtype.itemsize,
channels=(1 if len(state.stream.shape) == 1 else state.stream.shape[1]),
)
segment.export(audio_buffer, format="wav")
with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
f.write(audio_buffer.getvalue())
state.conversation.append({"role": "user",
"content": {"path": f.name,
"mime_type": "audio/wav"}})
output_buffer = b""
for mp3_bytes in speaking(audio_buffer.getvalue()):
output_buffer += mp3_bytes
yield mp3_bytes, state
with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as f:
f.write(output_buffer)
state.conversation.append({"role": "assistant",
"content": {"path": f.name,
"mime_type": "audio/mp3"}})
yield None, AppState(conversation=state.conversation)
```
This function:
1. Converts the user's audio to a WAV file
2. Adds the user's message to the conversation history
3. Generates and streams the chatbot's response using the `speaking` function
4. Saves the chatbot's response as an MP3 file
5. Adds the chatbot's response to the conversation history
Note: The implementation of the `speaking` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/main/app.py#L116).
| Generating the Response | https://gradio.app/guides/conversational-chatbot | Streaming - Conversational Chatbot Guide |
Now let's put it all together using Gradio's Blocks API:
```python
import gradio as gr
def start_recording_user(state: AppState):
if not state.stopped:
return gr.Audio(recording=True)
with gr.Blocks() as demo:
with gr.Row():
with gr.Column():
input_audio = gr.Audio(
label="Input Audio", sources="microphone", type="numpy"
)
with gr.Column():
chatbot = gr.Chatbot(label="Conversation")
output_audio = gr.Audio(label="Output Audio", streaming=True, autoplay=True)
state = gr.State(value=AppState())
stream = input_audio.stream(
process_audio,
[input_audio, state],
[input_audio, state],
stream_every=0.5,
time_limit=30,
)
respond = input_audio.stop_recording(
response,
[state],
[output_audio, state]
)
respond.then(lambda s: s.conversation, [state], [chatbot])
restart = output_audio.stop(
start_recording_user,
[state],
[input_audio]
)
cancel = gr.Button("Stop Conversation", variant="stop")
cancel.click(lambda: (AppState(stopped=True), gr.Audio(recording=False)), None,
[state, input_audio], cancels=[respond, restart])
if __name__ == "__main__":
demo.launch()
```
This setup creates a user interface with:
- An input audio component for recording user messages
- A chatbot component to display the conversation history
- An output audio component for the chatbot's responses
- A button to stop and reset the conversation
The app streams user audio in 0.5-second chunks, processes it, generates responses, and updates the conversation history accordingly.
| Building the Gradio App | https://gradio.app/guides/conversational-chatbot | Streaming - Conversational Chatbot Guide |
This guide demonstrates how to build a conversational chatbot application using Gradio and the mini omni model. You can adapt this framework to create various audio-based chatbot demos. To see the full application in action, visit the Hugging Face Spaces demo: https://huggingface.co/spaces/gradio/omni-mini
Feel free to experiment with different models, audio processing techniques, or user interface designs to create your own unique conversational AI experiences! | Conclusion | https://gradio.app/guides/conversational-chatbot | Streaming - Conversational Chatbot Guide |
Automatic speech recognition (ASR), the conversion of spoken speech to text, is a very important and thriving area of machine learning. ASR algorithms run on practically every smartphone, and are becoming increasingly embedded in professional workflows, such as digital assistants for nurses and doctors. Because ASR algorithms are designed to be used directly by customers and end users, it is important to validate that they are behaving as expected when confronted with a wide variety of speech patterns (different accents, pitches, and background audio conditions).
Using `gradio`, you can easily build a demo of your ASR model and share that with a testing team, or test it yourself by speaking through the microphone on your device.
This tutorial will show how to take a pretrained speech-to-text model and deploy it with a Gradio interface. We will start with a **_full-context_** model, in which the user speaks the entire audio before the prediction runs. Then we will adapt the demo to make it **_streaming_**, meaning that the audio model will convert speech as you speak.
**Prerequisites**
Make sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained speech recognition model. In this tutorial, we will build our demos with the Transformers library:
- Transformers (for this, `pip install torch transformers torchaudio`)
Make sure you have it installed so that you can follow along with the tutorial. You will also need `ffmpeg` [installed on your system](https://www.ffmpeg.org/download.html), if you do not already have it, to process files from the microphone.
Here's how to build a real time speech recognition (ASR) app:
1. [Set up the Transformers ASR Model](#1-set-up-the-transformers-asr-model)
2. [Create a Full-Context ASR Demo with Transformers](#2-create-a-full-context-asr-demo-with-transformers)
3. [Create a Streaming ASR Demo with Transformers](#3-create-a-streaming-asr-demo-with-transformers)
| Introduction | https://gradio.app/guides/real-time-speech-recognition | Streaming - Real Time Speech Recognition Guide |
First, you will need an ASR model that you have either trained yourself or downloaded as a pretrained model. In this tutorial, we will start by using the pretrained ASR model `whisper` from the Hugging Face Hub.
Here is the code to load `whisper` from Hugging Face `transformers`.
```python
from transformers import pipeline
p = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")
```
That's it!
| 1. Set up the Transformers ASR Model | https://gradio.app/guides/real-time-speech-recognition | Streaming - Real Time Speech Recognition Guide |
We will start by creating a _full-context_ ASR demo, in which the user speaks the full audio before using the ASR model to run inference. This is very easy with Gradio -- we simply create a function around the `pipeline` object above.
We will use `gradio`'s built in `Audio` component, configured to take input from the user's microphone and return a filepath for the recorded audio. The output component will be a plain `Textbox`.
$code_asr
$demo_asr
The `transcribe` function takes a single parameter, `audio`, which is a numpy array of the audio the user recorded. The `pipeline` object expects this in float32 format, so we convert it first to float32, and then extract the transcribed text.
| 2. Create a Full-Context ASR Demo with Transformers | https://gradio.app/guides/real-time-speech-recognition | Streaming - Real Time Speech Recognition Guide |
To make this a *streaming* demo, we need to make these changes:
1. Set `streaming=True` in the `Audio` component
2. Set `live=True` in the `Interface`
3. Add a `state` to the interface to store the recorded audio of a user
Tip: You can also set `time_limit` and `stream_every` parameters in the interface. The `time_limit` caps the amount of time each user's stream can take. The default is 30 seconds so users won't be able to stream audio for more than 30 seconds. The `stream_every` parameter controls how frequently data is sent to your function. By default it is 0.5 seconds.
Take a look below.
$code_stream_asr
Notice that we now have a state variable because we need to track all the audio history. `transcribe` gets called whenever there is a new small chunk of audio, but we also need to keep track of all the audio spoken so far in the state. As the interface runs, the `transcribe` function gets called, with a record of all the previously spoken audio in the `stream` and the new chunk of audio as `new_chunk`. We return the new full audio to be stored back in its current state, and we also return the transcription. Here, we naively append the audio together and call the `transcriber` object on the entire audio. You can imagine more efficient ways of handling this, such as re-processing only the last 5 seconds of audio whenever a new chunk of audio is received.
$demo_stream_asr
Now the ASR model will run inference as you speak!
| 3. Create a Streaming ASR Demo with Transformers | https://gradio.app/guides/real-time-speech-recognition | Streaming - Real Time Speech Recognition Guide |
First, we'll install the following requirements in our system:
```
opencv-python
torch
transformers>=4.43.0
spaces
```
Then, we'll download the model from the Hugging Face Hub:
```python
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor
image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd").to("cuda")
```
We're moving the model to the GPU. We'll be deploying our model to Hugging Face Spaces and running the inference in the [free ZeroGPU cluster](https://huggingface.co/zero-gpu-explorers).
| Setting up the Model | https://gradio.app/guides/object-detection-from-video | Streaming - Object Detection From Video Guide |
Our inference function will accept a video and a desired confidence threshold.
Object detection models identify many objects and assign a confidence score to each object. The lower the confidence, the higher the chance of a false positive. So we will let our users set the confidence threshold.
Our function will iterate over the frames in the video and run the RT-DETR model over each frame.
We will then draw the bounding boxes for each detected object in the frame and save the frame to a new output video.
The function will yield each output video in chunks of two seconds.
In order to keep inference times as low as possible on ZeroGPU (there is a time-based quota),
we will halve the original frames-per-second in the output video and resize the input frames to be half the original
size before running the model.
The code for the inference function is below - we'll go over it piece by piece.
```python
import spaces
import cv2
from PIL import Image
import torch
import time
import numpy as np
import uuid
from draw_boxes import draw_bounding_boxes
SUBSAMPLE = 2
@spaces.GPU
def stream_object_detection(video, conf_threshold):
cap = cv2.VideoCapture(video)
    # This means we will output mp4 videos
    video_codec = cv2.VideoWriter_fourcc(*"mp4v")  # type: ignore
fps = int(cap.get(cv2.CAP_PROP_FPS))
desired_fps = fps // SUBSAMPLE
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) // 2
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) // 2
iterating, frame = cap.read()
n_frames = 0
    # Use UUID to create a unique video file
    output_video_name = f"output_{uuid.uuid4()}.mp4"
    # Output Video
    output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))  # type: ignore
batch = []
while iterating:
frame = cv2.resize( frame, (0,0), fx=0.5, fy=0.5)
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
if n_frames % SUBSAMPLE == 0:
batch.append(frame)
if len(batch) == 2 * desired_fps:
inputs = image_processor(images=batch, return_tensors="pt").to("cuda")
with torch.no_grad():
outputs = model(**inputs)
boxes = image_processor.post_process_object_detection(
outputs,
target_sizes=torch.tensor([(height, width)] * len(batch)),
threshold=conf_threshold)
for i, (array, box) in enumerate(zip(batch, boxes)):
pil_image = draw_bounding_boxes(Image.fromarray(array), box, model, conf_threshold)
frame = np.array(pil_image)
                # Convert RGB to BGR
frame = frame[:, :, ::-1].copy()
output_video.write(frame)
batch = []
output_video.release()
yield output_video_name
output_video_name = f"output_{uuid.uuid4()}.mp4"
            output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))  # type: ignore
iterating, frame = cap.read()
n_frames += 1
```
1. **Reading from the Video**
One of the industry standards for working with video in Python is OpenCV, so we will use it in this app.
The `cap` variable is how we will read from the input video. Whenever we call `cap.read()`, we are reading the next frame in the video.
In order to stream video in Gradio, we need to yield a different video file for each "chunk" of the output video.
We create the next video file to write to with the `output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))` line. The `video_codec` is how we specify the type of video file. Only "mp4" and "ts" files are supported for video streaming at the moment.
2. **The Inference Loop**
For each frame in the video, we will resize it to be half the size. OpenCV reads files in `BGR` format, so we will convert to the expected `RGB` format of `transformers`. That's what the first two lines of the while loop are doing.
We take every other frame and add it to a `batch` list so that the output video is half the original FPS. When the batch covers two seconds of video, we will run the model. The two second threshold was chosen to keep the processing time of each batch small enough so that video is smoothly displayed in the server while not requiring too many separate forward passes. In order for video streaming to work properly in Gradio, the batch size should be at least 1 second.
We run the forward pass of the model and then use the `post_process_object_detection` method of the model to scale the detected bounding boxes to the size of the input frame.
We make use of a custom function to draw the bounding boxes (source [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection/blob/main/draw_boxes.py#L14)). We then have to convert from `RGB` to `BGR` before writing back to the output video.
Once we have finished processing the batch, we create a new output video file for the next batch.
| The Inference Function | https://gradio.app/guides/object-detection-from-video | Streaming - Object Detection From Video Guide |
The UI code is pretty similar to other kinds of Gradio apps.
We'll use a standard two-column layout so that users can see the input and output videos side by side.
In order for streaming to work, we have to set `streaming=True` in the output video. Setting the video
to autoplay is not necessary but it's a better experience for users.
```python
import gradio as gr
with gr.Blocks() as app:
gr.HTML(
"""
<h1 style='text-align: center'>
Video Object Detection with <a href='https://huggingface.co/PekingU/rtdetr_r101vd_coco_o365' target='_blank'>RT-DETR</a>
</h1>
""")
with gr.Row():
with gr.Column():
video = gr.Video(label="Video Source")
conf_threshold = gr.Slider(
label="Confidence Threshold",
minimum=0.0,
maximum=1.0,
step=0.05,
value=0.30,
)
with gr.Column():
output_video = gr.Video(label="Processed Video", streaming=True, autoplay=True)
video.upload(
fn=stream_object_detection,
inputs=[video, conf_threshold],
outputs=[output_video],
)
```
| The Gradio Demo | https://gradio.app/guides/object-detection-from-video | Streaming - Object Detection From Video Guide |
You can check out our demo hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection).
It is also embedded on this page below
$demo_rt-detr-object-detection | Conclusion | https://gradio.app/guides/object-detection-from-video | Streaming - Object Detection From Video Guide |
Modern voice applications should feel natural and responsive, moving beyond the traditional "click-to-record" pattern. By combining Groq's fast inference capabilities with automatic speech detection, we can create a more intuitive interaction model where users can simply start talking whenever they want to engage with the AI.
> Credits: VAD and Gradio code inspired by [WillHeld's Diva-audio-chat](https://huggingface.co/spaces/WillHeld/diva-audio-chat/tree/main).
In this tutorial, you will learn how to create a multimodal Gradio and Groq app that has automatic speech detection. You can also watch the full video tutorial which includes a demo of the application:
<iframe width="560" height="315" src="https://www.youtube.com/embed/azXaioGdm2Q" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
| Introduction | https://gradio.app/guides/automatic-voice-detection | Streaming - Automatic Voice Detection Guide |
Many voice apps currently work by the user clicking record, speaking, then stopping the recording. While this can be a powerful demo, the most natural mode of interaction with voice requires the app to dynamically detect when the user is speaking, so they can talk back and forth without having to continually click a record button.
Creating a natural interaction with voice and text requires a dynamic and low-latency response. Thus, we need both automatic voice detection and fast inference. With @ricky0123/vad-web powering speech detection and Groq powering the LLM, both of these requirements are met. Groq provides a lightning fast response, and Gradio allows for easy creation of impressively functional apps.
This tutorial shows you how to build a calorie tracking app where you speak to an AI that automatically detects when you start and stop your response, and provides its own text response back to guide you with questions that allow it to give a calorie estimate of your last meal.
| Background | https://gradio.app/guides/automatic-voice-detection | Streaming - Automatic Voice Detection Guide |
- **Gradio**: Provides the web interface and audio handling capabilities
- **@ricky0123/vad-web**: Handles voice activity detection
- **Groq**: Powers fast LLM inference for natural conversations
- **Whisper**: Transcribes speech to text
**Setting Up the Environment**
First, let’s install and import our essential libraries and set up a client for using the Groq API. Here’s how to do it:
`requirements.txt`
```
gradio
groq
numpy
soundfile
librosa
spaces
xxhash
datasets
```
`app.py`
```python
import groq
import gradio as gr
import soundfile as sf
from dataclasses import dataclass, field
from typing import Any
import os
# Initialize Groq client securely
api_key = os.environ.get("GROQ_API_KEY")
if not api_key:
raise ValueError("Please set the GROQ_API_KEY environment variable.")
client = groq.Client(api_key=api_key)
```
Here, we’re pulling in the key libraries we need to interact with the Groq API, build the UI with Gradio, and handle audio data. The Groq API key is read from an environment variable rather than hard-coded, a security best practice that avoids accidentally leaking the key.
---
**State Management for Seamless Conversations**
We need a way to keep track of our conversation history, so the chatbot remembers past interactions, and manage other states like whether recording is currently active. To do this, let’s create an `AppState` class:
```python
@dataclass
class AppState:
conversation: list = field(default_factory=list)
stopped: bool = False
model_outs: Any = None
```
Our `AppState` class is a handy tool for managing conversation history and tracking whether recording is on or off. Because `conversation` uses `field(default_factory=list)`, each instance gets its own fresh conversation list, keeping chat history isolated to each session.
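As a tiny illustration of that isolation (not part of the app itself), two instances never share a conversation list:

```python
# Each AppState gets its own list via default_factory, so histories don't leak across sessions.
state_a, state_b = AppState(), AppState()
state_a.conversation.append({"role": "user", "content": "Hi there!"})
print(len(state_a.conversation), len(state_b.conversation))  # 1 0
```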
---
**Transcribing Audio with Whisper on Groq**
Next, we’ll create a function to transcribe the user’s audio input into text using Whisper, a powerful transcription model hosted on Groq. This transcription will also help us determine whether there’s meaningful speech in the input. | Key Components | https://gradio.app/guides/automatic-voice-detection | Streaming - Automatic Voice Detection Guide |
Here’s how:
```python
def transcribe_audio(client, file_name):
if file_name is None:
return None
try:
with open(file_name, "rb") as audio_file:
response = client.audio.transcriptions.with_raw_response.create(
model="whisper-large-v3-turbo",
file=("audio.wav", audio_file),
response_format="verbose_json",
)
completion = process_whisper_response(response.parse())
return completion
except Exception as e:
print(f"Error in transcription: {e}")
return f"Error in transcription: {str(e)}"
```
This function opens the audio file and sends it to Groq’s Whisper model for transcription, requesting detailed JSON output. The `verbose_json` response format is needed to get the segment-level information we use to decide whether any speech was present in the audio. We also handle any potential errors so our app doesn’t crash outright if there’s an issue with the API request.
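To see why that matters, here is roughly the kind of segment-level information a `verbose_json` transcription exposes; the values below are invented for illustration, and only `text` and `no_speech_prob` are used by the helper that follows:

```python
# Illustrative only: approximate shape of one segment from a verbose_json response.
example_segment = {
    "id": 0,
    "start": 0.0,
    "end": 2.4,
    "text": " I had a bowl of oatmeal with blueberries.",
    "no_speech_prob": 0.02,  # low value -> Whisper is confident real speech is present
}
```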
```python
def process_whisper_response(completion):
"""
Process Whisper transcription response and return text or null based on no_speech_prob
Args:
completion: Whisper transcription response object
Returns:
str or None: Transcribed text if no_speech_prob <= 0.7, otherwise None
"""
if completion.segments and len(completion.segments) > 0:
no_speech_prob = completion.segments[0].get('no_speech_prob', 0)
print("No speech prob:", no_speech_prob)
if no_speech_prob > 0.7:
return None
return completion.text.strip()
return None
```
| Key Components | https://gradio.app/guides/automatic-voice-detection | Streaming - Automatic Voice Detection Guide |
We also need to interpret the audio data response. The process_whisper_response function takes the resulting completion from Whisper and checks if the audio was just background noise or had actual speaking that was transcribed. It uses a threshold of 0.7 to interpret the no_speech_prob, and will return None if there was no speech. Otherwise, it will return the text transcript of the conversational response from the human.
---
**Adding Conversational Intelligence with LLM Integration**
Our chatbot needs to provide intelligent, friendly responses that flow naturally. We’ll use a Groq-hosted Llama 3.2 model for this:
```python
def generate_chat_completion(client, history):
messages = []
messages.append(
{
"role": "system",
"content": "In conversation with the user, ask questions to estimate and provide (1) total calories, (2) protein, carbs, and fat in grams, (3) fiber and sugar content. Only ask *one question at a time*. Be conversational and natural.",
}
)
for message in history:
messages.append(message)
try:
completion = client.chat.completions.create(
model="llama-3.2-11b-vision-preview",
messages=messages,
)
return completion.choices[0].message.content
except Exception as e:
return f"Error in generating chat completion: {str(e)}"
```
We’re defining a system prompt to guide the chatbot’s behavior, ensuring it asks one question at a time and keeps things conversational. This setup also includes error handling to ensure the app gracefully manages any issues.
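As a rough sketch of how the pieces above fit together in a single conversational turn (the function name, its signature, and the exact return values are illustrative assumptions, not the finished app’s handler):

```python
def respond_to_user(audio_path, state: AppState):
    # Transcribe the latest utterance; ignore it if Whisper decided it was just noise.
    text = transcribe_audio(client, audio_path)
    if not text:
        return state.conversation, state

    state.conversation.append({"role": "user", "content": text})

    # Ask the Groq-hosted LLM for the next question or calorie estimate.
    reply = generate_chat_completion(client, state.conversation)
    state.conversation.append({"role": "assistant", "content": reply})
    return state.conversation, state
```

In the finished app, a step like this is triggered by the voice activity detection described next rather than by a manual record button.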
---
**Voice Activity Detection for Hands-Free Interaction**
To make our chatbot hands-free, we’ll add Voice Activity Detection (VAD) to automatically detect when someone starts or stops speaking. Here’s how to implement it using ONNX in JavaScript:
```javascript
async function main() {
const script1 = document.createElement("script");
  scrip
```
| Key Components | https://gradio.app/guides/automatic-voice-detection | Streaming - Automatic Voice Detection Guide |