| text | heading1 | source_page_url | source_page_title |
|---|---|---|---|
- [Code](code)
- [ColorPicker](colorpicker)
- [Dataframe](dataframe)
- [Dataset](dataset)
- [DateTime](datetime)
- [DeepLinkButton](deeplinkbutton)
- [Dialogue](dialogue)
- [DownloadButton](downloadbutton)
- [Dropdown](dropdown)
- [DuplicateButton](duplicatebutton)
- [File](file)
- [FileExplorer](fileexplorer)
- [ImageEditor](imageeditor)
- [Gallery](gallery)
- [HighlightedText](highlightedtext)
|
Events
|
https://gradio.app/docs/gradio/introduction
|
Gradio - Introduction Docs
|
- [HTML](html)
- [Image](image)
- [ImageSlider](imageslider)
- [JSON](json)
- [Label](label)
- [LoginButton](loginbutton)
- [Markdown](markdown)
- [Model3D](model3d)
- [Textbox](textbox)
- [MultimodalTextbox](multimodaltextbox)
- [BarPlot](barplot)
- [LinePlot](lineplot)
- [ScatterPlot](scatterplot)
|
Events
|
https://gradio.app/docs/gradio/introduction
|
Gradio - Introduction Docs
|
- [Navbar](navbar)
- [Number](number)
- [ParamViewer](paramviewer)
- [Plot](plot)
- [Radio](radio)
- [Slider](slider)
- [State](state)
- [Timer](timer)
- [UploadButton](uploadbutton)
- [Video](video)
- [SimpleImage](simpleimage)
|
Events
|
https://gradio.app/docs/gradio/introduction
|
Gradio - Introduction Docs
|
Sets up an event listener that triggers a function when the specified
event(s) occur. This is especially useful when the same function should be
triggered by multiple events. Only a single API endpoint is generated for all
events in the triggers list.
|
Description
|
https://gradio.app/docs/gradio/on
|
Gradio - On Docs
|
```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        input = gr.Textbox()
        button = gr.Button("Submit")
    output = gr.Textbox()
    gr.on(
        triggers=[button.click, input.submit],
        fn=lambda x: x,
        inputs=[input],
        outputs=[output]
    )

demo.launch()
```
|
Example Usage
|
https://gradio.app/docs/gradio/on
|
Gradio - On Docs
|
Parameters ▼
triggers: list[EventListenerCallable] | EventListenerCallable | None
default `= None`
List of triggers to listen to, e.g. [btn.click, number.change]. If None, will
run on app load and changes to any inputs.
fn: Callable[..., Any] | None | Literal['decorator']
default `= "decorator"`
the function to call when this event is triggered. Often a machine learning
model's prediction function. Each parameter of the function corresponds to one
input component, and the function should return a single value or a tuple of
values, with each element in the tuple corresponding to one output component.
inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None
default `= None`
List of gradio.components to use as inputs. If the function takes no inputs,
this should be an empty list.
outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None
default `= None`
List of gradio.components to use as outputs. If the function returns no
outputs, this should be an empty list.
api_visibility: Literal['public', 'private', 'undocumented']
default `= "public"`
controls the visibility and accessibility of this endpoint. Can be "public"
(shown in API docs and callable by clients), "private" (hidden from API docs
and not callable by clients), or "undocumented" (hidden from API docs but
callable by clients and via gr.load). If fn is None, api_visibility will
automatically be set to "private".
api_name: str | None
default `= None`
defines how the endpoint appears in the API docs. Can be a string or None. If
set to a string, the endpoint will be exposed in the API docs with the given
name. If None (default), the name of the function will be used as the API
endpoint.
api_description: str | None | Literal[False]
default `= None`
Description of the API endpoint. Can be a string, None, or False. If set to a
string, the endpoint will be exposed in the API docs with the given
description. If None, the function's docstring will be used as the API
endpoint description. If False, then no description will be displayed in the
API docs.
|
Initialization
|
https://gradio.app/docs/gradio/on
|
Gradio - On Docs
|
scroll_to_output: bool
default `= False`
If True, will scroll to output component on completion
show_progress: Literal['full', 'minimal', 'hidden']
default `= "full"`
how to show the progress animation while event is running: "full" shows a
spinner which covers the output component area as well as a runtime display in
the upper right corner, "minimal" only shows the runtime display, "hidden"
shows no progress animation at all.
show_progress_on: Component | list[Component] | None
default `= None`
Component or list of components to show the progress animation on. If None,
will show the progress animation on all of the output components.
queue: bool
default `= True`
If True, will place the request on the queue, if the queue has been enabled.
If False, will not put this event on the queue, even if the queue has been
enabled. If None, will use the queue setting of the gradio app.
batch: bool
default `= False`
If True, then the function should process a batch of inputs, meaning that it
should accept a list of input values for each parameter. The lists should be
of equal length (and be up to length `max_batch_size`). The function is then
*required* to return a tuple of lists (even if there is only 1 output
component), with each list in the tuple corresponding to one output component.
max_batch_size: int
default `= 4`
Maximum number of inputs to batch together if this is called from the queue
(only relevant if batch=True)
preprocess: bool
default `= True`
If False, will not run preprocessing of component data before running 'fn'
(e.g. leaving it as a base64 string if this method is called with the `Image`
component).
postprocess: bool
default `= True`
If False, will not run postprocessing of component data before returning 'fn'
output to the browser.
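To make the `batch` contract above concrete, here is a minimal sketch (not from the original docs; the `flip_batch` function and the textbox demo are invented for illustration). The function receives one list per input component and must return a tuple of lists, one list per output component:
```python
import gradio as gr

def flip_batch(words):
    # `words` is a list of strings, one entry per queued request
    # must return a tuple of lists, even with a single output component
    return ([w[::-1] for w in words],)

with gr.Blocks() as demo:
    inp = gr.Textbox()
    out = gr.Textbox()
    btn = gr.Button("Flip")
    # merge up to 16 pending requests into a single call to flip_batch
    gr.on(triggers=[btn.click], fn=flip_batch, inputs=inp, outputs=out,
          batch=True, max_batch_size=16)

demo.launch()
```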
|
Initialization
|
https://gradio.app/docs/gradio/on
|
Gradio - On Docs
|
cancels: dict[str, Any] | list[dict[str, Any]] | None
default `= None`
A list of other events to cancel when this listener is triggered. For example,
setting cancels=[click_event] will cancel the click_event, where click_event
is the return value of another component's .click method. Functions that have
not yet run (or generators that are iterating) will be cancelled, but
functions that are currently running will be allowed to finish.
trigger_mode: Literal['once', 'multiple', 'always_last'] | None
default `= None`
If "once" (default for all events except `.change()`) would not allow any
submissions while an event is pending. If set to "multiple", unlimited
submissions are allowed while pending, and "always_last" (default for
`.change()` and `.key_up()` events) would allow a second submission after the
pending event is complete.
js: str | Literal[True] | None
default `= None`
Optional frontend js method to run before running 'fn'. Input arguments for js
method are values of 'inputs', return should be a list of values for output
components.
concurrency_limit: int | None | Literal['default']
default `= "default"`
If set, this is the maximum number of this event that can be running
simultaneously. Can be set to None to mean no concurrency_limit (any number of
this event can be running simultaneously). Set to "default" to use the default
concurrency limit (defined by the `default_concurrency_limit` parameter in
`Blocks.queue()`, which itself is 1 by default).
concurrency_id: str | None
default `= None`
If set, this is the id of the concurrency group. Events with the same
concurrency_id will be limited by the lowest set concurrency_limit.
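As a hedged illustration of `cancels` and `concurrency_limit` together (a sketch, not from the original page; `slow_job` and the button labels are invented):
```python
import time

import gradio as gr

def slow_job():
    time.sleep(10)
    return "done"

with gr.Blocks() as demo:
    start = gr.Button("Start")
    stop = gr.Button("Stop")
    out = gr.Textbox()
    # at most two instances of this event may run at the same time
    click_event = gr.on(triggers=[start.click], fn=slow_job,
                        inputs=[], outputs=out, concurrency_limit=2)
    # a fn-less event whose only purpose is to cancel the pending click_event
    stop.click(fn=None, inputs=None, outputs=None, cancels=[click_event])

demo.launch()
```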
|
Initialization
|
https://gradio.app/docs/gradio/on
|
Gradio - On Docs
|
time_limit: int | None
default `= None`
The time limit for the function to run. Parameter only used for the
`.stream()` event.
stream_every: float
default `= 0.5`
The latency (in seconds) at which stream chunks are sent to the backend.
Defaults to 0.5 seconds. Parameter only used for the `.stream()` event.
key: int | str | tuple[int | str, ...] | None
default `= None`
A unique key for this event listener to be used in @gr.render(). If set, this
value identifies an event as identical across re-renders when the key is
identical.
validator: Callable | None
default `= None`
Optional validation function to run before the main function. If provided,
this function will be executed first with queue=False, and only if it
completes successfully will the main function be called. The validator
receives the same inputs as the main function and should return a
`gr.validate()` for each input value.
|
Initialization
|
https://gradio.app/docs/gradio/on
|
Gradio - On Docs
|
Button that triggers a Spaces Duplication, when the demo is on Hugging Face
Spaces. Does nothing locally.
|
Description
|
https://gradio.app/docs/gradio/duplicatebutton
|
Gradio - Duplicatebutton Docs
|
**As input component** : (Rarely used) the `str` corresponding to the
button label when the button is clicked
Your function should accept one of these types:
```python
def predict(
    value: str | None
)
    ...
```
**As output component** : string corresponding to the button label
Your function should return one of these types:
```python
def predict(···) -> str | None
    ...
    return value
```
|
Behavior
|
https://gradio.app/docs/gradio/duplicatebutton
|
Gradio - Duplicatebutton Docs
|
Parameters ▼
value: str
default `= "Duplicate Space"`
default text for the button to display. If a function is provided, the
function will be called each time the app loads to set the initial value of
this component.
every: Timer | float | None
default `= None`
continuously calls `value` to recalculate it if `value` is a function (has no
effect otherwise). Can provide a Timer whose tick resets `value`, or a float
that provides the regular interval for the reset Timer.
inputs: Component | list[Component] | set[Component] | None
default `= None`
components that are used as inputs to calculate `value` if `value` is a
function (has no effect otherwise). `value` is recalculated any time the
inputs change.
variant: Literal['primary', 'secondary', 'stop', 'huggingface']
default `= "huggingface"`
sets the background and text color of the button. Use 'primary' for main call-
to-action buttons, 'secondary' for a more subdued style, 'stop' for a stop
button, 'huggingface' for a black background with white text, consistent with
Hugging Face's button styles.
size: Literal['sm', 'md', 'lg']
default `= "sm"`
size of the button. Can be "sm", "md", or "lg".
icon: str | Path | None
default `= None`
URL or path to the icon file to display within the button. If None, no icon
will be displayed.
link: str | None
default `= None`
URL to open when the button is clicked. If None, no link will be used.
link_target: Literal['_self', '_blank', '_parent', '_top']
default `= "_self"`
visible: bool | Literal['hidden']
default `= True`
If False, component will be hidden. If "hidden", component will be visually
hidden and not take up space in the layout but still exist in the DOM
interactive: bool
default `= True`
if False, the Button will be in a disabled state.
elem_id: str | None
default `= None`
an optional string that is assigned as the id of this component in the HTML
DOM. Can be used for targeting CSS styles.
|
Initialization
|
https://gradio.app/docs/gradio/duplicatebutton
|
Gradio - Duplicatebutton Docs
|
elem_classes: list[str] | str | None
default `= None`
an optional list of strings that are assigned as the classes of this component
in the HTML DOM. Can be used for targeting CSS styles.
render: bool
default `= True`
if False, component will not be rendered in the Blocks context. Should
be used if the intention is to assign event listeners now but render the
component later.
key: int | str | tuple[int | str, ...] | None
default `= None`
in a gr.render, Components with the same key across re-renders are treated as
the same component, not a new component. Properties set in 'preserved_by_key'
are not reset across a re-render.
preserved_by_key: list[str] | str | None
default `= "value"`
A list of parameters from this component's constructor. Inside a gr.render()
function, if a component is re-rendered with the same key, these (and only
these) parameters will be preserved in the UI (if they have been changed by
the user or an event listener) instead of re-rendered based on the values
provided to the constructor.
scale: int | None
default `= 0`
relative size compared to adjacent Components. For example if Components A and
B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide
as B. Should be an integer. scale applies in Rows, and to top-level Components
in Blocks where fill_height=True.
min_width: int | None
default `= None`
minimum pixel width, will wrap if not sufficient screen space to satisfy this
value. If a certain scale value results in this Component being narrower than
min_width, the min_width parameter will be respected first.
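Pulling a few of these constructor parameters together, a minimal sketch (the surrounding Markdown text is invented; `variant="huggingface"` and `size="sm"` are the documented defaults):
```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("Like this demo? Run your own copy:")
    # black background with white text, consistent with Hugging Face's styles
    gr.DuplicateButton(value="Duplicate Space", variant="huggingface", size="sm")

demo.launch()
```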
|
Initialization
|
https://gradio.app/docs/gradio/duplicatebutton
|
Gradio - Duplicatebutton Docs
|
Class| Interface String Shortcut| Initialization
---|---|---
`gradio.DuplicateButton`| "duplicatebutton"| Uses default values
|
Shortcuts
|
https://gradio.app/docs/gradio/duplicatebutton
|
Gradio - Duplicatebutton Docs
|
Description
Event listeners allow you to respond to user interactions with the UI
components you've defined in a Gradio Blocks app. When a user interacts with
an element, such as changing a slider value or uploading an image, a function
is called.
Supported Event Listeners
The DuplicateButton component supports the following event listeners. Each
event listener takes the same parameters, which are listed in the Event
Parameters table below.
Listener| Description
---|---
`DuplicateButton.click(fn, ···)`| Triggered when the Button is clicked.
Event Parameters
Parameters ▼
fn: Callable | None | Literal['decorator']
default `= "decorator"`
the function to call when this event is triggered. Often a machine learning
model's prediction function. Each parameter of the function corresponds to one
input component, and the function should return a single value or a tuple of
values, with each element in the tuple corresponding to one output component.
inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None
default `= None`
List of gradio.components to use as inputs. If the function takes no inputs,
this should be an empty list.
outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None
default `= None`
List of gradio.components to use as outputs. If the function returns no
outputs, this should be an empty list.
api_name: str | None
default `= None`
defines how the endpoint appears in the API docs. Can be a string or None. If
set to a string, the endpoint will be exposed in the API docs with the given
name. If None (default), the name of the function will be used as the API
endpoint.
api_description: str | None | Literal[False]
default `= None`
Description of the API endpoint. Can be a string, None, or False. If set to a
string, the endpoint will be exposed in the API docs with the given
description. If None, the function's docstring will be used as the API
endpoint description. If False, then no description will be displayed in the
API docs.
|
Event Listeners
|
https://gradio.app/docs/gradio/duplicatebutton
|
Gradio - Duplicatebutton Docs
|
scroll_to_output: bool
default `= False`
If True, will scroll to output component on completion
show_progress: Literal['full', 'minimal', 'hidden']
default `= "full"`
how to show the progress animation while event is running: "full" shows a
spinner which covers the output component area as well as a runtime display in
the upper right corner, "minimal" only shows the runtime display, "hidden"
shows no progress animation at all.
show_progress_on: Component | list[Component] | None
default `= None`
Component or list of components to show the progress animation on. If None,
will show the progress animation on all of the output components.
queue: bool
default `= True`
If True, will place the request on the queue, if the queue has been enabled.
If False, will not put this event on the queue, even if the queue has been
enabled. If None, will use the queue setting of the gradio app.
batch: bool
default `= False`
If True, then the function should process a batch of inputs, meaning that it
should accept a list of input values for each parameter. The lists should be
of equal length (and be up to length `max_batch_size`). The function is then
*required* to return a tuple of lists (even if there is only 1 output
component), with each list in the tuple corresponding to one output component.
max_batch_size: int
default `= 4`
Maximum number of inputs to batch together if this is called from the queue
(only relevant if batch=True)
preprocess: bool
default `= True`
If False, will not run preprocessing of component data before running 'fn'
(e.g. leaving it as a base64 string if this method is called with the `Image`
component).
|
Event Listeners
|
https://gradio.app/docs/gradio/duplicatebutton
|
Gradio - Duplicatebutton Docs
|
postprocess: bool
default `= True`
If False, will not run postprocessing of component data before returning 'fn'
output to the browser.
cancels: dict[str, Any] | list[dict[str, Any]] | None
default `= None`
A list of other events to cancel when this listener is triggered. For example,
setting cancels=[click_event] will cancel the click_event, where click_event
is the return value of another component's .click method. Functions that have
not yet run (or generators that are iterating) will be cancelled, but
functions that are currently running will be allowed to finish.
trigger_mode: Literal['once', 'multiple', 'always_last'] | None
default `= None`
If "once" (default for all events except `.change()`) would not allow any
submissions while an event is pending. If set to "multiple", unlimited
submissions are allowed while pending, and "always_last" (default for
`.change()` and `.key_up()` events) would allow a second submission after the
pending event is complete.
js: str | Literal[True] | None
default `= None`
Optional frontend js method to run before running 'fn'. Input arguments for js
method are values of 'inputs' and 'outputs', return should be a list of values
for output components.
concurrency_limit: int | None | Literal['default']
default `= "default"`
If set, this is the maximum number of this event that can be running
simultaneously. Can be set to None to mean no concurrency_limit (any number of
this event can be running simultaneously). Set to "default" to use the default
concurrency limit (defined by the `default_concurrency_limit` parameter in
`Blocks.queue()`, which itself is 1 by default).
|
Event Listeners
|
https://gradio.app/docs/gradio/duplicatebutton
|
Gradio - Duplicatebutton Docs
|
concurrency_id: str | None
default `= None`
If set, this is the id of the concurrency group. Events with the same
concurrency_id will be limited by the lowest set concurrency_limit.
api_visibility: Literal['public', 'private', 'undocumented']
default `= "public"`
controls the visibility and accessibility of this endpoint. Can be "public"
(shown in API docs and callable by clients), "private" (hidden from API docs
and not callable by clients), or "undocumented" (hidden from API docs but
callable by clients and via gr.load). If fn is None, api_visibility will
automatically be set to "private".
time_limit: int | None
default `= None`
The time limit for the function to run. Parameter only used for the
`.stream()` event.
stream_every: float
default `= 0.5`
The latency (in seconds) at which stream chunks are sent to the backend.
Defaults to 0.5 seconds. Parameter only used for the `.stream()` event.
key: int | str | tuple[int | str, ...] | None
default `= None`
A unique key for this event listener to be used in @gr.render(). If set, this
value identifies an event as identical across re-renders when the key is
identical.
validator: Callable | None
default `= None`
Optional validation function to run before the main function. If provided,
this function will be executed first with queue=False, and only if it
completes successfully will the main function be called. The validator
receives the same inputs as the main function and should return a
`gr.validate()` for each input value.
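A minimal sketch of wiring up the `.click` listener described above (the `on_duplicate` function and the new label are invented; per the Behavior section, the input is the button's label and the output sets a new label):
```python
import gradio as gr

def on_duplicate(label):
    # (rarely used) receives the current button label as a str
    return "Duplicating..."

with gr.Blocks() as demo:
    btn = gr.DuplicateButton()
    btn.click(fn=on_duplicate, inputs=btn, outputs=btn, show_progress="hidden")

demo.launch()
```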
|
Event Listeners
|
https://gradio.app/docs/gradio/duplicatebutton
|
Gradio - Duplicatebutton Docs
|
**Prerequisite**: Gradio requires [Python 3.10 or higher](https://www.python.org/downloads/).
We recommend installing Gradio using `pip`, which is included by default in Python. Run this in your terminal or command prompt:
```bash
pip install --upgrade gradio
```
Tip: It is best to install Gradio in a virtual environment. Detailed installation instructions for all common operating systems <a href="https://www.gradio.app/main/guides/installing-gradio-in-a-virtual-environment">are provided here</a>.
|
Installation
|
https://gradio.app/guides/quickstart
|
Getting Started - Quickstart Guide
|
You can run Gradio in your favorite code editor, Jupyter notebook, Google Colab, or anywhere else you write Python. Let's write your first Gradio app:
$code_hello_world_4
Tip: We shorten the imported name from <code>gradio</code> to <code>gr</code>. This is a widely adopted convention for better readability of code.
Now, run your code. If you've written the Python code in a file named `app.py`, then you would run `python app.py` from the terminal.
The demo below will open in a browser on [http://localhost:7860](http://localhost:7860) if running from a file. If you are running within a notebook, the demo will appear embedded within the notebook.
$demo_hello_world_4
Type your name in the textbox on the left, drag the slider, and then press the Submit button. You should see a friendly greeting on the right.
Tip: When developing locally, you can run your Gradio app in <strong>hot reload mode</strong>, which automatically reloads the Gradio app whenever you make changes to the file. To do this, simply type in <code>gradio</code> before the name of the file instead of <code>python</code>. In the example above, you would type: `gradio app.py` in your terminal. You can also enable <strong>vibe mode</strong> by using the <code>--vibe</code> flag, e.g. <code>gradio --vibe app.py</code>, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. Learn more in the <a href="https://www.gradio.app/guides/developing-faster-with-reload-mode">Hot Reloading Guide</a>.
**Understanding the `Interface` Class**
You'll notice that in order to make your first demo, you created an instance of the `gr.Interface` class. The `Interface` class is designed to create demos for machine learning models which accept one or more inputs, and return one or more outputs.
The `Interface` class has three core arguments:
- `fn`: the function to wrap a user interface (UI) around
- `inputs`: the Gradio component(s) to use for the input. The number of components should match the number of arguments in your function.
|
Building Your First Demo
|
https://gradio.app/guides/quickstart
|
Getting Started - Quickstart Guide
|
- `outputs`: the Gradio component(s) to use for the output. The number of components should match the number of return values from your function.
The `fn` argument is very flexible -- you can pass *any* Python function that you want to wrap with a UI. In the example above, we saw a relatively simple function, but the function could be anything from a music generator to a tax calculator to the prediction function of a pretrained machine learning model.
The `inputs` and `outputs` arguments take one or more Gradio components. As we'll see, Gradio includes more than [30 built-in components](https://www.gradio.app/docs/gradio/introduction) (such as the `gr.Textbox()`, `gr.Image()`, and `gr.HTML()` components) that are designed for machine learning applications.
Tip: For the `inputs` and `outputs` arguments, you can pass in the name of these components as a string (`"textbox"`) or an instance of the class (`gr.Textbox()`).
If your function accepts more than one argument, as is the case above, pass a list of input components to `inputs`, with each input component corresponding to one of the arguments of the function, in order. The same holds true if your function returns more than one value: simply pass in a list of components to `outputs`. This flexibility makes the `Interface` class a very powerful way to create demos.
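For instance, a sketch in the spirit of the demo above (the `greet` function and component choices are illustrative, not the exact `$code_hello_world_4` demo):
```python
import gradio as gr

def greet(name, intensity):
    return "Hello, " + name + "!" * int(intensity)

demo = gr.Interface(
    fn=greet,
    inputs=["textbox", gr.Slider(minimum=1, maximum=10, value=2)],  # one component per function argument
    outputs=["textbox"],  # one component per return value
)
demo.launch()
```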
We'll dive deeper into the `gr.Interface` on our series on [building Interfaces](https://www.gradio.app/main/guides/the-interface-class).
|
Building Your First Demo
|
https://gradio.app/guides/quickstart
|
Getting Started - Quickstart Guide
|
What good is a beautiful demo if you can't share it? Gradio lets you easily share a machine learning demo without having to worry about the hassle of hosting on a web server. Simply set `share=True` in `launch()`, and a publicly accessible URL will be created for your demo. Let's revisit our example demo, but change the last line as follows:
```python
import gradio as gr
def greet(name):
    return "Hello " + name + "!"

demo = gr.Interface(fn=greet, inputs="textbox", outputs="textbox")
demo.launch(share=True)  # Share your demo with just 1 extra parameter 🚀
```
When you run this code, a public URL will be generated for your demo in a matter of seconds, something like:
👉 `https://a23dsf231adb.gradio.live`
Now, anyone around the world can try your Gradio demo from their browser, while the machine learning model and all computation continues to run locally on your computer.
To learn more about sharing your demo, read our dedicated guide on [sharing your Gradio application](https://www.gradio.app/guides/sharing-your-app).
|
Sharing Your Demo
|
https://gradio.app/guides/quickstart
|
Getting Started - Quickstart Guide
|
So far, we've been discussing the `Interface` class, which is a high-level class that lets you build demos quickly with Gradio. But what else does Gradio include?
Custom Demos with `gr.Blocks`
Gradio offers a low-level approach for designing web apps with more customizable layouts and data flows with the `gr.Blocks` class. Blocks supports things like controlling where components appear on the page, handling multiple data flows and more complex interactions (e.g. outputs can serve as inputs to other functions), and updating properties/visibility of components based on user interaction β still all in Python.
You can build very custom and complex applications using `gr.Blocks()`. For example, the popular image generation [Automatic1111 Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is built using Gradio Blocks. We dive deeper into the `gr.Blocks` on our series on [building with Blocks](https://www.gradio.app/guides/blocks-and-event-listeners).
Chatbots with `gr.ChatInterface`
Gradio includes another high-level class, `gr.ChatInterface`, which is specifically designed to create Chatbot UIs. Similar to `Interface`, you supply a function and Gradio creates a fully working Chatbot UI. If you're interested in creating a chatbot, you can jump straight to [our dedicated guide on `gr.ChatInterface`](https://www.gradio.app/guides/creating-a-chatbot-fast).
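As a taste of how little code a chatbot takes, a hedged sketch (the `echo` function is invented; a real app would call a model):
```python
import gradio as gr

def echo(message, history):
    # `message` is the user's latest input; `history` is the conversation so far
    return "You said: " + message

demo = gr.ChatInterface(fn=echo, type="messages")
demo.launch()
```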
The Gradio Python & JavaScript Ecosystem
That's the gist of the core `gradio` Python library, but Gradio is actually so much more! It's an entire ecosystem of Python and JavaScript libraries that let you build machine learning applications, or query them programmatically, in Python or JavaScript. Here are other related parts of the Gradio ecosystem:
* [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) (`gradio_client`): query any Gradio app programmatically in Python.
* [Gradio JavaScript Client](https://www.gradio.app/guides/getting-started-with-the-js-client) (`@gradio/client`): query any Gradio app programmatically in JavaScript.
|
An Overview of Gradio
|
https://gradio.app/guides/quickstart
|
Getting Started - Quickstart Guide
|
* [Hugging Face Spaces](https://huggingface.co/spaces): the most popular place to host Gradio applications β for free!
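For example, a minimal sketch of the Python client (the Space id and endpoint name are placeholders, not a real deployment):
```python
from gradio_client import Client

client = Client("user/my-gradio-space")  # hypothetical Space id
result = client.predict("Hello!", api_name="/predict")
print(result)
```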
|
An Overview of Gradio
|
https://gradio.app/guides/quickstart
|
Getting Started - Quickstart Guide
|
Keep learning about Gradio sequentially using the Gradio Guides, which include explanations as well as example code and embedded interactive demos. Next up: [let's dive deeper into the Interface class](https://www.gradio.app/guides/the-interface-class).
Or, if you already know the basics and are looking for something specific, you can search the more [technical API documentation](https://www.gradio.app/docs/).
|
What's Next?
|
https://gradio.app/guides/quickstart
|
Getting Started - Quickstart Guide
|
You can also build Gradio applications without writing any code. Simply type `gradio sketch` into your terminal to open up an editor that lets you define and modify Gradio components, adjust their layouts, add events, all through a web editor. Or [use this hosted version of Gradio Sketch, running on Hugging Face Spaces](https://huggingface.co/spaces/aliabid94/Sketch).
|
Gradio Sketch
|
https://gradio.app/guides/quickstart
|
Getting Started - Quickstart Guide
|
Start by installing all the dependencies. Add the following lines to a `requirements.txt` file and run `pip install -r requirements.txt`:
```bash
opencv-python
fastrtc
onnxruntime-gpu
```
We'll use the ONNX runtime to speed up YOLOv10 inference. This guide assumes you have access to a GPU. If you don't, change `onnxruntime-gpu` to `onnxruntime`. Without a GPU, the model will run slower, resulting in a laggy demo.
We'll use OpenCV for image manipulation and the [WebRTC](https://webrtc.org/) protocol to achieve near-zero latency.
**Note**: If you want to deploy this app on any cloud provider, you'll need to use your Hugging Face token to connect to a TURN server. Learn more in this [guide](https://fastrtc.org/deployment/). If you're not familiar with TURN servers, consult this [guide](https://www.twilio.com/docs/stun-turn/faq#faq-what-is-nat).
|
Setting up
|
https://gradio.app/guides/object-detection-from-webcam-with-webrtc
|
Streaming - Object Detection From Webcam With Webrtc Guide
|
We'll download the YOLOv10 model from the Hugging Face hub and instantiate a custom inference class to use this model.
The implementation of the inference class isn't covered in this guide, but you can find the source code [here](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n/blob/main/inference.py#L9) if you're interested. This implementation borrows heavily from this [github repository](https://github.com/ibaiGorordo/ONNX-YOLOv8-Object-Detection).
We're using the `yolov10-n` variant because it has the lowest latency. See the [Performance](https://github.com/THU-MIG/yolov10?tab=readme-ov-file#performance) section of the README in the YOLOv10 GitHub repository.
```python
import cv2
from huggingface_hub import hf_hub_download

from inference import YOLOv10

model_file = hf_hub_download(
    repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
)

model = YOLOv10(model_file)

def detection(image, conf_threshold=0.3):
    image = cv2.resize(image, (model.input_width, model.input_height))
    new_image = model.detect_objects(image, conf_threshold)
    return new_image
```
Our inference function, `detection`, accepts a numpy array from the webcam and a desired confidence threshold. Object detection models like YOLO identify many objects and assign a confidence score to each. The lower the confidence, the higher the chance of a false positive. We'll let users adjust the confidence threshold.
The function returns a numpy array corresponding to the same input image with all detected objects in bounding boxes.
|
The Inference Function
|
https://gradio.app/guides/object-detection-from-webcam-with-webrtc
|
Streaming - Object Detection From Webcam With Webrtc Guide
|
The Gradio demo is straightforward, but we'll implement a few specific features:
1. Use the `WebRTC` custom component to ensure input and output are sent to/from the server with WebRTC.
2. The [WebRTC](https://github.com/freddyaboulton/gradio-webrtc) component will serve as both an input and output component.
3. Utilize the `time_limit` parameter of the `stream` event. This parameter sets a processing time for each user's stream. In a multi-user setting, such as on Spaces, we'll stop processing the current user's stream after this period and move on to the next.
We'll also apply custom CSS to center the webcam and slider on the page.
```python
import gradio as gr
from fastrtc import WebRTC

css = """.my-group {max-width: 600px !important; max-height: 600px !important;}
.my-column {display: flex !important; justify-content: center !important; align-items: center !important;}"""

with gr.Blocks(css=css) as demo:
    gr.HTML(
        """
        <h1 style='text-align: center'>
        YOLOv10 Webcam Stream (Powered by WebRTC ⚡️)
        </h1>
        """
    )
    with gr.Column(elem_classes=["my-column"]):
        with gr.Group(elem_classes=["my-group"]):
            image = WebRTC(label="Stream", rtc_configuration=rtc_configuration)
            conf_threshold = gr.Slider(
                label="Confidence Threshold",
                minimum=0.0,
                maximum=1.0,
                step=0.05,
                value=0.30,
            )

        image.stream(
            fn=detection, inputs=[image, conf_threshold], outputs=[image], time_limit=10
        )

if __name__ == "__main__":
    demo.launch()
```
|
The Gradio Demo
|
https://gradio.app/guides/object-detection-from-webcam-with-webrtc
|
Streaming - Object Detection From Webcam With Webrtc Guide
|
Our app is hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n).
You can use this app as a starting point to build real-time image applications with Gradio. Don't hesitate to open issues in the space or in the [FastRTC GitHub repo](https://github.com/gradio-app/fastrtc) if you have any questions or encounter problems.
|
Conclusion
|
https://gradio.app/guides/object-detection-from-webcam-with-webrtc
|
Streaming - Object Detection From Webcam With Webrtc Guide
|
First, we'll install the following requirements in our system:
```
opencv-python
torch
transformers>=4.43.0
spaces
```
Then, we'll download the model from the Hugging Face Hub:
```python
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor
image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd").to("cuda")
```
We're moving the model to the GPU. We'll be deploying our model to Hugging Face Spaces and running the inference in the [free ZeroGPU cluster](https://huggingface.co/zero-gpu-explorers).
|
Setting up the Model
|
https://gradio.app/guides/object-detection-from-video
|
Streaming - Object Detection From Video Guide
|
Our inference function will accept a video and a desired confidence threshold.
Object detection models identify many objects and assign a confidence score to each object. The lower the confidence, the higher the chance of a false positive. So we will let our users set the confidence threshold.
Our function will iterate over the frames in the video and run the RT-DETR model over each frame.
We will then draw the bounding boxes for each detected object in the frame and save the frame to a new output video.
The function will yield each output video in chunks of two seconds.
In order to keep inference times as low as possible on ZeroGPU (there is a time-based quota),
we will halve the original frames-per-second in the output video and resize the input frames to be half the original
size before running the model.
The code for the inference function is below - we'll go over it piece by piece.
```python
import spaces
import cv2
from PIL import Image
import torch
import time
import numpy as np
import uuid

from draw_boxes import draw_bounding_boxes

SUBSAMPLE = 2

@spaces.GPU
def stream_object_detection(video, conf_threshold):
    cap = cv2.VideoCapture(video)

    # This means we will output mp4 videos
    video_codec = cv2.VideoWriter_fourcc(*"mp4v")  # type: ignore
    fps = int(cap.get(cv2.CAP_PROP_FPS))

    desired_fps = fps // SUBSAMPLE
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) // 2
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) // 2

    iterating, frame = cap.read()
    n_frames = 0

    # Use UUID to create a unique video file
    output_video_name = f"output_{uuid.uuid4()}.mp4"

    # Output Video
    output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))  # type: ignore
    batch = []

    while iterating:
        frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        if n_frames % SUBSAMPLE == 0:
            batch.append(frame)
```
|
The Inference Function
|
https://gradio.app/guides/object-detection-from-video
|
Streaming - Object Detection From Video Guide
|
```python
        if len(batch) == 2 * desired_fps:
            inputs = image_processor(images=batch, return_tensors="pt").to("cuda")

            with torch.no_grad():
                outputs = model(**inputs)
            boxes = image_processor.post_process_object_detection(
                outputs,
                target_sizes=torch.tensor([(height, width)] * len(batch)),
                threshold=conf_threshold)

            for i, (array, box) in enumerate(zip(batch, boxes)):
                pil_image = draw_bounding_boxes(Image.fromarray(array), box, model, conf_threshold)
                frame = np.array(pil_image)
                # Convert RGB to BGR
                frame = frame[:, :, ::-1].copy()
                output_video.write(frame)

            batch = []
            output_video.release()
            yield output_video_name
            output_video_name = f"output_{uuid.uuid4()}.mp4"
            output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))  # type: ignore

        iterating, frame = cap.read()
        n_frames += 1
```
1. **Reading from the Video**
One of the industry standards for creating videos in Python is OpenCV, so we will use it in this app.
The `cap` variable is how we will read from the input video. Whenever we call `cap.read()`, we are reading the next frame in the video.
In order to stream video in Gradio, we need to yield a different video file for each "chunk" of the output video.
We create the next video file to write to with the `output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))` line. The `video_codec` is how we specify the type of video file. Only "mp4" and "ts" files are supported for video streaming at the moment.
|
The Inference Function
|
https://gradio.app/guides/object-detection-from-video
|
Streaming - Object Detection From Video Guide
|
2. **The Inference Loop**
For each frame in the video, we will resize it to be half the size. OpenCV reads files in `BGR` format, so we will convert to the `RGB` format expected by transformers. That's what the first two lines of the while loop are doing.
We take every other frame and add it to a `batch` list so that the output video is half the original FPS. When the batch covers two seconds of video, we will run the model. The two-second threshold was chosen to keep the processing time of each batch small enough that video is smoothly displayed in the server while not requiring too many separate forward passes. In order for video streaming to work properly in Gradio, each batch should cover at least one second of video.
We run the forward pass of the model and then use the `post_process_object_detection` method of the model to scale the detected bounding boxes to the size of the input frame.
We make use of a custom function to draw the bounding boxes (source [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection/blob/main/draw_boxes.py#L14)). We then have to convert from `RGB` to `BGR` before writing back to the output video.
Once we have finished processing the batch, we create a new output video file for the next batch.
|
The Inference Function
|
https://gradio.app/guides/object-detection-from-video
|
Streaming - Object Detection From Video Guide
|
The UI code is pretty similar to other kinds of Gradio apps.
We'll use a standard two-column layout so that users can see the input and output videos side by side.
In order for streaming to work, we have to set `streaming=True` in the output video. Setting the video
to autoplay is not necessary but it's a better experience for users.
```python
import gradio as gr

with gr.Blocks() as app:
    gr.HTML(
        """
        <h1 style='text-align: center'>
        Video Object Detection with <a href='https://huggingface.co/PekingU/rtdetr_r101vd_coco_o365' target='_blank'>RT-DETR</a>
        </h1>
        """)
    with gr.Row():
        with gr.Column():
            video = gr.Video(label="Video Source")
            conf_threshold = gr.Slider(
                label="Confidence Threshold",
                minimum=0.0,
                maximum=1.0,
                step=0.05,
                value=0.30,
            )
        with gr.Column():
            output_video = gr.Video(label="Processed Video", streaming=True, autoplay=True)

    video.upload(
        fn=stream_object_detection,
        inputs=[video, conf_threshold],
        outputs=[output_video],
    )
```
|
The Gradio Demo
|
https://gradio.app/guides/object-detection-from-video
|
Streaming - Object Detection From Video Guide
|
You can check out our demo hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection).
It is also embedded on this page below:
$demo_rt-detr-object-detection
|
Conclusion
|
https://gradio.app/guides/object-detection-from-video
|
Streaming - Object Detection From Video Guide
|
Automatic speech recognition (ASR), the conversion of spoken speech to text, is a very important and thriving area of machine learning. ASR algorithms run on practically every smartphone, and are becoming increasingly embedded in professional workflows, such as digital assistants for nurses and doctors. Because ASR algorithms are designed to be used directly by customers and end users, it is important to validate that they are behaving as expected when confronted with a wide variety of speech patterns (different accents, pitches, and background audio conditions).
Using `gradio`, you can easily build a demo of your ASR model and share that with a testing team, or test it yourself by speaking through the microphone on your device.
This tutorial will show how to take a pretrained speech-to-text model and deploy it with a Gradio interface. We will start with a **_full-context_** model, in which the user speaks the entire audio before the prediction runs. Then we will adapt the demo to make it **_streaming_**, meaning that the audio model will convert speech as you speak.
Prerequisites
Make sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained speech recognition model. In this tutorial, we will build demos from 2 ASR libraries:
- Transformers (for this, `pip install torch transformers torchaudio`)
Make sure you have at least one of these installed so that you can follow along the tutorial. You will also need `ffmpeg` [installed on your system](https://www.ffmpeg.org/download.html), if you do not already have it, to process files from the microphone.
Here's how to build a real time speech recognition (ASR) app:
1. [Set up the Transformers ASR Model](#1-set-up-the-transformers-asr-model)
2. [Create a Full-Context ASR Demo with Transformers](#2-create-a-full-context-asr-demo-with-transformers)
3. [Create a Streaming ASR Demo with Transformers](#3-create-a-streaming-asr-demo-with-transformers)
|
Introduction
|
https://gradio.app/guides/real-time-speech-recognition
|
Streaming - Real Time Speech Recognition Guide
|
First, you will need to have an ASR model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will start by using a pretrained ASR model from the Hugging Face Hub, `whisper`.
Here is the code to load `whisper` from Hugging Face `transformers`.
```python
from transformers import pipeline
p = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")
```
That's it!
|
1. Set up the Transformers ASR Model
|
https://gradio.app/guides/real-time-speech-recognition
|
Streaming - Real Time Speech Recognition Guide
|
We will start by creating a _full-context_ ASR demo, in which the user speaks the full audio before using the ASR model to run inference. This is very easy with Gradio -- we simply create a function around the `pipeline` object above.
We will use `gradio`'s built-in `Audio` component, configured to take input from the user's microphone and return a filepath for the recorded audio. The output component will be a plain `Textbox`.
$code_asr
$demo_asr
The `transcribe` function takes a single parameter, `audio`, which is a numpy array of the audio the user recorded. The `pipeline` object expects this in float32 format, so we convert it first to float32, and then extract the transcribed text.
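Since `$code_asr` is rendered as a placeholder here, a hedged reconstruction of what `transcribe` might look like, assuming the `Audio` component is configured with `type="numpy"` so the function receives a `(sample_rate, data)` tuple, and with `p` being the pipeline loaded in step 1:
```python
import numpy as np

def transcribe(audio):
    sr, y = audio
    # the pipeline expects float32 audio normalized to [-1, 1]
    y = y.astype(np.float32)
    y /= np.max(np.abs(y))
    return p({"sampling_rate": sr, "raw": y})["text"]
```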
|
2. Create a Full-Context ASR Demo with Transformers
|
https://gradio.app/guides/real-time-speech-recognition
|
Streaming - Real Time Speech Recognition Guide
|
To make this a *streaming* demo, we need to make these changes:
1. Set `streaming=True` in the `Audio` component
2. Set `live=True` in the `Interface`
3. Add a `state` to the interface to store the recorded audio of a user
Tip: You can also set `time_limit` and `stream_every` parameters in the interface. The `time_limit` caps the amount of time each user's stream can take. The default is 30 seconds so users won't be able to stream audio for more than 30 seconds. The `stream_every` parameter controls how frequently data is sent to your function. By default it is 0.5 seconds.
Take a look below.
$code_stream_asr
Notice that we now have a state variable because we need to track all the audio history. `transcribe` gets called whenever there is a new small chunk of audio, but we also need to keep track of all the audio spoken so far in the state. As the interface runs, the `transcribe` function gets called, with a record of all the previously spoken audio in the `stream` and the new chunk of audio as `new_chunk`. We return the new full audio to be stored back in its current state, and we also return the transcription. Here, we naively append the audio together and call the `transcriber` object on the entire audio. You can imagine more efficient ways of handling this, such as re-processing only the last 5 seconds of audio whenever a new chunk of audio is received.
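A hedged sketch of the streaming `transcribe` described above (same `type="numpy"` assumption; `p` is the pipeline from step 1):
```python
import numpy as np

def transcribe(stream, new_chunk):
    sr, y = new_chunk
    y = y.astype(np.float32)
    y /= np.max(np.abs(y))
    # append the new chunk to the audio history kept in the state
    stream = np.concatenate([stream, y]) if stream is not None else y
    # naively re-transcribe all audio recorded so far
    return stream, p({"sampling_rate": sr, "raw": stream})["text"]
```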
$demo_stream_asr
Now the ASR model will run inference as you speak!
|
3. Create a Streaming ASR Demo with Transformers
|
https://gradio.app/guides/real-time-speech-recognition
|
Streaming - Real Time Speech Recognition Guide
|
Just like the classic Magic 8 Ball, a user should ask it a question orally and then wait for a response. Under the hood, we'll use Whisper to transcribe the audio and then use an LLM to generate a magic-8-ball-style answer. Finally, we'll use Parler TTS to read the response aloud.
|
The Overview
|
https://gradio.app/guides/streaming-ai-generated-audio
|
Streaming - Streaming Ai Generated Audio Guide
|
First let's define the UI and put placeholders for all the python logic.
```python
import gradio as gr

with gr.Blocks() as block:
    gr.HTML(
        f"""
        <h1 style='text-align: center;'> Magic 8 Ball 🎱 </h1>
        <h3 style='text-align: center;'> Ask a question and receive wisdom </h3>
        <p style='text-align: center;'> Powered by <a href="https://github.com/huggingface/parler-tts"> Parler-TTS</a>
        """
    )
    with gr.Group():
        with gr.Row():
            audio_out = gr.Audio(label="Spoken Answer", streaming=True, autoplay=True)
            answer = gr.Textbox(label="Answer")
            state = gr.State()
        with gr.Row():
            audio_in = gr.Audio(label="Speak your question", sources="microphone", type="filepath")

    audio_in.stop_recording(generate_response, audio_in, [state, answer, audio_out])\
        .then(fn=read_response, inputs=state, outputs=[answer, audio_out])

block.launch()
```
We're placing the output Audio and Textbox components and the input Audio component in separate rows. In order to stream the audio from the server, we'll set `streaming=True` in the output Audio component. We'll also set `autoplay=True` so that the audio plays as soon as it's ready.
We'll be using the Audio input component's `stop_recording` event to trigger our application's logic when a user stops recording from their microphone.
We're separating the logic into two parts. First, `generate_response` will take the recorded audio, transcribe it and generate a response with an LLM. We're going to store the response in a `gr.State` variable that then gets passed to the `read_response` function that generates the audio.
We're doing this in two parts because only `read_response` will require a GPU. Our app will run on Hugging Face's [ZeroGPU](https://huggingface.co/zero-gpu-explorers) which has time-based quotas. Since generating the response can be done with Hugging Face's Inference API, we shouldn't include that code in our GPU function as it will needlessly use our GPU quota.
|
The UI
|
https://gradio.app/guides/streaming-ai-generated-audio
|
Streaming - Streaming Ai Generated Audio Guide
|
As mentioned above, we'll use [Hugging Face's Inference API](https://huggingface.co/docs/huggingface_hub/guides/inference) to transcribe the audio and generate a response from an LLM. After instantiating the client, I use the `automatic_speech_recognition` method (this automatically uses Whisper running on Hugging Face's Inference Servers) to transcribe the audio. Then I pass the question to an LLM (Mistral-7B-Instruct) to generate a response. We are prompting the LLM to act like a magic 8 ball with the system message.
Our `generate_response` function will also send empty updates to the output textbox and audio components (returning `None`).
This is because I want the Gradio progress tracker to be displayed over the components but I don't want to display the answer until the audio is ready.
```python
from huggingface_hub import InferenceClient
client = InferenceClient(token=os.getenv("HF_TOKEN"))
def generate_response(audio):
gr.Info("Transcribing Audio", duration=5)
question = client.automatic_speech_recognition(audio).text
messages = [{"role": "system", "content": ("You are a magic 8 ball."
"Someone will present to you a situation or question and your job "
"is to answer with a cryptic adage or proverb such as "
"'curiosity killed the cat' or 'The early bird gets the worm'."
"Keep your answers short and do not include the phrase 'Magic 8 Ball' in your response. If the question does not make sense or is off-topic, say 'Foolish questions get foolish answers.'"
"For example, 'Magic 8 Ball, should I get a dog?', 'A dog is ready for you but are you ready for the dog?'")},
{"role": "user", "content": f"Magic 8 Ball please answer this question - {question}"}]
response = client.chat_completion(messages, max_tokens=64, seed=random.randint(1, 5000),
model="mistralai/Mistral-7B-Instruct-v0.3")
response = response.choices[0].message.content.replace("Magic 8 Ball", "").replace(":", "")
return response, None, None
```
Now that we have our text response, we'll read it aloud with Parler TTS. The `read_response` function will be a Python generator that yields the next chunk of audio as soon as it's ready.
We'll be using [Mini v0.1](https://huggingface.co/parler-tts/parler_tts_mini_v0.1) for the feature extraction but the [Jenny fine-tuned version](https://huggingface.co/parler-tts/parler-tts-mini-jenny-30H) for the voice, so that the voice is consistent across generations.
Streaming audio with transformers requires a custom Streamer class. You can see the implementation [here](https://huggingface.co/spaces/gradio/magic-8-ball/blob/main/streamer.py). Additionally, we'll convert the output to bytes so that it can be streamed faster from the backend.
```python
from streamer import ParlerTTSStreamer
from transformers import AutoTokenizer, AutoFeatureExtractor, set_seed
import numpy as np
import spaces
import torch
from threading import Thread
device = "cuda:0" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"
torch_dtype = torch.float16 if device != "cpu" else torch.float32
repo_id = "parler-tts/parler_tts_mini_v0.1"
jenny_repo_id = "ylacombe/parler-tts-mini-jenny-30H"
model = ParlerTTSForConditionalGeneration.from_pretrained(
jenny_repo_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True
).to(device)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
feature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)
sampling_rate = model.audio_encoder.config.sampling_rate
frame_rate = model.audio_encoder.config.frame_rate
@spaces.GPU
def read_response(answer):
play_steps_in_s = 2.0
play_steps = int(frame_rate * play_steps_in_s)
description = "Jenny speaks at an average pace with a calm delivery in a very confined sounding environment with clear audio quality."
description_tokens = tokenizer(description, return_tensors="pt").to(device)
streamer = ParlerTTSStreamer(model, device=device, play_steps=play_steps)
prompt = tokenizer(answer, return_tensors="pt").to(device)
generation_kwargs = dict(
input_ids=description_tokens.input_ids,
prompt_input_ids=prompt.input_ids,
streamer=streamer,
do_sample=True,
temperature=1.0,
min_new_tokens=10,
)
set_seed(42)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()
for new_audio in streamer:
print(f"Sample of length: {round(new_audio.shape[0] / sampling_rate, 2)} seconds")
yield answer, numpy_to_mp3(new_audio, sampling_rate=sampling_rate)
```
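The `numpy_to_mp3` helper used above isn't shown in the guide. A minimal sketch of such a helper, assuming `pydub` (with ffmpeg on the system) is available, could look like this:
```python
import io

import numpy as np
from pydub import AudioSegment

def numpy_to_mp3(audio_array: np.ndarray, sampling_rate: int) -> bytes:
    # Parler TTS emits float audio; convert to 16-bit PCM before encoding
    if np.issubdtype(audio_array.dtype, np.floating):
        peak = np.max(np.abs(audio_array)) or 1.0
        audio_array = (audio_array / peak * 32767).astype(np.int16)
    segment = AudioSegment(
        audio_array.tobytes(),
        frame_rate=sampling_rate,
        sample_width=audio_array.dtype.itemsize,
        channels=1,
    )
    buffer = io.BytesIO()
    segment.export(buffer, format="mp3")  # requires ffmpeg
    return buffer.getvalue()
```
MP3-encoded bytes keep each streamed chunk small, which is why the guide converts to bytes rather than yielding raw arrays.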
|
The Logic
|
https://gradio.app/guides/streaming-ai-generated-audio
|
Streaming - Streaming Ai Generated Audio Guide
|
You can see our final application [here](https://huggingface.co/spaces/gradio/magic-8-ball)!
|
Conclusion
|
https://gradio.app/guides/streaming-ai-generated-audio
|
Streaming - Streaming Ai Generated Audio Guide
|
Modern voice applications should feel natural and responsive, moving beyond the traditional "click-to-record" pattern. By combining Groq's fast inference capabilities with automatic speech detection, we can create a more intuitive interaction model where users can simply start talking whenever they want to engage with the AI.
> Credits: VAD and Gradio code inspired by [WillHeld's Diva-audio-chat](https://huggingface.co/spaces/WillHeld/diva-audio-chat/tree/main).
In this tutorial, you will learn how to create a multimodal Gradio and Groq app that has automatic speech detection. You can also watch the full video tutorial which includes a demo of the application:
<iframe width="560" height="315" src="https://www.youtube.com/embed/azXaioGdm2Q" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
|
Introduction
|
https://gradio.app/guides/automatic-voice-detection
|
Streaming - Automatic Voice Detection Guide
|
Many voice apps currently work by the user clicking record, speaking, then stopping the recording. While this can be a powerful demo, the most natural mode of interaction with voice requires the app to dynamically detect when the user is speaking, so they can talk back and forth without having to continually click a record button.
Creating a natural interaction with voice and text requires a dynamic and low-latency response. Thus, we need both automatic voice detection and fast inference. With @ricky0123/vad-web powering speech detection and Groq powering the LLM, both of these requirements are met. Groq provides a lightning fast response, and Gradio allows for easy creation of impressively functional apps.
This tutorial shows you how to build a calorie tracking app where you speak to an AI that automatically detects when you start and stop your response, and provides its own text response back to guide you with questions that allow it to give a calorie estimate of your last meal.
|
Background
|
https://gradio.app/guides/automatic-voice-detection
|
Streaming - Automatic Voice Detection Guide
|
- **Gradio**: Provides the web interface and audio handling capabilities
- **@ricky0123/vad-web**: Handles voice activity detection
- **Groq**: Powers fast LLM inference for natural conversations
- **Whisper**: Transcribes speech to text
Setting Up the Environment
First, let's install and import our essential libraries and set up a client for using the Groq API. Here's how to do it:
`requirements.txt`
```
gradio
groq
numpy
soundfile
librosa
spaces
xxhash
datasets
```
`app.py`
```python
import groq
import gradio as gr
import soundfile as sf
from dataclasses import dataclass, field
import os
# Initialize the Groq client securely
api_key = os.environ.get("GROQ_API_KEY")
if not api_key:
raise ValueError("Please set the GROQ_API_KEY environment variable.")
client = groq.Client(api_key=api_key)
```
Here, we're pulling in key libraries to interact with the Groq API, build a sleek UI with Gradio, and handle audio data. We read the Groq API key from an environment variable, a security best practice that avoids leaking the key in source code.
---
State Management for Seamless Conversations
We need a way to keep track of our conversation history, so the chatbot remembers past interactions, and manage other states like whether recording is currently active. To do this, let's create an `AppState` class:
```python
from typing import Any  # `Any` is used below but wasn't imported earlier

@dataclass
class AppState:
conversation: list = field(default_factory=list)
stopped: bool = False
model_outs: Any = None
```
Our `AppState` class is a handy tool for managing conversation history and tracking whether recording is on or off. Each instance will have its own fresh list of conversations, making sure chat history is isolated to each session.
---
Transcribing Audio with Whisper on Groq
Next, we'll create a function to transcribe the user's audio input into text using Whisper, a powerful transcription model hosted on Groq. This transcription will also help us determine whether there's meaningful speech in the input. Here's how:
```python
def transcribe_audio(client, file_name):
if file_name is None:
return None
try:
with open(file_name, "rb") as audio_file:
response = client.audio.transcriptions.with_raw_response.create(
model="whisper-large-v3-turbo",
file=("audio.wav", audio_file),
response_format="verbose_json",
)
completion = process_whisper_response(response.parse())
return completion
except Exception as e:
print(f"Error in transcription: {e}")
return f"Error in transcription: {str(e)}"
```
This function opens the audio file and sends it to Groq's Whisper model for transcription, requesting detailed JSON output. `verbose_json` is needed to get the information used to determine whether speech was present in the audio. We also handle any potential errors so our app doesn't fully crash if there's an issue with the API request.
```python
def process_whisper_response(completion):
"""
Process Whisper transcription response and return text or null based on no_speech_prob
Args:
completion: Whisper transcription response object
Returns:
str or None: Transcribed text if no_speech_prob <= 0.7, otherwise None
"""
if completion.segments and len(completion.segments) > 0:
no_speech_prob = completion.segments[0].get('no_speech_prob', 0)
print("No speech prob:", no_speech_prob)
if no_speech_prob > 0.7:
return None
return completion.text.strip()
return None
```
We also need to interpret the transcription response. The `process_whisper_response` function takes the resulting completion from Whisper and checks whether the audio was just background noise or contained actual speech. It applies a threshold of 0.7 to `no_speech_prob` and returns `None` if there was no speech. Otherwise, it returns the text transcript of the user's conversational response.
---
Adding Conversational Intelligence with LLM Integration
Our chatbot needs to provide intelligent, friendly responses that flow naturally. We'll use a Groq-hosted Llama 3.2 model for this:
```python
def generate_chat_completion(client, history):
messages = []
messages.append(
{
"role": "system",
"content": "In conversation with the user, ask questions to estimate and provide (1) total calories, (2) protein, carbs, and fat in grams, (3) fiber and sugar content. Only ask *one question at a time*. Be conversational and natural.",
}
)
for message in history:
messages.append(message)
try:
completion = client.chat.completions.create(
model="llama-3.2-11b-vision-preview",
messages=messages,
)
return completion.choices[0].message.content
except Exception as e:
return f"Error in generating chat completion: {str(e)}"
```
We're defining a system prompt to guide the chatbot's behavior, ensuring it asks one question at a time and keeps things conversational. This setup also includes error handling to ensure the app gracefully manages any issues.
---
Voice Activity Detection for Hands-Free Interaction
To make our chatbot hands-free, we'll add Voice Activity Detection (VAD) to automatically detect when someone starts or stops speaking. Here's how to implement it in JavaScript with the ONNX-based @ricky0123/vad-web library:
```javascript
async function main() {
const script1 = document.createElement("script");
script1.src = "https://cdn.jsdelivr.net/npm/onnxruntime-web@1.14.0/dist/ort.js";
document.head.appendChild(script1)
const script2 = document.createElement("script");
script2.onload = async () => {
console.log("vad loaded");
var record = document.querySelector('.record-button');
record.textContent = "Just Start Talking!"
const myvad = await vad.MicVAD.new({
onSpeechStart: () => {
var record = document.querySelector('.record-button');
var player = document.querySelector('streaming-out')
if (record != null && (player == null || player.paused)) {
record.click();
}
},
onSpeechEnd: (audio) => {
var stop = document.querySelector('.stop-button');
if (stop != null) {
stop.click();
}
}
})
myvad.start()
}
  script2.src = "https://cdn.jsdelivr.net/npm/@ricky0123/vad-web@0.0.7/dist/bundle.min.js";
  document.head.appendChild(script2);  // without this, the onload handler above never fires
}
```
This script loads our VAD model and sets up functions to start and stop recording automatically. When the user starts speaking, it triggers the recording, and when they stop, it ends the recording.
---
Building a User Interface with Gradio
Now, let's create an intuitive and visually appealing user interface with Gradio. This interface will include an audio input for capturing voice, a chat window for displaying responses, and state management to keep things synchronized.
```python
with gr.Blocks() as demo:
with gr.Row():
input_audio = gr.Audio(
label="Input Audio",
sources=["microphone"],
type="numpy",
streaming=False,
waveform_options=gr.WaveformOptions(waveform_color="B83A4B"),
)
with gr.Row():
chatbot = gr.Chatbot(label="Conversation")
state = gr.State(value=AppState())
demo.launch(theme=theme, js=js)  # `js` holds the VAD script above as a string; `theme` is a custom theme defined elsewhere in app.py
```
In this code block, we're using Gradio's `Blocks` API to create an interface with an audio input, a chat display, and an application state manager. The color customization for the waveform adds a nice visual touch.
---
Handling Recording and Responses
Finally, let's link the recording and response components to ensure the app reacts smoothly to user inputs and provides responses in real-time.
```python
stream = input_audio.start_recording(
process_audio,
[input_audio, state],
[input_audio, state],
)
respond = input_audio.stop_recording(
response, [state, input_audio], [state, chatbot]
)
```
These lines set up event listeners for starting and stopping the recording, processing the audio input, and generating responses. By linking these events, we create a cohesive experience where users can simply talk, and the chatbot handles the rest.
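The bodies of `process_audio` and `response` for this app aren't shown in the guide. As a rough sketch, `response` could reuse `transcribe_audio` and `generate_chat_completion` from above; the `xxhash`-based file naming and the messages-style chatbot are our assumptions, not the exact implementation:
```python
import soundfile as sf
import xxhash

def response(state: AppState, audio: tuple):
    if audio is None:
        return state, state.conversation
    # Persist the recorded numpy audio to a wav file so Whisper can read it
    file_name = f"/tmp/{xxhash.xxh32(bytes(audio[1])).hexdigest()}.wav"
    sf.write(file_name, audio[1], audio[0], format="wav")
    transcript = transcribe_audio(client, file_name)
    if transcript:  # None means the clip was likely background noise
        state.conversation.append({"role": "user", "content": transcript})
        assistant_message = generate_chat_completion(client, state.conversation)
        state.conversation.append({"role": "assistant", "content": assistant_message})
    return state, state.conversation
```
This sketch assumes the `Chatbot` is created with `type="messages"` so the list of role/content dicts renders directly.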
---
|
Key Components
|
https://gradio.app/guides/automatic-voice-detection
|
Streaming - Automatic Voice Detection Guide
|
1. When you open the app, the VAD system automatically initializes and starts listening for speech
2. As soon as you start talking, it triggers the recording automatically
3. When you stop speaking, the recording ends and:
- The audio is transcribed using Whisper
- The transcribed text is sent to the LLM
- The LLM generates a response about calorie tracking
- The response is displayed in the chat interface
4. This creates a natural back-and-forth conversation where you can simply talk about your meals and get instant feedback on nutritional content
This app demonstrates how to create a natural voice interface that feels responsive and intuitive. By combining Groq's fast inference with automatic speech detection, we've eliminated the need for manual recording controls while maintaining high-quality interactions. The result is a practical calorie tracking assistant that users can simply talk to as naturally as they would to a human nutritionist.
Link to GitHub repository: [Groq Gradio Basics](https://github.com/bklieger-groq/gradio-groq-basics/tree/main/calorie-tracker)
|
Summary
|
https://gradio.app/guides/automatic-voice-detection
|
Streaming - Automatic Voice Detection Guide
|
The next generation of AI user interfaces is moving towards audio-native experiences. Users will be able to speak to chatbots and receive spoken responses in return. Several models have been built under this paradigm, including GPT-4o and [mini omni](https://github.com/gpt-omni/mini-omni).
In this guide, we'll walk you through building your own conversational chat application using mini omni as an example. You can see a demo of the finished app below:
<video src="https://github.com/user-attachments/assets/db36f4db-7535-49f1-a2dd-bd36c487ebdf" controls
height="600" width="600" style="display: block; margin: auto;" autoplay="true" loop="true">
</video>
|
Introduction
|
https://gradio.app/guides/conversational-chatbot
|
Streaming - Conversational Chatbot Guide
|
Our application will enable the following user experience:
1. Users click a button to start recording their message
2. The app detects when the user has finished speaking and stops recording
3. The user's audio is passed to the omni model, which streams back a response
4. After mini omni finishes speaking, the user's microphone is reactivated
5. All previous spoken audio, from both the user and omni, is displayed in a chatbot component
Let's dive into the implementation details.
|
Application Overview
|
https://gradio.app/guides/conversational-chatbot
|
Streaming - Conversational Chatbot Guide
|
We'll stream the user's audio from their microphone to the server and determine if the user has stopped speaking on each new chunk of audio.
Here's our `process_audio` function:
```python
import numpy as np
from utils import determine_pause
def process_audio(audio: tuple, state: AppState):
if state.stream is None:
state.stream = audio[1]
state.sampling_rate = audio[0]
else:
state.stream = np.concatenate((state.stream, audio[1]))
pause_detected = determine_pause(state.stream, state.sampling_rate, state)
state.pause_detected = pause_detected
if state.pause_detected and state.started_talking:
return gr.Audio(recording=False), state
return None, state
```
This function takes two inputs:
1. The current audio chunk (a tuple of `(sampling_rate, numpy array of audio)`)
2. The current application state
We'll use the following `AppState` dataclass to manage our application state:
```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class AppState:
    stream: np.ndarray | None = None
    sampling_rate: int = 0
    pause_detected: bool = False
    started_talking: bool = False  # referenced by `process_audio` and `response`
    stopped: bool = False
    conversation: list = field(default_factory=list)  # mutable defaults need default_factory
```
The function concatenates new audio chunks to the existing stream and checks if the user has stopped speaking. If a pause is detected, it returns an update to stop recording. Otherwise, it returns `None` to indicate no changes.
The implementation of the `determine_pause` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/eb027808c7bfe5179b46d9352e3fa1813a45f7c3/app.py#L98).
|
Processing User Audio
|
https://gradio.app/guides/conversational-chatbot
|
Streaming - Conversational Chatbot Guide
|
After processing the user's audio, we need to generate and stream the chatbot's response. Here's our `response` function:
```python
import io
import tempfile
from pydub import AudioSegment
def response(state: AppState):
if not state.pause_detected and not state.started_talking:
return None, AppState()
audio_buffer = io.BytesIO()
segment = AudioSegment(
state.stream.tobytes(),
frame_rate=state.sampling_rate,
sample_width=state.stream.dtype.itemsize,
channels=(1 if len(state.stream.shape) == 1 else state.stream.shape[1]),
)
segment.export(audio_buffer, format="wav")
with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
f.write(audio_buffer.getvalue())
state.conversation.append({"role": "user",
"content": {"path": f.name,
"mime_type": "audio/wav"}})
output_buffer = b""
for mp3_bytes in speaking(audio_buffer.getvalue()):
output_buffer += mp3_bytes
yield mp3_bytes, state
with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as f:
f.write(output_buffer)
state.conversation.append({"role": "assistant",
"content": {"path": f.name,
"mime_type": "audio/mp3"}})
yield None, AppState(conversation=state.conversation)
```
This function:
1. Converts the user's audio to a WAV file
2. Adds the user's message to the conversation history
3. Generates and streams the chatbot's response using the `speaking` function
4. Saves the chatbot's response as an MP3 file
5. Adds the chatbot's response to the conversation history
Note: The implementation of the `speaking` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/main/app.py#L116).
|
Generating the Response
|
https://gradio.app/guides/conversational-chatbot
|
Streaming - Conversational Chatbot Guide
|
Now let's put it all together using Gradio's Blocks API:
```python
import gradio as gr
def start_recording_user(state: AppState):
if not state.stopped:
return gr.Audio(recording=True)
with gr.Blocks() as demo:
with gr.Row():
with gr.Column():
input_audio = gr.Audio(
label="Input Audio", sources="microphone", type="numpy"
)
with gr.Column():
chatbot = gr.Chatbot(label="Conversation")
output_audio = gr.Audio(label="Output Audio", streaming=True, autoplay=True)
state = gr.State(value=AppState())
stream = input_audio.stream(
process_audio,
[input_audio, state],
[input_audio, state],
stream_every=0.5,
time_limit=30,
)
respond = input_audio.stop_recording(
response,
[state],
[output_audio, state]
)
respond.then(lambda s: s.conversation, [state], [chatbot])
restart = output_audio.stop(
start_recording_user,
[state],
[input_audio]
)
cancel = gr.Button("Stop Conversation", variant="stop")
cancel.click(lambda: (AppState(stopped=True), gr.Audio(recording=False)), None,
[state, input_audio], cancels=[respond, restart])
if __name__ == "__main__":
demo.launch()
```
This setup creates a user interface with:
- An input audio component for recording user messages
- A chatbot component to display the conversation history
- An output audio component for the chatbot's responses
- A button to stop and reset the conversation
The app streams user audio in 0.5-second chunks, processes it, generates responses, and updates the conversation history accordingly.
|
Building the Gradio App
|
https://gradio.app/guides/conversational-chatbot
|
Streaming - Conversational Chatbot Guide
|
This guide demonstrates how to build a conversational chatbot application using Gradio and the mini omni model. You can adapt this framework to create various audio-based chatbot demos. To see the full application in action, visit the Hugging Face Spaces demo: https://huggingface.co/spaces/gradio/omni-mini
Feel free to experiment with different models, audio processing techniques, or user interface designs to create your own unique conversational AI experiences!
|
Conclusion
|
https://gradio.app/guides/conversational-chatbot
|
Streaming - Conversational Chatbot Guide
|
If the state is something that should be accessible to all function calls and all users, you can create a variable outside the function call and access it inside the function. For example, you may load a large model outside the function and use it inside the function so that every function call does not need to reload the model.
$code_score_tracker
In the code above, the `scores` array is shared between all users. If multiple users are accessing this demo, their scores will all be added to the same list, and the returned top 3 scores will be collected from this shared reference.
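The placeholder above likely resembles the following sketch (our reconstruction, not the exact demo code):
```python
import gradio as gr

scores = []  # module-level, so it is shared across all users and sessions

def track_score(score: float) -> list[float]:
    scores.append(score)
    # Every user sees the top 3 scores across all submissions made so far
    return sorted(scores, reverse=True)[:3]

demo = gr.Interface(track_score, gr.Number(label="Score"), gr.JSON(label="Top Scores"))
demo.launch()
```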
|
Global State
|
https://gradio.app/guides/interface-state
|
Building Interfaces - Interface State Guide
|
Another type of data persistence Gradio supports is session state, where data persists across multiple submits within a page session. However, data is _not_ shared between different users of your model. To store data in a session state, you need to do three things:
1. Pass in an extra parameter into your function, which represents the state of the interface.
2. At the end of the function, return the updated value of the state as an extra return value.
3. Add the `'state'` input and `'state'` output components when creating your `Interface`
Here's a simple app to illustrate session state - this app simply stores users' previous submissions and displays them back to the user:
$code_interface_state
$demo_interface_state
Notice how the state persists across submits within each page, but if you load this demo in another tab (or refresh the page), the demos will not share chat history. Here, we could not store the submission history in a global variable, otherwise the submission history would then get jumbled between different users.
The initial value of the `State` is `None` by default. If you pass a parameter to the `value` argument of `gr.State()`, it is used as the default value of the state instead.
Note: the `Interface` class only supports a single session state variable (though it can be a list with multiple elements). For more complex use cases, you can use Blocks, [which supports multiple `State` variables](/guides/state-in-blocks/). Alternatively, if you are building a chatbot that maintains user state, consider using the `ChatInterface` abstraction, [which manages state automatically](/guides/creating-a-chatbot-fast).
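As a minimal sketch of the three steps above (a hypothetical app that echoes back all previous submissions):
```python
import gradio as gr

def store_message(message: str, history: list | None):
    history = history or []  # the state arrives as None on the first call
    history.append(message)
    return "\n".join(history), history  # return the updated state as an extra output

demo = gr.Interface(
    store_message,
    inputs=["textbox", "state"],   # the 'state' shortcut adds the session-state input
    outputs=["textbox", "state"],
)
demo.launch()
```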
|
Session State
|
https://gradio.app/guides/interface-state
|
Building Interfaces - Interface State Guide
|
Adding examples to an Interface is as easy as providing a list of lists to the `examples`
keyword argument.
Each sublist is a data sample, where each element corresponds to an input of the prediction function.
The inputs must be ordered in the same order as the prediction function expects them.
If your interface only has one input component, then you can provide your examples as a regular list instead of a list of lists.
Loading Examples from a Directory
You can also specify a path to a directory containing your examples. If your Interface takes only a single file-type input, e.g. an image classifier, you can simply pass a directory filepath to the `examples=` argument, and the `Interface` will load the images in the directory as examples.
In the case of multiple inputs, this directory must
contain a log.csv file with the example values.
In the context of the calculator demo, we can set `examples='/demo/calculator/examples'` and in that directory we include the following `log.csv` file:
```csv
num,operation,num2
5,"add",3
4,"divide",2
5,"multiply",3
```
This can be helpful when browsing flagged data. Simply point to the flagged directory and the `Interface` will load the examples from the flagged data.
Providing Partial Examples
Sometimes your app has many input components, but you would only like to provide examples for a subset of them. In order to exclude some inputs from the examples, pass `None` for all data samples corresponding to those particular components.
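For instance, for the three-input calculator above, a partial example set could look like this (values illustrative):
```python
examples = [
    [5, "add", None],     # no example value supplied for the third input
    [4, "divide", None],
]
```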
|
Providing Examples
|
https://gradio.app/guides/more-on-examples
|
Building Interfaces - More On Examples Guide
|
You may wish to provide some cached examples of your model for users to quickly try out, in case your model takes a while to run normally.
If `cache_examples=True`, your Gradio app will run all of the examples and save the outputs when you call the `launch()` method. This data will be saved in a directory called `gradio_cached_examples` in your working directory by default. You can also set this directory with the `GRADIO_EXAMPLES_CACHE` environment variable, which can be either an absolute path or a relative path to your working directory.
Whenever a user clicks on an example, the output will automatically be populated in the app now, using data from this cached directory instead of actually running the function. This is useful so users can quickly try out your model without adding any load!
Alternatively, you can set `cache_examples="lazy"`. This means that each particular example will only get cached after it is first used (by any user) in the Gradio app. This is helpful if your prediction function is long-running and you do not want to wait a long time for your Gradio app to start.
Keep in mind once the cache is generated, it will not be updated automatically in future launches. If the examples or function logic change, delete the cache folder to clear the cache and rebuild it with another `launch()`.
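Putting this together, a sketch of lazy caching on a slow function (names hypothetical):
```python
import time

import gradio as gr

def slow_predict(text: str) -> str:
    time.sleep(10)  # stand-in for an expensive model call
    return text.upper()

demo = gr.Interface(
    slow_predict,
    "textbox",
    "textbox",
    examples=[["hello"], ["world"]],
    cache_examples="lazy",  # or True to cache every example at launch()
)
demo.launch()
```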
|
Caching examples
|
https://gradio.app/guides/more-on-examples
|
Building Interfaces - More On Examples Guide
|
To create a demo that has both the input and the output components, you simply need to set the values of the `inputs` and `outputs` parameter in `Interface()`. Here's an example demo of a simple image filter:
$code_sepia_filter
$demo_sepia_filter
|
Standard demos
|
https://gradio.app/guides/four-kinds-of-interfaces
|
Building Interfaces - Four Kinds Of Interfaces Guide
|
What about demos that only contain outputs? In order to build such a demo, you simply set the value of the `inputs` parameter in `Interface()` to `None`. Here's an example demo of a mock image generation model:
$code_fake_gan_no_input
$demo_fake_gan_no_input
|
Output-only demos
|
https://gradio.app/guides/four-kinds-of-interfaces
|
Building Interfaces - Four Kinds Of Interfaces Guide
|
Similarly, to create a demo that only contains inputs, set the value of `outputs` parameter in `Interface()` to be `None`. Here's an example demo that saves any uploaded image to disk:
$code_save_file_no_output
$demo_save_file_no_output
|
Input-only demos
|
https://gradio.app/guides/four-kinds-of-interfaces
|
Building Interfaces - Four Kinds Of Interfaces Guide
|
A demo that has a single component as both the input and the output. It can simply be created by setting the values of the `inputs` and `outputs` parameter as the same component. Here's an example demo of a text generation model:
$code_unified_demo_text_generation
$demo_unified_demo_text_generation
It may be the case that none of the 4 cases fulfill your exact needs. In this case, you need to use the `gr.Blocks()` approach!
|
Unified demos
|
https://gradio.app/guides/four-kinds-of-interfaces
|
Building Interfaces - Four Kinds Of Interfaces Guide
|
Gradio includes more than 30 pre-built components (as well as many [community-built _custom components_](https://www.gradio.app/custom-components/gallery)) that can be used as inputs or outputs in your demo. These components correspond to common data types in machine learning and data science, e.g. the `gr.Image` component is designed to handle input or output images, the `gr.Label` component displays classification labels and probabilities, the `gr.LinePlot` component displays line plots, and so on.
|
Gradio Components
|
https://gradio.app/guides/the-interface-class
|
Building Interfaces - The Interface Class Guide
|
We used the default versions of the `gr.Textbox` and `gr.Slider`, but what if you want to change how the UI components look or behave?
Let's say you want to customize the slider to have values from 1 to 10, with a default of 2. And you want to customize the output text field: it should be larger and have a label.
If you use the actual classes for `gr.Textbox` and `gr.Slider` instead of the string shortcuts, you have access to much more customizability through component attributes.
$code_hello_world_2
$demo_hello_world_2
|
Components Attributes
|
https://gradio.app/guides/the-interface-class
|
Building Interfaces - The Interface Class Guide
|
Suppose you had a more complex function, with multiple outputs as well. In the example below, we define a function that takes a string, boolean, and number, and returns a string and number.
$code_hello_world_3
$demo_hello_world_3
Just as each component in the `inputs` list corresponds to one of the parameters of the function, in order, each component in the `outputs` list corresponds to one of the values returned by the function, in order.
|
Multiple Input and Output Components
|
https://gradio.app/guides/the-interface-class
|
Building Interfaces - The Interface Class Guide
|
Gradio supports many types of components, such as `Image`, `DataFrame`, `Video`, or `Label`. Let's try an image-to-image function to get a feel for these!
$code_sepia_filter
$demo_sepia_filter
When using the `Image` component as input, your function will receive a NumPy array with the shape `(height, width, 3)`, where the last dimension represents the RGB values. We'll return an image as well in the form of a NumPy array.
Gradio handles the preprocessing and postprocessing to convert images to NumPy arrays and vice versa. You can also control the preprocessing performed with the `type=` keyword argument. For example, if you wanted your function to take a file path to an image instead of a NumPy array, the input `Image` component could be written as:
```python
gr.Image(type="filepath")
```
You can read more about the built-in Gradio components and how to customize them in the [Gradio docs](https://gradio.app/docs).
|
An Image Example
|
https://gradio.app/guides/the-interface-class
|
Building Interfaces - The Interface Class Guide
|
You can provide example data that a user can easily load into `Interface`. This can be helpful to demonstrate the types of inputs the model expects, as well as to provide a way to explore your dataset in conjunction with your model. To load example data, you can provide a **nested list** to the `examples=` keyword argument of the Interface constructor. Each sublist within the outer list represents a data sample, and each element within the sublist represents an input for each input component. The format of example data for each component is specified in the [Docs](https://gradio.app/docs/components).
$code_calculator
$demo_calculator
You can load a large dataset into the examples to browse and interact with the dataset through Gradio. The examples will be automatically paginated (you can configure this through the `examples_per_page` argument of `Interface`).
Continue learning about examples in the [More On Examples](https://gradio.app/guides/more-on-examples) guide.
|
Example Inputs
|
https://gradio.app/guides/the-interface-class
|
Building Interfaces - The Interface Class Guide
|
In the previous example, you may have noticed the `title=` and `description=` keyword arguments in the `Interface` constructor that helps users understand your app.
There are three arguments in the `Interface` constructor to specify where this content should go:
- `title`: which accepts text and displays it at the very top of the interface; it also becomes the page title.
- `description`: which accepts text, markdown or HTML and places it right under the title.
- `article`: which also accepts text, markdown or HTML and places it below the interface.

Another useful keyword argument is `label=`, which is present in every `Component`. This modifies the label text at the top of each `Component`. You can also add the `info=` keyword argument to form elements like `Textbox` or `Radio` to provide further information on their usage.
```python
gr.Number(label='Age', info='In years, must be greater than 0')
```
|
Descriptive Content
|
https://gradio.app/guides/the-interface-class
|
Building Interfaces - The Interface Class Guide
|
If your prediction function takes many inputs, you may want to hide some of them within a collapsed accordion to avoid cluttering the UI. The `Interface` class takes an `additional_inputs` argument which is similar to `inputs` but any input components included here are not visible by default. The user must click on the accordion to show these components. The additional inputs are passed into the prediction function, in order, after the standard inputs.
You can customize the appearance of the accordion by using the optional `additional_inputs_accordion` argument, which accepts a string (in which case, it becomes the label of the accordion), or an instance of the `gr.Accordion()` class (e.g. this lets you control whether the accordion is open or closed by default).
Here's an example:
$code_interface_with_additional_inputs
$demo_interface_with_additional_inputs
|
Additional Inputs within an Accordion
|
https://gradio.app/guides/the-interface-class
|
Building Interfaces - The Interface Class Guide
|
You can make interfaces automatically refresh by setting `live=True` in the interface. Now the interface will recalculate as soon as the user input changes.
$code_calculator_live
$demo_calculator_live
Note there is no submit button, because the interface resubmits automatically on change.
|
Live Interfaces
|
https://gradio.app/guides/reactive-interfaces
|
Building Interfaces - Reactive Interfaces Guide
|
Some components have a "streaming" mode, such as the `Audio` component in microphone mode, or the `Image` component in webcam mode. Streaming means data is sent continuously to the backend and the `Interface` function is continuously being rerun.
The difference between `gr.Audio(sources=["microphone"])` and `gr.Audio(sources=["microphone"], streaming=True)`, when both are used in `gr.Interface(live=True)`, is that the first `Component` automatically submits data and runs the `Interface` function when the user stops recording, whereas the second continuously sends data and runs the `Interface` function _during_ recording.
Here is example code of streaming images from the webcam.
$code_stream_frames
Streaming can also be done in an output component. A `gr.Audio(streaming=True)` output component can take a stream of audio data yielded piece-wise by a generator function and combine the chunks into a single audio file. For a detailed example, see our guide on performing [automatic speech recognition](/guides/real-time-speech-recognition) with Gradio.
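As a sketch of this output-streaming pattern, the generator below yields random noise as a stand-in for a real speech model:
```python
import numpy as np
import gradio as gr

def fake_tts(text: str):
    sr = 16000
    for _ in text.split():
        # One second of noise per word; a real app would yield model audio here
        yield sr, np.random.randn(sr).astype(np.float32)

demo = gr.Interface(fake_tts, "textbox", gr.Audio(streaming=True, autoplay=True))
demo.launch()
```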
|
Streaming Components
|
https://gradio.app/guides/reactive-interfaces
|
Building Interfaces - Reactive Interfaces Guide
|
If you're using LLMs in your workflow, adding this server will augment them with just the right context on Gradio - which makes your experience a lot faster and smoother.
<video src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/mcp-docs.mp4" style="width:100%" controls preload> </video>
The server is running on Spaces and was launched entirely using Gradio; you can see all the code [here](https://huggingface.co/spaces/gradio/docs-mcp). For more on building an MCP server with Gradio, see the [previous guide](./building-an-mcp-client-with-gradio).
|
Why an MCP Server?
|
https://gradio.app/guides/using-docs-mcp
|
Mcp - Using Docs Mcp Guide
|
For clients that support streamable HTTP (e.g. Cursor, Windsurf, Cline), simply add the following configuration to your MCP config:
```json
{
"mcpServers": {
"gradio": {
"url": "https://gradio-docs-mcp.hf.space/gradio_api/mcp/"
}
}
}
```
We've included step-by-step instructions for Cursor below, but you can consult the docs for Windsurf [here](https://docs.windsurf.com/windsurf/mcp) and Cline [here](https://docs.cline.bot/mcp-servers/configuring-mcp-servers), both of which are similar to set up.
Cursor
1. Make sure you're using the latest version of Cursor, and go to Cursor > Settings > Cursor Settings > MCP
2. Click on '+ Add new global MCP server'
3. Copy and paste this JSON into the file that opens, then save it.
```json
{
"mcpServers": {
"gradio": {
"url": "https://gradio-docs-mcp.hf.space/gradio_api/mcp/"
}
}
}
```
4. That's it! You should see the tools load and the status go green in the settings page. You may have to click the refresh icon or wait a few seconds.

Claude Desktop
1. Since Claude Desktop only supports stdio, you will need to [install Node.js](https://nodejs.org/en/download/) to get this to work.
2. Make sure you're using the latest version of Claude Desktop, and go to Claude > Settings > Developer > Edit Config
3. Open the file with your favorite editor, copy and paste this JSON, then save the file.
```json
{
"mcpServers": {
"gradio": {
"command": "npx",
"args": [
"mcp-remote",
"https://gradio-docs-mcp.hf.space/gradio_api/mcp/"
]
}
}
}
```
4. Quit and re-open Claude Desktop, and you should be good to go. You should see it loaded in the Search and Tools icon or on the developer settings page.

|
Installing in the Clients
|
https://gradio.app/guides/using-docs-mcp
|
Mcp - Using Docs Mcp Guide
|
There are currently only two tools in the server: `gradio_docs_mcp_load_gradio_docs` and `gradio_docs_mcp_search_gradio_docs`.
1. `gradio_docs_mcp_load_gradio_docs`: This tool takes no arguments and loads an /llms.txt-style summary of Gradio's latest, full documentation. This is very useful context for the LLM to parse before answering questions or generating code.
2. `gradio_docs_mcp_search_gradio_docs`: This tool takes a query as an argument and runs an embedding search over Gradio's docs, guides, and demos to return the most useful context for the LLM.
|
Tools
|
https://gradio.app/guides/using-docs-mcp
|
Mcp - Using Docs Mcp Guide
|
As of version 5.36.0, Gradio now comes with a built-in MCP server that can upload files to a running Gradio application. In the `View API` page of the server, you should see the following code snippet if any of the tools require file inputs:
<img src="https://huggingface.co/datasets/freddyaboulton/bucket/resolve/main/MCPConnectionDocs.png">
The command to start the MCP server takes two arguments:
- The URL (or Hugging Face Space ID) of the Gradio application to upload the files to. In this case, `http://127.0.0.1:7860`.
- The local directory on your computer with which the server is allowed to upload files from (`<UPLOAD_DIRECTORY>`). For security, please make this directory as narrow as possible to prevent unintended file uploads.
As stated in the image, you need to install [uv](https://docs.astral.sh/uv/getting-started/installation/) (a python package manager that can run python scripts) before connecting from your MCP client.
If you have Gradio installed locally and you don't want to install uv, you can replace the `uvx` command with the path to the gradio binary. It should look like this:
```json
"upload-files": {
"command": "<absoluate-path-to-gradio>",
"args": [
"upload-mcp",
"http://localhost:7860/",
"/Users/freddyboulton/Pictures"
]
}
```
After connecting to the upload server, your LLM agent will know when to upload files for you automatically!
<img src="https://huggingface.co/datasets/freddyaboulton/bucket/resolve/main/Ghibliafy.png">
|
Using the File Upload MCP Server
|
https://gradio.app/guides/file-upload-mcp
|
Mcp - File Upload Mcp Guide
|
In this guide, we've covered how you can connect to the Upload File MCP Server so that your agent can upload files before using Gradio MCP servers. Remember to keep the `<UPLOAD_DIRECTORY>` as narrow as possible to prevent unintended file uploads!
|
Conclusion
|
https://gradio.app/guides/file-upload-mcp
|
Mcp - File Upload Mcp Guide
|
An MCP (Model Context Protocol) server is a standardized way to expose tools so that they can be used by LLMs. A tool can provide an LLM functionality that it does not have natively, such as the ability to generate images or calculate the prime factors of a number.
|
What is an MCP Server?
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
LLMs are famously not great at counting the number of letters in a word (e.g. the number of "r"-s in "strawberry"). But what if we equip them with a tool to help? Let's start by writing a simple Gradio app that counts the number of letters in a word or phrase:
$code_letter_counter
Notice that we have: (1) included a detailed docstring for our function, and (2) set `mcp_server=True` in `.launch()`. This is all that's needed for your Gradio app to serve as an MCP server! Now, when you run this app, it will:
1. Start the regular Gradio web interface
2. Start the MCP server
3. Print the MCP server URL in the console
The MCP server will be accessible at:
```
http://your-server:port/gradio_api/mcp/
```
Gradio automatically converts the `letter_counter` function into an MCP tool that can be used by LLMs. The docstring of the function and the type hints of arguments will be used to generate the description of the tool and its parameters. The name of the function will be used as the name of your tool. Any initial values you provide to your input components (e.g. "strawberry" and "r" in the `gr.Textbox` components above) will be used as the default values if your LLM doesn't specify a value for that particular input parameter.
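For reference, a function in this style might look like the following sketch (our reconstruction, not necessarily the exact demo behind the placeholder above):
```python
import gradio as gr

def letter_counter(word: str, letter: str) -> int:
    """Count the number of occurrences of a letter in a word or phrase.

    Args:
        word: The word or phrase to search in.
        letter: The letter to count occurrences of.
    """
    return word.lower().count(letter.lower())

demo = gr.Interface(
    letter_counter,
    [gr.Textbox(value="strawberry"), gr.Textbox(value="r")],  # initial values become MCP defaults
    gr.Number(),
)
demo.launch(mcp_server=True)
```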
Now, all you need to do is add this URL endpoint to your MCP Client (e.g. Claude Desktop, Cursor, or Cline), which typically means pasting this config in the settings:
```
{
"mcpServers": {
"gradio": {
"url": "http://your-server:port/gradio_api/mcp/"
}
}
}
```
(By the way, you can find the exact config to copy-paste by going to the "View API" link in the footer of your Gradio app, and then clicking on "MCP").

|
Example: Counting Letters in a Word
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
1. **Tool Conversion**: Each API endpoint in your Gradio app is automatically converted into an MCP tool with a corresponding name, description, and input schema. To view the tools and schemas, visit http://your-server:port/gradio_api/mcp/schema or go to the "View API" link in the footer of your Gradio app, and then click on "MCP".
2. **Environment variable support**. There are two ways to enable the MCP server functionality:
* Using the `mcp_server` parameter, as shown above:
```python
demo.launch(mcp_server=True)
```
* Using environment variables:
```bash
export GRADIO_MCP_SERVER=True
```
3. **File Handling**: The Gradio MCP server automatically handles file data conversions, including:
- Processing image files and returning them in the correct format
- Managing temporary file storage
By default, the Gradio MCP server accepts input images and files as full URLs ("http://..." or "https://..."). For convenience, an STDIO-based MCP server is also generated, which can be used to upload files to any remote Gradio app and which returns a URL that can be used for subsequent tool calls.
4. **Hosted MCP Servers on 🤗 Spaces**: You can publish your Gradio application for free on Hugging Face Spaces, which will allow you to have a free hosted MCP server. Here's an example of such a Space: https://huggingface.co/spaces/abidlabs/mcp-tools. Notice that you can add this config to your MCP Client to start using the tools from this Space immediately:
```
{
"mcpServers": {
"gradio": {
"url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/"
}
}
}
```
<video src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/mcp_guide1.mp4" style="width:100%" controls preload> </video>
|
Key features of the Gradio <> MCP Integration
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
If there's an existing Space that you'd like to use an MCP server, you'll need to do three things:
1. First, [duplicate the Space](https://huggingface.co/docs/hub/en/spaces-more-ways-to-create#duplicating-a-space) if it is not your own Space. This will allow you to make changes to the app. If the Space requires a GPU, set the hardware of the duplicated Space to be the same as the original Space. You can make it either a public Space or a private Space, since it is possible to use either as an MCP server, as described below.
2. Then, add docstrings to the functions that you'd like the LLM to be able to call as a tool. The docstring should be in the same format as the example code above.
3. Finally, add `mcp_server=True` in `.launch()`.
That's it!
|
Converting an Existing Space
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
You can use either a public Space or a private Space as an MCP server. If you'd like to use a private Space as an MCP server (or a ZeroGPU Space with your own quota), then you will need to provide your [Hugging Face token](https://huggingface.co/settings/token) when you make your request. To do this, simply add it as a header in your config like this:
```
{
"mcpServers": {
"gradio": {
"url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/",
"headers": {
"Authorization": "Bearer <YOUR-HUGGING-FACE-TOKEN>"
}
}
}
}
```
|
Private Spaces
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
You may wish to authenticate users more precisely or let them provide other kinds of credentials or tokens in order to provide a custom experience for different users.
Gradio allows you to access the underlying `starlette.Request` that has made the tool call, which means that you can access headers, originating IP address, or any other information that is part of the network request. To do this, simply add a parameter in your function of the type `gr.Request`, and Gradio will automatically inject the request object as the parameter.
Here's an example:
```py
import gradio as gr
def echo_headers(x, request: gr.Request):
return str(dict(request.headers))
gr.Interface(echo_headers, "textbox", "textbox").launch(mcp_server=True)
```
This MCP server will simply ignore the user's input and echo back all of the headers from a user's request. One can build more complex apps using the same idea. See the [docs on `gr.Request`](https://www.gradio.app/main/docs/gradio/request) for more information (note that only the core Starlette attributes of the `gr.Request` object will be present, attributes such as Gradio's `.session_hash` will not be present).
Using the gr.Header class
A common pattern in MCP server development is to use authentication headers to call services on behalf of your users. Instead of using a `gr.Request` object like in the example above, you can use a `gr.Header` argument. Gradio will automatically extract that header from the incoming request (if it exists) and pass it to your function.
In the example below, the `X-API-Token` header is extracted from the incoming request and passed in as the `x_api_token` argument to `make_api_request_on_behalf_of_user`.
The benefit of using `gr.Header` is that the MCP connection docs will automatically display the headers you need to supply when connecting to the server! See the image below:
```python
import gradio as gr
def make_api_request_on_behalf_of_user(prompt: str, x_api_token: gr.Header):
"""Make a request to everyone's favorite API.
Args:
prompt: The prompt to send to the API.
Returns:
The response from the API.
Raises:
AssertionError: If the API token is not valid.
"""
return "Hello from the API" if not x_api_token else "Hello from the API with token!"
demo = gr.Interface(
make_api_request_on_behalf_of_user,
[
gr.Textbox(label="Prompt"),
],
gr.Textbox(label="Response"),
)
demo.launch(mcp_server=True)
```

Sending Progress Updates
The Gradio MCP server automatically sends progress updates to your MCP Client based on the queue in the Gradio application. If you'd like to send custom progress updates, you can do so using the same mechanism as you would use to display progress updates in the UI of your Gradio app: by using the `gr.Progress` class!
Here's an example of how to do this:
$code_mcp_progress
[Here are the docs](https://www.gradio.app/docs/gradio/progress) for the `gr.Progress` class, which can also automatically track `tqdm` calls.
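As a sketch of what such custom progress updates can look like (our example, not the exact placeholder code):
```python
import time

import gradio as gr

def process(text: str, progress=gr.Progress()) -> str:
    """Slowly process text, reporting progress to UI and MCP clients alike."""
    for _ in progress.tqdm(range(10), desc="Processing"):
        time.sleep(0.1)
    return text.upper()

demo = gr.Interface(process, "textbox", "textbox")
demo.launch(mcp_server=True)
```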
Note: by default, progress notifications are enabled for all MCP tools, even if the corresponding Gradio functions do not include a `gr.Progress`. However, this can add some overhead to the MCP tool (typically ~500ms). To disable progress notification, you can set `queue=False` in your Gradio event handler to skip the overhead related to subscribing to the queue's progress updates.
|
Authentication and Credentials
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
Gradio automatically sets the tool name based on the name of your function, and the description from the docstring of your function. But you may want to change how the description appears to your LLM. You can do this by using the `api_description` parameter in `Interface`, `ChatInterface`, or any event listener. This parameter takes three different kinds of values:
* `None` (default): the tool description is automatically created from the docstring of the function (or its parent's docstring if it does not have a docstring but inherits from a method that does.)
* `False`: no tool description appears to the LLM.
* `str`: an arbitrary string to use as the tool description.
In addition to modifying the tool descriptions, you can also toggle which tools appear to the LLM. You can do this by setting the `show_api` parameter, which is by default `True`. Setting it to `False` hides the endpoint from the API docs and from the MCP server. If you expose multiple tools, users of your app will also be able to toggle which tools they'd like to add to their MCP server by checking boxes in the "view MCP or API" panel.
Here's an example that shows the `api_description` and `show_api` parameters in action:
$code_mcp_tools
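A sketch of these parameters on event listeners (our reconstruction, not the placeholder's exact code):
```python
import gradio as gr

with gr.Blocks() as demo:
    num = gr.Number(label="x")
    out = gr.Number(label="Result")
    square = gr.Button("Square")
    cube = gr.Button("Cube")
    square.click(lambda x: x**2, num, out, api_description="Square a number.")
    cube.click(lambda x: x**3, num, out, show_api=False)  # hidden from API docs and MCP

demo.launch(mcp_server=True)
```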
|
Modifying Tool Descriptions
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
In addition to tools (which execute functions generally and are the default for any function exposed through the Gradio MCP integration), MCP supports two other important primitives: **resources** (for exposing data) and **prompts** (for defining reusable templates). Gradio provides decorators to easily create MCP servers with all three capabilities.
Creating MCP Resources
Use the `@gr.mcp.resource` decorator on any function to expose data through your Gradio app. Resources can be static (always available at a fixed URI) or templated (with parameters in the URI).
$code_mcp_resources_and_prompts
In this example:
- The `get_greeting` function is exposed as a resource with a URI template `greeting://{name}`
- When an MCP client requests `greeting://Alice`, it receives "Hello, Alice!"
- Resources can also return images and other types of files or binary data. In order to return non-text data, you should specify the `mime_type` parameter in `@gr.mcp.resource()` and return a Base64 string from your function.
Creating MCP Prompts
Prompts help standardize how users interact with your tools. They're especially useful for complex workflows that require specific formatting or multiple steps.
The `greet_user` function in the example above is decorated with `@gr.mcp.prompt()`, which:
- Makes it available as a prompt template in MCP clients
- Accepts parameters (`name` and `style`) to customize the output
- Returns a structured prompt that guides the LLM's behavior (see the sketch below)
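A sketch of the two decorators in use, following the behavior described above (our reconstruction; in a real app these functions would live alongside a `gr.Blocks` demo launched with `mcp_server=True`):
```python
import gradio as gr

@gr.mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Return a personalized greeting for the given name."""
    return f"Hello, {name}!"

@gr.mcp.prompt()
def greet_user(name: str, style: str = "friendly") -> str:
    """Build a reusable prompt that greets a user in a given style."""
    return f"Please write a {style} greeting addressed to {name}."
```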
|
MCP Resources and Prompts
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
So far, all of our MCP tools, resources, or prompts have corresponded to event listeners in the UI. This works well for functions that directly update the UI, but may not work if you wish to expose a "pure logic" function that should return raw data (e.g. a JSON object) without directly causing a UI update.
In order to expose such an MCP tool, you can create a pure Gradio API endpoint using `gr.api` (see [full docs here](https://www.gradio.app/main/docs/gradio/api)). Here's an example of creating an MCP tool that slices a list:
$code_mcp_tool_only
Note that if you use this approach, your function signature must be fully typed, including the return value, as these signatures are used to determine the typing information for the MCP tool.
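A sketch of such an MCP-only endpoint with a fully typed signature (our reconstruction of the list-slicing example):
```python
import gradio as gr

def slice_list(items: list[int], start: int, end: int) -> list[int]:
    """Return the slice items[start:end]."""
    return items[start:end]

with gr.Blocks() as demo:
    gr.api(slice_list)  # exposed as an API endpoint / MCP tool, with no UI attached

demo.launch(mcp_server=True)
```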
|
Adding MCP-Only Functions
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
In some cases, you may decide not to use Gradio's built-in integration and instead manually create a FastMCP server that calls a Gradio app. This approach is useful when you want to:
- Store state / identify users between calls instead of treating every tool call completely independently
- Start the Gradio app MCP server when a tool is called (if you are running multiple Gradio apps locally and want to save memory / GPU)
This is very doable thanks to the [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) and the [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)'s `FastMCP` class. Here's an example of creating a custom MCP server that connects to various Gradio apps hosted on [HuggingFace Spaces](https://huggingface.co/spaces) using the `stdio` protocol:
```python
from mcp.server.fastmcp import FastMCP
from gradio_client import Client
import sys
import io

mcp = FastMCP("gradio-spaces")

# Cache of Gradio clients, keyed by Space ID, so repeated tool calls
# reuse the same connection.
clients = {}

def get_client(space_id: str) -> Client:
    """Get or create a Gradio client for the specified space."""
    if space_id not in clients:
        clients[space_id] = Client(space_id)
    return clients[space_id]

@mcp.tool()
async def generate_image(prompt: str, space_id: str = "ysharma/SanaSprint") -> str:
    """Generate an image using Flux.

    Args:
        prompt: Text prompt describing the image to generate
        space_id: HuggingFace Space ID to use
    """
    client = get_client(space_id)
    result = client.predict(
        prompt=prompt,
        model_size="1.6B",
        seed=0,
        randomize_seed=True,
        width=1024,
        height=1024,
        guidance_scale=4.5,
        num_inference_steps=2,
        api_name="/infer"
    )
    return result

@mcp.tool()
async def run_dia_tts(prompt: str, space_id: str = "ysharma/Dia-1.6B") -> str:
    """Text-to-Speech Synthesis.

    Args:
        prompt: Text prompt describing the conversation between speakers S1, S2
        space_id: HuggingFace Space ID to use
    """
    client = get_client(space_id)
    result = client.predict(
        text_input=f"""{prompt}""",
        audio_prompt_input=None,
        max_new_tokens=3072,
        cfg_scale=3,
        temperature=1.3,
        top_p=0.95,
        cfg_filter_top_k=30,
        speed_factor=0.94,
        api_name="/generate_audio"
    )
    return result

if __name__ == "__main__":
    # Force UTF-8 output so the stdio transport handles non-ASCII text cleanly.
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
    mcp.run(transport='stdio')
```
This server exposes two tools:
1. `run_dia_tts` - Generates a conversation for the given transcript in the form of `[S1]first-sentence. [S2]second-sentence. [S1]...`
2. `generate_image` - Generates images using a fast text-to-image model
To use this MCP Server with Claude Desktop (as MCP Client):
1. Save the code to a file (e.g., `gradio_mcp_server.py`)
2. Install the required dependencies: `pip install mcp gradio-client`
3. Configure Claude Desktop to use your server by editing the configuration file at `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
  "mcpServers": {
    "gradio-spaces": {
      "command": "python",
      "args": [
        "/absolute/path/to/gradio_mcp_server.py"
      ]
    }
  }
}
```
4. Restart Claude Desktop
Now, when you ask Claude to generate an image or synthesize speech, it can use your Gradio-powered tools to accomplish these tasks.
|
Gradio with FastMCP
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
The MCP protocol is still in its infancy, and you might see issues connecting to an MCP server that you've built. We generally recommend using the [MCP Inspector Tool](https://github.com/modelcontextprotocol/inspector) to connect to and debug your MCP server.
Here are some things that may help:
**1. Ensure that you've provided type hints and valid docstrings for your functions**
As mentioned earlier, Gradio reads the docstrings of your functions and the type hints of input arguments to generate the description of the tool and its parameters. A valid function and docstring look like this (note the "Args:" block with indented parameter names underneath):
```py
from PIL import Image

def image_orientation(image: Image.Image) -> str:
    """
    Returns whether image is portrait or landscape.

    Args:
        image (Image.Image): The image to check.
    """
    return "Portrait" if image.height > image.width else "Landscape"
```
Note: You can preview the schema that is created for your MCP server by visiting the `http://your-server:port/gradio_api/mcp/schema` URL.
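For instance, assuming your app runs locally on Gradio's default port 7860, you could fetch the schema like this:
```py
import requests

# Assumes a local server on the default port; adjust host/port as needed.
schema = requests.get("http://127.0.0.1:7860/gradio_api/mcp/schema").json()
print(schema)
```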
**2. Try accepting input arguments as `str`**
Some MCP Clients do not recognize numeric or other complex parameter types, but all of the MCP Clients that we've tested accept `str` input parameters. When in doubt, make your input parameter a `str` and cast it to the specific type inside the function, as in this example:
```py
def prime_factors(n: str):
    """
    Compute the prime factorization of a positive integer.

    Args:
        n (str): The integer to factorize. Must be greater than 1.
    """
    n_int = int(n)
    if n_int <= 1:
        raise ValueError("Input must be an integer greater than 1.")
    factors = []
    # Divide out all factors of 2 first.
    while n_int % 2 == 0:
        factors.append(2)
        n_int //= 2
    # Then try odd divisors up to the square root of the remainder.
    divisor = 3
    while divisor * divisor <= n_int:
        while n_int % divisor == 0:
            factors.append(divisor)
            n_int //= divisor
        divisor += 2
    if n_int > 1:
        factors.append(n_int)
    return factors
```
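For example, calling `prime_factors("60")` returns `[2, 2, 3, 5]`, since 60 = 2 × 2 × 3 × 5.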
**3. Ensure that your MCP Client Supports Streamable HTTP**
Some MCP Clients do not yet support streamable HTTP-based MCP Servers. In those cases, you can use a tool such as [mcp-remote](https://github.com/geelen/mcp-remote). First install [Node.js](https://nodejs.org/en/download/). Then, add the following to your own MCP Client config:
```json
{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://your-server:port/gradio_api/mcp/"
      ]
    }
  }
}
```
**4. Restart your MCP Client and MCP Server**
Some MCP Clients require you to restart them every time you update the MCP configuration. Other times, if the connection between the MCP Client and servers breaks, you might need to restart the MCP server. If all else fails, try restarting both your MCP Client and MCP Servers!
|
Troubleshooting your MCP Servers
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
The Model Context Protocol (MCP) standardizes how applications provide context to LLMs. It allows an LLM-based application like Claude to interact with external tools such as image generators, file systems, and APIs.
|
What is MCP?
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|