**Using Plot as an input component.**
How Plot will pass its value to your function:
Type: `PlotData | None`
(Rarely used) passes the data displayed in the plot as a PlotData dataclass,
which includes the plot information as a JSON string, as well as the type of
chart and the plotting library.
Example Code
import gradio as gr
from gradio.components.plot import PlotData  # dataclass exported by the Plot component

def predict(value: PlotData | None):
    # process value from the Plot component
    return "prediction"

interface = gr.Interface(predict, gr.Plot(), gr.Textbox())
interface.launch()
**Using Plot as an output component**
How Plot expects you to return a value:
Type: `Any`
Expects plot data in one of these formats: a matplotlib.Figure, bokeh.Model,
plotly.Figure, or altair.Chart object.
Example Code
from typing import Any

import gradio as gr

def predict(text) -> Any:
    # process value to return to the Plot component
    return value

interface = gr.Interface(predict, gr.Textbox(), gr.Plot())
interface.launch()
| Behavior | https://gradio.app/docs/gradio/plot | Gradio - Plot Docs |
Parameters ▼
value: Any | None
default `= None`
Optionally, supply a default plot object to display, must be a matplotlib,
plotly, altair, or bokeh figure, or a callable. If a function is provided, the
function will be called each time the app loads to set the initial value of
this component.
format: str
default `= "webp"`
File format in which to send matplotlib plots to the front end, such as 'jpg'
or 'png'.
label: str | I18nData | None
default `= None`
the label for this component. Appears above the component and is also used as
the header if there are a table of examples for this component. If None and
used in a `gr.Interface`, the label will be the name of the parameter this
component is assigned to.
every: Timer | float | None
default `= None`
Continuously calls `value` to recalculate it if `value` is a function (has no
effect otherwise). Can provide a Timer whose tick resets `value`, or a float
that provides the regular interval for the reset Timer.
inputs: Component | list[Component] | set[Component] | None
default `= None`
Components that are used as inputs to calculate `value` if `value` is a
function (has no effect otherwise). `value` is recalculated any time the
inputs change.
show_label: bool | None
default `= None`
if True, will display label.
container: bool
default `= True`
If True, will place the component in a container - providing some extra
padding around the border.
scale: int | None
default `= None`
relative size compared to adjacent Components. For example if Components A and
B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide
as B. Should be an integer. scale applies in Rows, and to top-level Components
in Blocks where fill_height=True.
min_width: int
default `= 160`
minimum pixel width, will wrap if not sufficient screen space to satisfy this
value. If a certain scale value results in this Component being narrower than
min_width, the min_width parameter will be respected first.
visible: bool | Literal['hidden']
default `= True`
If False, component will be hidden. If "hidden", component will be visually
hidden and not take up space in the layout but still exist in the DOM.
elem_id: str | None
default `= None`
An optional string that is assigned as the id of this component in the HTML
DOM. Can be used for targeting CSS styles.
elem_classes: list[str] | str | None
default `= None`
An optional list of strings that are assigned as the classes of this component
in the HTML DOM. Can be used for targeting CSS styles.
render: bool
default `= True`
If False, component will not be rendered in the Blocks context. Should
be used if the intention is to assign event listeners now but render the
component later.
key: int | str | tuple[int | str, ...] | None
default `= None`
in a gr.render, Components with the same key across re-renders are treated as
the same component, not a new component. Properties set in 'preserved_by_key'
are not reset across a re-render.
preserved_by_key: list[str] | str | None
default `= "value"`
A list of parameters from this component's constructor. Inside a gr.render()
function, if a component is re-rendered with the same key, these (and only
these) parameters will be preserved in the UI (if they have been changed by
the user or an event listener) instead of re-rendered based on the values
provided during constructor.
buttons: list[Button] | None
default `= None`
A list of gr.Button() instances to show in the top right corner of the
component. Custom buttons will appear in the toolbar with their configured
icon and/or label, and clicking them will trigger any .click() events
registered on the button.
| Initialization | https://gradio.app/docs/gradio/plot | Gradio - Plot Docs |
Shortcuts
gradio.Plot
Interface String Shortcut `"plot"`
Initialization Uses default values
| Shortcuts | https://gradio.app/docs/gradio/plot | Gradio - Plot Docs |
blocks_kinematics, stock_forecast
| Demos | https://gradio.app/docs/gradio/plot | Gradio - Plot Docs |
Description
Event listeners allow you to respond to user interactions with the UI
components you've defined in a Gradio Blocks app. When a user interacts with
an element, such as changing a slider value or uploading an image, a function
is called.
Supported Event Listeners
The Plot component supports the following event listeners. Each event listener
takes the same parameters, which are listed in the Event Parameters table
below.
Listeners
Plot.change(fn, ···)
Triggered when the value of the Plot changes either because of user input
(e.g. a user types in a textbox) OR because of a function update (e.g. an
image receives a value from the output of an event trigger). See `.input()`
for a listener that is only triggered by user input.
Event Parameters
Parameters ▼
fn: Callable | None | Literal['decorator']
default `= "decorator"`
the function to call when this event is triggered. Often a machine learning
model's prediction function. Each parameter of the function corresponds to one
input component, and the function should return a single value or a tuple of
values, with each element in the tuple corresponding to one output component.
inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None
default `= None`
List of gradio.components to use as inputs. If the function takes no inputs,
this should be an empty list.
outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None
default `= None`
List of gradio.components to use as outputs. If the function returns no
outputs, this should be an empty list.
api_name: str | None
default `= None`
defines how the endpoint appears in the API docs. Can be a string or None. If
set to a string, the endpoint will be exposed in the API docs with the given
name. If None (default), the name of the function will be used as the API
endpoint.
| Event Listeners | https://gradio.app/docs/gradio/plot | Gradio - Plot Docs |
api_description: str | None | Literal[False]
default `= None`
Description of the API endpoint. Can be a string, None, or False. If set to a
string, the endpoint will be exposed in the API docs with the given
description. If None, the function's docstring will be used as the API
endpoint description. If False, then no description will be displayed in the
API docs.
scroll_to_output: bool
default `= False`
If True, will scroll to output component on completion
show_progress: Literal['full', 'minimal', 'hidden']
default `= "full"`
how to show the progress animation while event is running: "full" shows a
spinner which covers the output component area as well as a runtime display in
the upper right corner, "minimal" only shows the runtime display, "hidden"
shows no progress animation at all
show_progress_on: Component | list[Component] | None
default `= None`
Component or list of components to show the progress animation on. If None,
will show the progress animation on all of the output components.
queue: bool
default `= True`
If True, will place the request on the queue, if the queue has been enabled.
If False, will not put this event on the queue, even if the queue has been
enabled. If None, will use the queue setting of the gradio app.
batch: bool
default `= False`
If True, then the function should process a batch of inputs, meaning that it
should accept a list of input values for each parameter. The lists should be
of equal length (and be up to length `max_batch_size`). The function is then
*required* to return a tuple of lists (even if there is only 1 output
component), with each list in the tuple corresponding to one output component.
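The batching contract can be sketched without any Gradio machinery (function and component names below are illustrative): the function takes parallel lists and must return a tuple of lists, even for a single output.

```python
def batched_lengths(texts: list[str]) -> tuple[list[int]]:
    # with batch=True, each parameter arrives as a list of up to
    # max_batch_size values; return a tuple of lists, one per output
    return ([len(t) for t in texts],)

# would be wired up along the lines of:
#   textbox.change(batched_lengths, textbox, number, batch=True, max_batch_size=8)
```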
max_batch_size: int
default `= 4`
Maximum number of inputs to batch together if this is called from the queue
(only relevant if batch=True)
preprocess: bool
default `= True`
If False, will not run preprocessing of component data before running 'fn'
(e.g. leaving it as a base64 string if this method is called with the `Image`
component).
postprocess: bool
default `= True`
If False, will not run postprocessing of component data before returning 'fn'
output to the browser.
cancels: dict[str, Any] | list[dict[str, Any]] | None
default `= None`
A list of other events to cancel when this listener is triggered. For example,
setting cancels=[click_event] will cancel the click_event, where click_event
is the return value of another component's .click() method. Functions that have
not yet run (or generators that are iterating) will be cancelled, but
functions that are currently running will be allowed to finish.
trigger_mode: Literal['once', 'multiple', 'always_last'] | None
default `= None`
If "once" (default for all events except `.change()`), no additional
submissions are allowed while an event is pending. If set to "multiple",
unlimited submissions are allowed while pending, and "always_last" (default
for `.change()` and `.key_up()` events) allows a second submission after the
pending event is complete.
js: str | Literal[True] | None
default `= None`
Optional frontend js method to run before running 'fn'. Input arguments for js
method are values of 'inputs' and 'outputs', return should be a list of values
for output components.
concurrency_limit: int | None | Literal['default']
default `= "default"`
If set, this is the maximum number of this event that can be running
simultaneously. Can be set to None to mean no concurrency_limit (any number of
this event can be running simultaneously). Set to "default" to use the default
concurrency limit (defined by the `default_concurrency_limit` parameter in
`Blocks.queue()`, which itself is 1 by default).
concurrency_id: str | None
default `= None`
If set, this is the id of the concurrency group. Events with the same
concurrency_id will be limited by the lowest set concurrency_limit.
api_visibility: Literal['public', 'private', 'undocumented']
default `= "public"`
controls the visibility and accessibility of this endpoint. Can be "public"
(shown in API docs and callable by clients), "private" (hidden from API docs
and not callable by clients), or "undocumented" (hidden from API docs but
callable by clients and via gr.load). If fn is None, api_visibility will
automatically be set to "private".
time_limit: int | None
default `= None`
stream_every: float
default `= 0.5`
key: int | str | tuple[int | str, ...] | None
default `= None`
A unique key for this event listener to be used in @gr.render(). If set, this
value identifies an event as identical across re-renders when the key is
identical.
validator: Callable | None
default `= None`
Optional validation function to run before the main function. If provided,
this function will be executed first with queue=False, and only if it
completes successfully will the main function be called. The validator
receives the same inputs as the main function and should return a
`gr.validate()` for each input value.
[Plot Component For Maps](../../guides/plot-component-for-maps/)
| Event Listeners | https://gradio.app/docs/gradio/plot | Gradio - Plot Docs |
Button that clears the value of a component or a list of components when
clicked. It is instantiated with the list of components to clear.
| Description | https://gradio.app/docs/gradio/clearbutton | Gradio - Clearbutton Docs |
**Using ClearButton as an input component.**
How ClearButton will pass its value to your function:
Type: `str | None`
(Rarely used) the `str` corresponding to the button label when the button is
clicked
Example Code
import gradio as gr

def predict(value: str | None):
    # process value from the ClearButton component
    return "prediction"

interface = gr.Interface(predict, gr.ClearButton(), gr.Textbox())
interface.launch()
**Using ClearButton as an output component**
How ClearButton expects you to return a value:
Type: `str | None`
string corresponding to the button label
Example Code
import gradio as gr

def predict(text) -> str | None:
    # process value to return to the ClearButton component
    return value

interface = gr.Interface(predict, gr.Textbox(), gr.ClearButton())
interface.launch()
| Behavior | https://gradio.app/docs/gradio/clearbutton | Gradio - Clearbutton Docs |
Parameters ▼
components: None | list[Component] | Component
default `= None`
value: str
default `= "Clear"`
default text for the button to display. If a function is provided, the
function will be called each time the app loads to set the initial value of
this component.
every: Timer | float | None
default `= None`
continuously calls `value` to recalculate it if `value` is a function (has no
effect otherwise). Can provide a Timer whose tick resets `value`, or a float
that provides the regular interval for the reset Timer.
inputs: Component | list[Component] | set[Component] | None
default `= None`
components that are used as inputs to calculate `value` if `value` is a
function (has no effect otherwise). `value` is recalculated any time the
inputs change.
variant: Literal['primary', 'secondary', 'stop', 'huggingface']
default `= "secondary"`
sets the background and text color of the button. Use 'primary' for main call-
to-action buttons, 'secondary' for a more subdued style, 'stop' for a stop
button, 'huggingface' for a black background with white text, consistent with
Hugging Face's button styles.
size: Literal['sm', 'md', 'lg']
default `= "lg"`
size of the button. Can be "sm", "md", or "lg".
icon: str | Path | None
default `= None`
URL or path to the icon file to display within the button. If None, no icon
will be displayed.
link: str | None
default `= None`
URL to open when the button is clicked. If None, no link will be used.
link_target: Literal['_self', '_blank', '_parent', '_top']
default `= "_self"`
visible: bool | Literal['hidden']
default `= True`
If False, component will be hidden. If "hidden", component will be visually
hidden and not take up space in the layout but still exist in the DOM.
interactive: bool
default `= True`
if False, the Button will be in a disabled state.
| Initialization | https://gradio.app/docs/gradio/clearbutton | Gradio - Clearbutton Docs |
elem_id: str | None
default `= None`
an optional string that is assigned as the id of this component in the HTML
DOM. Can be used for targeting CSS styles.
elem_classes: list[str] | str | None
default `= None`
an optional list of strings that are assigned as the classes of this component
in the HTML DOM. Can be used for targeting CSS styles.
render: bool
default `= True`
if False, component will not be rendered in the Blocks context. Should
be used if the intention is to assign event listeners now but render the
component later.
key: int | str | tuple[int | str, ...] | None
default `= None`
in a gr.render, Components with the same key across re-renders are treated as
the same component, not a new component. Properties set in 'preserved_by_key'
are not reset across a re-render.
preserved_by_key: list[str] | str | None
default `= "value"`
A list of parameters from this component's constructor. Inside a gr.render()
function, if a component is re-rendered with the same key, these (and only
these) parameters will be preserved in the UI (if they have been changed by
the user or an event listener) instead of re-rendered based on the values
provided during constructor.
scale: int | None
default `= None`
relative size compared to adjacent Components. For example if Components A and
B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide
as B. Should be an integer. scale applies in Rows, and to top-level Components
in Blocks where fill_height=True.
min_width: int | None
default `= None`
minimum pixel width, will wrap if not sufficient screen space to satisfy this
value. If a certain scale value results in this Component being narrower than
min_width, the min_width parameter will be respected first.
api_name: str | None
default `= None`
api_visibility: Literal['public', 'private', 'undocumented']
default `= "undocumented"`
| Initialization | https://gradio.app/docs/gradio/clearbutton | Gradio - Clearbutton Docs |
Shortcuts
gradio.ClearButton
Interface String Shortcut `"clearbutton"`
Initialization Uses default values
| Shortcuts | https://gradio.app/docs/gradio/clearbutton | Gradio - Clearbutton Docs |
Description
Event listeners allow you to respond to user interactions with the UI
components you've defined in a Gradio Blocks app. When a user interacts with
an element, such as changing a slider value or uploading an image, a function
is called.
Supported Event Listeners
The ClearButton component supports the following event listeners. Each event
listener takes the same parameters, which are listed in the Event Parameters
table below.
Listeners
ClearButton.add(fn, ···)
Adds a component or list of components to the list of components that will be
cleared when the button is clicked.
ClearButton.click(fn, ···)
Triggered when the Button is clicked.
Event Parameters
Parameters ▼
components: None | Component | list[Component]
| Event Listeners | https://gradio.app/docs/gradio/clearbutton | Gradio - Clearbutton Docs |
Creates a navigation bar component for multipage Gradio apps. The navbar
component allows customizing the appearance of the navbar for that page. Only
one Navbar component can exist per page in a Blocks app, and it can be placed
anywhere within the page.
The Navbar component is designed to control the appearance of the navigation
bar in multipage applications. When present in a Blocks app, its properties
override the default navbar behavior.
| Description | https://gradio.app/docs/gradio/navbar | Gradio - Navbar Docs |
**Using Navbar as an input component.**
How Navbar will pass its value to your function:
Type: `list[tuple[str, str]] | None`
The preprocessed input data sent to the user's function in the backend.
Example Code
import gradio as gr

def predict(value: list[tuple[str, str]] | None):
    # process value from the Navbar component
    return "prediction"

interface = gr.Interface(predict, gr.Navbar(), gr.Textbox())
interface.launch()
**Using Navbar as an output component**
How Navbar expects you to return a value:
Type: `list[tuple[str, str]] | None`
The output data received by the component from the user's function in the
backend.
Example Code
import gradio as gr

def predict(text) -> list[tuple[str, str]] | None:
    # process value to return to the Navbar component
    return value

interface = gr.Interface(predict, gr.Textbox(), gr.Navbar())
interface.launch()
| Behavior | https://gradio.app/docs/gradio/navbar | Gradio - Navbar Docs |
Parameters ▼
value: list[tuple[str, str]] | None
default `= None`
If a list of tuples of (page_name, page_path) are provided, these additional
pages will be added to the navbar alongside the existing pages defined in the
Blocks app. The page_path can be either a relative path for internal Gradio
app pages (e.g., "analytics") or an absolute URL for external links (e.g.,
"https://twitter.com/username"). Otherwise, only the pages defined using the
`Blocks.route` method will be displayed. Example: [("Dashboard", "dashboard"),
("About", "https://twitter.com/abidlabs")]
visible: bool
default `= True`
If True, the navbar will be visible. If False, the navbar will be hidden.
main_page_name: str | Literal[False]
default `= "Home"`
The title to display in the navbar for the main page of the Gradio app. If
False, the main page will not be displayed in the navbar.
elem_id: str | None
default `= None`
An optional string that is assigned as the id of this component in the HTML
DOM. Can be used for targeting CSS styles.
elem_classes: list[str] | str | None
default `= None`
An optional list of strings that are assigned as the classes of this component
in the HTML DOM. Can be used for targeting CSS styles.
render: bool
default `= True`
If False, component will not be rendered in the Blocks context. Should
be used if the intention is to assign event listeners now but render the
component later.
key: int | str | tuple[int | str, ...] | None
default `= None`
in a gr.render, Components with the same key across re-renders are treated as
the same component, not a new component.
| Initialization | https://gradio.app/docs/gradio/navbar | Gradio - Navbar Docs |
Shortcuts
gradio.Navbar
Interface String Shortcut `"navbar"`
Initialization Uses default values
| Shortcuts | https://gradio.app/docs/gradio/navbar | Gradio - Navbar Docs |
Description
Event listeners allow you to respond to user interactions with the UI
components you've defined in a Gradio Blocks app. When a user interacts with
an element, such as changing a slider value or uploading an image, a function
is called.
Supported Event Listeners
The Navbar component supports the following event listeners. Each event
listener takes the same parameters, which are listed in the Event Parameters
table below.
Listeners
Navbar.change(fn, ···)
Triggered when the value of the Navbar changes either because of user input
(e.g. a user types in a textbox) OR because of a function update (e.g. an
image receives a value from the output of an event trigger). See `.input()`
for a listener that is only triggered by user input.
Event Parameters
Parameters ▼
fn: Callable | None | Literal['decorator']
default `= "decorator"`
the function to call when this event is triggered. Often a machine learning
model's prediction function. Each parameter of the function corresponds to one
input component, and the function should return a single value or a tuple of
values, with each element in the tuple corresponding to one output component.
inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None
default `= None`
List of gradio.components to use as inputs. If the function takes no inputs,
this should be an empty list.
outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None
default `= None`
List of gradio.components to use as outputs. If the function returns no
outputs, this should be an empty list.
api_name: str | None
default `= None`
defines how the endpoint appears in the API docs. Can be a string or None. If
set to a string, the endpoint will be exposed in the API docs with the given
name. If None (default), the name of the function will be used as the API
endpoint.
api_description: str | None | Literal[False]
default `= None`
Description of the API endpoint. Can be a string, None, or False. If set to a
string, the endpoint will be exposed in the API docs with the given
description. If None, the function's docstring will be used as the API
endpoint description. If False, then no description will be displayed in the
API docs.
scroll_to_output: bool
default `= False`
If True, will scroll to output component on completion
show_progress: Literal['full', 'minimal', 'hidden']
default `= "full"`
how to show the progress animation while event is running: "full" shows a
spinner which covers the output component area as well as a runtime display in
the upper right corner, "minimal" only shows the runtime display, "hidden"
shows no progress animation at all
show_progress_on: Component | list[Component] | None
default `= None`
Component or list of components to show the progress animation on. If None,
will show the progress animation on all of the output components.
queue: bool
default `= True`
If True, will place the request on the queue, if the queue has been enabled.
If False, will not put this event on the queue, even if the queue has been
enabled. If None, will use the queue setting of the gradio app.
batch: bool
default `= False`
If True, then the function should process a batch of inputs, meaning that it
should accept a list of input values for each parameter. The lists should be
of equal length (and be up to length `max_batch_size`). The function is then
*required* to return a tuple of lists (even if there is only 1 output
component), with each list in the tuple corresponding to one output component.
max_batch_size: int
default `= 4`
Maximum number of inputs to batch together if this is called from the queue
(only relevant if batch=True)
preprocess: bool
default `= True`
If False, will not run preprocessing of component data before running 'fn'
(e.g. leaving it as a base64 string if this method is called with the `Image`
component).
postprocess: bool
default `= True`
If False, will not run postprocessing of component data before returning 'fn'
output to the browser.
cancels: dict[str, Any] | list[dict[str, Any]] | None
default `= None`
A list of other events to cancel when this listener is triggered. For example,
setting cancels=[click_event] will cancel the click_event, where click_event
is the return value of another component's .click() method. Functions that have
not yet run (or generators that are iterating) will be cancelled, but
functions that are currently running will be allowed to finish.
trigger_mode: Literal['once', 'multiple', 'always_last'] | None
default `= None`
If "once" (default for all events except `.change()`), no additional
submissions are allowed while an event is pending. If set to "multiple",
unlimited submissions are allowed while pending, and "always_last" (default
for `.change()` and `.key_up()` events) allows a second submission after the
pending event is complete.
js: str | Literal[True] | None
default `= None`
Optional frontend js method to run before running 'fn'. Input arguments for js
method are values of 'inputs' and 'outputs', return should be a list of values
for output components.
concurrency_limit: int | None | Literal['default']
default `= "default"`
If set, this is the maximum number of this event that can be running
simultaneously. Can be set to None to mean no concurrency_limit (any number of
this event can be running simultaneously). Set to "default" to use the default
concurrency limit (defined by the `default_concurrency_limit` parameter in
`Blocks.queue()`, which itself is 1 by default).
concurrency_id: str | None
default `= None`
If set, this is the id of the concurrency group. Events with the same
concurrency_id will be limited by the lowest set concurrency_limit.
api_visibility: Literal['public', 'private', 'undocumented']
default `= "public"`
controls the visibility and accessibility of this endpoint. Can be "public"
(shown in API docs and callable by clients), "private" (hidden from API docs
and not callable by clients), or "undocumented" (hidden from API docs but
callable by clients and via gr.load). If fn is None, api_visibility will
automatically be set to "private".
time_limit: int | None
default `= None`
stream_every: float
default `= 0.5`
key: int | str | tuple[int | str, ...] | None
default `= None`
A unique key for this event listener to be used in @gr.render(). If set, this
value identifies an event as identical across re-renders when the key is
identical.
validator: Callable | None
default `= None`
Optional validation function to run before the main function. If provided,
this function will be executed first with queue=False, and only if it
completes successfully will the main function be called. The validator
receives the same inputs as the main function and should return a
`gr.validate()` for each input value.
[Multipage Apps](../../guides/multipage-apps/)
| Event Listeners | https://gradio.app/docs/gradio/navbar | Gradio - Navbar Docs |
Gradio features a built-in theming engine that lets you customize the look
and feel of your app. You can choose from a variety of themes, or create your
own. To do so, pass the `theme=` kwarg to the `Blocks` or `Interface`
constructor. For example:
with gr.Blocks(theme=gr.themes.Soft()) as demo:
...
Gradio comes with a set of prebuilt themes which you can load from
`gr.themes.*`. These are:
* — `gr.themes.Base()`
* — `gr.themes.Default()`
* — `gr.themes.Glass()`
* — `gr.themes.Monochrome()`
* — `gr.themes.Soft()`
Each of these themes set values for hundreds of CSS variables. You can use
prebuilt themes as a starting point for your own custom themes, or you can
create your own themes from scratch. Let’s take a look at each approach.
| Introduction | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
The easiest way to build a theme is using the Theme Builder. To launch the
Theme Builder locally, run the following code:
import gradio as gr
gr.themes.builder()
You can use the Theme Builder running on Spaces above, though it runs much
faster when you launch it locally via `gr.themes.builder()`.
As you edit the values in the Theme Builder, the app will preview updates
in real time. You can download the code to generate the theme you’ve created
so you can use it in any Gradio app.
In the rest of the guide, we will cover building themes programmatically.
| Using the Theme Builder | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
Constructor
Although each theme has hundreds of CSS variables, the values for most of
these variables are drawn from 8 core variables which can be set through the
constructor of each prebuilt theme. Modifying these 8 arguments allows you to
quickly change the look and feel of your app.
| Extending Themes via the | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
The first 3 constructor arguments set the colors of the theme and are
`gradio.themes.Color` objects. Internally, these Color objects hold brightness
values for the palette of a single hue, ranging from 50, 100, 200…, 800, 900,
950. Other CSS variables are derived from these 3 colors.
The 3 color constructor arguments are:
* — `primary_hue`: This is the color that draws attention in your theme. In the default theme, this is set to `gradio.themes.colors.orange`.
* — `secondary_hue`: This is the color that is used for secondary elements in your theme. In the default theme, this is set to `gradio.themes.colors.blue`.
* — `neutral_hue`: This is the color that is used for text and other neutral elements in your theme. In the default theme, this is set to `gradio.themes.colors.gray`.
You could modify these values using their string shortcuts, such as
with gr.Blocks(theme=gr.themes.Default(primary_hue="red", secondary_hue="pink")) as demo:
...
or you could use the `Color` objects directly, like this:
with gr.Blocks(theme=gr.themes.Default(primary_hue=gr.themes.colors.red, secondary_hue=gr.themes.colors.pink)) as demo:
...
Predefined colors are:
* — `slate`
* — `gray`
* — `zinc`
* — `neutral`
* — `stone`
* — `red`
* — `orange`
* — `amber`
* — `yellow`
* — `lime`
* — `green`
* — `emerald`
* — `teal`
* — `cyan`
* — `sky`
* — `blue`
* — `indigo`
* — `violet`
* — `purple`
* — `fuchsia`
* — `pink`
* — `rose`
You could also create your own custom `Color` objects and pass them in.
| Core Colors | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
The next 3 constructor arguments set the sizing of the theme and are
`gradio.themes.Size` objects. Internally, these Size objects hold pixel size
values that range from `xxs` to `xxl`. Other CSS variables are derived from
these 3 sizes.
* — `spacing_size`: This sets the padding within and spacing between elements. In the default theme, this is set to `gradio.themes.sizes.spacing_md`.
* — `radius_size`: This sets the roundedness of corners of elements. In the default theme, this is set to `gradio.themes.sizes.radius_md`.
* — `text_size`: This sets the font size of text. In the default theme, this is set to `gradio.themes.sizes.text_md`.
You could modify these values using their string shortcuts, such as
with gr.Blocks(theme=gr.themes.Default(spacing_size="sm", radius_size="none")) as demo:
...
or you could use the `Size` objects directly, like this:
with gr.Blocks(theme=gr.themes.Default(spacing_size=gr.themes.sizes.spacing_sm, radius_size=gr.themes.sizes.radius_none)) as demo:
...
The predefined size objects are:
* — `radius_none`
* — `radius_sm`
* — `radius_md`
* — `radius_lg`
* — `spacing_sm`
* — `spacing_md`
* — `spacing_lg`
* — `text_sm`
* — `text_md`
* — `text_lg`
You could also create your own custom `Size` objects and pass them in.
| Core Sizing | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
The final 2 constructor arguments set the fonts of the theme. You can pass
a list of fonts to each of these arguments to specify fallbacks. If you
provide a string, it will be loaded as a system font. If you provide a
`gradio.themes.GoogleFont`, the font will be loaded from Google Fonts.
* — `font`: This sets the primary font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont("Source Sans Pro")`.
* — `font_mono`: This sets the monospace font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont("IBM Plex Mono")`.
You could modify these values such as the following:
with gr.Blocks(theme=gr.themes.Default(font=[gr.themes.GoogleFont("Inconsolata"), "Arial", "sans-serif"])) as demo:
...
| Core Fonts | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
You can also modify the values of CSS variables after the theme has been
loaded. To do so, use the `.set()` method of the theme object to get access to
the CSS variables. For example:
theme = gr.themes.Default(primary_hue="blue").set(
    loader_color="#FF0000",
    slider_color="#FF0000",
)
with gr.Blocks(theme=theme) as demo:
...
In the example above, we’ve set the `loader_color` and `slider_color`
variables to `#FF0000`, despite the overall `primary_hue` using the blue
color palette. You can set any CSS variable that is defined in the theme in
this manner.
Your IDE type hinting should help you navigate these variables. Since there
are so many CSS variables, let’s take a look at how these variables are named
and organized.
| Extending Themes via `.set()` | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
Conventions
CSS variable names can get quite long, like
`button_primary_background_fill_hover_dark`! However they follow a common
naming convention that makes it easy to understand what they do and to find
the variable you’re looking for. Separated by underscores, the variable name
is made up of:
* — 1. The target element, such as `button`, `slider`, or `block`.
* — 2. The target element type or sub-element, such as `button_primary`, or `block_label`.
* — 3. The property, such as `button_primary_background_fill`, or `block_label_border_width`.
* — 4. Any relevant state, such as `button_primary_background_fill_hover`.
* — 5. If the value is different in dark mode, the suffix `_dark`. For example, `input_border_color_focus_dark`.
Of course, many CSS variable names are shorter than this, such as
`table_border_color`, or `input_shadow`.
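To make the convention concrete, here is a small hypothetical helper (not part of Gradio's API) that assembles a variable name from these parts:

```python
def css_var_name(element, sub=None, prop=None, state=None, dark=False):
    """Assemble a theme CSS variable name following the naming convention:
    element, sub-element, property, state, and an optional `_dark` suffix."""
    parts = [element]
    for piece in (sub, prop, state):
        if piece:
            parts.append(piece)
    if dark:
        parts.append("dark")
    return "_".join(parts)

# Reconstructs the long example from the text:
print(css_var_name("button", "primary", "background_fill", "hover", dark=True))
# button_primary_background_fill_hover_dark
```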
| CSS Variable Naming | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
Though there are hundreds of CSS variables, they do not all have to have
individual values. They draw their values by referencing a set of core
variables and referencing each other. This allows us to only have to modify a
few variables to change the look and feel of the entire theme, while also
getting finer control of individual elements that we may want to modify.
Referencing Core Variables
To reference one of the core constructor variables, precede the variable
name with an asterisk. To reference a core color, use the `*primary_`,
`*secondary_`, or `*neutral_` prefix, followed by the brightness value. For
example:
theme = gr.themes.Default(primary_hue="blue").set(
    button_primary_background_fill="*primary_200",
    button_primary_background_fill_hover="*primary_300",
)
In the example above, we’ve set the `button_primary_background_fill` and
`button_primary_background_fill_hover` variables to `*primary_200` and
`*primary_300`. These variables will be set to the 200 and 300 brightness
values of the blue primary color palette, respectively.
Similarly, to reference a core size, use the `*spacing_`, `*radius_`, or
`*text_` prefix, followed by the size value. For example:
theme = gr.themes.Default(radius_size="md").set(
    button_primary_border_radius="*radius_xl",
)
In the example above, we’ve set the `button_primary_border_radius` variable
to `*radius_xl`. This variable will be set to the `xl` setting of the medium
radius size range.
| CSS Variable Organization | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
Variables can also reference each other. For example, look at the example
below:
theme = gr.themes.Default().set(
    button_primary_background_fill="#FF0000",
    button_primary_background_fill_hover="#FF0000",
    button_primary_border="#FF0000",
)
Having to set these values to a common color is a bit tedious. Instead, we
can reference the `button_primary_background_fill` variable in the
`button_primary_background_fill_hover` and `button_primary_border` variables,
using a `*` prefix.
theme = gr.themes.Default().set(
    button_primary_background_fill="#FF0000",
    button_primary_background_fill_hover="*button_primary_background_fill",
    button_primary_border="*button_primary_background_fill",
)
Now, if we change the `button_primary_background_fill` variable, the
`button_primary_background_fill_hover` and `button_primary_border` variables
will automatically update as well.
This is particularly useful if you intend to share your theme - it makes it
easy to modify the theme without having to change every variable.
Note that dark mode variables automatically reference each other. For
example:
theme = gr.themes.Default().set(
    button_primary_background_fill="#FF0000",
    button_primary_background_fill_dark="#AAAAAA",
    button_primary_border="*button_primary_background_fill",
    button_primary_border_dark="*button_primary_background_fill_dark",
)
`button_primary_border_dark` will draw its value from
`button_primary_background_fill_dark`, because dark mode variables always
draw from the dark version of the variable they reference.
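Conceptually, resolving these `*` references works like following a chain of aliases until a literal value is reached. A minimal sketch (not Gradio's internal implementation):

```python
def resolve(variables, name):
    """Follow '*'-prefixed references until a literal value is found."""
    value = variables[name]
    seen = {name}
    while isinstance(value, str) and value.startswith("*"):
        target = value[1:]
        if target in seen:
            raise ValueError(f"circular reference involving {target!r}")
        seen.add(target)
        value = variables[target]
    return value

theme_vars = {
    "button_primary_background_fill": "#FF0000",
    "button_primary_background_fill_hover": "*button_primary_background_fill",
    "button_primary_border": "*button_primary_background_fill",
}
print(resolve(theme_vars, "button_primary_border"))  # #FF0000
```

Changing `button_primary_background_fill` in `theme_vars` would change all three resolved values at once, which is the behavior the referencing mechanism provides.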
| Referencing Other Variables | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
Let’s say you want to create a theme from scratch! We’ll go through it step
by step - you can also see the source of prebuilt themes in the gradio source
repo for reference - [here’s the source](https://github.com/gradio-
app/gradio/blob/main/gradio/themes/monochrome.py) for the Monochrome theme.
Our new theme class will inherit from `gradio.themes.Base`, a theme that
sets a lot of convenient defaults. Let’s make a simple demo that creates a
dummy theme called Seafoam, and make a simple app that uses it.
$code_theme_new_step_1
The Base theme is very barebones, and uses `gr.themes.Blue` as its primary
color - you’ll note the primary button and the loading animation are both blue
as a result. Let’s change the default core arguments of our app. We’ll
overwrite the constructor and pass new defaults for the core constructor
arguments.
We’ll use `gr.themes.Emerald` as our primary color, and set secondary and
neutral hues to `gr.themes.Blue`. We’ll make our text larger using `text_lg`.
We’ll use `Quicksand` as our default font, loaded from Google Fonts.
$code_theme_new_step_2
See how the primary button and the loading animation are now green? These CSS
variables are tied to the `primary_hue` variable.
Let’s modify the theme a bit more directly. We’ll call the `set()` method to
overwrite CSS variable values explicitly. We can use any CSS logic, and
reference our core constructor arguments using the `*` prefix.
$code_theme_new_step_3
Look how fun our theme looks now! With just a few variable changes, our theme
looks completely different.
You may find it helpful to explore the [source code of the other prebuilt
themes](https://github.com/gradio-app/gradio/blob/main/gradio/themes) to see
how they modified the base theme. You can also find your browser’s Inspector
useful to select elements from the UI and see what CSS variables are being
used in the styles panel.
Sharing Themes
Once you have created a theme, you can upload it to the HuggingFace Hub to let
others view it, use it, and build off of it!
| Creating a Full Theme | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
There are two ways to upload a theme, via the theme class instance or the
command line. We will cover both of them with the previously created `seafoam`
theme.
* Via the class instance
Each theme instance has a method called `push_to_hub` we can use to upload a
theme to the HuggingFace hub.
seafoam.push_to_hub(repo_name="seafoam",
                    version="0.0.1",
                    hf_token="<token>")
* Via the command line
First save the theme to disk
seafoam.dump(filename="seafoam.json")
Then use the `upload_theme` command:
upload_theme \
  "seafoam.json" \
  "seafoam" \
  --version "0.0.1" \
  --hf_token "<token>"
In order to upload a theme, you must have a HuggingFace account and pass your
[Access Token](https://huggingface.co/docs/huggingface_hub/quick-start#login)
as the `hf_token` argument. However, if you log in via the [HuggingFace
command line](https://huggingface.co/docs/huggingface_hub/quick-start#login)
(which comes installed with `gradio`), you can omit the `hf_token` argument.
The `version` argument lets you specify a valid [semantic
version](https://www.geeksforgeeks.org/introduction-semantic-versioning/)
string for your theme. That way your users are able to specify which version
of your theme they want to use in their apps. This also lets you publish
updates to your theme without worrying about changing how previously created
apps look. The `version` argument is optional. If omitted, the next patch
version is automatically applied.
| Uploading a Theme | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
By calling `push_to_hub` or `upload_theme`, the theme assets will be stored in
a [HuggingFace space](https://huggingface.co/docs/hub/spaces-overview).
The theme preview for our seafoam theme is here: [seafoam
preview](https://huggingface.co/spaces/gradio/seafoam).
| Theme Previews | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
The [Theme Gallery](https://huggingface.co/spaces/gradio/theme-gallery) shows
all the public gradio themes. After you publish your theme, it will
automatically appear in the theme gallery within a couple of minutes.
You can sort the themes by the number of likes on the Space or from most to
least recently created, and you can toggle themes between light and dark mode.
| Discovering Themes | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
To use a theme from the hub, use the `from_hub` method on the `ThemeClass` and
pass it to your app:
my_theme = gr.Theme.from_hub("gradio/seafoam")
with gr.Blocks(theme=my_theme) as demo:
...
You can also pass the theme string directly to `Blocks` or `Interface`
(`gr.Blocks(theme="gradio/seafoam")`).
You can pin your app to an upstream theme version by using semantic versioning
expressions.
For example, the following would ensure the theme we load from the `seafoam`
repo was between versions `0.0.1` and `0.1.0`:
with gr.Blocks(theme="gradio/seafoam@>=0.0.1,<0.1.0") as demo:
...
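For illustration, the `repo@expression` string above splits into a repo name and a version expression. A hypothetical parser (not a Gradio function) might look like:

```python
def parse_theme_ref(ref: str):
    """Split a hub theme reference of the form 'owner/repo@expression'
    into (repo, version_expression); the expression is None when omitted."""
    repo, sep, expr = ref.partition("@")
    return repo, (expr if sep else None)

print(parse_theme_ref("gradio/seafoam@>=0.0.1,<0.1.0"))
# ('gradio/seafoam', '>=0.0.1,<0.1.0')
print(parse_theme_ref("gradio/seafoam"))
# ('gradio/seafoam', None)
```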
Enjoy creating your own themes! If you make one you’re proud of, please share
it with the world by uploading it to the hub! If you tag us on
[Twitter](https://twitter.com/gradio) we can give your theme a shout out!
| Downloading | https://gradio.app/docs/gradio/themes | Gradio - Themes Docs |
Creates a Dialogue component for displaying or collecting multi-speaker
conversations. This component can be used as input to allow users to enter
dialogue involving multiple speakers, or as output to display diarized speech,
such as the result of a transcription or speaker identification model. Each
message can be associated with a specific speaker, making it suitable for use
cases like conversations, interviews, or meetings.
| Description | https://gradio.app/docs/gradio/dialogue | Gradio - Dialogue Docs |
**Using Dialogue as an input component.**
How Dialogue will pass its value to your function:
Type: `str | list[dict[str, str]]`
Returns the dialogue as a string or list of dictionaries.
Example Code
import gradio as gr

def predict(value: str | list[dict[str, str]]):
    # process value from the Dialogue component
    return "prediction"

interface = gr.Interface(predict, gr.Dialogue(), gr.Textbox())
interface.launch()
**Using Dialogue as an output component**
How Dialogue expects you to return a value:
Type: `list[dict[str, str]] | str | None`
Expects a list of dictionaries of dialogue lines, where each dictionary
contains 'speaker' and 'text' keys, or a single string.
Example Code
import gradio as gr

def predict(text) -> list[dict[str, str]] | str | None:
    # process text into dialogue lines for the Dialogue component
    return [{"speaker": "Speaker 1", "text": text}]

interface = gr.Interface(predict, gr.Textbox(), gr.Dialogue())
interface.launch()
| Behavior | https://gradio.app/docs/gradio/dialogue | Gradio - Dialogue Docs |
Parameters ▼
value: list[dict[str, str]] | Callable | None
default `= None`
Value of the dialogue. It is a list of dictionaries, each containing a
'speaker' key and a 'text' key. If a function is provided, the function will
be called each time the app loads to set the initial value of this component.
type: Literal['list', 'text']
default `= "text"`
The type of the component, either "list" for a multi-speaker dialogue
consisting of dictionaries with 'speaker' and 'text' keys or "text" for a
single text input. Defaults to "text".
speakers: list[str] | None
default `= None`
The different speakers allowed in the dialogue. If `None` or an empty list, no
speakers will be displayed. Instead, the component will be a standard textarea
that optionally supports `tags` autocompletion.
formatter: Callable | None
default `= None`
A function that formats the dialogue line dictionary, e.g. {"speaker":
"Speaker 1", "text": "Hello, how are you?"} into a string, e.g. "Speaker 1:
Hello, how are you?". This function is run on user input and the resulting
string is passed into the prediction function.
unformatter: Callable | None
default `= None`
A function that parses a formatted dialogue string back into a dialogue line
dictionary. Should take a single string line and return a dictionary with
'speaker' and 'text' keys. If not provided, the default unformatter will
attempt to parse the default formatter pattern.
tags: list[str] | None
default `= None`
The different tags allowed in the dialogue. Tags are displayed in an
autocomplete menu below the input textbox when the user starts typing `:`. Use
the exact tag name expected by the AI model or inference function.
separator: str
default `= " "`
The separator between the different dialogue lines used to join the formatted
dialogue lines into a single string. It should be unambiguous. For example, a
newline character or tab character.
color_map: dict[str, str] | None
default `= None`
A dictionary mapping speaker names to colors. The colors may be specified as
hex codes or by their names. For example: {"Speaker 1": "red", "Speaker 2":
"#FFEE22"}. If not provided, default colors will be assigned to speakers. This
is only used if `interactive` is False.
label: str | None
default `= "Dialogue"`
the label for this component, displayed above the component if `show_label` is
`True`. It is also used as the header if there is a table of examples for
this component. If None and used in a `gr.Interface`, the label will be the
name of the parameter this component corresponds to.
info: str | None
default `= "Type colon (:) in the dialogue line to see the available tags"`
placeholder: str | None
default `= None`
placeholder hint to provide behind textarea.
show_label: bool | None
default `= None`
if True, will display the label. If False, the copy button is hidden as well
as the label.
container: bool
default `= True`
if True, will place the component in a container - providing some extra
padding around the border.
scale: int | None
default `= None`
relative size compared to adjacent Components. For example if Components A and
B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide
as B. Should be an integer. scale applies in Rows, and to top-level Components
in Blocks where fill_height=True.
min_width: int
default `= 160`
minimum pixel width, will wrap if not sufficient screen space to satisfy this
value. If a certain scale value results in this Component being narrower than
min_width, the min_width parameter will be respected first.
interactive: bool | None
default `= None`
if True, will be rendered as an editable textbox; if False, editing will be
disabled. If not provided, this is inferred based on whether the component is
used as an input or output.
visible: bool | Literal['hidden']
default `= True`
If False, component will be hidden. If "hidden", component will be visually
hidden and not take up space in the layout but still exist in the DOM.
elem_id: str | None
default `= None`
An optional string that is assigned as the id of this component in the HTML
DOM. Can be used for targeting CSS styles.
autofocus: bool
default `= False`
If True, will focus on the textbox when the page loads. Use this carefully, as
it can cause usability issues for sighted and non-sighted users.
autoscroll: bool
default `= True`
If True, will automatically scroll to the bottom of the textbox when the value
changes, unless the user scrolls up. If False, will not scroll to the bottom
of the textbox when the value changes.
elem_classes: list[str] | str | None
default `= None`
An optional list of strings that are assigned as the classes of this component
in the HTML DOM. Can be used for targeting CSS styles.
render: bool
default `= True`
If False, component will not be rendered in the Blocks context. Should
be used if the intention is to assign event listeners now but render the
component later.
key: int | str | None
default `= None`
if assigned, will be used to assume identity across a re-render. Components
that have the same key across a re-render will have their value preserved.
max_lines: int | None
default `= None`
maximum number of lines allowed in the dialogue.
buttons: list[Literal['copy'] | Button] | None
default `= None`
A list of buttons to show for the component. Valid options are "copy" or a
gr.Button() instance. The "copy" button allows the user to copy the text in
the textbox. Custom gr.Button() instances will appear in the toolbar with
their configured icon and/or label, and clicking them will trigger any
.click() events registered on the button. By default, no buttons are shown.
submit_btn: str | bool | None
default `= False`
If False, will not show a submit button. If True, will show a submit button
with an icon. If a string, will use that string as the submit button text.
ui_mode: Literal['dialogue', 'text', 'both']
default `= "both"`
Determines the user interface mode of the component. Can be "dialogue"
(displays dialogue lines), "text" (displays a single text input), or "both"
(displays both dialogue lines and a text input). Defaults to "both".
| Initialization | https://gradio.app/docs/gradio/dialogue | Gradio - Dialogue Docs |
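As an illustration of the `formatter`/`unformatter` parameters described above, a matching pair for the "Speaker: text" pattern might look like this (a sketch, not the component's built-in defaults):

```python
def formatter(line: dict[str, str]) -> str:
    """Format a dialogue-line dict into 'Speaker: text'."""
    return f"{line['speaker']}: {line['text']}"

def unformatter(formatted: str) -> dict[str, str]:
    """Parse a 'Speaker: text' line back into a dialogue-line dict."""
    speaker, _, text = formatted.partition(": ")
    return {"speaker": speaker, "text": text}

line = {"speaker": "Speaker 1", "text": "Hello, how are you?"}
assert formatter(line) == "Speaker 1: Hello, how are you?"
assert unformatter(formatter(line)) == line  # round-trips cleanly
```

Both functions would be passed to the constructor, e.g. `gr.Dialogue(formatter=formatter, unformatter=unformatter)`.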
Shortcuts
Class: gradio.Dialogue
Interface String Shortcut: `"dialogue"`
Initialization: Uses default values
| Shortcuts | https://gradio.app/docs/gradio/dialogue | Gradio - Dialogue Docs |
dia_dialogue_demo
| Demos | https://gradio.app/docs/gradio/dialogue | Gradio - Dialogue Docs |
Description
Event listeners allow you to respond to user interactions with the UI
components you've defined in a Gradio Blocks app. When a user interacts with
an element, such as changing a slider value or uploading an image, a function
is called.
Supported Event Listeners
The Dialogue component supports the following event listeners. Each event
listener takes the same parameters, which are listed in the Event Parameters
table below.
Listeners
Dialogue.change(fn, ···)
Triggered when the value of the Dialogue changes either because of user input
(e.g. a user types in a textbox) OR because of a function update (e.g. an
image receives a value from the output of an event trigger). See `.input()`
for a listener that is only triggered by user input.
Dialogue.input(fn, ···)
This listener is triggered when the user changes the value of the Dialogue.
Dialogue.submit(fn, ···)
This listener is triggered when the user presses the Enter key while the
Dialogue is focused.
Event Parameters
Parameters ▼
fn: Callable | None | Literal['decorator']
default `= "decorator"`
the function to call when this event is triggered. Often a machine learning
model's prediction function. Each parameter of the function corresponds to one
input component, and the function should return a single value or a tuple of
values, with each element in the tuple corresponding to one output component.
inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None
default `= None`
List of gradio.components to use as inputs. If the function takes no inputs,
this should be an empty list.
outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None
default `= None`
List of gradio.components to use as outputs. If the function returns no
outputs, this should be an empty list.
api_name: str | None
default `= None`
defines how the endpoint appears in the API docs. Can be a string or None. If
set to a string, the endpoint will be exposed in the API docs with the given
name. If None (default), the name of the function will be used as the API
endpoint.
api_description: str | None | Literal[False]
default `= None`
Description of the API endpoint. Can be a string, None, or False. If set to a
string, the endpoint will be exposed in the API docs with the given
description. If None, the function's docstring will be used as the API
endpoint description. If False, then no description will be displayed in the
API docs.
scroll_to_output: bool
default `= False`
If True, will scroll to output component on completion
show_progress: Literal['full', 'minimal', 'hidden']
default `= "full"`
how to show the progress animation while event is running: "full" shows a
spinner which covers the output component area as well as a runtime display in
the upper right corner, "minimal" only shows the runtime display, "hidden"
shows no progress animation at all
show_progress_on: Component | list[Component] | None
default `= None`
Component or list of components to show the progress animation on. If None,
will show the progress animation on all of the output components.
queue: bool
default `= True`
If True, will place the request on the queue, if the queue has been enabled.
If False, will not put this event on the queue, even if the queue has been
enabled. If None, will use the queue setting of the gradio app.
batch: bool
default `= False`
If True, then the function should process a batch of inputs, meaning that it
should accept a list of input values for each parameter. The lists should be
of equal length (and be up to length `max_batch_size`). The function is then
*required* to return a tuple of lists (even if there is only 1 output
component), with each list in the tuple corresponding to one output component.
max_batch_size: int
default `= 4`
Maximum number of inputs to batch together if this is called from the queue
(only relevant if batch=True)
preprocess: bool
default `= True`
If False, will not run preprocessing of component data before running 'fn'
(e.g. leaving it as a base64 string if this method is called with the `Image`
component).
postprocess: bool
default `= True`
If False, will not run postprocessing of component data before returning 'fn'
output to the browser.
cancels: dict[str, Any] | list[dict[str, Any]] | None
default `= None`
A list of other events to cancel when this listener is triggered. For example,
setting cancels=[click_event] will cancel the click_event, where click_event
is the return value of another components .click method. Functions that have
not yet run (or generators that are iterating) will be cancelled, but
functions that are currently running will be allowed to finish.
trigger_mode: Literal['once', 'multiple', 'always_last'] | None
default `= None`
If "once" (default for all events except `.change()`) would not allow any
submissions while an event is pending. If set to "multiple", unlimited
submissions are allowed while pending, and "always_last" (default for
`.change()` and `.key_up()` events) would allow a second submission after the
pending event is complete.
js: str | Literal[True] | None
default `= None`
Optional frontend js method to run before running 'fn'. Input arguments for js
method are values of 'inputs' and 'outputs', return should be a list of values
for output components.
concurrency_limit: int | None | Literal['default']
default `= "default"`
If set, this is the maximum number of this event that can be running
simultaneously. Can be set to None to mean no concurrency_limit (any number of
this event can be running simultaneously). Set to "default" to use the default
concurrency limit (defined by the `default_concurrency_limit` parameter in
`Blocks.queue()`, which itself is 1 by default).
concurrency_id: str | None
default `= None`
If set, this is the id of the concurrency group. Events with the same
concurrency_id will be limited by the lowest set concurrency_limit.
api_visibility: Literal['public', 'private', 'undocumented']
default `= "public"`
controls the visibility and accessibility of this endpoint. Can be "public"
(shown in API docs and callable by clients), "private" (hidden from API docs
and not callable by clients), or "undocumented" (hidden from API docs but
callable by clients and via gr.load). If fn is None, api_visibility will
automatically be set to "private".
time_limit: int | None
default `= None`
stream_every: float
default `= 0.5`
key: int | str | tuple[int | str, ...] | None
default `= None`
A unique key for this event listener to be used in @gr.render(). If set, this
value identifies an event as identical across re-renders when the key is
identical.
validator: Callable | None
default `= None`
Optional validation function to run before the main function. If provided,
this function will be executed first with queue=False, and only if it
completes successfully will the main function be called. The validator
receives the same inputs as the main function and should return a
`gr.validate()` for each input value.
| Event Listeners | https://gradio.app/docs/gradio/dialogue | Gradio - Dialogue Docs |
**Prerequisite**: Gradio requires [Python 3.10 or higher](https://www.python.org/downloads/).
We recommend installing Gradio using `pip`, which is included by default in Python. Run this in your terminal or command prompt:
```bash
pip install --upgrade gradio
```
Tip: It is best to install Gradio in a virtual environment. Detailed installation instructions for all common operating systems <a href="https://www.gradio.app/main/guides/installing-gradio-in-a-virtual-environment">are provided here</a>.
| Installation | https://gradio.app/guides/quickstart | Getting Started - Quickstart Guide |
You can run Gradio in your favorite code editor, Jupyter notebook, Google Colab, or anywhere else you write Python. Let's write your first Gradio app:
$code_hello_world_4
Tip: We shorten the imported name from <code>gradio</code> to <code>gr</code>. This is a widely adopted convention for better readability of code.
Now, run your code. If you've written the Python code in a file named `app.py`, then you would run `python app.py` from the terminal.
The demo below will open in a browser on [http://localhost:7860](http://localhost:7860) if running from a file. If you are running within a notebook, the demo will appear embedded within the notebook.
$demo_hello_world_4
Type your name in the textbox on the left, drag the slider, and then press the Submit button. You should see a friendly greeting on the right.
Tip: When developing locally, you can run your Gradio app in <strong>hot reload mode</strong>, which automatically reloads the Gradio app whenever you make changes to the file. To do this, simply type in <code>gradio</code> before the name of the file instead of <code>python</code>. In the example above, you would type: `gradio app.py` in your terminal. You can also enable <strong>vibe mode</strong> by using the <code>--vibe</code> flag, e.g. <code>gradio --vibe app.py</code>, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. Learn more in the <a href="https://www.gradio.app/guides/developing-faster-with-reload-mode">Hot Reloading Guide</a>.
**Understanding the `Interface` Class**
You'll notice that in order to make your first demo, you created an instance of the `gr.Interface` class. The `Interface` class is designed to create demos for machine learning models which accept one or more inputs, and return one or more outputs.
The `Interface` class has three core arguments:
- `fn`: the function to wrap a user interface (UI) around
- `inputs`: the Gradio component(s) to use for the input. The number of components should match the number of arguments in your function.
- `outputs`: the Gradio component(s) to use for the output. The number of components should match the number of return values from your function.
The `fn` argument is very flexible -- you can pass *any* Python function that you want to wrap with a UI. In the example above, we saw a relatively simple function, but the function could be anything from a music generator to a tax calculator to the prediction function of a pretrained machine learning model.
The `inputs` and `outputs` arguments take one or more Gradio components. As we'll see, Gradio includes more than [30 built-in components](https://www.gradio.app/docs/gradio/introduction) (such as the `gr.Textbox()`, `gr.Image()`, and `gr.HTML()` components) that are designed for machine learning applications.
Tip: For the `inputs` and `outputs` arguments, you can pass in the name of these components as a string (`"textbox"`) or an instance of the class (`gr.Textbox()`).
If your function accepts more than one argument, as is the case above, pass a list of input components to `inputs`, with each input component corresponding to one of the arguments of the function, in order. The same holds true if your function returns more than one value: simply pass in a list of components to `outputs`. This flexibility makes the `Interface` class a very powerful way to create demos.
We'll dive deeper into the `gr.Interface` on our series on [building Interfaces](https://www.gradio.app/main/guides/the-interface-class).
| Building Your First Demo | https://gradio.app/guides/quickstart | Getting Started - Quickstart Guide |
What good is a beautiful demo if you can't share it? Gradio lets you easily share a machine learning demo without having to worry about the hassle of hosting on a web server. Simply set `share=True` in `launch()`, and a publicly accessible URL will be created for your demo. Let's revisit our example demo, but change the last line as follows:
```python
import gradio as gr
def greet(name):
return "Hello " + name + "!"
demo = gr.Interface(fn=greet, inputs="textbox", outputs="textbox")
demo.launch(share=True)  # Share your demo with just 1 extra parameter 🚀
```
When you run this code, a public URL will be generated for your demo in a matter of seconds, something like:
👉 `https://a23dsf231adb.gradio.live`
Now, anyone around the world can try your Gradio demo from their browser, while the machine learning model and all computation continues to run locally on your computer.
To learn more about sharing your demo, read our dedicated guide on [sharing your Gradio application](https://www.gradio.app/guides/sharing-your-app).
| Sharing Your Demo | https://gradio.app/guides/quickstart | Getting Started - Quickstart Guide |
So far, we've been discussing the `Interface` class, which is a high-level class that lets you build demos quickly with Gradio. But what else does Gradio include?
Custom Demos with `gr.Blocks`
Gradio offers a low-level approach for designing web apps with more customizable layouts and data flows with the `gr.Blocks` class. Blocks supports things like controlling where components appear on the page, handling multiple data flows and more complex interactions (e.g. outputs can serve as inputs to other functions), and updating properties/visibility of components based on user interaction — still all in Python.
You can build very custom and complex applications using `gr.Blocks()`. For example, the popular image generation [Automatic1111 Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is built using Gradio Blocks. We dive deeper into the `gr.Blocks` on our series on [building with Blocks](https://www.gradio.app/guides/blocks-and-event-listeners).
Chatbots with `gr.ChatInterface`
Gradio includes another high-level class, `gr.ChatInterface`, which is specifically designed to create Chatbot UIs. Similar to `Interface`, you supply a function and Gradio creates a fully working Chatbot UI. If you're interested in creating a chatbot, you can jump straight to [our dedicated guide on `gr.ChatInterface`](https://www.gradio.app/guides/creating-a-chatbot-fast).
The Gradio Python & JavaScript Ecosystem
That's the gist of the core `gradio` Python library, but Gradio is actually so much more! It's an entire ecosystem of Python and JavaScript libraries that let you build machine learning applications, or query them programmatically, in Python or JavaScript. Here are other related parts of the Gradio ecosystem:
* [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) (`gradio_client`): query any Gradio app programmatically in Python.
* [Gradio JavaScript Client](https://www.gradio.app/guides/getting-started-with-the-js-client) (`@gradio/client`): query any Gradio app programmatically in JavaScript.
* [Hugging Face Spaces](https://huggingface.co/spaces): the most popular place to host Gradio applications — for free!
| An Overview of Gradio | https://gradio.app/guides/quickstart | Getting Started - Quickstart Guide |
Keep learning about Gradio sequentially using the Gradio Guides, which include explanations as well as example code and embedded interactive demos. Next up: [let's dive deeper into the Interface class](https://www.gradio.app/guides/the-interface-class).
Or, if you already know the basics and are looking for something specific, you can search the more [technical API documentation](https://www.gradio.app/docs/).
| What's Next? | https://gradio.app/guides/quickstart | Getting Started - Quickstart Guide |
This guide explains how you can run background tasks from your gradio app.
Background tasks are operations that you'd like to perform outside the request-response
lifecycle of your app either once or on a periodic schedule.
Examples of background tasks include periodically synchronizing data to an external database or
sending a report of model predictions via email.
| Introduction | https://gradio.app/guides/running-background-tasks | Other Tutorials - Running Background Tasks Guide |
We will be creating a simple "Google-forms-style" application to gather feedback from users of the gradio library.
We will use a local sqlite database to store our data, but we will periodically synchronize the state of the database
with a [HuggingFace Dataset](https://huggingface.co/datasets) so that our user reviews are always backed up.
The synchronization will happen in a background task running every 60 seconds.
At the end of the demo, you'll have a fully working application like this one:
<gradio-app space="freddyaboulton/gradio-google-forms"> </gradio-app>
| Overview | https://gradio.app/guides/running-background-tasks | Other Tutorials - Running Background Tasks Guide |
Our application will store the name of the reviewer, their rating of gradio on a scale of 1 to 5, as well as
any comments they want to share about the library. Let's write some code that creates a database table to
store this data. We'll also write some functions to insert a review into that table and fetch the latest 10 reviews.
We're going to use the `sqlite3` library to connect to our sqlite database but gradio will work with any library.
The code will look like this:
```python
import sqlite3
import pandas as pd

DB_FILE = "./reviews.db"
db = sqlite3.connect(DB_FILE)

# Create table if it doesn't already exist
try:
    db.execute("SELECT * FROM reviews").fetchall()
    db.close()
except sqlite3.OperationalError:
    db.execute(
        '''
        CREATE TABLE reviews (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
                              created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
                              name TEXT, review INTEGER, comments TEXT)
        ''')
    db.commit()
    db.close()

def get_latest_reviews(db: sqlite3.Connection):
    reviews = db.execute("SELECT * FROM reviews ORDER BY id DESC LIMIT 10").fetchall()
    total_reviews = db.execute("SELECT COUNT(id) FROM reviews").fetchone()[0]
    reviews = pd.DataFrame(reviews, columns=["id", "date_created", "name", "review", "comments"])
    return reviews, total_reviews

def add_review(name: str, review: int, comments: str):
    db = sqlite3.connect(DB_FILE)
    cursor = db.cursor()
    cursor.execute("INSERT INTO reviews(name, review, comments) VALUES(?,?,?)", [name, review, comments])
    db.commit()
    reviews, total_reviews = get_latest_reviews(db)
    db.close()
    return reviews, total_reviews
```
Let's also write a function to load the latest reviews when the gradio application loads:
```python
def load_data():
    db = sqlite3.connect(DB_FILE)
    reviews, total_reviews = get_latest_reviews(db)
    db.close()
    return reviews, total_reviews
```
| Step 1 - Write your database logic 💾 | https://gradio.app/guides/running-background-tasks | Other Tutorials - Running Background Tasks Guide |
Now that we have our database logic defined, we can use gradio to create a dynamic web page to ask our users for feedback!
```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            name = gr.Textbox(label="Name", placeholder="What is your name?")
            review = gr.Radio(label="How satisfied are you with using gradio?", choices=[1, 2, 3, 4, 5])
            comments = gr.Textbox(label="Comments", lines=10, placeholder="Do you have any feedback on gradio?")
            submit = gr.Button(value="Submit Feedback")
        with gr.Column():
            data = gr.Dataframe(label="Most recently created 10 rows")
            count = gr.Number(label="Total number of reviews")
    submit.click(add_review, [name, review, comments], [data, count])
    demo.load(load_data, None, [data, count])
```
| Step 2 - Create a gradio app ⚡ | https://gradio.app/guides/running-background-tasks | Other Tutorials - Running Background Tasks Guide |
We could call `demo.launch()` after step 2 and have a fully functioning application. However,
our data would be stored locally on our machine. If the sqlite file were accidentally deleted, we'd lose all of our reviews!
Let's back up our data to a dataset on the HuggingFace hub.
Create a dataset [here](https://huggingface.co/datasets) before proceeding.
Now at the **top** of our script, we'll use the [huggingface hub client library](https://huggingface.co/docs/huggingface_hub/index)
to connect to our dataset and pull the latest backup.
```python
import os
import shutil
import huggingface_hub

TOKEN = os.environ.get('HUB_TOKEN')
repo = huggingface_hub.Repository(
    local_dir="data",
    repo_type="dataset",
    clone_from="<name-of-your-dataset>",
    use_auth_token=TOKEN
)
repo.git_pull()

shutil.copyfile("./data/reviews.db", DB_FILE)
```
Note that you'll have to get an access token from the "Settings" tab of your HuggingFace account for the above code to work.
In the script, the token is securely accessed via an environment variable.
Now we will create a background task to sync our local database to the dataset hub every 60 seconds.
We will use the [Advanced Python Scheduler (APScheduler)](https://apscheduler.readthedocs.io/en/3.x/) to handle the scheduling.
However, this is not the only task scheduling library available. Feel free to use whatever you are comfortable with.
The function to back up our data will look like this:
```python
import shutil
import sqlite3
import datetime

import pandas as pd
from apscheduler.schedulers.background import BackgroundScheduler

def backup_db():
    shutil.copyfile(DB_FILE, "./data/reviews.db")
    db = sqlite3.connect(DB_FILE)
    reviews = db.execute("SELECT * FROM reviews").fetchall()
    pd.DataFrame(reviews).to_csv("./data/reviews.csv", index=False)
    print("updating db")
    repo.push_to_hub(blocking=False, commit_message=f"Updating data at {datetime.datetime.now()}")
scheduler = BackgroundScheduler()
scheduler.add_job(func=backup_db, trigger="interval", seconds=60)
scheduler.start()
```
| Step 3 - Synchronize with HuggingFace Datasets 🤗 | https://gradio.app/guides/running-background-tasks | Other Tutorials - Running Background Tasks Guide |
You can use the HuggingFace [Spaces](https://huggingface.co/spaces) platform to deploy this application for free ✨
If you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations).
You will have to add the `HUB_TOKEN` environment variable as a secret in your Space's settings.
| Step 4 (Bonus) - Deployment to HuggingFace Spaces | https://gradio.app/guides/running-background-tasks | Other Tutorials - Running Background Tasks Guide |
Congratulations! You know how to run background tasks from your gradio app on a schedule ⏲️.
Check out the application running on Spaces [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms).
The complete code is [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms/blob/main/app.py).
| Conclusion | https://gradio.app/guides/running-background-tasks | Other Tutorials - Running Background Tasks Guide |
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from autonomous vehicles to medical imaging.
Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like the demo on the bottom of the page.
Let's get started!
Prerequisites
Make sure you have the `gradio` Python package already [installed](/getting_started). We will be using a pretrained image classification model, so you should also have `torch` installed.
| Introduction | https://gradio.app/guides/image-classification-in-pytorch | Other Tutorials - Image Classification In Pytorch Guide |
First, we will need an image classification model. For this tutorial, we will use a pretrained Resnet-18 model, as it is easily downloadable from [PyTorch Hub](https://pytorch.org/hub/pytorch_vision_resnet/). You can use a different pretrained model or train your own.
```python
import torch
model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).eval()
```
Because we will be using the model for inference, we have called the `.eval()` method.
| Step 1 — Setting up the Image Classification Model | https://gradio.app/guides/image-classification-in-pytorch | Other Tutorials - Image Classification In Pytorch Guide |
Next, we will need to define a function that takes in the _user input_, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).
In the case of our pretrained model, it will look like this:
```python
import requests
import torch
from PIL import Image
from torchvision import transforms

# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")

def predict(inp):
    inp = transforms.ToTensor()(inp).unsqueeze(0)
    with torch.no_grad():
        prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
        confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
    return confidences
```
Let's break this down. The function takes one parameter:
- `inp`: the input image as a `PIL` image
Then, the function converts the `PIL` image to a PyTorch `tensor`, adds a batch dimension, passes it through the model, and returns:
- `confidences`: the predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities
| Step 2 — Defining a `predict` function | https://gradio.app/guides/image-classification-in-pytorch | Other Tutorials - Image Classification In Pytorch Guide |
Now that we have our predictive function set up, we can create a Gradio Interface around it.
In this case, the input component is a drag-and-drop image component. To create this input, we use `Image(type="pil")` which creates the component and handles the preprocessing to convert that to a `PIL` image.
The output component will be a `Label`, which displays the top labels in a nice form. Since we don't want to show all 1,000 class labels, we will customize it to show only the top 3 labels by constructing it as `Label(num_top_classes=3)`.
Finally, we'll add one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples. The code for Gradio looks like this:
```python
import gradio as gr
gr.Interface(fn=predict,
             inputs=gr.Image(type="pil"),
             outputs=gr.Label(num_top_classes=3),
             examples=["lion.jpg", "cheetah.jpg"]).launch()
```
This produces the following interface, which you can try right here in your browser (try uploading your own examples!):
<gradio-app space="gradio/pytorch-image-classifier"> </gradio-app>
---
And you're done! That's all the code you need to build a web demo for an image classifier. If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!
| Step 3 — Creating a Gradio Interface | https://gradio.app/guides/image-classification-in-pytorch | Other Tutorials - Image Classification In Pytorch Guide |
First of all, we need some data to visualize. Following this [excellent guide](https://supabase.com/blog/loading-data-supabase-python), we'll create fake commerce data and put it in Supabase.
1\. Start by creating a new project in Supabase. Once you're logged in, click the "New Project" button
2\. Give your project a name and database password. You can also choose a pricing plan (for our purposes, the Free Tier is sufficient!)
3\. You'll be presented with your API keys while the database spins up (can take up to 2 minutes).
4\. Click on "Table Editor" (the table icon) in the left pane to create a new table. We'll create a single table called `Product`, with the following schema:
<center>
<table>
<tr><td>product_id</td><td>int8</td></tr>
<tr><td>inventory_count</td><td>int8</td></tr>
<tr><td>price</td><td>float8</td></tr>
<tr><td>product_name</td><td>varchar</td></tr>
</table>
</center>
5\. Click Save to save the table schema.
Our table is now ready!
| Create a table in Supabase | https://gradio.app/guides/creating-a-dashboard-from-supabase-data | Other Tutorials - Creating A Dashboard From Supabase Data Guide |
The next step is to write data to a Supabase dataset. We will use the Supabase Python library to do this.
6\. Install `supabase` by running the following command in your terminal:
```bash
pip install supabase
```
7\. Get your project URL and API key. Click the Settings (gear icon) on the left pane and click 'API'. The URL is listed in the Project URL box, while the API key is listed in Project API keys (with the tags `service_role`, `secret`)
8\. Now, run the following Python script to write some fake data to the table (note you have to put the values of `SUPABASE_URL` and `SUPABASE_SECRET_KEY` from step 7):
```python
import random
import supabase

# Initialize the Supabase client
client = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')

# Define the data to write
main_list = []
for i in range(10):
    value = {'product_id': i,
             'product_name': f"Item {i}",
             'inventory_count': random.randint(1, 100),
             'price': random.random() * 100}
    main_list.append(value)

# Write the data to the table
data = client.table('Product').insert(main_list).execute()
```
Return to your Supabase dashboard and refresh the page; you should now see 10 rows populated in the `Product` table!
| Write data to Supabase | https://gradio.app/guides/creating-a-dashboard-from-supabase-data | Other Tutorials - Creating A Dashboard From Supabase Data Guide |
Finally, we will read the data from the Supabase dataset using the same `supabase` Python library and create a realtime dashboard using `gradio`.
Note: We repeat certain steps in this section (like creating the Supabase client) in case you did not go through the previous sections. As described in Step 7, you will need the project URL and API Key for your database.
9\. Write a function that loads the data from the `Product` table and returns it as a pandas Dataframe:
```python
import supabase
import pandas as pd

client = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')

def read_data():
    response = client.table('Product').select("*").execute()
    df = pd.DataFrame(response.data)
    return df
```
10\. Create a small Gradio Dashboard with 2 Barplots that plots the prices and inventories of all of the items every minute and updates in real-time:
```python
import gradio as gr
with gr.Blocks() as dashboard:
    with gr.Row():
        gr.BarPlot(read_data, x="product_id", y="price", title="Prices", every=gr.Timer(60))
        gr.BarPlot(read_data, x="product_id", y="inventory_count", title="Inventory", every=gr.Timer(60))

dashboard.queue().launch()
```
Notice that by passing in a function to `gr.BarPlot()`, we have the BarPlot query the database as soon as the web app loads (and then again every 60 seconds because of the `every` parameter). Your final dashboard should look something like this:
<gradio-app space="gradio/supabase"></gradio-app>
| Visualize the Data in a Real-Time Gradio Dashboard | https://gradio.app/guides/creating-a-dashboard-from-supabase-data | Other Tutorials - Creating A Dashboard From Supabase Data Guide |
That's it! In this tutorial, you learned how to write data to a Supabase dataset, and then read that data and plot the results as bar plots. If you update the data in the Supabase database, you'll notice that the Gradio dashboard will update within a minute.
Try adding more plots and visualizations to this example (or with a different dataset) to build a more complex dashboard!
| Conclusion | https://gradio.app/guides/creating-a-dashboard-from-supabase-data | Other Tutorials - Creating A Dashboard From Supabase Data Guide |
3D models are becoming more popular in machine learning and make for some of the most fun demos to experiment with. Using `gradio`, you can easily build a demo of your 3D image model and share it with anyone. The Gradio 3D Model component accepts 3 file types including: _.obj_, _.glb_, & _.gltf_.
This guide will show you how to build a demo for your 3D image model in a few lines of code; like the one below. Play around with the 3D object by clicking, dragging, and zooming:
<gradio-app space="gradio/Model3D"> </gradio-app>
Prerequisites
Make sure you have the `gradio` Python package already [installed](https://gradio.app/guides/quickstart).
| Introduction | https://gradio.app/guides/how-to-use-3D-model-component | Other Tutorials - How To Use 3D Model Component Guide |
Let's take a look at how to create the minimal interface above. The prediction function in this case will just return the original 3D model mesh, but you can change this function to run inference on your machine learning model. We'll take a look at more complex examples below.
```python
import gradio as gr
import os
def load_mesh(mesh_file_name):
    return mesh_file_name

demo = gr.Interface(
    fn=load_mesh,
    inputs=gr.Model3D(),
    outputs=gr.Model3D(
        clear_color=[0.0, 0.0, 0.0, 0.0], label="3D Model"),
    examples=[
        [os.path.join(os.path.dirname(__file__), "files/Bunny.obj")],
        [os.path.join(os.path.dirname(__file__), "files/Duck.glb")],
        [os.path.join(os.path.dirname(__file__), "files/Fox.gltf")],
        [os.path.join(os.path.dirname(__file__), "files/face.obj")],
    ],
)

if __name__ == "__main__":
    demo.launch()
```
Let's break down the code above:
`load_mesh`: This is our 'prediction' function and for simplicity, this function will take in the 3D model mesh and return it.
Creating the Interface:
- `fn`: the prediction function that is used when the user clicks submit. In our case this is the `load_mesh` function.
- `inputs`: create a Model3D input component. The input expects an uploaded file as a `str` filepath.
- `outputs`: create a Model3D output component. The output component also expects a file as a `str` filepath.
- `clear_color`: this is the background color of the 3D model canvas. Expects RGBa values.
- `label`: the label that appears on the top left of the component.
- `examples`: list of 3D model files. The 3D model component can accept _.obj_, _.glb_, & _.gltf_ file types.
- `cache_examples`: saves the predicted output for the examples, to save time on inference.
| Taking a Look at the Code | https://gradio.app/guides/how-to-use-3D-model-component | Other Tutorials - How To Use 3D Model Component Guide |
Below is a demo that uses the DPT model to predict the depth of an image and then uses 3D Point Cloud to create a 3D object. Take a look at the [app.py](https://huggingface.co/spaces/gradio/dpt-depth-estimation-3d-obj/blob/main/app.py) file for a peek into the code and the model prediction function.
<gradio-app space="gradio/dpt-depth-estimation-3d-obj"> </gradio-app>
---
And you're done! That's all the code you need to build an interface for your Model3D model. Here are some references that you may find useful:
- Gradio's ["Getting Started" guide](https://gradio.app/getting_started/)
- The first [3D Model Demo](https://huggingface.co/spaces/gradio/Model3D) and [complete code](https://huggingface.co/spaces/gradio/Model3D/tree/main) (on Hugging Face Spaces)
| Exploring a more complex Model3D Demo: | https://gradio.app/guides/how-to-use-3D-model-component | Other Tutorials - How To Use 3D Model Component Guide |
It seems that cryptocurrencies, [NFTs](https://www.nytimes.com/interactive/2022/03/18/technology/nft-guide.html), and the web3 movement are all the rage these days! Digital assets are being listed on marketplaces for astounding amounts of money, and just about every celebrity is debuting their own NFT collection. While your crypto assets [may be taxable, such as in Canada](https://www.canada.ca/en/revenue-agency/programs/about-canada-revenue-agency-cra/compliance/digital-currency/cryptocurrency-guide.html), today we'll explore some fun and tax-free ways to generate your own assortment of procedurally generated [CryptoPunks](https://www.larvalabs.com/cryptopunks).
Generative Adversarial Networks, often known just as _GANs_, are a specific class of deep-learning models that are designed to learn from an input dataset to create (_generate!_) new material that is convincingly similar to elements of the original training set. Famously, the website [thispersondoesnotexist.com](https://thispersondoesnotexist.com/) went viral with lifelike, yet synthetic, images of people generated with a model called StyleGAN2. GANs have gained traction in the machine learning world, and are now being used to generate all sorts of images, text, and even [music](https://salu133445.github.io/musegan/)!
Today we'll briefly look at the high-level intuition behind GANs, and then we'll build a small demo around a pre-trained GAN to see what all the fuss is about. Here's a [peek](https://nimaboscarino-cryptopunks.hf.space) at what we're going to be putting together.
Prerequisites
Make sure you have the `gradio` Python package already [installed](/getting_started). To use the pretrained model, also install `torch` and `torchvision`.
| Introduction | https://gradio.app/guides/create-your-own-friends-with-a-gan | Other Tutorials - Create Your Own Friends With A Gan Guide |
Originally proposed in [Goodfellow et al. 2014](https://arxiv.org/abs/1406.2661), GANs are made up of neural networks which compete with the intention of outsmarting each other. One network, known as the _generator_, is responsible for generating images. The other network, the _discriminator_, receives an image at a time from the generator along with a **real** image from the training data set. The discriminator then has to guess: which image is the fake?
The generator is constantly training to create images which are trickier for the discriminator to identify, while the discriminator raises the bar for the generator every time it correctly detects a fake. As the networks engage in this competitive (_adversarial!_) relationship, the images that get generated improve to the point where they become indistinguishable to human eyes!
For a more in-depth look at GANs, you can take a look at [this excellent post on Analytics Vidhya](https://www.analyticsvidhya.com/blog/2021/06/a-detailed-explanation-of-gan-with-implementation-using-tensorflow-and-keras/) or this [PyTorch tutorial](https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html). For now, though, we'll dive into a demo!
| GANs: a very brief introduction | https://gradio.app/guides/create-your-own-friends-with-a-gan | Other Tutorials - Create Your Own Friends With A Gan Guide |
To generate new images with a GAN, you only need the generator model. There are many different architectures that the generator could use, but for this demo we'll use a pretrained GAN generator model with the following architecture:
```python
from torch import nn
class Generator(nn.Module):
    # Refer to the link below for explanations about nc, nz, and ngf
    # https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html#inputs
def __init__(self, nc=4, nz=100, ngf=64):
super(Generator, self).__init__()
self.network = nn.Sequential(
nn.ConvTranspose2d(nz, ngf * 4, 3, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 0, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
nn.Tanh(),
)
def forward(self, input):
output = self.network(input)
return output
```
We're taking the generator from [this repo by @teddykoker](https://github.com/teddykoker/cryptopunks-gan/blob/main/train.py#L90), where you can also see the original discriminator model structure.
After instantiating the model, we'll load in the weights from the Hugging Face Hub, stored at [nateraw/cryptopunks-gan](https://huggingface.co/nateraw/cryptopunks-gan):
```python
from huggingface_hub import hf_hub_download
import torch
model = Generator()
weights_path = hf_hub_download('nateraw/cryptopunks-gan', 'generator.pth')
model.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu')))  # Use 'cuda' if you have a GPU available
```
| Step 1 — Create the Generator model | https://gradio.app/guides/create-your-own-friends-with-a-gan | Other Tutorials - Create Your Own Friends With A Gan Guide |
The `predict` function is the key to making Gradio work! Whatever inputs we choose through the Gradio interface will get passed through our `predict` function, which should operate on the inputs and generate outputs that we can display with Gradio output components. For GANs it's common to pass random noise into our model as the input, so we'll generate a tensor of random numbers and pass that through the model. We can then use `torchvision`'s `save_image` function to save the output of the model as a `png` file, and return the file name:
```python
from torchvision.utils import save_image
def predict(seed):
num_punks = 4
torch.manual_seed(seed)
z = torch.randn(num_punks, 100, 1, 1)
punks = model(z)
save_image(punks, "punks.png", normalize=True)
return 'punks.png'
```
We're giving our `predict` function a `seed` parameter, so that we can fix the random tensor generation with a seed. We'll then be able to reproduce punks if we want to see them again by passing in the same seed.
_Note!_ Our model needs an input tensor of dimensions 100x1x1 to do a single inference, or (BatchSize)x100x1x1 for generating a batch of images. In this demo we'll start by generating 4 punks at a time.
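As a sanity check, you can trace how that 1x1 spatial input grows through the generator's four `ConvTranspose2d` layers using the standard output-size formula `(in - 1) * stride - 2 * padding + kernel`:

```python
def conv_transpose_out(size, kernel, stride, padding):
    """Spatial output size of ConvTranspose2d (output_padding=0, dilation=1)."""
    return (size - 1) * stride - 2 * padding + kernel

size = 1  # the 100x1x1 latent vector has a 1x1 spatial extent
# (kernel, stride, padding) for each layer of the Generator above
for kernel, stride, padding in [(3, 1, 0), (3, 2, 1), (4, 2, 0), (4, 2, 1)]:
    size = conv_transpose_out(size, kernel, stride, padding)

print(size)  # 24 -- matching the 24x24 resolution of the original CryptoPunks
```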
| Step 2 — Defining a `predict` function | https://gradio.app/guides/create-your-own-friends-with-a-gan | Other Tutorials - Create Your Own Friends With A Gan Guide |
At this point you can even run the code you have with `predict(<SOME_NUMBER>)`, and you'll find your freshly generated punks in your file system at `./punks.png`. To make a truly interactive demo, though, we'll build out a simple interface with Gradio. Our goals here are to:
- Set a slider input so users can choose the "seed" value
- Use an image component for our output to showcase the generated punks
- Use our `predict()` to take the seed and generate the images
With `gr.Interface()`, we can define all of that with a single function call:
```python
import gradio as gr
gr.Interface(
predict,
inputs=[
        gr.Slider(0, 1000, label='Seed', value=42),
],
outputs="image",
).launch()
```
| Step 3 — Creating a Gradio interface | https://gradio.app/guides/create-your-own-friends-with-a-gan | Other Tutorials - Create Your Own Friends With A Gan Guide |
Generating 4 punks at a time is a good start, but maybe we'd like to control how many we want to make each time. Adding more inputs to our Gradio interface is as simple as adding another item to the `inputs` list that we pass to `gr.Interface`:
```python
gr.Interface(
predict,
inputs=[
        gr.Slider(0, 1000, label='Seed', value=42),
        gr.Slider(4, 64, label='Number of Punks', step=1, value=10),  # Adding another slider!
],
outputs="image",
).launch()
```
The new input will be passed to our `predict()` function, so we have to make some changes to that function to accept a new parameter:
```python
def predict(seed, num_punks):
torch.manual_seed(seed)
z = torch.randn(num_punks, 100, 1, 1)
punks = model(z)
save_image(punks, "punks.png", normalize=True)
return 'punks.png'
```
When you relaunch your interface, you should see a second slider that'll let you control the number of punks!
| Step 4 — Even more punks! | https://gradio.app/guides/create-your-own-friends-with-a-gan | Other Tutorials - Create Your Own Friends With A Gan Guide |
Your Gradio app is pretty much good to go, but you can add a few extra things to really make it ready for the spotlight ✨
We can add some examples that users can easily try out by adding this to the `gr.Interface`:
```python
gr.Interface(
...
    # keep everything as it is, and then add
    examples=[[123, 15], [42, 29], [456, 8], [1337, 35]],
    cache_examples=True,  # caching example outputs is optional
).launch()
```
The `examples` parameter takes a list of lists, where each item in the sublists is ordered in the same order that we've listed the `inputs`. So in our case, `[seed, num_punks]`. Give it a try!
You can also try adding a `title`, `description`, and `article` to the `gr.Interface`. Each of those parameters accepts a string, so try it out and see what happens 👀 `article` will also accept HTML, as [explored in a previous guide](/guides/key-features/descriptive-content)!
When you're all done, you may end up with something like [this](https://nimaboscarino-cryptopunks.hf.space).
For reference, here is our full code:
```python
import torch
from torch import nn
from huggingface_hub import hf_hub_download
from torchvision.utils import save_image
import gradio as gr
class Generator(nn.Module):
    # Refer to the link below for explanations about nc, nz, and ngf
    # https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html#inputs
def __init__(self, nc=4, nz=100, ngf=64):
super(Generator, self).__init__()
self.network = nn.Sequential(
nn.ConvTranspose2d(nz, ngf * 4, 3, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 0, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
nn.Tanh(),
)
def forward(self, input):
output = self.network(input)
return output
model = Generator()
weights_path = hf_hub_download('nateraw/cryptopunks-gan', 'generator.pth')
model.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu')))  # Use 'cuda' if you have a GPU available
def predict(seed, num_punks):
torch.manual_seed(seed)
z = torch.randn(num_punks, 100, 1, 1)
punks = model(z)
save_image(punks, "punks.png", normalize=True)
return 'punks.png'
gr.Interface(
predict,
inputs=[
        gr.Slider(0, 1000, label='Seed', value=42),
        gr.Slider(4, 64, label='Number of Punks', step=1, value=10),
],
outputs="image",
    examples=[[123, 15], [42, 29], [456, 8], [1337, 35]],
    cache_examples=True,
).launch()
```
---
Congratulations! You've built out your very own GAN-powered CryptoPunks generator, with a fancy Gradio interface that makes it easy for anyone to use. Now you can [scour the Hub for more GANs](https://huggingface.co/models?other=gan) (or train your own) and continue making even more awesome demos 🤗
| Step 5 - Polishing it up | https://gradio.app/guides/create-your-own-friends-with-a-gan | Other Tutorials - Create Your Own Friends With A Gan Guide |
In this Guide, we'll walk you through:
- Introduction of ONNX, ONNX model zoo, Gradio, and Hugging Face Spaces
- How to setup a Gradio demo for EfficientNet-Lite4
- How to contribute your own Gradio demos for the ONNX organization on Hugging Face
Here's an [example](https://onnx-efficientnet-lite4.hf.space/) of an ONNX model.
| Introduction | https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face | Other Tutorials - Gradio And Onnx On Hugging Face Guide |
Open Neural Network Exchange ([ONNX](https://onnx.ai/)) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools. For example, if you have trained a model in TensorFlow or PyTorch, you can convert it to ONNX easily, and from there run it on a variety of devices using an engine/compiler like ONNX Runtime.
The [ONNX Model Zoo](https://github.com/onnx/models) is a collection of pre-trained, state-of-the-art models in the ONNX format contributed by community members. Accompanying each model are Jupyter notebooks for model training and running inference with the trained model. The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture.
| What is the ONNX Model Zoo? | https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face | Other Tutorials - Gradio And Onnx On Hugging Face Guide |
Gradio
Gradio lets users demo their machine learning models as a web app, all in Python code. Gradio wraps a Python function into a user interface, and the demos can be launched inside Jupyter notebooks and Colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.
Get started [here](https://gradio.app/getting_started)
Hugging Face Spaces
Hugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit, and static HTML demos. Spaces can be public or private, and the workflow is similar to GitHub repos. There are over 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).
Hugging Face Models
The Hugging Face Model Hub also supports ONNX models, which can be filtered through the [ONNX tag](https://huggingface.co/models?library=onnx&sort=downloads).
| What are Hugging Face Spaces & Gradio? | https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face | Other Tutorials - Gradio And Onnx On Hugging Face Guide |
There are a lot of Jupyter notebooks in the ONNX Model Zoo for users to test models. Previously, users needed to download the models themselves and run those notebooks locally for testing. With Hugging Face, the testing process can be much simpler and more user-friendly. Users can easily try an ONNX Model Zoo model on Hugging Face Spaces and run a quick demo powered by Gradio with ONNX Runtime, all in the cloud without downloading anything locally. Note that there are various runtimes for ONNX, e.g., [ONNX Runtime](https://github.com/microsoft/onnxruntime) and [MXNet](https://github.com/apache/incubator-mxnet).
| How did Hugging Face help the ONNX Model Zoo? | https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face | Other Tutorials - Gradio And Onnx On Hugging Face Guide |
ONNX Runtime is a cross-platform inference and training machine-learning accelerator. It makes live Gradio demos with ONNX Model Zoo models on Hugging Face possible.
ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. For more information please see the [official website](https://onnxruntime.ai/).
| What is the role of ONNX Runtime? | https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face | Other Tutorials - Gradio And Onnx On Hugging Face Guide |
EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU. To learn more, read the [model card](https://github.com/onnx/models/tree/main/vision/classification/efficientnet-lite4).
Here we walk through setting up an example demo for EfficientNet-Lite4 using Gradio.
First we import our dependencies, then download and load the efficientnet-lite4 model from the ONNX Model Zoo, and load the labels from the labels_map.txt file. We then set up our preprocessing functions, load the model for inference, and set up the inference function. Finally, the inference function is wrapped into a Gradio interface for a user to interact with. See the full code below.
```python
import numpy as np
import math
import matplotlib.pyplot as plt
import cv2
import json
import gradio as gr
from huggingface_hub import hf_hub_download
from onnx import hub
import onnxruntime as ort
# loads ONNX model from ONNX Model Zoo
model = hub.load("efficientnet-lite4")
# loads the labels text file
labels = json.load(open("labels_map.txt", "r"))
# sets image file dimensions to 224x224 by resizing and cropping image from center
def pre_process_edgetpu(img, dims):
output_height, output_width, _ = dims
img = resize_with_aspectratio(img, output_height, output_width, inter_pol=cv2.INTER_LINEAR)
img = center_crop(img, output_height, output_width)
img = np.asarray(img, dtype='float32')
    # converts jpg pixel value from [0 - 255] to float array [-1.0 - 1.0]
img -= [127.0, 127.0, 127.0]
img /= [128.0, 128.0, 128.0]
return img
# resizes the image with a proportional scale
def resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):
height, width, _ = img.shape
new_height = int(100. * out_height / scale)
new_width = int(100. * out_width / scale)
if height > width:
w = new_width
h = int(new_height * height / width)
else:
h = new_height
w = int(new_width * width / height)
img = cv2.resize(img, (w, h), interpolation=inter_pol)
return img
# crops the image around the center based on given height and width
def center_crop(img, out_height, out_width):
height, width, _ = img.shape
left = int((width - out_width) / 2)
right = int((width + out_width) / 2)
top = int((height - out_height) / 2)
bottom = int((height + out_height) / 2)
img = img[top:bottom, left:right]
return img
sess = ort.InferenceSession(model.SerializeToString())  # InferenceSession expects a path or serialized model bytes
def inference(img):
img = cv2.imread(img)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = pre_process_edgetpu(img, (224, 224, 3))
img_batch = np.expand_dims(img, axis=0)
results = sess.run(["Softmax:0"], {"images:0": img_batch})[0]
result = reversed(results[0].argsort()[-5:])
resultdic = {}
for r in result:
resultdic[labels[str(r)]] = float(results[0][r])
return resultdic
title = "EfficientNet-Lite4"
description = "EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite model. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU."
examples = [['catonnx.jpg']]
gr.Interface(inference, gr.Image(type="filepath"), "label", title=title, description=description, examples=examples).launch()
```
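One detail worth noting: `scale=87.5` in `resize_with_aspectratio` implements the classic ImageNet "resize, then center-crop" recipe, where the shorter side is resized so that the final 224x224 crop covers 87.5% of it:

```python
# For a 224x224 crop, scale=87.5 resizes the shorter side to 256 pixels,
# the conventional ImageNet preprocessing ratio (224 / 256 = 0.875).
out_size = 224
scale = 87.5
resized_side = int(100.0 * out_size / scale)
print(resized_side)  # 256
```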
| Setting up a Gradio Demo for EfficientNet-Lite4 | https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face | Other Tutorials - Gradio And Onnx On Hugging Face Guide |
| Setting up a Gradio Demo for EfficientNet-Lite4 | https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face | Other Tutorials - Gradio And Onnx On Hugging Face Guide |
- Add model to the [onnx model zoo](https://github.com/onnx/models/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
- Create an account on Hugging Face [here](https://huggingface.co/join).
- To see the list of models left to add to the ONNX organization, please refer to the table in the [Models list](https://github.com/onnx/models#models)
- Add Gradio Demo under your username, see this [blog post](https://huggingface.co/blog/gradio-spaces) for setting up Gradio Demo on Hugging Face.
- Request to join ONNX Organization [here](https://huggingface.co/onnx).
- Once approved, transfer the model from your username to the ONNX organization
- Add a badge for the model in the model table; see examples in the [Models list](https://github.com/onnx/models#models)
| How to contribute Gradio demos on HF spaces using ONNX models | https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face | Other Tutorials - Gradio And Onnx On Hugging Face Guide |
Encoder functions to send audio as base64-encoded data and images as base64-encoded JPEG.
```python
import base64
import numpy as np
from io import BytesIO
from PIL import Image
def encode_audio(data: np.ndarray) -> dict:
"""Encode audio data (int16 mono) for Gemini."""
return {
"mime_type": "audio/pcm",
"data": base64.b64encode(data.tobytes()).decode("UTF-8"),
}
def encode_image(data: np.ndarray) -> dict:
with BytesIO() as output_bytes:
pil_image = Image.fromarray(data)
pil_image.save(output_bytes, "JPEG")
bytes_data = output_bytes.getvalue()
base64_str = str(base64.b64encode(bytes_data), "utf-8")
return {"mime_type": "image/jpeg", "data": base64_str}
```
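A quick round-trip check confirms that the audio encoder is lossless (the JPEG image path, by contrast, is lossy by design). This just re-states `encode_audio` from above and decodes the payload back:

```python
import base64
import numpy as np

def encode_audio(data: np.ndarray) -> dict:
    """Encode audio data (int16 mono) as base64 PCM."""
    return {
        "mime_type": "audio/pcm",
        "data": base64.b64encode(data.tobytes()).decode("UTF-8"),
    }

# A short synthetic int16 mono signal standing in for real microphone audio
audio = (np.sin(np.linspace(0, 1, 16000)) * 32767).astype(np.int16)
payload = encode_audio(audio)

# Decoding the base64 string recovers the exact PCM bytes
decoded = np.frombuffer(base64.b64decode(payload["data"]), dtype=np.int16)
assert np.array_equal(decoded, audio)
```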
| 1) Encoders for audio and images | https://gradio.app/guides/create-immersive-demo | Other Tutorials - Create Immersive Demo Guide |