Add files using upload-large-folder tool
- _sources_indexrsttxt_2addc090.txt +5 -0
- analytics_sentry_cce7534d.txt +5 -0
- analytics_sentry_d2812956.txt +5 -0
- android_introduction_58957849.txt +5 -0
- android_introduction_8283d024.txt +5 -0
- audio_audio-buffer-processor_904d4c23.txt +5 -0
- audio_audio-buffer-processor_bbd8a26b.txt +5 -0
- audio_audio-buffer-processor_ec358c5f.txt +5 -0
- audio_krisp-filter_48a7e00f.txt +5 -0
- audio_krisp-filter_af1a17f9.txt +5 -0
- audio_krisp-filter_e2c509bd.txt +5 -0
- audio_noisereduce-filter_a57b6720.txt +5 -0
- audio_noisereduce-filter_e38294d3.txt +5 -0
- audio_silero-vad-analyzer_4295c585.txt +5 -0
- audio_silero-vad-analyzer_95efee77.txt +5 -0
- audio_soundfile-mixer_2cc2bb00.txt +5 -0
- audio_soundfile-mixer_7adf2889.txt +5 -0
- audio_soundfile-mixer_8700f49a.txt +5 -0
- base-classes_media_b6dde063.txt +5 -0
- c_transport_59ca9f73.txt +5 -0
- client_introduction_1c027bef.txt +5 -0
- client_introduction_c9b73d79.txt +5 -0
- client_rtvi-standard_065571ef.txt +5 -0
- client_rtvi-standard_4cc2f2cb.txt +5 -0
- client_rtvi-standard_5425fbb5.txt +5 -0
- client_rtvi-standard_ee7dc446.txt +5 -0
- daily_rest-helpers_07e70cfd.txt +5 -0
- daily_rest-helpers_35407073.txt +5 -0
- daily_rest-helpers_40141281.txt +5 -0
- daily_rest-helpers_4c97fee6.txt +5 -0
- daily_rest-helpers_a9d99269.txt +5 -0
- daily_rest-helpers_cbb5a2ed.txt +5 -0
- daily_rest-helpers_df8e58ba.txt +5 -0
- daily_rest-helpers_e36053a2.txt +5 -0
- daily_rest-helpers_e67003ac.txt +5 -0
- daily_rest-helpers_f7ab8d86.txt +5 -0
- deployment_cerebrium_31600fa3.txt +5 -0
- deployment_cerebrium_53a507d6.txt +5 -0
- deployment_modal_f35ace44.txt +5 -0
- deployment_pattern_786babb2.txt +5 -0
- deployment_pattern_81874897.txt +5 -0
- deployment_pattern_a1fae09a.txt +5 -0
- deployment_pipecat-cloud_0dc09447.txt +5 -0
- deployment_pipecat-cloud_90216d23.txt +5 -0
- deployment_pipecat-cloud_fb17bfdb.txt +5 -0
- deployment_wwwflyio_db5d82f0.txt +5 -0
- features_gemini-multimodal-live_3e9df7b7.txt +5 -0
- features_gemini-multimodal-live_a62f1d14.txt +5 -0
- features_gemini-multimodal-live_bef94131.txt +5 -0
- features_gemini-multimodal-live_daa6fbbb.txt +5 -0
_sources_indexrsttxt_2addc090.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://docs.pipecat.ai/server/links/_sources/index.rst.txt#how-it-works
+Title: Overview - Pipecat
+==================================================
+
+Pipecat is an open source Python framework that handles the complex orchestration of AI services, network transport, audio processing, and multimodal interactions. "Multimodal" means you can use any combination of audio, video, images, and/or text in your interactions. And "real-time" means that things are happening quickly enough that it feels conversational: a back-and-forth with a bot, not submitting a query and waiting for results. What You Can Build: Voice Assistants (natural, real-time conversations with AI using speech recognition and synthesis); Interactive Agents (personal coaches and meeting assistants that can understand context and provide guidance); Multimodal Apps (applications that combine voice, video, images, and text for rich interactions); Creative Tools (storytelling experiences and social companions that engage users); Business Solutions (customer intake flows and support bots for automated business processes); Complex Flows (structured conversations using Pipecat Flows for managing complex interactions). How It Works: The flow of interactions in a Pipecat application is typically straightforward: the bot says something, the user says something, the bot says something, the user says something. This continues until the conversation naturally ends. While this flow seems simple, making it feel natural requires sophisticated real-time processing. Real-time Processing: Pipecat's pipeline architecture handles both simple voice interactions and complex multimodal processing. In a voice app, data flows through the system as follows: 1. Send Audio: transmit and capture streamed audio from the user. 2. Transcribe Speech: convert speech to text as the user is talking. 3. Process with LLM: generate responses using a large language model. 4. Convert to Speech: transform text responses into natural speech. 5. Play Audio: stream the audio response back to the user. In a multimodal app: 1. Send Audio and Video: transmit and capture audio, video, and image inputs simultaneously. 2. Process Streams: handle multiple input streams in parallel. 3. Model Processing: send combined inputs to multimodal models (like GPT-4V). 4. Generate Outputs: create various outputs (text, images, audio, etc.). 5. Coordinate Presentation: synchronize and present multiple output types. In both cases, Pipecat processes responses as they stream in, handles multiple input/output modalities concurrently, manages resource allocation and synchronization, and coordinates parallel processing tasks. This architecture creates fluid, natural interactions without noticeable delays, whether you're building a simple voice assistant or a complex multimodal application. Pipecat's pipeline architecture is particularly valuable for managing the complexity of real-time, multimodal interactions, ensuring smooth data flow and proper synchronization regardless of the input/output types involved. Pipecat handles all this complexity for you, letting you focus on building your application rather than managing the underlying infrastructure. Next Steps: Ready to build your first Pipecat application? Installation & Setup: prepare your environment and install required dependencies. Quickstart: build and run your first Pipecat application. Core Concepts: learn about pipelines, frames, and real-time processing. Use Cases: explore example implementations and patterns. Join Our Community: the Discord community is where you can connect with other developers, share your projects, and get support from the Pipecat team.
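The five voice-app steps above can be sketched as a chain of streaming stages. This is a toy illustration of how each stage consumes upstream results as they arrive, not Pipecat's actual API; the stage and service names here are made up for the sketch.

```python
import asyncio

# Toy stand-ins for real services; each stage consumes an async
# stream and yields results downstream as soon as they are ready.
async def mic():                         # 1. send audio
    for chunk in ["hello", "world"]:
        yield chunk

async def transcribe(audio_chunks):      # 2. transcribe speech
    async for chunk in audio_chunks:
        yield f"text({chunk})"

async def llm(transcripts):              # 3. process with LLM
    async for text in transcripts:
        yield f"reply({text})"

async def tts(replies):                  # 4. convert to speech
    async for reply in replies:
        yield f"speech({reply})"

async def main():
    played = []
    async for audio in tts(llm(transcribe(mic()))):
        played.append(audio)             # 5. play audio
    return played

print(asyncio.run(main()))
```

Because every stage is a generator, the first reply can start playing before the last audio chunk has even been captured, which is the point of the pipeline design described above.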
analytics_sentry_cce7534d.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://docs.pipecat.ai/server/services/analytics/sentry#notes
+Title: Sentry Metrics - Pipecat
+==================================================
+
+Overview: SentryMetrics extends FrameProcessorMetrics to provide performance monitoring integration with Sentry. It tracks Time to First Byte (TTFB) and processing duration metrics for frame processors. Installation: To use Sentry metrics, install the Sentry SDK: pip install "pipecat-ai[sentry]". Configuration: Sentry must be initialized in your application before metrics will be collected: import sentry_sdk; sentry_sdk.init(dsn="your-sentry-dsn", traces_sample_rate=1.0). Usage Example: import os; import sentry_sdk; from pipecat.audio.vad.silero import SileroVADAnalyzer; from pipecat.pipeline.pipeline import Pipeline; from pipecat.services.openai.llm import OpenAILLMService; from pipecat.services.elevenlabs.tts import ElevenLabsTTSService; from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext; from pipecat.processors.metrics.sentry import SentryMetrics; from pipecat.transports.services.daily import DailyParams, DailyTransport. async def create_metrics_pipeline(): sentry_sdk.init(dsn="your-sentry-dsn", traces_sample_rate=1.0); transport = DailyTransport(room_url, token, "Chatbot", DailyParams(audio_out_enabled=True, audio_in_enabled=True, video_out_enabled=False, vad_analyzer=SileroVADAnalyzer(), transcription_enabled=True)); tts = ElevenLabsTTSService(api_key=os.getenv("ELEVENLABS_API_KEY"), metrics=SentryMetrics()); llm = OpenAILLMService(api_key=os.getenv("OPENAI_API_KEY"), model="gpt-4o", metrics=SentryMetrics()); messages = [{"role": "system", "content": "You are Chatbot, a friendly, helpful robot. Your goal is to demonstrate your capabilities in a succinct way. Your output will be converted to audio so don't include special characters in your answers. Respond to what the user said in a creative and helpful way, but keep your responses brief. Start by introducing yourself. Keep all your responses to 12 words or fewer."}]; context = OpenAILLMContext(messages); context_aggregator = llm.create_context_aggregator(context); pipeline = Pipeline([transport.input(), context_aggregator.user(), llm, tts, transport.output(), context_aggregator.assistant()]). Transaction Information: Each transaction includes the operation type (ttfb or processing), a description with the processor name, the start timestamp, the end timestamp, and a unique transaction ID. Fallback Behavior: If Sentry is not available (not installed or not initialized), warning logs are generated, metric methods execute without error, and no data is sent to Sentry. Notes: Requires the Sentry SDK to be installed and initialized; thread-safe metric collection; automatic transaction management; supports selective TTFB reporting; integrates with Sentry's performance monitoring; provides detailed timing information; maintains timing data even when Sentry is unavailable.
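The fallback behavior described above (warn and no-op when Sentry is missing) is easy to picture with a small guard. This is a conceptual sketch, not Pipecat's internals; `SafeMetrics` and `SENTRY_AVAILABLE` are illustrative names.

```python
import logging

logger = logging.getLogger("metrics")

# Detect the SDK once at import time; everything downstream degrades gracefully.
try:
    import sentry_sdk  # noqa: F401
    SENTRY_AVAILABLE = True
except ImportError:
    SENTRY_AVAILABLE = False

class SafeMetrics:
    """Collect timings; ship them to Sentry only when it is available."""

    def start_ttfb(self, processor: str) -> None:
        if not SENTRY_AVAILABLE:
            # Warn and return without error, matching the documented fallback.
            logger.warning("Sentry unavailable; %s TTFB not reported", processor)
            return
        # A real implementation would start a Sentry transaction here.

metrics = SafeMetrics()
metrics.start_ttfb("ElevenLabsTTSService")  # safe either way
```

The key property is that metric calls never raise just because observability is missing, so the pipeline keeps running unchanged.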
analytics_sentry_d2812956.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://docs.pipecat.ai/server/services/analytics/sentry
+Title: Sentry Metrics - Pipecat
+==================================================
+
+Overview: SentryMetrics extends FrameProcessorMetrics to provide performance monitoring integration with Sentry. It tracks Time to First Byte (TTFB) and processing duration metrics for frame processors. Installation: To use Sentry metrics, install the Sentry SDK: pip install "pipecat-ai[sentry]". Configuration: Sentry must be initialized in your application before metrics will be collected: import sentry_sdk; sentry_sdk.init(dsn="your-sentry-dsn", traces_sample_rate=1.0). Usage Example: import os; import sentry_sdk; from pipecat.audio.vad.silero import SileroVADAnalyzer; from pipecat.pipeline.pipeline import Pipeline; from pipecat.services.openai.llm import OpenAILLMService; from pipecat.services.elevenlabs.tts import ElevenLabsTTSService; from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext; from pipecat.processors.metrics.sentry import SentryMetrics; from pipecat.transports.services.daily import DailyParams, DailyTransport. async def create_metrics_pipeline(): sentry_sdk.init(dsn="your-sentry-dsn", traces_sample_rate=1.0); transport = DailyTransport(room_url, token, "Chatbot", DailyParams(audio_out_enabled=True, audio_in_enabled=True, video_out_enabled=False, vad_analyzer=SileroVADAnalyzer(), transcription_enabled=True)); tts = ElevenLabsTTSService(api_key=os.getenv("ELEVENLABS_API_KEY"), metrics=SentryMetrics()); llm = OpenAILLMService(api_key=os.getenv("OPENAI_API_KEY"), model="gpt-4o", metrics=SentryMetrics()); messages = [{"role": "system", "content": "You are Chatbot, a friendly, helpful robot. Your goal is to demonstrate your capabilities in a succinct way. Your output will be converted to audio so don't include special characters in your answers. Respond to what the user said in a creative and helpful way, but keep your responses brief. Start by introducing yourself. Keep all your responses to 12 words or fewer."}]; context = OpenAILLMContext(messages); context_aggregator = llm.create_context_aggregator(context); pipeline = Pipeline([transport.input(), context_aggregator.user(), llm, tts, transport.output(), context_aggregator.assistant()]). Transaction Information: Each transaction includes the operation type (ttfb or processing), a description with the processor name, the start timestamp, the end timestamp, and a unique transaction ID. Fallback Behavior: If Sentry is not available (not installed or not initialized), warning logs are generated, metric methods execute without error, and no data is sent to Sentry. Notes: Requires the Sentry SDK to be installed and initialized; thread-safe metric collection; automatic transaction management; supports selective TTFB reporting; integrates with Sentry's performance monitoring; provides detailed timing information; maintains timing data even when Sentry is unavailable.
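The transaction fields listed above (operation type, processor description, start/end timestamps, unique ID) can be modeled as a small record produced by a timing context manager. This is a conceptual sketch of the data, not Sentry's actual transaction model.

```python
import time
import uuid
from contextlib import contextmanager
from dataclasses import dataclass, field

@dataclass
class Transaction:
    op: str                      # "ttfb" or "processing"
    description: str             # the processor name
    start: float = 0.0
    end: float = 0.0
    # Unique transaction ID, generated per measurement.
    id: str = field(default_factory=lambda: uuid.uuid4().hex)

@contextmanager
def transaction(op: str, processor: str):
    """Time a block of work and record it as a Transaction."""
    txn = Transaction(op=op, description=processor)
    txn.start = time.monotonic()
    try:
        yield txn
    finally:
        txn.end = time.monotonic()

with transaction("processing", "OpenAILLMService") as txn:
    pass  # the work being measured goes here
```

Wrapping the measurement in `finally` guarantees the end timestamp is recorded even if the processor raises, mirroring the "maintains timing data" note above.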
android_introduction_58957849.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://docs.pipecat.ai/client/android/introduction#documentation
+Title: SDK Introduction - Pipecat
+==================================================
+
+The Pipecat Android SDK provides a Kotlin implementation for building voice and multimodal AI applications on Android. It handles: real-time audio and video streaming; bot communication and state management; media device handling; configuration management; event handling. Installation: Add the dependency for your chosen transport to your build.gradle file. For example, to use the Daily transport: implementation "ai.pipecat:daily-transport:0.3.3". Example: Here's a simple example using Daily as the transport layer. Note that the clientConfig is optional and depends on what is required by the bot backend. val clientConfig = listOf(ServiceConfig(service = "llm", options = listOf(Option("model", "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"), Option("messages", Value.Array(Value.Object("role" to Value.Str("system"), "content" to Value.Str("You are a helpful assistant.")))))), ServiceConfig(service = "tts", options = listOf(Option("voice", "79a125e8-cd45-4c13-8a67-188112f4dd22")))); val callbacks = object : RTVIEventCallbacks() { override fun onBackendError(message: String) { Log.e(TAG, "Error from backend: $message") } }; val options = RTVIClientOptions(services = listOf(ServiceRegistration("llm", "together"), ServiceRegistration("tts", "cartesia")), params = RTVIClientParams(baseUrl = "<your API url>", config = clientConfig)); val client = RTVIClient(DailyTransport.Factory(context), callbacks, options); client.connect().await() // using coroutines, or with callbacks: client.connect().withCallback { /* handle completion */ }. Documentation: API Reference (complete SDK API documentation); Daily Transport (WebRTC implementation using Daily).
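The clientConfig above is ultimately just structured data delivered to the bot backend. As a rough sketch in Python terms (the field names mirror the Kotlin example; the exact wire format is defined by the RTVI standard, so treat this shape as an assumption, not the spec):

```python
import json

# Hypothetical dict form of the Kotlin clientConfig shown above.
client_config = [
    {
        "service": "llm",
        "options": [
            {"name": "model", "value": "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"},
            {"name": "messages", "value": [
                {"role": "system", "content": "You are a helpful assistant."},
            ]},
        ],
    },
    {
        "service": "tts",
        "options": [
            {"name": "voice", "value": "79a125e8-cd45-4c13-8a67-188112f4dd22"},
        ],
    },
]

# The SDK serializes this kind of structure when talking to the backend.
payload = json.dumps(client_config)
```

This makes the "optional and depends on the bot backend" note concrete: the backend decides which services and options it expects, and the client merely relays them.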
android_introduction_8283d024.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://docs.pipecat.ai/client/android/introduction#example
+Title: SDK Introduction - Pipecat
+==================================================
+
+The Pipecat Android SDK provides a Kotlin implementation for building voice and multimodal AI applications on Android. It handles: real-time audio and video streaming; bot communication and state management; media device handling; configuration management; event handling. Installation: Add the dependency for your chosen transport to your build.gradle file. For example, to use the Daily transport: implementation "ai.pipecat:daily-transport:0.3.3". Example: Here's a simple example using Daily as the transport layer. Note that the clientConfig is optional and depends on what is required by the bot backend. val clientConfig = listOf(ServiceConfig(service = "llm", options = listOf(Option("model", "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"), Option("messages", Value.Array(Value.Object("role" to Value.Str("system"), "content" to Value.Str("You are a helpful assistant.")))))), ServiceConfig(service = "tts", options = listOf(Option("voice", "79a125e8-cd45-4c13-8a67-188112f4dd22")))); val callbacks = object : RTVIEventCallbacks() { override fun onBackendError(message: String) { Log.e(TAG, "Error from backend: $message") } }; val options = RTVIClientOptions(services = listOf(ServiceRegistration("llm", "together"), ServiceRegistration("tts", "cartesia")), params = RTVIClientParams(baseUrl = "<your API url>", config = clientConfig)); val client = RTVIClient(DailyTransport.Factory(context), callbacks, options); client.connect().await() // using coroutines, or with callbacks: client.connect().withCallback { /* handle completion */ }. Documentation: API Reference (complete SDK API documentation); Daily Transport (WebRTC implementation using Daily).
audio_audio-buffer-processor_904d4c23.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://docs.pipecat.ai/server/utilities/audio/audio-buffer-processor
+Title: AudioBufferProcessor - Pipecat
+==================================================
+
+Overview: The AudioBufferProcessor captures and buffers audio frames from both input (user) and output (bot) sources during conversations. It provides synchronized audio streams with configurable sample rates, supports both mono and stereo output, and offers flexible event handlers for various audio processing workflows. Constructor: AudioBufferProcessor(sample_rate=None, num_channels=1, buffer_size=0, enable_turn_audio=False, **kwargs). Parameters: sample_rate (Optional[int], default None): the desired output sample rate in Hz; if None, uses the transport's sample rate from the StartFrame. num_channels (int, default 1): number of output audio channels; 1 is mono output (user and bot audio are mixed together), 2 is stereo output (user audio on the left channel, bot audio on the right channel). buffer_size (int, default 0): buffer size in bytes that triggers audio data events; 0 means events only trigger when recording stops, >0 means events trigger whenever the buffer reaches this size (useful for chunked processing). enable_turn_audio (bool, default False): whether to enable the per-turn audio event handlers (on_user_turn_audio_data and on_bot_turn_audio_data). Properties: sample_rate (@property def sample_rate(self) -> int): the current sample rate of the audio processor in Hz. num_channels (@property def num_channels(self) -> int): the number of channels in the audio output (1 for mono, 2 for stereo). Methods: start_recording() (async def start_recording()): start recording audio from both user and bot sources; initializes recording state and resets audio buffers. stop_recording() (async def stop_recording()): stop recording and trigger the final audio data handlers with any remaining buffered audio. has_audio() (def has_audio() -> bool): check whether both user and bot audio buffers contain data; returns True if both buffers contain audio data. Event Handlers: The processor supports multiple event handlers for different audio processing workflows. Register handlers using the @processor.event_handler() decorator. on_audio_data: triggered when buffer_size is reached or recording stops, providing merged audio. @audiobuffer.event_handler("on_audio_data") async def on_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int). Parameters: buffer (the AudioBufferProcessor instance), audio (merged audio data; format depends on the num_channels setting), sample_rate (sample rate in Hz), num_channels (number of channels, 1 or 2). on_track_audio_data: triggered alongside on_audio_data, providing separate user and bot audio tracks. @audiobuffer.event_handler("on_track_audio_data") async def on_track_audio_data(buffer, user_audio: bytes, bot_audio: bytes, sample_rate: int, num_channels: int). Parameters: buffer (the AudioBufferProcessor instance), user_audio (raw user audio bytes, always mono), bot_audio (raw bot audio bytes, always mono), sample_rate (sample rate in Hz), num_channels (always 1 for individual tracks). on_user_turn_audio_data: triggered when a user speaking turn ends; requires enable_turn_audio=True. @audiobuffer.event_handler("on_user_turn_audio_data") async def on_user_turn_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int). Parameters: buffer (the AudioBufferProcessor instance), audio (audio data from the user's speaking turn), sample_rate (sample rate in Hz), num_channels (always 1, mono). on_bot_turn_audio_data: triggered when a bot speaking turn ends; requires enable_turn_audio=True. @audiobuffer.event_handler("on_bot_turn_audio_data") async def on_bot_turn_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int). Parameters: buffer (the AudioBufferProcessor instance), audio (audio data from the bot's speaking turn), sample_rate (sample rate in Hz), num_channels (always 1, mono). Audio Processing Features: automatic resampling (converts incoming audio to the specified sample rate); buffer synchronization (aligns user and bot audio streams temporally); silence insertion (fills gaps in non-continuous audio streams to maintain timing); turn tracking (monitors speaking turns when enable_turn_audio=True). Integration Notes: STT Audio Passthrough: if using an STT service in your pipeline, enable audio passthrough to make audio available to the AudioBufferProcessor: stt = DeepgramSTTService(api_key=os.getenv("DEEPGRAM_API_KEY"), audio_passthrough=True). audio_passthrough is enabled by default. Pipeline Placement: add the AudioBufferProcessor after transport.output() to capture both user and bot audio: pipeline = Pipeline([transport.input(), ..., transport.output(), audiobuffer, ...]).
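The buffer_size semantics described above (fire an event when the threshold is reached, and flush whatever remains when recording stops) can be sketched with a minimal stand-in class. This mirrors the documented behavior but is not Pipecat's implementation.

```python
class ToyAudioBuffer:
    """Fire a callback when buffered bytes reach buffer_size, or on stop."""

    def __init__(self, buffer_size: int, on_audio_data):
        self.buffer_size = buffer_size
        self.on_audio_data = on_audio_data
        self._buf = bytearray()

    def add(self, chunk: bytes) -> None:
        self._buf.extend(chunk)
        # buffer_size == 0 means: only emit when recording stops.
        while self.buffer_size > 0 and len(self._buf) >= self.buffer_size:
            self.on_audio_data(bytes(self._buf[: self.buffer_size]))
            del self._buf[: self.buffer_size]

    def stop(self) -> None:
        # Flush any remainder, matching the "final handlers" behavior.
        if self._buf:
            self.on_audio_data(bytes(self._buf))
            self._buf.clear()

chunks = []
buf = ToyAudioBuffer(4, chunks.append)
buf.add(b"abcdef")   # 6 bytes: one 4-byte event fires, 2 bytes remain
buf.stop()           # remainder is flushed
assert chunks == [b"abcd", b"ef"]
```

Choosing buffer_size > 0 thus gives periodic chunked events suitable for streaming recording to storage, while 0 gives a single event with the whole conversation.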
audio_audio-buffer-processor_bbd8a26b.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://docs.pipecat.ai/server/utilities/audio/audio-buffer-processor#audio-processing-features
+Title: AudioBufferProcessor - Pipecat
+==================================================
+
+Overview: The AudioBufferProcessor captures and buffers audio frames from both input (user) and output (bot) sources during conversations. It provides synchronized audio streams with configurable sample rates, supports both mono and stereo output, and offers flexible event handlers for various audio processing workflows. Constructor: AudioBufferProcessor(sample_rate=None, num_channels=1, buffer_size=0, enable_turn_audio=False, **kwargs). Parameters: sample_rate (Optional[int], default None): the desired output sample rate in Hz; if None, uses the transport's sample rate from the StartFrame. num_channels (int, default 1): number of output audio channels; 1 is mono output (user and bot audio are mixed together), 2 is stereo output (user audio on the left channel, bot audio on the right channel). buffer_size (int, default 0): buffer size in bytes that triggers audio data events; 0 means events only trigger when recording stops, >0 means events trigger whenever the buffer reaches this size (useful for chunked processing). enable_turn_audio (bool, default False): whether to enable the per-turn audio event handlers (on_user_turn_audio_data and on_bot_turn_audio_data). Properties: sample_rate (@property def sample_rate(self) -> int): the current sample rate of the audio processor in Hz. num_channels (@property def num_channels(self) -> int): the number of channels in the audio output (1 for mono, 2 for stereo). Methods: start_recording() (async def start_recording()): start recording audio from both user and bot sources; initializes recording state and resets audio buffers. stop_recording() (async def stop_recording()): stop recording and trigger the final audio data handlers with any remaining buffered audio. has_audio() (def has_audio() -> bool): check whether both user and bot audio buffers contain data; returns True if both buffers contain audio data. Event Handlers: The processor supports multiple event handlers for different audio processing workflows. Register handlers using the @processor.event_handler() decorator. on_audio_data: triggered when buffer_size is reached or recording stops, providing merged audio. @audiobuffer.event_handler("on_audio_data") async def on_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int). Parameters: buffer (the AudioBufferProcessor instance), audio (merged audio data; format depends on the num_channels setting), sample_rate (sample rate in Hz), num_channels (number of channels, 1 or 2). on_track_audio_data: triggered alongside on_audio_data, providing separate user and bot audio tracks. @audiobuffer.event_handler("on_track_audio_data") async def on_track_audio_data(buffer, user_audio: bytes, bot_audio: bytes, sample_rate: int, num_channels: int). Parameters: buffer (the AudioBufferProcessor instance), user_audio (raw user audio bytes, always mono), bot_audio (raw bot audio bytes, always mono), sample_rate (sample rate in Hz), num_channels (always 1 for individual tracks). on_user_turn_audio_data: triggered when a user speaking turn ends; requires enable_turn_audio=True. @audiobuffer.event_handler("on_user_turn_audio_data") async def on_user_turn_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int). Parameters: buffer (the AudioBufferProcessor instance), audio (audio data from the user's speaking turn), sample_rate (sample rate in Hz), num_channels (always 1, mono). on_bot_turn_audio_data: triggered when a bot speaking turn ends; requires enable_turn_audio=True. @audiobuffer.event_handler("on_bot_turn_audio_data") async def on_bot_turn_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int). Parameters: buffer (the AudioBufferProcessor instance), audio (audio data from the bot's speaking turn), sample_rate (sample rate in Hz), num_channels (always 1, mono). Audio Processing Features: automatic resampling (converts incoming audio to the specified sample rate); buffer synchronization (aligns user and bot audio streams temporally); silence insertion (fills gaps in non-continuous audio streams to maintain timing); turn tracking (monitors speaking turns when enable_turn_audio=True). Integration Notes: STT Audio Passthrough: if using an STT service in your pipeline, enable audio passthrough to make audio available to the AudioBufferProcessor: stt = DeepgramSTTService(api_key=os.getenv("DEEPGRAM_API_KEY"), audio_passthrough=True). audio_passthrough is enabled by default. Pipeline Placement: add the AudioBufferProcessor after transport.output() to capture both user and bot audio: pipeline = Pipeline([transport.input(), ..., transport.output(), audiobuffer, ...]).
audio_audio-buffer-processor_ec358c5f.txt
ADDED
|
@@ -0,0 +1,5 @@
| 1 |
+
URL: https://docs.pipecat.ai/server/utilities/audio/audio-buffer-processor#param-sample-rate
|
| 2 |
+
Title: AudioBufferProcessor - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
AudioBufferProcessor

Overview

The AudioBufferProcessor captures and buffers audio frames from both input (user) and output (bot) sources during conversations. It provides synchronized audio streams with configurable sample rates, supports both mono and stereo output, and offers flexible event handlers for various audio processing workflows.

Constructor

    AudioBufferProcessor(
        sample_rate=None,
        num_channels=1,
        buffer_size=0,
        enable_turn_audio=False,
        **kwargs
    )

Parameters

sample_rate (Optional[int], default: None)
    The desired output sample rate in Hz. If None, uses the transport's sample rate from the StartFrame.
num_channels (int, default: 1)
    Number of output audio channels:
    1: mono output (user and bot audio are mixed together)
    2: stereo output (user audio on the left channel, bot audio on the right channel)
buffer_size (int, default: 0)
    Buffer size in bytes that triggers audio data events:
    0: events trigger only when recording stops
    >0: events trigger whenever the buffer reaches this size (useful for chunked processing)
enable_turn_audio (bool, default: False)
    Whether to enable the per-turn audio event handlers (on_user_turn_audio_data and on_bot_turn_audio_data).

Properties

sample_rate

    @property
    def sample_rate(self) -> int

The current sample rate of the audio processor in Hz.

num_channels

    @property
    def num_channels(self) -> int

The number of channels in the audio output (1 for mono, 2 for stereo).

Methods

start_recording()

    async def start_recording()

Start recording audio from both user and bot sources. Initializes recording state and resets the audio buffers.

stop_recording()

    async def stop_recording()

Stop recording and trigger the final audio data handlers with any remaining buffered audio.

has_audio()

    def has_audio() -> bool

Check whether both the user and bot audio buffers contain data. Returns True if both buffers contain audio data.

Event Handlers

The processor supports multiple event handlers for different audio processing workflows. Register handlers with the @processor.event_handler() decorator.

on_audio_data: triggered when buffer_size is reached or recording stops, providing merged audio.

    @audiobuffer.event_handler("on_audio_data")
    async def on_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int):
        # Handle merged audio data
        pass

Parameters:
    buffer: the AudioBufferProcessor instance
    audio: merged audio data (format depends on the num_channels setting)
    sample_rate: sample rate in Hz
    num_channels: number of channels (1 or 2)

on_track_audio_data: triggered alongside on_audio_data, providing separate user and bot audio tracks.

    @audiobuffer.event_handler("on_track_audio_data")
    async def on_track_audio_data(buffer, user_audio: bytes, bot_audio: bytes, sample_rate: int, num_channels: int):
        # Handle separate audio tracks
        pass

Parameters:
    buffer: the AudioBufferProcessor instance
    user_audio: raw user audio bytes (always mono)
    bot_audio: raw bot audio bytes (always mono)
    sample_rate: sample rate in Hz
    num_channels: always 1 for individual tracks

on_user_turn_audio_data: triggered when a user speaking turn ends. Requires enable_turn_audio=True.

    @audiobuffer.event_handler("on_user_turn_audio_data")
    async def on_user_turn_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int):
        # Handle user turn audio
        pass

Parameters:
    buffer: the AudioBufferProcessor instance
    audio: audio data from the user's speaking turn
    sample_rate: sample rate in Hz
    num_channels: always 1 (mono)

on_bot_turn_audio_data: triggered when a bot speaking turn ends. Requires enable_turn_audio=True.

    @audiobuffer.event_handler("on_bot_turn_audio_data")
    async def on_bot_turn_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int):
        # Handle bot turn audio
        pass

Parameters:
    buffer: the AudioBufferProcessor instance
    audio: audio data from the bot's speaking turn
    sample_rate: sample rate in Hz
    num_channels: always 1 (mono)

Audio Processing Features

Automatic resampling: converts incoming audio to the specified sample rate
Buffer synchronization: aligns user and bot audio streams temporally
Silence insertion: fills gaps in non-continuous audio streams to maintain timing
Turn tracking: monitors speaking turns when enable_turn_audio=True

Integration Notes

STT Audio Passthrough: if you use an STT service in your pipeline, audio passthrough must be enabled (it is by default) so audio reaches the AudioBufferProcessor:

    stt = DeepgramSTTService(
        api_key=os.getenv("DEEPGRAM_API_KEY"),
        audio_passthrough=True,
    )

Pipeline Placement: add the AudioBufferProcessor after transport.output() to capture both user and bot audio:

    pipeline = Pipeline([
        transport.input(),
        # ... other processors ...
        transport.output(),
        audiobuffer,  # Place after audio output
        # ... remaining processors ...
    ])
|
audio_krisp-filter_48a7e00f.txt
ADDED
|
@@ -0,0 +1,5 @@
| 1 |
+
URL: https://docs.pipecat.ai/api-reference/utilities/audio/krisp-filter#join-our-community
|
| 2 |
+
Title: Overview - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
Overview

Pipecat is an open source Python framework that handles the complex orchestration of AI services, network transport, audio processing, and multimodal interactions. "Multimodal" means you can use any combination of audio, video, images, and/or text in your interactions. And "real-time" means that things are happening quickly enough that it feels conversational: a back-and-forth with a bot, not submitting a query and waiting for results.

What You Can Build

Voice Assistants: natural, real-time conversations with AI using speech recognition and synthesis
Interactive Agents: personal coaches and meeting assistants that can understand context and provide guidance
Multimodal Apps: applications that combine voice, video, images, and text for rich interactions
Creative Tools: storytelling experiences and social companions that engage users
Business Solutions: customer intake flows and support bots for automated business processes
Complex Flows: structured conversations using Pipecat Flows for managing complex interactions

How It Works

The flow of interactions in a Pipecat application is typically straightforward: the bot says something, the user responds, the bot replies, and this continues until the conversation naturally ends. While this flow seems simple, making it feel natural requires sophisticated real-time processing.

Real-time Processing

Pipecat's pipeline architecture handles both simple voice interactions and complex multimodal processing. Here is how data flows through the system:

Voice app:
1. Send Audio: transmit and capture streamed audio from the user
2. Transcribe Speech: convert speech to text as the user is talking
3. Process with LLM: generate responses using a large language model
4. Convert to Speech: transform text responses into natural speech
5. Play Audio: stream the audio response back to the user

Multimodal app:
1. Send Audio and Video: transmit and capture audio, video, and image inputs simultaneously
2. Process Streams: handle multiple input streams in parallel
3. Model Processing: send combined inputs to multimodal models (like GPT-4V)
4. Generate Outputs: create various outputs (text, images, audio, etc.)
5. Coordinate Presentation: synchronize and present multiple output types

In both cases, Pipecat processes responses as they stream in, handles multiple input/output modalities concurrently, manages resource allocation and synchronization, and coordinates parallel processing tasks. This architecture creates fluid, natural interactions without noticeable delays, whether you're building a simple voice assistant or a complex multimodal application. Pipecat's pipeline architecture is particularly valuable for managing the complexity of real-time, multimodal interactions, ensuring smooth data flow and proper synchronization regardless of the input/output types involved. Pipecat handles all this complexity for you, letting you focus on building your application rather than managing the underlying infrastructure.

Next Steps

Ready to build your first Pipecat application?

Installation & Setup: prepare your environment and install required dependencies
Quickstart: build and run your first Pipecat application
Core Concepts: learn about pipelines, frames, and real-time processing
Use Cases: explore example implementations and patterns

Join Our Community

Discord Community: connect with other developers, share your projects, and get support from the Pipecat team.
|
audio_krisp-filter_af1a17f9.txt
ADDED
|
@@ -0,0 +1,5 @@
| 1 |
+
URL: https://docs.pipecat.ai/server/utilities/audio/krisp-filter#usage-example
|
| 2 |
+
Title: KrispFilter - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
KrispFilter

Overview

KrispFilter is an audio processor that reduces background noise in real-time audio streams using Krisp AI technology. It inherits from BaseAudioFilter and processes audio frames to improve audio quality by removing unwanted noise.

To use Krisp, you need a Krisp SDK license. Get started at Krisp.ai. Looking for help getting started with Krisp and Pipecat? Check out our Krisp noise cancellation guide.

Installation

The Krisp filter requires additional dependencies:

    pip install "pipecat-ai[krisp]"

Environment Variables

You need to provide the path to the Krisp model, either by setting the KRISP_MODEL_PATH environment variable or by passing model_path to the constructor.

Constructor Parameters

sample_type (str, default: "PCM_16")
    Audio sample type format.
channels (int, default: 1)
    Number of audio channels.
model_path (str, default: None)
    Path to the Krisp model file. You can set model_path directly, or set the KRISP_MODEL_PATH environment variable to the model file path.

Input Frames

FilterEnableFrame: control frame to toggle filtering on and off.

    from pipecat.frames.frames import FilterEnableFrame

    # Disable noise reduction
    await task.queue_frame(FilterEnableFrame(False))

    # Re-enable noise reduction
    await task.queue_frame(FilterEnableFrame(True))

Usage Example

    from pipecat.audio.filters.krisp_filter import KrispFilter

    transport = DailyTransport(
        room_url,
        token,
        "Respond bot",
        DailyParams(
            audio_in_filter=KrispFilter(),  # Enable Krisp noise reduction
            audio_in_enabled=True,
            audio_out_enabled=True,
            vad_analyzer=SileroVADAnalyzer(),
        ),
    )

Notes

Requires the Krisp SDK and model file to be available. Supports real-time audio processing, plus additional features such as background voice removal. Handles PCM_16 audio format. Thread-safe for pipeline processing. Can be dynamically enabled and disabled. Maintains audio quality while reducing noise, with efficient processing for low latency.
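The documented lookup order for the model path (the explicit model_path argument first, then the KRISP_MODEL_PATH environment variable) can be sketched as follows. resolve_model_path is a hypothetical helper written for illustration, not part of KrispFilter, and the file paths are made up.

```python
import os

def resolve_model_path(model_path=None, env=None):
    """Return the Krisp model path: prefer the explicit argument, fall
    back to the KRISP_MODEL_PATH environment variable, else fail loudly."""
    env = os.environ if env is None else env
    path = model_path or env.get("KRISP_MODEL_PATH")
    if not path:
        raise ValueError("set model_path or the KRISP_MODEL_PATH env var")
    return path

# Explicit argument wins
print(resolve_model_path("/models/krisp.kef"))
# Environment variable used as fallback
print(resolve_model_path(env={"KRISP_MODEL_PATH": "/opt/krisp.kef"}))
```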
|
audio_krisp-filter_e2c509bd.txt
ADDED
|
@@ -0,0 +1,5 @@
| 1 |
+
URL: https://docs.pipecat.ai/server/utilities/audio/krisp-filter#constructor-parameters
|
| 2 |
+
Title: KrispFilter - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
KrispFilter

Overview

KrispFilter is an audio processor that reduces background noise in real-time audio streams using Krisp AI technology. It inherits from BaseAudioFilter and processes audio frames to improve audio quality by removing unwanted noise.

To use Krisp, you need a Krisp SDK license. Get started at Krisp.ai. Looking for help getting started with Krisp and Pipecat? Check out our Krisp noise cancellation guide.

Installation

The Krisp filter requires additional dependencies:

    pip install "pipecat-ai[krisp]"

Environment Variables

You need to provide the path to the Krisp model, either by setting the KRISP_MODEL_PATH environment variable or by passing model_path to the constructor.

Constructor Parameters

sample_type (str, default: "PCM_16")
    Audio sample type format.
channels (int, default: 1)
    Number of audio channels.
model_path (str, default: None)
    Path to the Krisp model file. You can set model_path directly, or set the KRISP_MODEL_PATH environment variable to the model file path.

Input Frames

FilterEnableFrame: control frame to toggle filtering on and off.

    from pipecat.frames.frames import FilterEnableFrame

    # Disable noise reduction
    await task.queue_frame(FilterEnableFrame(False))

    # Re-enable noise reduction
    await task.queue_frame(FilterEnableFrame(True))

Usage Example

    from pipecat.audio.filters.krisp_filter import KrispFilter

    transport = DailyTransport(
        room_url,
        token,
        "Respond bot",
        DailyParams(
            audio_in_filter=KrispFilter(),  # Enable Krisp noise reduction
            audio_in_enabled=True,
            audio_out_enabled=True,
            vad_analyzer=SileroVADAnalyzer(),
        ),
    )

Notes

Requires the Krisp SDK and model file to be available. Supports real-time audio processing, plus additional features such as background voice removal. Handles PCM_16 audio format. Thread-safe for pipeline processing. Can be dynamically enabled and disabled. Maintains audio quality while reducing noise, with efficient processing for low latency.
|
audio_noisereduce-filter_a57b6720.txt
ADDED
|
@@ -0,0 +1,5 @@
| 1 |
+
URL: https://docs.pipecat.ai/server/utilities/audio/noisereduce-filter#param-filter-enable-frame
|
| 2 |
+
Title: NoisereduceFilter - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
NoisereduceFilter

Overview

NoisereduceFilter is an audio processor that reduces background noise in real-time audio streams using the noisereduce library. It inherits from BaseAudioFilter and processes audio frames to improve audio quality by removing unwanted noise.

Installation

The noisereduce filter requires additional dependencies:

    pip install "pipecat-ai[noisereduce]"

Constructor Parameters

This filter has no configurable parameters in its constructor.

Input Frames

FilterEnableFrame: control frame to toggle filtering on and off.

    from pipecat.frames.frames import FilterEnableFrame

    # Disable noise reduction
    await task.queue_frame(FilterEnableFrame(False))

    # Re-enable noise reduction
    await task.queue_frame(FilterEnableFrame(True))

Usage Example

    from pipecat.audio.filters.noisereduce_filter import NoisereduceFilter

    transport = DailyTransport(
        room_url,
        token,
        "Respond bot",
        DailyParams(
            audio_in_filter=NoisereduceFilter(),  # Enable noise reduction
            audio_in_enabled=True,
            audio_out_enabled=True,
            vad_analyzer=SileroVADAnalyzer(),
        ),
    )

Notes

A lightweight alternative to Krisp for noise reduction. Supports real-time audio processing. Handles PCM_16 audio format. Thread-safe for pipeline processing. Can be dynamically enabled and disabled. Requires no additional configuration. Uses statistical noise reduction techniques.
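To give a feel for what noise suppression does to a PCM stream, here is a toy RMS noise gate in pure Python. This is not the algorithm the noisereduce library uses (noisereduce is based on spectral gating); it only illustrates the general idea of attenuating low-level background signal while leaving speech-level frames intact.

```python
import math
import struct

def noise_gate(pcm: bytes, threshold: float, frame_samples: int = 4) -> bytes:
    """Zero out 16-bit PCM frames whose RMS falls below a threshold.
    A toy stand-in for noise suppression, NOT noisereduce's algorithm."""
    samples = list(struct.unpack(f"<{len(pcm) // 2}h", pcm))
    for start in range(0, len(samples), frame_samples):
        frame = samples[start : start + frame_samples]
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        if rms < threshold:
            # Treat the whole frame as background noise
            samples[start : start + frame_samples] = [0] * len(frame)
    return struct.pack(f"<{len(samples)}h", *samples)

quiet = struct.pack("<4h", 3, -2, 1, -1)              # low-level noise
loud = struct.pack("<4h", 8000, -7500, 6000, -9000)   # speech-level frame
out = noise_gate(quiet + loud, threshold=100.0)
print(struct.unpack("<8h", out))  # quiet frame zeroed, loud frame kept
```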
|
audio_noisereduce-filter_e38294d3.txt
ADDED
|
@@ -0,0 +1,5 @@
| 1 |
+
URL: https://docs.pipecat.ai/server/utilities/audio/noisereduce-filter#notes
|
| 2 |
+
Title: NoisereduceFilter - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
NoisereduceFilter

Overview

NoisereduceFilter is an audio processor that reduces background noise in real-time audio streams using the noisereduce library. It inherits from BaseAudioFilter and processes audio frames to improve audio quality by removing unwanted noise.

Installation

The noisereduce filter requires additional dependencies:

    pip install "pipecat-ai[noisereduce]"

Constructor Parameters

This filter has no configurable parameters in its constructor.

Input Frames

FilterEnableFrame: control frame to toggle filtering on and off.

    from pipecat.frames.frames import FilterEnableFrame

    # Disable noise reduction
    await task.queue_frame(FilterEnableFrame(False))

    # Re-enable noise reduction
    await task.queue_frame(FilterEnableFrame(True))

Usage Example

    from pipecat.audio.filters.noisereduce_filter import NoisereduceFilter

    transport = DailyTransport(
        room_url,
        token,
        "Respond bot",
        DailyParams(
            audio_in_filter=NoisereduceFilter(),  # Enable noise reduction
            audio_in_enabled=True,
            audio_out_enabled=True,
            vad_analyzer=SileroVADAnalyzer(),
        ),
    )

Notes

A lightweight alternative to Krisp for noise reduction. Supports real-time audio processing. Handles PCM_16 audio format. Thread-safe for pipeline processing. Can be dynamically enabled and disabled. Requires no additional configuration. Uses statistical noise reduction techniques.
|
audio_silero-vad-analyzer_4295c585.txt
ADDED
|
@@ -0,0 +1,5 @@
| 1 |
+
URL: https://docs.pipecat.ai/server/utilities/audio/silero-vad-analyzer#param-start-secs
|
| 2 |
+
Title: SileroVADAnalyzer - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
SileroVADAnalyzer

Overview

SileroVADAnalyzer is a Voice Activity Detection (VAD) analyzer that uses the Silero VAD ONNX model to detect speech in audio streams. It provides high-accuracy speech detection with efficient processing using the ONNX runtime.

Installation

The Silero VAD analyzer requires additional dependencies:

    pip install "pipecat-ai[silero]"

Constructor Parameters

sample_rate (int, default: None)
    Audio sample rate in Hz. Must be either 8000 or 16000.
params (VADParams, default: VADParams())
    Voice Activity Detection parameters object with these properties:
    confidence (float, default: 0.7): confidence threshold for speech detection. Higher values make detection stricter. Must be between 0 and 1.
    start_secs (float, default: 0.2): time in seconds that speech must be detected before transitioning to the SPEAKING state.
    stop_secs (float, default: 0.8): time in seconds of silence required before transitioning back to the QUIET state.
    min_volume (float, default: 0.6): minimum audio volume threshold for speech detection. Must be between 0 and 1.

Usage Example

    transport = DailyTransport(
        room_url,
        token,
        "Respond bot",
        DailyParams(
            audio_in_enabled=True,
            audio_out_enabled=True,
            vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.5)),
        ),
    )

Technical Details

Sample Rate Requirements: the analyzer supports two sample rates, 8000 Hz (256 samples per frame) and 16000 Hz (512 samples per frame).

Model Management: uses the ONNX runtime for efficient inference, automatically resets model state every 5 seconds to manage memory, runs on CPU by default for consistent performance, and includes a built-in model file.

Notes

High-accuracy speech detection. Efficient ONNX-based processing. Automatic memory management. Thread-safe for pipeline processing. Built-in model file included. CPU-optimized inference. Supports 8 kHz and 16 kHz audio.
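The start_secs/stop_secs hysteresis described above (speech must persist before the analyzer enters SPEAKING, and silence must persist before it returns to QUIET) can be modeled as a tiny state machine. Frame counts stand in for seconds here, and this is an illustration of the documented behavior, not Silero's or Pipecat's actual code.

```python
def vad_states(confidences, threshold=0.7, start_frames=2, stop_frames=3):
    """Hysteresis between QUIET and SPEAKING: speech must persist for
    start_frames before SPEAKING, silence for stop_frames before QUIET."""
    state, run, states = "QUIET", 0, []
    for c in confidences:
        speech = c >= threshold
        if state == "QUIET":
            run = run + 1 if speech else 0        # count consecutive speech
            if run >= start_frames:
                state, run = "SPEAKING", 0
        else:
            run = run + 1 if not speech else 0    # count consecutive silence
            if run >= stop_frames:
                state, run = "QUIET", 0
        states.append(state)
    return states

# One noisy frame, a burst of speech, then sustained silence
print(vad_states([0.1, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1]))
```

This hysteresis is why raising stop_secs makes the bot wait longer before treating a pause as the end of the user's turn.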
|
audio_silero-vad-analyzer_95efee77.txt
ADDED
|
@@ -0,0 +1,5 @@
| 1 |
+
URL: https://docs.pipecat.ai/server/utilities/audio/silero-vad-analyzer#param-min-volume
|
| 2 |
+
Title: SileroVADAnalyzer - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
SileroVADAnalyzer

Overview

SileroVADAnalyzer is a Voice Activity Detection (VAD) analyzer that uses the Silero VAD ONNX model to detect speech in audio streams. It provides high-accuracy speech detection with efficient processing using the ONNX runtime.

Installation

The Silero VAD analyzer requires additional dependencies:

    pip install "pipecat-ai[silero]"

Constructor Parameters

sample_rate (int, default: None)
    Audio sample rate in Hz. Must be either 8000 or 16000.
params (VADParams, default: VADParams())
    Voice Activity Detection parameters object with these properties:
    confidence (float, default: 0.7): confidence threshold for speech detection. Higher values make detection stricter. Must be between 0 and 1.
    start_secs (float, default: 0.2): time in seconds that speech must be detected before transitioning to the SPEAKING state.
    stop_secs (float, default: 0.8): time in seconds of silence required before transitioning back to the QUIET state.
    min_volume (float, default: 0.6): minimum audio volume threshold for speech detection. Must be between 0 and 1.

Usage Example

    transport = DailyTransport(
        room_url,
        token,
        "Respond bot",
        DailyParams(
            audio_in_enabled=True,
            audio_out_enabled=True,
            vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.5)),
        ),
    )

Technical Details

Sample Rate Requirements: the analyzer supports two sample rates, 8000 Hz (256 samples per frame) and 16000 Hz (512 samples per frame).

Model Management: uses the ONNX runtime for efficient inference, automatically resets model state every 5 seconds to manage memory, runs on CPU by default for consistent performance, and includes a built-in model file.

Notes

High-accuracy speech detection. Efficient ONNX-based processing. Automatic memory management. Thread-safe for pipeline processing. Built-in model file included. CPU-optimized inference. Supports 8 kHz and 16 kHz audio.
|
audio_soundfile-mixer_2cc2bb00.txt
ADDED
|
@@ -0,0 +1,5 @@
| 1 |
+
URL: https://docs.pipecat.ai/server/utilities/audio/soundfile-mixer#param-volume-1
|
| 2 |
+
Title: SoundfileMixer - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
SoundfileMixer - Pipecat

Overview

SoundfileMixer is an audio mixer that combines incoming audio with audio from files. It supports multiple audio file formats through the soundfile library and can handle runtime volume adjustments and sound switching.

Installation

The soundfile mixer requires additional dependencies:

    pip install "pipecat-ai[soundfile]"

Constructor Parameters

- sound_files (Mapping[str, str], required): Dictionary mapping sound names to file paths. Files must be mono (single channel).
- default_sound (str, required): Name of the default sound to play (must be a key in sound_files).
- volume (float, default: 0.4): Initial volume for the mixed sound. Values typically range from 0.0 to 1.0, but can go higher.
- loop (bool, default: true): Whether to loop the sound file when it reaches the end.

Control Frames

MixerUpdateSettingsFrame: updates mixer settings at runtime.
- sound (str): Changes the currently playing sound (must be a key in sound_files)
- volume (float): Updates the mixing volume
- loop (bool): Updates whether the sound should loop

MixerEnableFrame: enables or disables the mixer.
- enable (bool): Whether mixing should be enabled

Usage Example

    # Initialize mixer with sound files
    mixer = SoundfileMixer(
        sound_files={"office": "office_ambience.wav"},
        default_sound="office",
        volume=2.0,
    )

    # Add to transport
    transport = DailyTransport(
        room_url,
        token,
        "Audio Bot",
        DailyParams(
            audio_out_enabled=True,
            audio_out_mixer=mixer,
        ),
    )

    # Control mixer at runtime
    await task.queue_frame(MixerUpdateSettingsFrame({"volume": 0.5}))
    await task.queue_frame(MixerEnableFrame(False))  # Disable mixing
    await task.queue_frame(MixerEnableFrame(True))   # Enable mixing

Notes

- Supports any audio format that soundfile can read
- Automatically resamples audio files to match the output sample rate
- Files must be mono (single channel)
- Thread-safe for pipeline processing
- Can dynamically switch between multiple sound files
- Volume can be adjusted in real time
- Mixing can be enabled/disabled on demand
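Conceptually, mixing reduces to scaling the file's samples by the volume, adding them to the live audio, and clamping the sum to the sample range. A minimal sketch of that arithmetic (not the library's code; the function name and sample values are illustrative):

```python
import itertools
from typing import Iterator, List

INT16_MIN, INT16_MAX = -32768, 32767

def mix_chunk(live: List[int], sound: Iterator[int], volume: float = 0.4) -> List[int]:
    """Mix one chunk of live int16 samples with samples pulled from a
    (looping) sound source, scaling the sound by `volume` and clamping
    the sum to the int16 range."""
    mixed = []
    for sample in live:
        m = sample + int(next(sound) * volume)
        mixed.append(max(INT16_MIN, min(INT16_MAX, m)))
    return mixed

# A four-sample "file"; itertools.cycle stands in for loop=True.
office = itertools.cycle([1000, -1000, 500, -500])
print(mix_chunk([0, 0, 32767, 0], office, volume=0.5))
# → [500, -500, 32767, -250]
```

Note the clamp on the third sample: the live audio was already at full scale, so adding the scaled sound saturates rather than wrapping around.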
audio_soundfile-mixer_7adf2889.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://docs.pipecat.ai/server/utilities/audio/soundfile-mixer#control-frames
Title: SoundfileMixer - Pipecat
==================================================
audio_soundfile-mixer_8700f49a.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://docs.pipecat.ai/server/utilities/audio/soundfile-mixer#param-default-sound
Title: SoundfileMixer - Pipecat
==================================================
base-classes_media_b6dde063.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://docs.pipecat.ai/server/base-classes/media#what-you-can-build
Title: Overview - Pipecat
==================================================
Overview - Pipecat

Pipecat is an open source Python framework that handles the complex orchestration of AI services, network transport, audio processing, and multimodal interactions. "Multimodal" means you can use any combination of audio, video, images, and/or text in your interactions. And "real-time" means that things are happening quickly enough that it feels conversational: a "back-and-forth" with a bot, not submitting a query and waiting for results.

What You Can Build

- Voice Assistants: Natural, real-time conversations with AI using speech recognition and synthesis
- Interactive Agents: Personal coaches and meeting assistants that can understand context and provide guidance
- Multimodal Apps: Applications that combine voice, video, images, and text for rich interactions
- Creative Tools: Storytelling experiences and social companions that engage users
- Business Solutions: Customer intake flows and support bots for automated business processes
- Complex Flows: Structured conversations using Pipecat Flows for managing complex interactions

How It Works

The flow of interactions in a Pipecat application is typically straightforward:

1. The bot says something
2. The user says something
3. The bot says something
4. The user says something

This continues until the conversation naturally ends. While this flow seems simple, making it feel natural requires sophisticated real-time processing.

Real-time Processing

Pipecat's pipeline architecture handles both simple voice interactions and complex multimodal processing. Let's look at how data flows through the system.

Voice app:
1. Send Audio: Transmit and capture streamed audio from the user
2. Transcribe Speech: Convert speech to text as the user is talking
3. Process with LLM: Generate responses using a large language model
4. Convert to Speech: Transform text responses into natural speech
5. Play Audio: Stream the audio response back to the user

Multimodal app:
1. Send Audio and Video: Transmit and capture audio, video, and image inputs simultaneously
2. Process Streams: Handle multiple input streams in parallel
3. Model Processing: Send combined inputs to multimodal models (like GPT-4V)
4. Generate Outputs: Create various outputs (text, images, audio, etc.)
5. Coordinate Presentation: Synchronize and present multiple output types

In both cases, Pipecat:
- Processes responses as they stream in
- Handles multiple input/output modalities concurrently
- Manages resource allocation and synchronization
- Coordinates parallel processing tasks

This architecture creates fluid, natural interactions without noticeable delays, whether you're building a simple voice assistant or a complex multimodal application. Pipecat's pipeline architecture is particularly valuable for managing the complexity of real-time, multimodal interactions, ensuring smooth data flow and proper synchronization regardless of the input/output types involved. Pipecat handles all this complexity for you, letting you focus on building your application rather than managing the underlying infrastructure.

Next Steps

Ready to build your first Pipecat application?

- Installation & Setup: Prepare your environment and install required dependencies
- Quickstart: Build and run your first Pipecat application
- Core Concepts: Learn about pipelines, frames, and real-time processing
- Use Cases: Explore example implementations and patterns

Join Our Community

Discord Community: Connect with other developers, share your projects, and get support from the Pipecat team.
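The five-step voice flow described under Real-time Processing can be pictured as a chain of streaming stages connected by queues. The toy below illustrates that idea only; it is not Pipecat's actual API (Pipecat provides Pipeline, frames, and frame processors for this), and the stage functions are stand-ins:

```python
import asyncio

async def stage(func, inbox, outbox):
    """Consume frames from `inbox`, transform them, pass them downstream."""
    while (frame := await inbox.get()) is not None:
        await outbox.put(func(frame))
    await outbox.put(None)  # propagate end-of-stream sentinel

async def run_pipeline(audio_frames):
    # Four queues link microphone input through three stand-in stages.
    q = [asyncio.Queue() for _ in range(4)]
    stages = asyncio.gather(
        stage(lambda a: f"text({a})", q[0], q[1]),    # "STT"
        stage(lambda t: f"reply({t})", q[1], q[2]),   # "LLM"
        stage(lambda r: f"audio({r})", q[2], q[3]),   # "TTS"
    )
    for frame in audio_frames + [None]:
        q[0].put_nowait(frame)
    out = []
    while (frame := await q[3].get()) is not None:
        out.append(frame)
    await stages
    return out

print(asyncio.run(run_pipeline(["mic1", "mic2"])))
# → ['audio(reply(text(mic1)))', 'audio(reply(text(mic2)))']
```

Because every stage starts work on a frame as soon as it arrives, later frames overlap with earlier ones in flight; that overlapping is what keeps end-to-end latency conversational.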
c_transport_59ca9f73.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://docs.pipecat.ai/client/c++/transport#building
Title: Daily WebRTC Transport - Pipecat
==================================================
Daily WebRTC Transport - Pipecat

The Daily transport implementation enables real-time audio and video communication in your Pipecat C++ applications using Daily's WebRTC infrastructure.

Dependencies

Daily Core C++ SDK

Download the Daily Core C++ SDK from the available releases for your platform and set:

    export DAILY_CORE_PATH=/path/to/daily-core-sdk

Pipecat C++ SDK

Build the base Pipecat C++ SDK first and set:

    export PIPECAT_SDK_PATH=/path/to/pipecat-client-cxx

Building

First, set a few environment variables:

    export PIPECAT_SDK_PATH=/path/to/pipecat-client-cxx
    export DAILY_CORE_PATH=/path/to/daily-core-sdk

Then, build the project.

Linux/macOS:

    cmake . -G Ninja -Bbuild -DCMAKE_BUILD_TYPE=Release
    ninja -C build

Windows:

    # Initialize Visual Studio environment
    "C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Auxiliary\Build\vcvarsall.bat" amd64

    # Configure and build
    cmake . -Bbuild --preset vcpkg
    cmake --build build --config Release

Examples

- Basic Client: Simple C++ implementation example
- Audio Client: C++ client with PortAudio support
- Node.js Server: Example Node.js proxy implementation
client_introduction_1c027bef.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://docs.pipecat.ai/client/introduction#next-steps
Title: Client SDKs - Pipecat
==================================================
Client SDKs - Pipecat

The Client SDKs are currently in transition to a new, simpler API design. The js and react libraries have already been deployed with these changes, and their documentation, along with this top-level documentation, has been updated to reflect the latest changes. For transitioning to the new API, please refer to the migration guide. Note that the React Native, iOS, and Android SDKs are still in the process of being updated, and their documentation will be updated once the new versions are released. If you have any questions or need assistance, please reach out to us on Discord.

Pipecat provides client SDKs for multiple platforms, all implementing the RTVI (Real-Time Voice and Video Inference) standard. These SDKs make it easy to build real-time AI applications that can handle voice, video, and text interactions.

Available SDKs: Javascript (Pipecat JS SDK), React (Pipecat React SDK), React Native (Pipecat React Native SDK), Swift (Pipecat iOS SDK), Kotlin (Pipecat Android SDK), and C++ (Pipecat C++ SDK).

Core Functionality

All Pipecat client SDKs provide:
- Media Management: Handle device inputs and media streams for audio and video
- Bot Integration: Configure and communicate with your Pipecat bot
- Session Management: Manage connection state and error handling

Core Types

- PipecatClient: The main class for interacting with Pipecat bots. It is the primary type you will interact with.
- Transport: The PipecatClient wraps a Transport, which defines and provides the underlying connection mechanism (e.g., WebSocket, WebRTC). Your Pipecat pipeline will contain a corresponding transport.
- RTVIMessage: Represents a message sent to or received from a Pipecat bot.

Simple Usage Examples

Connecting to a Bot

Establish ongoing connections via WebSocket or WebRTC for:
- Live voice conversations
- Real-time video processing
- Continuous interactions

    // Example: Establishing a real-time connection
    import { RTVIEvent, RTVIMessage, PipecatClient } from "@pipecat-ai/client-js";
    import { DailyTransport } from "@pipecat-ai/daily-transport";

    const pcClient = new PipecatClient({
      transport: new DailyTransport(),
      enableMic: true,
      enableCam: false,
      enableScreenShare: false,
      callbacks: {
        onBotConnected: () => {
          console.log("[CALLBACK] Bot connected");
        },
        onBotDisconnected: () => {
          console.log("[CALLBACK] Bot disconnected");
        },
        onBotReady: () => {
          console.log("[CALLBACK] Bot ready to chat!");
        },
      },
    });

    try {
      // Below, we use a REST endpoint to fetch connection credentials for our
      // Daily Transport. Alternatively, you could provide those credentials
      // directly to `connect()`.
      await pcClient.connect({
        endpoint: "https://your-connect-end-point-here/connect",
      });
    } catch (e) {
      console.error(e.message);
    }

    // Events (alternative approach to constructor-provided callbacks)
    pcClient.on(RTVIEvent.Connected, () => {
      console.log("[EVENT] User connected");
    });
    pcClient.on(RTVIEvent.Disconnected, () => {
      console.log("[EVENT] User disconnected");
    });

Custom Messaging

Send custom messages and handle responses from your bot. This is useful for:
- Running server-side functionality
- Triggering specific bot actions
- Querying the server
- Responding to server requests

    import { PipecatClient } from "@pipecat-ai/client-js";

    const pcClient = new PipecatClient({
      transport: new DailyTransport(),
      callbacks: {
        onBotConnected: () => {
          pcClient.sendClientRequest('get-language')
            .then((response) => {
              console.log("[CALLBACK] Bot using language:", response);
              if (response !== preferredLanguage) {
                pcClient.sendClientMessage('set-language', { language: preferredLanguage });
              }
            })
            .catch((error) => {
              console.error("[CALLBACK] Error getting language:", error);
            });
        },
        onServerMessage: (message) => {
          console.log("[CALLBACK] Received message from server:", message);
        },
      },
    });

    await pcClient.connect({
      url: "https://your-daily-room-url",
      token: "your-daily-token"
    });

About RTVI

Pipecat's client SDKs implement the RTVI (Real-Time Voice and Video Inference) standard, an open specification for real-time AI inference. This means:
- Your code can work with any RTVI-compatible inference service
- You get battle-tested tooling for real-time multimedia handling
- You can easily set up development and testing environments

Next Steps

Get started by trying out examples:
- Simple Chatbot Example: Complete client-server example with both bot backend (Python) and frontend implementation (JS, React, React Native, iOS, and Android).
- More Examples: Explore our full collection of example applications and implementations across different platforms and use cases.
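The endpoint passed to connect() is a route on your own server that returns transport connection credentials. A rough Python sketch of such a handler follows; the field names and values here are hypothetical (for a Daily transport the server would typically create a room and mint a short-lived meeting token, then return both):

```python
import json

def connect_handler() -> str:
    """Hypothetical /connect handler body: create or look up a room,
    mint a short-lived token, and return both so the client's transport
    can join. Field names are illustrative, not a documented schema."""
    credentials = {
        "room_url": "https://example.daily.co/my-room",  # created server-side
        "token": "<short-lived-meeting-token>",          # minted server-side
    }
    return json.dumps(credentials)

print(connect_handler())
```

Keeping room creation and token minting server-side means the client never holds long-lived credentials, which is the point of routing connect() through your own endpoint.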
client_introduction_c9b73d79.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://docs.pipecat.ai/client/introduction
Title: Client SDKs - Pipecat
==================================================
client_rtvi-standard_065571ef.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://docs.pipecat.ai/client/rtvi-standard#service-specific-insights
Title: The RTVI Standard - Pipecat
==================================================
The RTVI Standard - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation The RTVI Standard Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Client SDKs The RTVI Standard RTVIClient Migration Guide Javascript SDK SDK Introduction API Reference Transport packages React SDK SDK Introduction API Reference React Native SDK SDK Introduction API Reference iOS SDK SDK Introduction API Reference Transport packages Android SDK SDK Introduction API Reference Transport packages C++ SDK SDK Introduction Daily WebRTC Transport The RTVI (Real-Time Voice and Video Inference) standard defines a set of message types and structures sent between clients and servers. It is designed to facilitate real-time interactions between clients and AI applications that require voice, video, and text communication. It provides a consistent framework for building applications that can communicate with AI models and the backends running those models in real-time. This page documents version 1.0 of the RTVI standard, released in June 2025. Key Features Connection Management RTVI provides a flexible connection model that allows clients to connect to AI services and coordinate state. Transcriptions The standard includes built-in support for real-time transcription of audio streams. Client-Server Messaging The standard defines a messaging protocol for sending and receiving messages between clients and servers, allowing for efficient communication of requests and responses. Advanced LLM Interactions The standard supports advanced interactions with large language models (LLMs), including context management, function call handline, and search results. Service-Specific Insights RTVI supports events to provide insight into the input/output and state for typical services that exist in speech-to-speech workflows. Metrics and Monitoring RTVI provides mechanisms for collecting metrics and monitoring the performance of server-side services. 
Terms
Client: The front-end application or user interface that interacts with the RTVI server.
Server: The backend service that runs the AI framework and processes requests from the client.
User: The end user interacting with the client application.
Bot: The AI interacting with the user, technically an amalgamation of a large language model (LLM) and a text-to-speech (TTS) service.

RTVI Message Format
The messages defined as part of the RTVI protocol adhere to the following format:

{ "id": string, "label": "rtvi-ai", "type": string, "data": unknown }

id (string): A unique identifier for the message, used to correlate requests and responses.
label (string, required, default "rtvi-ai"): A label that identifies this message as an RTVI message. This field is required and should always be set to 'rtvi-ai'.
type (string, required): The type of message being sent. This field is required and should be set to one of the predefined RTVI message types listed below.
data (unknown): The payload of the message, which can be any data structure relevant to the message type.

RTVI Message Types
Following the above format, this section describes the various message types defined by the RTVI standard. Each message type has a specific purpose and structure, allowing for clear communication between clients and servers. Each message type below includes either a 🤖 or 🏄 emoji to denote whether the message is sent from the bot (🤖) or client (🏄).

Connection Management
client-ready 🏄
Indicates that the client is ready to receive messages and interact with the server. Typically sent after the transport media channels have connected.
type: 'client-ready'
data:
version (string): The version of the RTVI standard being used. This is useful for ensuring compatibility between client and server implementations.
about (AboutClient Object): An object containing information about the client, such as its rtvi-version, client library, and any other relevant metadata.
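The envelope described above maps directly to a small type. Below is a minimal TypeScript sketch of the RTVI message format; the `RTVIMessage` interface and `createRTVIMessage` helper are illustrative names for this page, not part of any Pipecat SDK.

```typescript
// Illustrative model of the RTVI message envelope (not an official SDK type).
interface RTVIMessage<T = unknown> {
  id?: string;      // unique identifier, used to correlate requests and responses
  label: "rtvi-ai"; // required; always the literal "rtvi-ai"
  type: string;     // one of the predefined RTVI message types
  data?: T;         // payload; structure depends on `type`
}

let nextId = 0;
// Hypothetical helper that stamps each outgoing message with a fresh id.
function createRTVIMessage<T>(type: string, data?: T): RTVIMessage<T> {
  return { id: `msg-${++nextId}`, label: "rtvi-ai", type, data };
}

const ready = createRTVIMessage("client-ready", {
  version: "1.0",
  about: { library: "my-client" }, // `library` is the only required AboutClient field
});
```

A receiver can then dispatch on `type` while treating `data` as opaque until the type is known.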
The AboutClient object follows this structure:
library (string, required)
library_version (string)
platform (string)
platform_version (string)
platform_details (any): Any platform-specific details that may be relevant to the server. This could include information about the browser, operating system, or any other environment-specific data needed by the server. This field is optional and open-ended, so please be mindful of the data you include here and any security concerns that may arise from exposing sensitive or personally identifiable information.

bot-ready 🤖
Indicates that the bot is ready to receive messages and interact with the client. Typically sent after the transport media channels have connected.
type: 'bot-ready'
data:
version (string): The version of the RTVI standard being used. This is useful for ensuring compatibility between client and server implementations.
about (any, optional): An object containing information about the server or bot. Its structure and value are both undefined by default. This provides flexibility to include any relevant metadata your client may need to know about the server at connection time, without any built-in security concerns. Please be mindful of the data you include here and any security concerns that may arise from exposing sensitive information.

disconnect-bot 🏄
Indicates that the client wishes to disconnect from the bot. Typically used when the client is shutting down or no longer needs to interact with the bot. Note: Disconnects should happen automatically when either the client or bot disconnects from the transport, so this message is intended for the case where a client may want to remain connected to the transport but no longer wishes to interact with the bot.
type: 'disconnect-bot'
data: undefined

error 🤖
Indicates an error occurred during bot initialization or runtime.
type: 'error'
data:
message (string): Description of the error.
fatal (boolean): Indicates if the error is fatal to the session.
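Before replying with bot-ready, a server will typically validate an incoming client-ready message against the structure above. Here is a hedged sketch of such a check; `isClientReady` is an illustrative helper, not an official Pipecat API.

```typescript
// Sketch of a server-side validity check for a `client-ready` message.
type AboutClient = {
  library: string;            // required
  library_version?: string;
  platform?: string;
  platform_version?: string;
  platform_details?: unknown; // optional and open-ended; avoid sensitive data
};

function isClientReady(msg: { label?: string; type?: string; data?: any }): boolean {
  return (
    msg.label === "rtvi-ai" &&
    msg.type === "client-ready" &&
    typeof msg.data?.version === "string" &&
    typeof msg.data?.about?.library === "string" // `library` is the only required field
  );
}

const ok = isClientReady({
  label: "rtvi-ai",
  type: "client-ready",
  data: { version: "1.0", about: { library: "pipecat-client-web" } as AboutClient },
});
```

If the check fails, replying with an `error` message (fatal or not, depending on severity) is one reasonable policy.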
Transcription
user-started-speaking 🤖
Emitted when the user begins speaking.
type: 'user-started-speaking'
data: None
user-stopped-speaking 🤖
Emitted when the user stops speaking.
type: 'user-stopped-speaking'
data: None
bot-started-speaking 🤖
Emitted when the bot begins speaking.
type: 'bot-started-speaking'
data: None
bot-stopped-speaking 🤖
Emitted when the bot stops speaking.
type: 'bot-stopped-speaking'
data: None
user-transcription 🤖
Real-time transcription of user speech, including both partial and final results.
type: 'user-transcription'
data:
text (string): The transcribed text of the user.
final (boolean): Indicates if this is a final transcription or a partial result.
timestamp (string): The timestamp when the transcription was generated.
user_id (string): Identifier for the user who spoke.
bot-transcription 🤖
Transcription of the bot's speech. Note: This protocol currently does not match the user transcription format to support real-time timestamping for bot transcriptions. Rather, the event is typically sent for each sentence of the bot's response. This difference is currently due to limitations in TTS services, most of which do not support (or do not support well) accurate timing information. If/when this changes, this protocol may be updated to include the necessary timing information. For now, if you want to attempt real-time transcription to match your bot's speaking, you can try using the bot-tts-text message type.
type: 'bot-transcription'
data:
text (string): The transcribed text from the bot, typically aggregated at a per-sentence level.

Client-Server Messaging
server-message 🤖
An arbitrary message sent from the server to the client. This can be used for custom interactions or commands. This message may be coupled with the client-message message type to handle responses from the client.
type: 'server-message'
data (any): The data can be any JSON-serializable object, formatted according to your own specifications.
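Since user-transcription delivers both partial and final results, a client UI typically replaces the in-progress text on each partial and commits on finals. The replace-partial/commit-final policy in this sketch is an assumption about typical usage, not something the standard mandates.

```typescript
// Sketch of client-side transcript accumulation for `user-transcription`.
interface UserTranscription {
  text: string;
  final: boolean;    // true for a final transcription, false for a partial
  timestamp: string;
  user_id: string;
}

function applyTranscription(
  committed: string[],
  partial: { current: string },
  t: UserTranscription,
): void {
  if (t.final) {
    committed.push(t.text); // final result: commit to the transcript
    partial.current = "";   // clear the in-progress line
  } else {
    partial.current = t.text; // partial result: replace the in-progress text
  }
}

const committed: string[] = [];
const partial = { current: "" };
applyTranscription(committed, partial, { text: "hel", final: false, timestamp: "t0", user_id: "u1" });
applyTranscription(committed, partial, { text: "hello there", final: true, timestamp: "t1", user_id: "u1" });
```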
client-message 🏄
An arbitrary message sent from the client to the server. This can be used for custom interactions or commands. This message may be coupled with the server-response message type to handle responses from the server.
type: 'client-message'
data:
t (string)
d (unknown, optional)
The data payload should contain a t field indicating the type of message and an optional d field containing any custom, corresponding data needed for the message.

server-response 🤖
A message sent from the server to the client in response to a client-message. IMPORTANT: The id should match the id of the original client-message to correlate the response with the request.
type: 'server-response'
data:
t (string)
d (unknown, optional)
The data payload should contain a t field indicating the type of message and an optional d field containing any custom, corresponding data needed for the message.

error-response 🤖
Error response to a specific client message. IMPORTANT: The id should match the id of the original client-message to correlate the response with the request.
type: 'error-response'
data:
error (string)

Advanced LLM Interactions
append-to-context 🏄
A message sent from the client to the server to append data to the context of the current LLM conversation. This is useful for providing text-based content for the user or augmenting the context for the assistant.
type: 'append-to-context'
data:
role ("user" | "assistant"): The role the context should be appended to. Currently only supports "user" and "assistant".
content (unknown): The content to append to the context. This can be any data structure the LLM understands.
run_immediately (boolean, optional): Indicates whether the context should be run immediately after appending. Defaults to false. If set to false, the context will be appended but not executed until the next LLM run.

llm-function-call 🤖
A function call request from the LLM, sent from the bot to the client.
Note that for most cases, an LLM function call will be handled completely server-side. However, in the event that the call requires input from the client or the client needs to be aware of the function call, this message/response schema is required.
type: 'llm-function-call'
data:
function_name (string): Name of the function to be called.
tool_call_id (string): Unique identifier for this function call.
args (Record<string, unknown>): Arguments to be passed to the function.

llm-function-call-result 🏄
The result of the function call requested by the LLM, returned from the client.
type: 'llm-function-call-result'
data:
function_name (string): Name of the called function.
tool_call_id (string): Identifier matching the original function call.
args (Record<string, unknown>): Arguments that were passed to the function.
result (Record<string, unknown> | string): The result returned by the function.

bot-llm-search-response 🤖
Search results from the LLM's knowledge base. Currently, Google Gemini is the only LLM that supports built-in search. However, we expect other LLMs to follow suit, which is why this message type is defined as part of the RTVI standard. As more LLMs add support for this feature, the format of this message type may evolve to accommodate discrepancies.
type: 'bot-llm-search-response'
data:
search_result (string, optional): Raw search result text.
rendered_content (string, optional): Formatted version of the search results.
origins (Array<Origin Object>): Source information and confidence scores for search results.
The Origin Object follows this structure:

{ "site_uri": string (optional), "site_title": string (optional), "results": Array<{ "text": string, "confidence": number[] }> }

Example:

{ "id": undefined, "label": "rtvi-ai", "type": "bot-llm-search-response", "data": { "origins": [ { "results": [ { "confidence": [ 0.9881149530410768 ], "text": "* Juneteenth: A Freedom Celebration is scheduled for June 18th from 12:00 pm to 2:00 pm." }, { "confidence": [ 0.9692034721374512 ], "text": "* A Juneteenth celebration at Fort Negley Park will take place on June 19th from 5:00 pm to 9:30 pm." } ], "site_title": "vanderbilt.edu", "site_uri": "https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHwif83VK9KAzrbMSGSBsKwL8vWfSfC9pgEWYKmStHyqiRoV1oe8j1S0nbwRg_iWgqAr9wUkiegu3ATC8Ll-cuE-vpzwElRHiJ2KgRYcqnOQMoOeokVpWqi" }, { "results": [ { "confidence": [ 0.6554043292999268 ], "text": "In addition to these events, Vanderbilt University is a large research institution with ongoing activities across many fields." } ], "site_title": "wikipedia.org", "site_uri": "https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQESbF-ijx78QbaglrhflHCUWdPTD4M6tYOQigW5hgsHNctRlAHu9ktfPmJx7DfoP5QicE0y-OQY1cRl9w4Id0btiFgLYSKIm2-SPtOHXeNrAlgA7mBnclaGrD7rgnLIbrjl8DgUEJrrvT0CKzuo" } ], "rendered_content": "<style> \n .container ... </div> \n </div> \n ", "search_result": "Several events are happening at Vanderbilt University: \n\n * Juneteenth: A Freedom Celebration is scheduled for June 18th from 12:00 pm to 2:00 pm. \n * A Juneteenth celebration at Fort Negley Park will take place on June 19th from 5:00 pm to 9:30 pm. \n\n In addition to these events, Vanderbilt University is a large research institution with ongoing activities across many fields. For the most recent news, you should check Vanderbilt's official news website. \n " } }

Service-Specific Insights
bot-llm-started 🤖
Indicates LLM processing has begun.
type: 'bot-llm-started'
data: None
bot-llm-stopped 🤖
Indicates LLM processing has completed.
type: 'bot-llm-stopped'
data: None
user-llm-text 🤖
Aggregated user input text that is sent to the LLM.
type: 'user-llm-text'
data:
text (string): The user's input text to be processed by the LLM.
bot-llm-text 🤖
Individual tokens streamed from the LLM as they are generated.
type: 'bot-llm-text'
data:
text (string): The token text from the LLM.
bot-tts-started 🤖
Indicates text-to-speech (TTS) processing has begun.
type: 'bot-tts-started'
data: None
bot-tts-stopped 🤖
Indicates text-to-speech (TTS) processing has completed.
type: 'bot-tts-stopped'
data: None
bot-tts-text 🤖
The per-token text output of the text-to-speech (TTS) service (what the TTS actually says).
type: 'bot-tts-text'
data:
text (string): The text representation of the generated bot speech.

Metrics and Monitoring
metrics 🤖
Performance metrics for various processing stages and services. Each message will contain entries for one or more of the metric types: processing, ttfb, characters.
type: 'metrics'
data:
processing (optional): Processing time metrics.
ttfb (optional): Time to first byte metrics.
characters (optional): Character processing metrics.
For each metric type, the data structure is an array of objects with the following structure:
processor (string): The name of the processor or service that generated the metric.
value (number): The value of the metric, typically in milliseconds or character count.
model (string, optional): The model of the service that generated the metric, if applicable.
Example:

{ "type": "metrics", "data": { "processing": [ { "model": "eleven_flash_v2_5", "processor": "ElevenLabsTTSService#0", "value": 0.0005140304565429688 } ], "ttfb": [ { "model": "eleven_flash_v2_5", "processor": "ElevenLabsTTSService#0", "value": 0.1573178768157959 } ], "characters": [ { "model": "eleven_flash_v2_5", "processor": "ElevenLabsTTSService#0", "value": 43 } ] } }
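Tying the message types together, here is a hedged sketch of a client answering an llm-function-call with a matching llm-function-call-result. The `handleFunctionCall` helper and the `get_weather` handler are made up for illustration; only the message fields come from the standard above.

```typescript
// Illustrative client-side responder for `llm-function-call` messages.
interface FunctionCall {
  function_name: string;
  tool_call_id: string;
  args: Record<string, unknown>;
}

type Handler = (args: Record<string, unknown>) => Record<string, unknown> | string;

function handleFunctionCall(call: FunctionCall, handlers: Record<string, Handler>) {
  const handler = handlers[call.function_name];
  const result = handler ? handler(call.args) : "unhandled function";
  return {
    label: "rtvi-ai" as const,
    type: "llm-function-call-result" as const,
    data: {
      function_name: call.function_name,
      tool_call_id: call.tool_call_id, // must match the original call's id
      args: call.args,
      result,
    },
  };
}

const reply = handleFunctionCall(
  { function_name: "get_weather", tool_call_id: "tc-1", args: { city: "Nashville" } },
  { get_weather: (args) => ({ city: args.city, forecast: "sunny" }) }, // hypothetical handler
);
```

Echoing `tool_call_id` (and, at the envelope level, matching the `id` for client-message/server-response pairs) is what lets either side correlate a response with its originating request.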
|
client_rtvi-standard_4cc2f2cb.txt
ADDED
|
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
URL: https://docs.pipecat.ai/client/rtvi-standard#client-ready-%F0%9F%8F%84
|
| 2 |
+
Title: The RTVI Standard - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
|
client_rtvi-standard_5425fbb5.txt
ADDED
|
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
URL: https://docs.pipecat.ai/client/rtvi-standard#rtvi-message-format
|
| 2 |
+
Title: The RTVI Standard - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
The RTVI Standard - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation The RTVI Standard Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Client SDKs The RTVI Standard RTVIClient Migration Guide Javascript SDK SDK Introduction API Reference Transport packages React SDK SDK Introduction API Reference React Native SDK SDK Introduction API Reference iOS SDK SDK Introduction API Reference Transport packages Android SDK SDK Introduction API Reference Transport packages C++ SDK SDK Introduction Daily WebRTC Transport The RTVI (Real-Time Voice and Video Inference) standard defines a set of message types and structures sent between clients and servers. It is designed to facilitate real-time interactions between clients and AI applications that require voice, video, and text communication. It provides a consistent framework for building applications that can communicate with AI models and the backends running those models in real-time. This page documents version 1.0 of the RTVI standard, released in June 2025. Key Features Connection Management RTVI provides a flexible connection model that allows clients to connect to AI services and coordinate state. Transcriptions The standard includes built-in support for real-time transcription of audio streams. Client-Server Messaging The standard defines a messaging protocol for sending and receiving messages between clients and servers, allowing for efficient communication of requests and responses. Advanced LLM Interactions The standard supports advanced interactions with large language models (LLMs), including context management, function call handline, and search results. Service-Specific Insights RTVI supports events to provide insight into the input/output and state for typical services that exist in speech-to-speech workflows. Metrics and Monitoring RTVI provides mechanisms for collecting metrics and monitoring the performance of server-side services. 
Terms
Client: The front-end application or user interface that interacts with the RTVI server.
Server: The backend service that runs the AI framework and processes requests from the client.
User: The end user interacting with the client application.
Bot: The AI interacting with the user; technically an amalgamation of a large language model (LLM) and a text-to-speech (TTS) service.

RTVI Message Format
The messages defined as part of the RTVI protocol adhere to the following format:

{
  "id": string,
  "label": "rtvi-ai",
  "type": string,
  "data": unknown
}

id (string): A unique identifier for the message, used to correlate requests and responses.
label (string, required, default "rtvi-ai"): A label that identifies this message as an RTVI message. This field is required and should always be set to 'rtvi-ai'.
type (string, required): The type of message being sent. This field must be one of the predefined RTVI message types listed below.
data (unknown): The payload of the message, which can be any data structure relevant to the message type.

RTVI Message Types
Following the above format, this section describes the various message types defined by the RTVI standard. Each message type has a specific purpose and structure, allowing for clear communication between clients and servers. Each message type below includes either a 🤖 or 🏄 emoji to denote whether the message is sent from the bot (🤖) or the client (🏄).

Connection Management
client-ready 🏄: Indicates that the client is ready to receive messages and interact with the server. Typically sent after the transport media channels have connected.
type: 'client-ready'
data:
  version (string): The version of the RTVI standard being used. This is useful for ensuring compatibility between client and server implementations.
  about (AboutClient object): An object containing information about the client, such as its RTVI version, client library, and any other relevant metadata.
The AboutClient object follows this structure:
  library (string, required)
  library_version (string)
  platform (string)
  platform_version (string)
  platform_details (any): Any platform-specific details that may be relevant to the server. This could include information about the browser, operating system, or any other environment-specific data needed by the server. This field is optional and open-ended, so please be mindful of the data you include here and any security concerns that may arise from exposing sensitive or personally identifiable information.

bot-ready 🤖: Indicates that the bot is ready to receive messages and interact with the client. Typically sent after the transport media channels have connected.
type: 'bot-ready'
data:
  version (string): The version of the RTVI standard being used. This is useful for ensuring compatibility between client and server implementations.
  about (any, optional): An object containing information about the server or bot. Its structure and value are both undefined by default. This provides flexibility to include any relevant metadata your client may need to know about the server at connection time, without any built-in security concerns. Please be mindful of the data you include here and any security concerns that may arise from exposing sensitive information.

disconnect-bot 🏄: Indicates that the client wishes to disconnect from the bot. Typically used when the client is shutting down or no longer needs to interact with the bot. Note: Disconnects should happen automatically when either the client or bot disconnects from the transport, so this message is intended for the case where a client may want to remain connected to the transport but no longer wishes to interact with the bot.
type: 'disconnect-bot'
data: undefined

error 🤖: Indicates an error occurred during bot initialization or runtime.
type: 'error'
data:
  message (string): Description of the error.
  fatal (boolean): Indicates if the error is fatal to the session.
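Putting the envelope format and the client-ready message type together, a minimal Python sketch of building a spec-shaped message (make_rtvi_message is an illustrative helper, not part of any Pipecat SDK):

```python
import json
import uuid

def make_rtvi_message(msg_type, data=None, msg_id=None):
    """Build a message envelope in the RTVI format described above."""
    return {
        "id": msg_id or str(uuid.uuid4()),  # used to correlate requests and responses
        "label": "rtvi-ai",                 # required; always "rtvi-ai"
        "type": msg_type,                   # one of the RTVI message types
        "data": data,                       # payload; shape depends on the type
    }

# A client-ready message as defined under Connection Management:
msg = make_rtvi_message(
    "client-ready",
    {"version": "1.0", "about": {"library": "example-client"}},
)
print(json.dumps(msg))
```

A server would dispatch on `type` after checking that `label` is "rtvi-ai".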
Transcription
user-started-speaking 🤖: Emitted when the user begins speaking. type: 'user-started-speaking', data: none.
user-stopped-speaking 🤖: Emitted when the user stops speaking. type: 'user-stopped-speaking', data: none.
bot-started-speaking 🤖: Emitted when the bot begins speaking. type: 'bot-started-speaking', data: none.
bot-stopped-speaking 🤖: Emitted when the bot stops speaking. type: 'bot-stopped-speaking', data: none.

user-transcription 🤖: Real-time transcription of user speech, including both partial and final results.
type: 'user-transcription'
data:
  text (string): The transcribed text of the user.
  final (boolean): Indicates if this is a final transcription or a partial result.
  timestamp (string): The timestamp when the transcription was generated.
  user_id (string): Identifier for the user who spoke.

bot-transcription 🤖: Transcription of the bot's speech. Note: This protocol currently does not match the user transcription format, which supports real-time timestamping; instead, the event is typically sent for each sentence of the bot's response. This difference is due to limitations in TTS services, most of which do not support (or do not support well) accurate timing information. If/when this changes, the protocol may be updated to include the necessary timing information. For now, if you want real-time text that matches your bot's speech, you can try using the bot-tts-text message type.
type: 'bot-transcription'
data:
  text (string): The transcribed text from the bot, typically aggregated at a per-sentence level.

Client-Server Messaging
server-message 🤖: An arbitrary message sent from the server to the client. This can be used for custom interactions or commands. This message may be coupled with the client-message message type to handle responses from the client.
type: 'server-message'
data: any. The data can be any JSON-serializable object, formatted according to your own specifications.
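The client-message and server-response types described next correlate request and reply through a shared id. A minimal sketch of that bookkeeping on the client side (the helper names and the pending map are hypothetical, not part of any SDK):

```python
import uuid

pending = {}  # message id -> the in-flight client-message

def send_client_message(t, d=None):
    """Create a client-message and remember it so a reply can be matched by id."""
    msg = {
        "id": str(uuid.uuid4()),
        "label": "rtvi-ai",
        "type": "client-message",
        "data": {"t": t, "d": d},
    }
    pending[msg["id"]] = msg
    return msg  # a real client would now hand this to the transport

def match_response(msg):
    """Return the original request for a server-response/error-response, if any."""
    if msg.get("type") in ("server-response", "error-response"):
        return pending.pop(msg["id"], None)
    return None

req = send_client_message("get-weather", {"city": "Oslo"})
resp = {"id": req["id"], "label": "rtvi-ai", "type": "server-response",
        "data": {"t": "get-weather", "d": {"temp_c": 12}}}
assert match_response(resp) is req
```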
client-message 🏄: An arbitrary message sent from the client to the server. This can be used for custom interactions or commands. This message may be coupled with the server-response message type to handle responses from the server.
type: 'client-message'
data:
  t (string)
  d (unknown, optional)
The data payload should contain a t field indicating the type of message and an optional d field containing any custom, corresponding data needed for the message.

server-response 🤖: A message sent from the server to the client in response to a client-message. IMPORTANT: The id should match the id of the original client-message to correlate the response with the request.
type: 'server-response'
data:
  t (string)
  d (unknown, optional)
The data payload should contain a t field indicating the type of message and an optional d field containing any custom, corresponding data needed for the message.

error-response 🤖: Error response to a specific client message. IMPORTANT: The id should match the id of the original client-message to correlate the response with the request.
type: 'error-response'
data:
  error (string)

Advanced LLM Interactions
append-to-context 🏄: A message sent from the client to the server to append data to the context of the current LLM conversation. This is useful for providing text-based content for the user or augmenting the context for the assistant.
type: 'append-to-context'
data:
  role ("user" | "assistant"): The role the context should be appended to. Currently only supports "user" and "assistant".
  content (unknown): The content to append to the context. This can be any data structure the LLM understands.
  run_immediately (boolean, optional): Indicates whether the context should be run immediately after appending. Defaults to false. If set to false, the context will be appended but not executed until the next LLM run.

llm-function-call 🤖: A function call request from the LLM, sent from the bot to the client.
Note that in most cases, an LLM function call will be handled completely server-side. However, in the event that the call requires input from the client, or the client needs to be aware of the function call, this message/response schema is required.
type: 'llm-function-call'
data:
  function_name (string): Name of the function to be called.
  tool_call_id (string): Unique identifier for this function call.
  args (Record<string, unknown>): Arguments to be passed to the function.

llm-function-call-result 🏄: The result of the function call requested by the LLM, returned from the client.
type: 'llm-function-call-result'
data:
  function_name (string): Name of the called function.
  tool_call_id (string): Identifier matching the original function call.
  args (Record<string, unknown>): Arguments that were passed to the function.
  result (Record<string, unknown> | string): The result returned by the function.

bot-llm-search-response 🤖: Search results from the LLM's knowledge base. Currently, Google Gemini is the only LLM that supports built-in search. However, we expect other LLMs to follow suit, which is why this message type is defined as part of the RTVI standard. As more LLMs add support for this feature, the format of this message type may evolve to accommodate discrepancies.
type: 'bot-llm-search-response'
data:
  search_result (string, optional): Raw search result text.
  rendered_content (string, optional): Formatted version of the search results.
  origins (Array<Origin object>): Source information and confidence scores for search results.
The Origin object follows this structure:

{
  "site_uri": string (optional),
  "site_title": string (optional),
  "results": Array<{ "text": string, "confidence": number[] }>
}

Example:

"id": undefined
"label": "rtvi-ai"
"type": "bot-llm-search-response"
"data": {
  "origins": [
    {
      "results": [
        { "confidence": [0.9881149530410768], "text": "* Juneteenth: A Freedom Celebration is scheduled for June 18th from 12:00 pm to 2:00 pm." },
        { "confidence": [0.9692034721374512], "text": "* A Juneteenth celebration at Fort Negley Park will take place on June 19th from 5:00 pm to 9:30 pm." }
      ],
      "site_title": "vanderbilt.edu",
      "site_uri": "https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHwif83VK9KAzrbMSGSBsKwL8vWfSfC9pgEWYKmStHyqiRoV1oe8j1S0nbwRg_iWgqAr9wUkiegu3ATC8Ll-cuE-vpzwElRHiJ2KgRYcqnOQMoOeokVpWqi"
    },
    {
      "results": [
        { "confidence": [0.6554043292999268], "text": "In addition to these events, Vanderbilt University is a large research institution with ongoing activities across many fields." }
      ],
      "site_title": "wikipedia.org",
      "site_uri": "https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQESbF-ijx78QbaglrhflHCUWdPTD4M6tYOQigW5hgsHNctRlAHu9ktfPmJx7DfoP5QicE0y-OQY1cRl9w4Id0btiFgLYSKIm2-SPtOHXeNrAlgA7mBnclaGrD7rgnLIbrjl8DgUEJrrvT0CKzuo"
    }
  ],
  "rendered_content": "<style> \n .container ... </div> \n </div> \n ",
  "search_result": "Several events are happening at Vanderbilt University: \n\n * Juneteenth: A Freedom Celebration is scheduled for June 18th from 12:00 pm to 2:00 pm. \n * A Juneteenth celebration at Fort Negley Park will take place on June 19th from 5:00 pm to 9:30 pm. \n\n In addition to these events, Vanderbilt University is a large research institution with ongoing activities across many fields. For the most recent news, you should check Vanderbilt's official news website. \n "
}

Service-Specific Insights
bot-llm-started 🤖: Indicates LLM processing has begun. type: 'bot-llm-started', data: none.
bot-llm-stopped 🤖: Indicates LLM processing has completed. type: 'bot-llm-stopped', data: none.

user-llm-text 🤖: Aggregated user input text that is sent to the LLM.
type: 'user-llm-text'
data:
  text (string): The user's input text to be processed by the LLM.

bot-llm-text 🤖: Individual tokens streamed from the LLM as they are generated.
type: 'bot-llm-text'
data:
  text (string): The token text from the LLM.

bot-tts-started 🤖: Indicates text-to-speech (TTS) processing has begun. type: 'bot-tts-started', data: none.
bot-tts-stopped 🤖: Indicates text-to-speech (TTS) processing has completed. type: 'bot-tts-stopped', data: none.

bot-tts-text 🤖: The per-token text output of the text-to-speech (TTS) service (what the TTS actually says).
type: 'bot-tts-text'
data:
  text (string): The text representation of the generated bot speech.

Metrics and Monitoring
metrics 🤖: Performance metrics for various processing stages and services. Each message will contain entries for one or more of the metric types: processing, ttfb, characters.
type: 'metrics'
data:
  processing (optional): Processing time metrics.
  ttfb (optional): Time to first byte metrics.
  characters (optional): Character processing metrics.

For each metric type, the data structure is an array of objects with the following structure:
  processor (string): The name of the processor or service that generated the metric.
  value (number): The value of the metric, typically in milliseconds or character count.
  model (string, optional): The model of the service that generated the metric, if applicable.
Example:

{
  "type": "metrics",
  "data": {
    "processing": [
      { "model": "eleven_flash_v2_5", "processor": "ElevenLabsTTSService#0", "value": 0.0005140304565429688 }
    ],
    "ttfb": [
      { "model": "eleven_flash_v2_5", "processor": "ElevenLabsTTSService#0", "value": 0.1573178768157959 }
    ],
    "characters": [
      { "model": "eleven_flash_v2_5", "processor": "ElevenLabsTTSService#0", "value": 43 }
    ]
  }
}
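A metrics message shaped like the example above can be flattened into rows for logging or export. A small sketch (flatten_metrics is an illustrative helper, not part of the standard):

```python
def flatten_metrics(message):
    """Yield (metric_type, processor, model, value) rows from a 'metrics' message."""
    for metric_type in ("processing", "ttfb", "characters"):
        # Each metric type is optional; absent types contribute no rows.
        for entry in message["data"].get(metric_type, []):
            yield (metric_type, entry["processor"], entry.get("model"), entry["value"])

msg = {
    "type": "metrics",
    "data": {
        "ttfb": [{"model": "eleven_flash_v2_5",
                  "processor": "ElevenLabsTTSService#0",
                  "value": 0.157}],
        "characters": [{"model": "eleven_flash_v2_5",
                        "processor": "ElevenLabsTTSService#0",
                        "value": 43}],
    },
}
for row in flatten_metrics(msg):
    print(row)
```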
|
client_rtvi-standard_ee7dc446.txt
ADDED
|
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
URL: https://docs.pipecat.ai/client/rtvi-standard#param-data
|
| 2 |
+
Title: The RTVI Standard - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
The RTVI Standard - Pipecat

The RTVI (Real-Time Voice and Video Inference) standard defines a set of message types and structures sent between clients and servers. It is designed to facilitate real-time interactions between clients and AI applications that require voice, video, and text communication. It provides a consistent framework for building applications that can communicate with AI models and the backends running those models in real time. This page documents version 1.0 of the RTVI standard, released in June 2025.

Key Features
Connection Management: RTVI provides a flexible connection model that allows clients to connect to AI services and coordinate state.
Transcriptions: The standard includes built-in support for real-time transcription of audio streams.
Client-Server Messaging: The standard defines a messaging protocol for sending and receiving messages between clients and servers, allowing for efficient communication of requests and responses.
Advanced LLM Interactions: The standard supports advanced interactions with large language models (LLMs), including context management, function call handling, and search results.
Service-Specific Insights: RTVI supports events that provide insight into the input/output and state of typical services in speech-to-speech workflows.
Metrics and Monitoring: RTVI provides mechanisms for collecting metrics and monitoring the performance of server-side services.
Terms
Client: The front-end application or user interface that interacts with the RTVI server.
Server: The backend service that runs the AI framework and processes requests from the client.
User: The end user interacting with the client application.
Bot: The AI interacting with the user; technically an amalgamation of a large language model (LLM) and a text-to-speech (TTS) service.

RTVI Message Format
The messages defined as part of the RTVI protocol adhere to the following format:

{
  "id": string,
  "label": "rtvi-ai",
  "type": string,
  "data": unknown
}

id (string): A unique identifier for the message, used to correlate requests and responses.
label (string, required, default "rtvi-ai"): A label that identifies this message as an RTVI message. This field is required and should always be set to 'rtvi-ai'.
type (string, required): The type of message being sent. This field must be one of the predefined RTVI message types listed below.
data (unknown): The payload of the message, which can be any data structure relevant to the message type.

RTVI Message Types
Following the above format, this section describes the various message types defined by the RTVI standard. Each message type has a specific purpose and structure, allowing for clear communication between clients and servers. Each message type below includes either a 🤖 or 🏄 emoji to denote whether the message is sent from the bot (🤖) or the client (🏄).

Connection Management
client-ready 🏄: Indicates that the client is ready to receive messages and interact with the server. Typically sent after the transport media channels have connected.
type: 'client-ready'
data:
  version (string): The version of the RTVI standard being used. This is useful for ensuring compatibility between client and server implementations.
  about (AboutClient object): An object containing information about the client, such as its RTVI version, client library, and any other relevant metadata.
The AboutClient object follows this structure:
  library (string, required)
  library_version (string)
  platform (string)
  platform_version (string)
  platform_details (any): Any platform-specific details that may be relevant to the server. This could include information about the browser, operating system, or any other environment-specific data needed by the server. This field is optional and open-ended, so please be mindful of the data you include here and any security concerns that may arise from exposing sensitive or personally identifiable information.

bot-ready 🤖: Indicates that the bot is ready to receive messages and interact with the client. Typically sent after the transport media channels have connected.
type: 'bot-ready'
data:
  version (string): The version of the RTVI standard being used. This is useful for ensuring compatibility between client and server implementations.
  about (any, optional): An object containing information about the server or bot. Its structure and value are both undefined by default. This provides flexibility to include any relevant metadata your client may need to know about the server at connection time, without any built-in security concerns. Please be mindful of the data you include here and any security concerns that may arise from exposing sensitive information.

disconnect-bot 🏄: Indicates that the client wishes to disconnect from the bot. Typically used when the client is shutting down or no longer needs to interact with the bot. Note: Disconnects should happen automatically when either the client or bot disconnects from the transport, so this message is intended for the case where a client may want to remain connected to the transport but no longer wishes to interact with the bot.
type: 'disconnect-bot'
data: undefined

error 🤖: Indicates an error occurred during bot initialization or runtime.
type: 'error'
data:
  message (string): Description of the error.
  fatal (boolean): Indicates if the error is fatal to the session.
Transcription
user-started-speaking 🤖: Emitted when the user begins speaking. type: 'user-started-speaking', data: none.
user-stopped-speaking 🤖: Emitted when the user stops speaking. type: 'user-stopped-speaking', data: none.
bot-started-speaking 🤖: Emitted when the bot begins speaking. type: 'bot-started-speaking', data: none.
bot-stopped-speaking 🤖: Emitted when the bot stops speaking. type: 'bot-stopped-speaking', data: none.

user-transcription 🤖: Real-time transcription of user speech, including both partial and final results.
type: 'user-transcription'
data:
  text (string): The transcribed text of the user.
  final (boolean): Indicates if this is a final transcription or a partial result.
  timestamp (string): The timestamp when the transcription was generated.
  user_id (string): Identifier for the user who spoke.

bot-transcription 🤖: Transcription of the bot's speech. Note: This protocol currently does not match the user transcription format, which supports real-time timestamping; instead, the event is typically sent for each sentence of the bot's response. This difference is due to limitations in TTS services, most of which do not support (or do not support well) accurate timing information. If/when this changes, the protocol may be updated to include the necessary timing information. For now, if you want real-time text that matches your bot's speech, you can try using the bot-tts-text message type.
type: 'bot-transcription'
data:
  text (string): The transcribed text from the bot, typically aggregated at a per-sentence level.

Client-Server Messaging
server-message 🤖: An arbitrary message sent from the server to the client. This can be used for custom interactions or commands. This message may be coupled with the client-message message type to handle responses from the client.
type: 'server-message'
data: any. The data can be any JSON-serializable object, formatted according to your own specifications.
client-message 🏄: An arbitrary message sent from the client to the server. This can be used for custom interactions or commands. This message may be coupled with the server-response message type to handle responses from the server.
type: 'client-message'
data:
  t (string)
  d (unknown, optional)
The data payload should contain a t field indicating the type of message and an optional d field containing any custom, corresponding data needed for the message.

server-response 🤖: A message sent from the server to the client in response to a client-message. IMPORTANT: The id should match the id of the original client-message to correlate the response with the request.
type: 'server-response'
data:
  t (string)
  d (unknown, optional)
The data payload should contain a t field indicating the type of message and an optional d field containing any custom, corresponding data needed for the message.

error-response 🤖: Error response to a specific client message. IMPORTANT: The id should match the id of the original client-message to correlate the response with the request.
type: 'error-response'
data:
  error (string)

Advanced LLM Interactions
append-to-context 🏄: A message sent from the client to the server to append data to the context of the current LLM conversation. This is useful for providing text-based content for the user or augmenting the context for the assistant.
type: 'append-to-context'
data:
  role ("user" | "assistant"): The role the context should be appended to. Currently only supports "user" and "assistant".
  content (unknown): The content to append to the context. This can be any data structure the LLM understands.
  run_immediately (boolean, optional): Indicates whether the context should be run immediately after appending. Defaults to false. If set to false, the context will be appended but not executed until the next LLM run.

llm-function-call 🤖: A function call request from the LLM, sent from the bot to the client.
Note that in most cases, an LLM function call will be handled completely server-side. However, in the event that the call requires input from the client, or the client needs to be aware of the function call, this message/response schema is required.
type: 'llm-function-call'
data:
  function_name (string): Name of the function to be called.
  tool_call_id (string): Unique identifier for this function call.
  args (Record<string, unknown>): Arguments to be passed to the function.

llm-function-call-result 🏄: The result of the function call requested by the LLM, returned from the client.
type: 'llm-function-call-result'
data:
  function_name (string): Name of the called function.
  tool_call_id (string): Identifier matching the original function call.
  args (Record<string, unknown>): Arguments that were passed to the function.
  result (Record<string, unknown> | string): The result returned by the function.

bot-llm-search-response 🤖: Search results from the LLM's knowledge base. Currently, Google Gemini is the only LLM that supports built-in search. However, we expect other LLMs to follow suit, which is why this message type is defined as part of the RTVI standard. As more LLMs add support for this feature, the format of this message type may evolve to accommodate discrepancies.
type: 'bot-llm-search-response'
data:
  search_result (string, optional): Raw search result text.
  rendered_content (string, optional): Formatted version of the search results.
  origins (Array<Origin object>): Source information and confidence scores for search results.
The Origin object follows this structure:

{
  "site_uri": string (optional),
  "site_title": string (optional),
  "results": Array<{ "text": string, "confidence": number[] }>
}

Example:

"id": undefined
"label": "rtvi-ai"
"type": "bot-llm-search-response"
"data": {
  "origins": [
    {
      "results": [
        { "confidence": [0.9881149530410768], "text": "* Juneteenth: A Freedom Celebration is scheduled for June 18th from 12:00 pm to 2:00 pm." },
        { "confidence": [0.9692034721374512], "text": "* A Juneteenth celebration at Fort Negley Park will take place on June 19th from 5:00 pm to 9:30 pm." }
      ],
      "site_title": "vanderbilt.edu",
      "site_uri": "https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHwif83VK9KAzrbMSGSBsKwL8vWfSfC9pgEWYKmStHyqiRoV1oe8j1S0nbwRg_iWgqAr9wUkiegu3ATC8Ll-cuE-vpzwElRHiJ2KgRYcqnOQMoOeokVpWqi"
    },
    {
      "results": [
        { "confidence": [0.6554043292999268], "text": "In addition to these events, Vanderbilt University is a large research institution with ongoing activities across many fields." }
      ],
      "site_title": "wikipedia.org",
      "site_uri": "https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQESbF-ijx78QbaglrhflHCUWdPTD4M6tYOQigW5hgsHNctRlAHu9ktfPmJx7DfoP5QicE0y-OQY1cRl9w4Id0btiFgLYSKIm2-SPtOHXeNrAlgA7mBnclaGrD7rgnLIbrjl8DgUEJrrvT0CKzuo"
    }
  ],
  "rendered_content": "<style> \n .container ... </div> \n </div> \n ",
  "search_result": "Several events are happening at Vanderbilt University: \n\n * Juneteenth: A Freedom Celebration is scheduled for June 18th from 12:00 pm to 2:00 pm. \n * A Juneteenth celebration at Fort Negley Park will take place on June 19th from 5:00 pm to 9:30 pm. \n\n In addition to these events, Vanderbilt University is a large research institution with ongoing activities across many fields. For the most recent news, you should check Vanderbilt's official news website. \n "
}

Service-Specific Insights
bot-llm-started 🤖: Indicates LLM processing has begun. type: 'bot-llm-started', data: none.
bot-llm-stopped 🤖: Indicates LLM processing has completed. type: 'bot-llm-stopped', data: none.

user-llm-text 🤖: Aggregated user input text that is sent to the LLM.
type: 'user-llm-text'
data:
  text (string): The user's input text to be processed by the LLM.

bot-llm-text 🤖: Individual tokens streamed from the LLM as they are generated.
type: 'bot-llm-text'
data:
  text (string): The token text from the LLM.

bot-tts-started 🤖: Indicates text-to-speech (TTS) processing has begun. type: 'bot-tts-started', data: none.
bot-tts-stopped 🤖: Indicates text-to-speech (TTS) processing has completed. type: 'bot-tts-stopped', data: none.

bot-tts-text 🤖: The per-token text output of the text-to-speech (TTS) service (what the TTS actually says).
type: 'bot-tts-text'
data:
  text (string): The text representation of the generated bot speech.

Metrics and Monitoring
metrics 🤖: Performance metrics for various processing stages and services. Each message will contain entries for one or more of the metric types: processing, ttfb, characters.
type: 'metrics'
data:
  processing (optional): Processing time metrics.
  ttfb (optional): Time to first byte metrics.
  characters (optional): Character processing metrics.

For each metric type, the data structure is an array of objects with the following structure:
  processor (string): The name of the processor or service that generated the metric.
  value (number): The value of the metric, typically in milliseconds or character count.
  model (string, optional): The model of the service that generated the metric, if applicable.
Example:

{
  "type": "metrics",
  "data": {
    "processing": [
      { "model": "eleven_flash_v2_5", "processor": "ElevenLabsTTSService#0", "value": 0.0005140304565429688 }
    ],
    "ttfb": [
      { "model": "eleven_flash_v2_5", "processor": "ElevenLabsTTSService#0", "value": 0.1573178768157959 }
    ],
    "characters": [
      { "model": "eleven_flash_v2_5", "processor": "ElevenLabsTTSService#0", "value": 43 }
    ]
  }
}
|
daily_rest-helpers_07e70cfd.txt
ADDED
|
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
URL: https://docs.pipecat.ai/server/utilities/daily/rest-helpers#param-nbf
|
| 2 |
+
Title: Daily REST Helper - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
Daily REST Helper - Pipecat

Daily REST API Documentation: see the complete Daily REST API reference for additional details.

Classes

DailyRoomSipParams
Configuration for SIP (Session Initiation Protocol) parameters.
  display_name (string, default "sw-sip-dialin"): Display name for the SIP endpoint.
  video (boolean, default false): Whether video is enabled for SIP.
  sip_mode (string, default "dial-in"): SIP connection mode.
  num_endpoints (integer, default 1): Number of SIP endpoints.

from pipecat.transports.services.helpers.daily_rest import DailyRoomSipParams

sip_params = DailyRoomSipParams(
    display_name="conference-line",
    video=True,
    num_endpoints=2,
)

RecordingsBucketConfig
Configuration for storing Daily recordings in a custom S3 bucket.
bucket_name string required Name of the S3 bucket for storing recordings bucket_region string required AWS region where the S3 bucket is located assume_role_arn string required ARN of the IAM role to assume for S3 access allow_api_access boolean default: false Whether to allow API access to the recordings Copy Ask AI from pipecat.transports.services.helpers.daily_rest import RecordingsBucketConfig bucket_config = RecordingsBucketConfig( bucket_name = "my-recordings-bucket" , bucket_region = "us-west-2" , assume_role_arn = "arn:aws:iam::123456789012:role/DailyRecordingsRole" , allow_api_access = True ) DailyRoomProperties Properties that configure a Daily room’s behavior and features. exp float Room expiration time as Unix timestamp (e.g., time.time() + 300 for 5 minutes) enable_chat boolean default: false Whether chat is enabled in the room enable_prejoin_ui boolean default: false Whether the prejoin lobby UI is enabled enable_emoji_reactions boolean default: false Whether emoji reactions are enabled eject_at_room_exp boolean default: false Whether to eject participants when room expires enable_dialout boolean Whether dial-out is enabled enable_recording string Recording settings (“cloud”, “local”, or “raw-tracks”) geo string Geographic region for room max_participants number Maximum number of participants allowed in the room recordings_bucket RecordingsBucketConfig Configuration for custom S3 bucket recordings sip DailyRoomSipParams SIP configuration parameters sip_uri dict SIP URI configuration (returned by Daily) start_video_off boolean default: false Whether the camera video is turned off by default The class also includes a sip_endpoint property that returns the SIP endpoint URI if available. 
Copy Ask AI import time from pipecat.transports.services.helpers.daily_rest import ( DailyRoomProperties, DailyRoomSipParams, RecordingsBucketConfig, ) properties = DailyRoomProperties( exp = time.time() + 3600 , # 1 hour from now enable_chat = True , enable_emoji_reactions = True , enable_recording = "cloud" , geo = "us-west" , max_participants = 50 , sip = DailyRoomSipParams( display_name = "conference" ), recordings_bucket = RecordingsBucketConfig( bucket_name = "my-bucket" , bucket_region = "us-west-2" , assume_role_arn = "arn:aws:iam::123456789012:role/DailyRole" ) ) # Access SIP endpoint if available if properties.sip_endpoint: print ( f "SIP endpoint: { properties.sip_endpoint } " ) DailyRoomParams Parameters for creating a new Daily room. name string Room name (if not provided, one will be generated) privacy string default: "public" Room privacy setting (“private” or “public”) properties DailyRoomProperties Room configuration properties Copy Ask AI import time from pipecat.transports.services.helpers.daily_rest import ( DailyRoomParams, DailyRoomProperties, ) params = DailyRoomParams( name = "team-meeting" , privacy = "private" , properties = DailyRoomProperties( enable_chat = True , exp = time.time() + 7200 # 2 hours from now ) ) DailyRoomObject Response object representing a Daily room. 
id string Unique room identifier name string Room name api_created boolean Whether the room was created via API privacy string Room privacy setting url string Complete room URL created_at string Room creation timestamp in ISO 8601 format config DailyRoomProperties Room configuration Copy Ask AI from pipecat.transports.services.helpers.daily_rest import ( DailyRoomObject, DailyRoomProperties, ) # Example of what a DailyRoomObject looks like when received room = DailyRoomObject( id = "abc123" , name = "team-meeting" , api_created = True , privacy = "private" , url = "https://your-domain.daily.co/team-meeting" , created_at = "2024-01-20T10:00:00.000Z" , config = DailyRoomProperties( enable_chat = True , exp = 1705743600 ) ) DailyMeetingTokenProperties Properties for configuring a Daily meeting token. room_name string The room this token is valid for. If not set, token is valid for all rooms. eject_at_token_exp boolean Whether to eject user when token expires eject_after_elapsed integer Eject user after this many seconds nbf integer “Not before” timestamp - users cannot join before this time exp integer Expiration timestamp - users cannot join after this time is_owner boolean Whether token grants owner privileges user_name string User’s display name in the meeting user_id string Unique identifier for the user (36 char limit) enable_screenshare boolean Whether user can share their screen start_video_off boolean Whether to join with video off start_audio_off boolean Whether to join with audio off enable_recording string Recording settings (“cloud”, “local”, or “raw-tracks”) enable_prejoin_ui boolean Whether to show prejoin UI start_cloud_recording boolean Whether to start cloud recording when user joins permissions dict Initial default permissions for a non-meeting-owner participant DailyMeetingTokenParams Parameters for creating a Daily meeting token. 
properties DailyMeetingTokenProperties Token configuration properties Copy Ask AI from pipecat.transports.services.helpers.daily_rest import ( DailyMeetingTokenParams, DailyMeetingTokenProperties, ) token_params = DailyMeetingTokenParams( properties = DailyMeetingTokenProperties( user_name = "John Doe" , enable_screenshare = True , start_video_off = True , permissions = { "canSend" : [ "video" , "audio" ]} ) ) Initialize DailyRESTHelper Create a new instance of the Daily REST helper. daily_api_key string required Your Daily API key daily_api_url string default: "https://api.daily.co/v1" The Daily API base URL aiohttp_session aiohttp.ClientSession required An aiohttp client session for making HTTP requests Copy Ask AI helper = DailyRESTHelper( daily_api_key = "your-api-key" , aiohttp_session = session ) Create Room Creates a new Daily room with specified parameters. params DailyRoomParams required Room configuration parameters including name, privacy, and properties Copy Ask AI # Create a room that expires in 1 hour params = DailyRoomParams( name = "my-room" , privacy = "private" , properties = DailyRoomProperties( exp = time.time() + 3600 , enable_chat = True ) ) room = await helper.create_room(params) print ( f "Room URL: { room.url } " ) Get Room From URL Retrieves room information using a Daily room URL. room_url string required The complete Daily room URL Copy Ask AI room = await helper.get_room_from_url( "https://your-domain.daily.co/my-room" ) print ( f "Room name: { room.name } " ) Get Token Generates a meeting token for a specific room. room_url string required The complete Daily room URL expiry_time float default: "3600" Token expiration time in seconds eject_at_token_exp bool default: "False" Whether to eject user when token expires owner bool default: "True" Whether the token should have owner privileges (overrides any setting in params) params DailyMeetingTokenParams Additional token configuration. 
Note that room_name , exp , eject_at_token_exp , and is_owner will be set based on the other function parameters. Copy Ask AI # Basic token generation token = await helper.get_token( room_url = "https://your-domain.daily.co/my-room" , expiry_time = 1800 , # 30 minutes owner = True , eject_at_token_exp = True ) # Advanced token generation with additional properties token_params = DailyMeetingTokenParams( properties = DailyMeetingTokenProperties( user_name = "John Doe" , start_video_off = True ) ) token = await helper.get_token( room_url = "https://your-domain.daily.co/my-room" , expiry_time = 1800 , owner = False , eject_at_token_exp = True , params = token_params ) Delete Room By URL Deletes a room using its URL. room_url string required The complete Daily room URL Copy Ask AI success = await helper.delete_room_by_url( "https://your-domain.daily.co/my-room" ) if success: print ( "Room deleted successfully" ) Delete Room By Name Deletes a room using its name. room_name string required The name of the Daily room Copy Ask AI success = await helper.delete_room_by_name( "my-room" ) if success: print ( "Room deleted successfully" ) Get Name From URL Extracts the room name from a Daily room URL. room_url string required The complete Daily room URL Copy Ask AI room_name = helper.get_name_from_url( "https://your-domain.daily.co/my-room" ) print ( f "Room name: { room_name } " ) # Outputs: "my-room" Turn Tracking Observer Smart Turn Overview On this page Classes DailyRoomSipParams RecordingsBucketConfig DailyRoomProperties DailyRoomParams DailyRoomObject DailyMeetingTokenProperties DailyMeetingTokenParams Initialize DailyRESTHelper Create Room Get Room From URL Get Token Delete Room By URL Delete Room By Name Get Name From URL Assistant Responses are generated using AI and may contain mistakes.
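The nbf and exp fields on DailyMeetingTokenProperties, the exp on DailyRoomProperties, and the expiry_time argument to get_token are all plain Unix-timestamp arithmetic. A minimal stdlib-only sketch of computing a token's validity window; the token_window helper is illustrative and not part of Pipecat:

```python
import time
from typing import Optional, Tuple

def token_window(delay_s: int, lifetime_s: int,
                 now: Optional[float] = None) -> Tuple[int, int]:
    """Compute (nbf, exp) Unix timestamps: the token becomes valid
    delay_s seconds from now and expires lifetime_s seconds after that."""
    base = time.time() if now is None else now
    nbf = int(base + delay_s)
    exp = nbf + lifetime_s
    return nbf, exp

# A token usable starting 60 seconds from now, valid for 30 minutes
nbf, exp = token_window(delay_s=60, lifetime_s=1800)
```

The resulting integers can be passed as nbf=nbf, exp=exp when constructing DailyMeetingTokenProperties, which is useful when a token should only become valid at a scheduled meeting start.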
daily_rest-helpers_35407073.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://docs.pipecat.ai/server/utilities/daily/rest-helpers#param-properties
Title: Daily REST Helper - Pipecat
==================================================
daily_rest-helpers_40141281.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://docs.pipecat.ai/server/utilities/daily/rest-helpers#param-config
Title: Daily REST Helper - Pipecat
==================================================
daily_rest-helpers_4c97fee6.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://docs.pipecat.ai/server/utilities/daily/rest-helpers#param-room-name-1
Title: Daily REST Helper - Pipecat
==================================================
Daily REST Helper - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation Service Utilities Daily REST Helper Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Server API Reference API Reference Reference docs Services Supported Services Transport Serializers Speech-to-Text LLM Text-to-Speech Speech-to-Speech Image Generation Video Memory Vision Analytics & Monitoring Utilities Advanced Frame Processors Audio Processing Frame Filters Metrics and Telemetry MCP Observers Service Utilities Daily REST Helper Smart Turn Detection Task Handling and Monitoring Telephony Text Aggregators and Filters User and Bot Transcriptions User Interruptions Frameworks RTVI Pipecat Flows Pipeline PipelineParams PipelineTask Pipeline Idle Detection Pipeline Heartbeats ParallelPipeline Daily REST API Documentation For complete Daily REST API reference and additional details Classes DailyRoomSipParams Configuration for SIP (Session Initiation Protocol) parameters. display_name string default: "sw-sip-dialin" Display name for the SIP endpoint video boolean default: false Whether video is enabled for SIP sip_mode string default: "dial-in" SIP connection mode num_endpoints integer default: 1 Number of SIP endpoints Copy Ask AI from pipecat.transports.services.helpers.daily_rest import DailyRoomSipParams sip_params = DailyRoomSipParams( display_name = "conference-line" , video = True , num_endpoints = 2 ) RecordingsBucketConfig Configuration for storing Daily recordings in a custom S3 bucket. 
bucket_name string required Name of the S3 bucket for storing recordings bucket_region string required AWS region where the S3 bucket is located assume_role_arn string required ARN of the IAM role to assume for S3 access allow_api_access boolean default: false Whether to allow API access to the recordings Copy Ask AI from pipecat.transports.services.helpers.daily_rest import RecordingsBucketConfig bucket_config = RecordingsBucketConfig( bucket_name = "my-recordings-bucket" , bucket_region = "us-west-2" , assume_role_arn = "arn:aws:iam::123456789012:role/DailyRecordingsRole" , allow_api_access = True ) DailyRoomProperties Properties that configure a Daily room’s behavior and features. exp float Room expiration time as Unix timestamp (e.g., time.time() + 300 for 5 minutes) enable_chat boolean default: false Whether chat is enabled in the room enable_prejoin_ui boolean default: false Whether the prejoin lobby UI is enabled enable_emoji_reactions boolean default: false Whether emoji reactions are enabled eject_at_room_exp boolean default: false Whether to eject participants when room expires enable_dialout boolean Whether dial-out is enabled enable_recording string Recording settings (“cloud”, “local”, or “raw-tracks”) geo string Geographic region for room max_participants number Maximum number of participants allowed in the room recordings_bucket RecordingsBucketConfig Configuration for custom S3 bucket recordings sip DailyRoomSipParams SIP configuration parameters sip_uri dict SIP URI configuration (returned by Daily) start_video_off boolean default: false Whether the camera video is turned off by default The class also includes a sip_endpoint property that returns the SIP endpoint URI if available. 
```python
import time

from pipecat.transports.services.helpers.daily_rest import (
    DailyRoomProperties,
    DailyRoomSipParams,
    RecordingsBucketConfig,
)

properties = DailyRoomProperties(
    exp=time.time() + 3600,  # 1 hour from now
    enable_chat=True,
    enable_emoji_reactions=True,
    enable_recording="cloud",
    geo="us-west",
    max_participants=50,
    sip=DailyRoomSipParams(display_name="conference"),
    recordings_bucket=RecordingsBucketConfig(
        bucket_name="my-bucket",
        bucket_region="us-west-2",
        assume_role_arn="arn:aws:iam::123456789012:role/DailyRole",
    ),
)

# Access the SIP endpoint if available
if properties.sip_endpoint:
    print(f"SIP endpoint: {properties.sip_endpoint}")
```

DailyRoomParams

Parameters for creating a new Daily room.

- name (string): Room name (if not provided, one will be generated)
- privacy (string, default: "public"): Room privacy setting ("private" or "public")
- properties (DailyRoomProperties): Room configuration properties

```python
import time

from pipecat.transports.services.helpers.daily_rest import (
    DailyRoomParams,
    DailyRoomProperties,
)

params = DailyRoomParams(
    name="team-meeting",
    privacy="private",
    properties=DailyRoomProperties(
        enable_chat=True,
        exp=time.time() + 7200,  # 2 hours from now
    ),
)
```

DailyRoomObject

Response object representing a Daily room.
- id (string): Unique room identifier
- name (string): Room name
- api_created (boolean): Whether the room was created via the API
- privacy (string): Room privacy setting
- url (string): Complete room URL
- created_at (string): Room creation timestamp in ISO 8601 format
- config (DailyRoomProperties): Room configuration

```python
from pipecat.transports.services.helpers.daily_rest import (
    DailyRoomObject,
    DailyRoomProperties,
)

# Example of what a DailyRoomObject looks like when received
room = DailyRoomObject(
    id="abc123",
    name="team-meeting",
    api_created=True,
    privacy="private",
    url="https://your-domain.daily.co/team-meeting",
    created_at="2024-01-20T10:00:00.000Z",
    config=DailyRoomProperties(enable_chat=True, exp=1705743600),
)
```

DailyMeetingTokenProperties

Properties for configuring a Daily meeting token.

- room_name (string): The room this token is valid for; if not set, the token is valid for all rooms
- eject_at_token_exp (boolean): Whether to eject the user when the token expires
- eject_after_elapsed (integer): Eject the user after this many seconds
- nbf (integer): "Not before" Unix timestamp; users cannot join before this time
- exp (integer): Expiration Unix timestamp; users cannot join after this time
- is_owner (boolean): Whether the token grants owner privileges
- user_name (string): User's display name in the meeting
- user_id (string): Unique identifier for the user (36-character limit)
- enable_screenshare (boolean): Whether the user can share their screen
- start_video_off (boolean): Whether to join with video off
- start_audio_off (boolean): Whether to join with audio off
- enable_recording (string): Recording mode ("cloud", "local", or "raw-tracks")
- enable_prejoin_ui (boolean): Whether to show the prejoin UI
- start_cloud_recording (boolean): Whether to start cloud recording when the user joins
- permissions (dict): Initial default permissions for a non-meeting-owner participant

DailyMeetingTokenParams

Parameters for creating a Daily meeting token.
- properties (DailyMeetingTokenProperties): Token configuration properties

```python
from pipecat.transports.services.helpers.daily_rest import (
    DailyMeetingTokenParams,
    DailyMeetingTokenProperties,
)

token_params = DailyMeetingTokenParams(
    properties=DailyMeetingTokenProperties(
        user_name="John Doe",
        enable_screenshare=True,
        start_video_off=True,
        permissions={"canSend": ["video", "audio"]},
    )
)
```

Initialize DailyRESTHelper

Create a new instance of the Daily REST helper.

- daily_api_key (string, required): Your Daily API key
- daily_api_url (string, default: "https://api.daily.co/v1"): The Daily API base URL
- aiohttp_session (aiohttp.ClientSession, required): An aiohttp client session for making HTTP requests

```python
helper = DailyRESTHelper(
    daily_api_key="your-api-key",
    aiohttp_session=session,
)
```

Create Room

Creates a new Daily room with the specified parameters.

- params (DailyRoomParams, required): Room configuration parameters, including name, privacy, and properties

```python
# Create a room that expires in 1 hour
params = DailyRoomParams(
    name="my-room",
    privacy="private",
    properties=DailyRoomProperties(
        exp=time.time() + 3600,
        enable_chat=True,
    ),
)
room = await helper.create_room(params)
print(f"Room URL: {room.url}")
```

Get Room From URL

Retrieves room information using a Daily room URL.

- room_url (string, required): The complete Daily room URL

```python
room = await helper.get_room_from_url("https://your-domain.daily.co/my-room")
print(f"Room name: {room.name}")
```

Get Token

Generates a meeting token for a specific room.

- room_url (string, required): The complete Daily room URL
- expiry_time (float, default: 3600): Token expiration time in seconds
- eject_at_token_exp (bool, default: False): Whether to eject the user when the token expires
- owner (bool, default: True): Whether the token should have owner privileges (overrides any setting in params)
- params (DailyMeetingTokenParams): Additional token configuration.
  Note that room_name, exp, eject_at_token_exp, and is_owner will be set based on the other function parameters.

```python
# Basic token generation
token = await helper.get_token(
    room_url="https://your-domain.daily.co/my-room",
    expiry_time=1800,  # 30 minutes
    owner=True,
    eject_at_token_exp=True,
)

# Advanced token generation with additional properties
token_params = DailyMeetingTokenParams(
    properties=DailyMeetingTokenProperties(
        user_name="John Doe",
        start_video_off=True,
    )
)
token = await helper.get_token(
    room_url="https://your-domain.daily.co/my-room",
    expiry_time=1800,
    owner=False,
    eject_at_token_exp=True,
    params=token_params,
)
```

Delete Room By URL

Deletes a room using its URL.

- room_url (string, required): The complete Daily room URL

```python
success = await helper.delete_room_by_url("https://your-domain.daily.co/my-room")
if success:
    print("Room deleted successfully")
```

Delete Room By Name

Deletes a room using its name.

- room_name (string, required): The name of the Daily room

```python
success = await helper.delete_room_by_name("my-room")
if success:
    print("Room deleted successfully")
```

Get Name From URL

Extracts the room name from a Daily room URL.

- room_url (string, required): The complete Daily room URL

```python
room_name = helper.get_name_from_url("https://your-domain.daily.co/my-room")
print(f"Room name: {room_name}")  # Outputs: "my-room"
```
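Since get_name_from_url only needs the URL's path, the same extraction can be sketched with the standard library. This is an illustrative equivalent, not the library's actual implementation; room_name_from_url is a hypothetical helper, and it assumes the room name is the last path segment of the room URL:

```python
from urllib.parse import urlparse


def room_name_from_url(room_url: str) -> str:
    """Return the last path segment of a Daily room URL (hypothetical helper)."""
    path = urlparse(room_url).path
    return path.rstrip("/").rsplit("/", 1)[-1]


print(room_name_from_url("https://your-domain.daily.co/my-room"))  # my-room
```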
daily_rest-helpers_a9d99269.txt
URL: https://docs.pipecat.ai/server/utilities/daily/rest-helpers#classes
Title: Daily REST Helper - Pipecat
daily_rest-helpers_cbb5a2ed.txt
URL: https://docs.pipecat.ai/server/utilities/daily/rest-helpers#param-geo
Title: Daily REST Helper - Pipecat
daily_rest-helpers_df8e58ba.txt
ADDED
|
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
URL: https://docs.pipecat.ai/server/utilities/daily/rest-helpers#param-user-id
|
| 2 |
+
Title: Daily REST Helper - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
Daily REST Helper - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation Service Utilities Daily REST Helper Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Server API Reference API Reference Reference docs Services Supported Services Transport Serializers Speech-to-Text LLM Text-to-Speech Speech-to-Speech Image Generation Video Memory Vision Analytics & Monitoring Utilities Advanced Frame Processors Audio Processing Frame Filters Metrics and Telemetry MCP Observers Service Utilities Daily REST Helper Smart Turn Detection Task Handling and Monitoring Telephony Text Aggregators and Filters User and Bot Transcriptions User Interruptions Frameworks RTVI Pipecat Flows Pipeline PipelineParams PipelineTask Pipeline Idle Detection Pipeline Heartbeats ParallelPipeline Daily REST API Documentation For complete Daily REST API reference and additional details Classes DailyRoomSipParams Configuration for SIP (Session Initiation Protocol) parameters. display_name string default: "sw-sip-dialin" Display name for the SIP endpoint video boolean default: false Whether video is enabled for SIP sip_mode string default: "dial-in" SIP connection mode num_endpoints integer default: 1 Number of SIP endpoints Copy Ask AI from pipecat.transports.services.helpers.daily_rest import DailyRoomSipParams sip_params = DailyRoomSipParams( display_name = "conference-line" , video = True , num_endpoints = 2 ) RecordingsBucketConfig Configuration for storing Daily recordings in a custom S3 bucket. 
bucket_name string required Name of the S3 bucket for storing recordings bucket_region string required AWS region where the S3 bucket is located assume_role_arn string required ARN of the IAM role to assume for S3 access allow_api_access boolean default: false Whether to allow API access to the recordings Copy Ask AI from pipecat.transports.services.helpers.daily_rest import RecordingsBucketConfig bucket_config = RecordingsBucketConfig( bucket_name = "my-recordings-bucket" , bucket_region = "us-west-2" , assume_role_arn = "arn:aws:iam::123456789012:role/DailyRecordingsRole" , allow_api_access = True ) DailyRoomProperties Properties that configure a Daily room’s behavior and features. exp float Room expiration time as Unix timestamp (e.g., time.time() + 300 for 5 minutes) enable_chat boolean default: false Whether chat is enabled in the room enable_prejoin_ui boolean default: false Whether the prejoin lobby UI is enabled enable_emoji_reactions boolean default: false Whether emoji reactions are enabled eject_at_room_exp boolean default: false Whether to eject participants when room expires enable_dialout boolean Whether dial-out is enabled enable_recording string Recording settings (“cloud”, “local”, or “raw-tracks”) geo string Geographic region for room max_participants number Maximum number of participants allowed in the room recordings_bucket RecordingsBucketConfig Configuration for custom S3 bucket recordings sip DailyRoomSipParams SIP configuration parameters sip_uri dict SIP URI configuration (returned by Daily) start_video_off boolean default: false Whether the camera video is turned off by default The class also includes a sip_endpoint property that returns the SIP endpoint URI if available. 
    import time
    from pipecat.transports.services.helpers.daily_rest import (
        DailyRoomProperties,
        DailyRoomSipParams,
        RecordingsBucketConfig,
    )

    properties = DailyRoomProperties(
        exp=time.time() + 3600,  # 1 hour from now
        enable_chat=True,
        enable_emoji_reactions=True,
        enable_recording="cloud",
        geo="us-west",
        max_participants=50,
        sip=DailyRoomSipParams(display_name="conference"),
        recordings_bucket=RecordingsBucketConfig(
            bucket_name="my-bucket",
            bucket_region="us-west-2",
            assume_role_arn="arn:aws:iam::123456789012:role/DailyRole",
        ),
    )

    # Access SIP endpoint if available
    if properties.sip_endpoint:
        print(f"SIP endpoint: {properties.sip_endpoint}")

DailyRoomParams

Parameters for creating a new Daily room.

name (string): Room name (if not provided, one will be generated)
privacy (string, default: "public"): Room privacy setting ("private" or "public")
properties (DailyRoomProperties): Room configuration properties

    import time
    from pipecat.transports.services.helpers.daily_rest import (
        DailyRoomParams,
        DailyRoomProperties,
    )

    params = DailyRoomParams(
        name="team-meeting",
        privacy="private",
        properties=DailyRoomProperties(
            enable_chat=True,
            exp=time.time() + 7200,  # 2 hours from now
        ),
    )

DailyRoomObject

Response object representing a Daily room.

id (string): Unique room identifier
name (string): Room name
api_created (boolean): Whether the room was created via API
privacy (string): Room privacy setting
url (string): Complete room URL
created_at (string): Room creation timestamp in ISO 8601 format
config (DailyRoomProperties): Room configuration

    from pipecat.transports.services.helpers.daily_rest import (
        DailyRoomObject,
        DailyRoomProperties,
    )

    # Example of what a DailyRoomObject looks like when received
    room = DailyRoomObject(
        id="abc123",
        name="team-meeting",
        api_created=True,
        privacy="private",
        url="https://your-domain.daily.co/team-meeting",
        created_at="2024-01-20T10:00:00.000Z",
        config=DailyRoomProperties(enable_chat=True, exp=1705743600),
    )

DailyMeetingTokenProperties

Properties for configuring a Daily meeting token.

room_name (string): The room this token is valid for. If not set, the token is valid for all rooms.
eject_at_token_exp (boolean): Whether to eject the user when the token expires
eject_after_elapsed (integer): Eject the user after this many seconds
nbf (integer): "Not before" timestamp; users cannot join before this time
exp (integer): Expiration timestamp; users cannot join after this time
is_owner (boolean): Whether the token grants owner privileges
user_name (string): User's display name in the meeting
user_id (string): Unique identifier for the user (36-character limit)
enable_screenshare (boolean): Whether the user can share their screen
start_video_off (boolean): Whether to join with video off
start_audio_off (boolean): Whether to join with audio off
enable_recording (string): Recording settings ("cloud", "local", or "raw-tracks")
enable_prejoin_ui (boolean): Whether to show the prejoin UI
start_cloud_recording (boolean): Whether to start cloud recording when the user joins
permissions (dict): Initial default permissions for a non-meeting-owner participant

DailyMeetingTokenParams

Parameters for creating a Daily meeting token.

properties (DailyMeetingTokenProperties): Token configuration properties

    from pipecat.transports.services.helpers.daily_rest import (
        DailyMeetingTokenParams,
        DailyMeetingTokenProperties,
    )

    token_params = DailyMeetingTokenParams(
        properties=DailyMeetingTokenProperties(
            user_name="John Doe",
            enable_screenshare=True,
            start_video_off=True,
            permissions={"canSend": ["video", "audio"]},
        )
    )

Initialize DailyRESTHelper

Create a new instance of the Daily REST helper.

daily_api_key (string, required): Your Daily API key
daily_api_url (string, default: "https://api.daily.co/v1"): The Daily API base URL
aiohttp_session (aiohttp.ClientSession, required): An aiohttp client session for making HTTP requests

    helper = DailyRESTHelper(
        daily_api_key="your-api-key",
        aiohttp_session=session,
    )

Create Room

Creates a new Daily room with the specified parameters.

params (DailyRoomParams, required): Room configuration parameters, including name, privacy, and properties

    # Create a room that expires in 1 hour
    params = DailyRoomParams(
        name="my-room",
        privacy="private",
        properties=DailyRoomProperties(
            exp=time.time() + 3600,
            enable_chat=True,
        ),
    )
    room = await helper.create_room(params)
    print(f"Room URL: {room.url}")

Get Room From URL

Retrieves room information using a Daily room URL.

room_url (string, required): The complete Daily room URL

    room = await helper.get_room_from_url("https://your-domain.daily.co/my-room")
    print(f"Room name: {room.name}")

Get Token

Generates a meeting token for a specific room.

room_url (string, required): The complete Daily room URL
expiry_time (float, default: 3600): Token expiration time in seconds
eject_at_token_exp (bool, default: False): Whether to eject the user when the token expires
owner (bool, default: True): Whether the token should have owner privileges (overrides any setting in params)
params (DailyMeetingTokenParams): Additional token configuration. Note that room_name, exp, eject_at_token_exp, and is_owner will be set based on the other function parameters.

    # Basic token generation
    token = await helper.get_token(
        room_url="https://your-domain.daily.co/my-room",
        expiry_time=1800,  # 30 minutes
        owner=True,
        eject_at_token_exp=True,
    )

    # Advanced token generation with additional properties
    token_params = DailyMeetingTokenParams(
        properties=DailyMeetingTokenProperties(
            user_name="John Doe",
            start_video_off=True,
        )
    )
    token = await helper.get_token(
        room_url="https://your-domain.daily.co/my-room",
        expiry_time=1800,
        owner=False,
        eject_at_token_exp=True,
        params=token_params,
    )

Delete Room By URL

Deletes a room using its URL.

room_url (string, required): The complete Daily room URL

    success = await helper.delete_room_by_url("https://your-domain.daily.co/my-room")
    if success:
        print("Room deleted successfully")

Delete Room By Name

Deletes a room using its name.

room_name (string, required): The name of the Daily room

    success = await helper.delete_room_by_name("my-room")
    if success:
        print("Room deleted successfully")

Get Name From URL

Extracts the room name from a Daily room URL.

room_url (string, required): The complete Daily room URL

    room_name = helper.get_name_from_url("https://your-domain.daily.co/my-room")
    print(f"Room name: {room_name}")  # Outputs: "my-room"
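get_name_from_url is pure string handling: the room name is the last path segment of the room URL. The sketch below reproduces that documented behavior with the standard library alone; it is an assumption-level reimplementation for illustration, not pipecat's actual code:

```python
from urllib.parse import urlparse

def name_from_room_url(room_url: str) -> str:
    # The room name is the final path segment of the Daily room URL,
    # e.g. "https://your-domain.daily.co/my-room" -> "my-room".
    path = urlparse(room_url).path
    return path.rstrip("/").rsplit("/", 1)[-1]

print(name_from_room_url("https://your-domain.daily.co/my-room"))  # my-room
```

This can be handy for logging or cache keys in code that only has the room URL, without needing a DailyRESTHelper instance.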
|
daily_rest-helpers_e36053a2.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://docs.pipecat.ai/server/utilities/daily/rest-helpers#get-room-from-url
+Title: Daily REST Helper - Pipecat
+==================================================
+
Daily REST Helper - Pipecat

Daily REST API Documentation: see Daily's REST API reference for complete details.

Classes

DailyRoomSipParams

Configuration for SIP (Session Initiation Protocol) parameters.

display_name (string, default: "sw-sip-dialin"): Display name for the SIP endpoint
video (boolean, default: false): Whether video is enabled for SIP
sip_mode (string, default: "dial-in"): SIP connection mode
num_endpoints (integer, default: 1): Number of SIP endpoints

    from pipecat.transports.services.helpers.daily_rest import DailyRoomSipParams

    sip_params = DailyRoomSipParams(
        display_name="conference-line",
        video=True,
        num_endpoints=2,
    )
|
daily_rest-helpers_e67003ac.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://docs.pipecat.ai/server/utilities/daily/rest-helpers#param-eject-after-elapsed
+Title: Daily REST Helper - Pipecat
+==================================================
+
|
daily_rest-helpers_f7ab8d86.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://docs.pipecat.ai/server/utilities/daily/rest-helpers#param-privacy
+Title: Daily REST Helper - Pipecat
+==================================================
+
Daily REST Helper - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation Service Utilities Daily REST Helper Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Server API Reference API Reference Reference docs Services Supported Services Transport Serializers Speech-to-Text LLM Text-to-Speech Speech-to-Speech Image Generation Video Memory Vision Analytics & Monitoring Utilities Advanced Frame Processors Audio Processing Frame Filters Metrics and Telemetry MCP Observers Service Utilities Daily REST Helper Smart Turn Detection Task Handling and Monitoring Telephony Text Aggregators and Filters User and Bot Transcriptions User Interruptions Frameworks RTVI Pipecat Flows Pipeline PipelineParams PipelineTask Pipeline Idle Detection Pipeline Heartbeats ParallelPipeline Daily REST API Documentation For complete Daily REST API reference and additional details Classes DailyRoomSipParams Configuration for SIP (Session Initiation Protocol) parameters. display_name string default: "sw-sip-dialin" Display name for the SIP endpoint video boolean default: false Whether video is enabled for SIP sip_mode string default: "dial-in" SIP connection mode num_endpoints integer default: 1 Number of SIP endpoints Copy Ask AI from pipecat.transports.services.helpers.daily_rest import DailyRoomSipParams sip_params = DailyRoomSipParams( display_name = "conference-line" , video = True , num_endpoints = 2 ) RecordingsBucketConfig Configuration for storing Daily recordings in a custom S3 bucket. 
bucket_name string required Name of the S3 bucket for storing recordings bucket_region string required AWS region where the S3 bucket is located assume_role_arn string required ARN of the IAM role to assume for S3 access allow_api_access boolean default: false Whether to allow API access to the recordings Copy Ask AI from pipecat.transports.services.helpers.daily_rest import RecordingsBucketConfig bucket_config = RecordingsBucketConfig( bucket_name = "my-recordings-bucket" , bucket_region = "us-west-2" , assume_role_arn = "arn:aws:iam::123456789012:role/DailyRecordingsRole" , allow_api_access = True ) DailyRoomProperties Properties that configure a Daily room’s behavior and features. exp float Room expiration time as Unix timestamp (e.g., time.time() + 300 for 5 minutes) enable_chat boolean default: false Whether chat is enabled in the room enable_prejoin_ui boolean default: false Whether the prejoin lobby UI is enabled enable_emoji_reactions boolean default: false Whether emoji reactions are enabled eject_at_room_exp boolean default: false Whether to eject participants when room expires enable_dialout boolean Whether dial-out is enabled enable_recording string Recording settings (“cloud”, “local”, or “raw-tracks”) geo string Geographic region for room max_participants number Maximum number of participants allowed in the room recordings_bucket RecordingsBucketConfig Configuration for custom S3 bucket recordings sip DailyRoomSipParams SIP configuration parameters sip_uri dict SIP URI configuration (returned by Daily) start_video_off boolean default: false Whether the camera video is turned off by default The class also includes a sip_endpoint property that returns the SIP endpoint URI if available. 
```python
import time

from pipecat.transports.services.helpers.daily_rest import (
    DailyRoomProperties,
    DailyRoomSipParams,
    RecordingsBucketConfig,
)

properties = DailyRoomProperties(
    exp=time.time() + 3600,  # 1 hour from now
    enable_chat=True,
    enable_emoji_reactions=True,
    enable_recording="cloud",
    geo="us-west",
    max_participants=50,
    sip=DailyRoomSipParams(display_name="conference"),
    recordings_bucket=RecordingsBucketConfig(
        bucket_name="my-bucket",
        bucket_region="us-west-2",
        assume_role_arn="arn:aws:iam::123456789012:role/DailyRole",
    ),
)

# Access SIP endpoint if available
if properties.sip_endpoint:
    print(f"SIP endpoint: {properties.sip_endpoint}")
```

DailyRoomParams

Parameters for creating a new Daily room.

- name (string): Room name (if not provided, one will be generated)
- privacy (string, default: "public"): Room privacy setting ("private" or "public")
- properties (DailyRoomProperties): Room configuration properties

```python
import time

from pipecat.transports.services.helpers.daily_rest import (
    DailyRoomParams,
    DailyRoomProperties,
)

params = DailyRoomParams(
    name="team-meeting",
    privacy="private",
    properties=DailyRoomProperties(
        enable_chat=True,
        exp=time.time() + 7200,  # 2 hours from now
    ),
)
```

DailyRoomObject

Response object representing a Daily room.

- id (string): Unique room identifier
- name (string): Room name
- api_created (boolean): Whether the room was created via API
- privacy (string): Room privacy setting
- url (string): Complete room URL
- created_at (string): Room creation timestamp in ISO 8601 format
- config (DailyRoomProperties): Room configuration

```python
from pipecat.transports.services.helpers.daily_rest import (
    DailyRoomObject,
    DailyRoomProperties,
)

# Example of what a DailyRoomObject looks like when received
room = DailyRoomObject(
    id="abc123",
    name="team-meeting",
    api_created=True,
    privacy="private",
    url="https://your-domain.daily.co/team-meeting",
    created_at="2024-01-20T10:00:00.000Z",
    config=DailyRoomProperties(
        enable_chat=True,
        exp=1705743600,
    ),
)
```

DailyMeetingTokenProperties

Properties for configuring a Daily meeting token.

- room_name (string): The room this token is valid for. If not set, the token is valid for all rooms.
- eject_at_token_exp (boolean): Whether to eject the user when the token expires
- eject_after_elapsed (integer): Eject the user after this many seconds
- nbf (integer): "Not before" timestamp; users cannot join before this time
- exp (integer): Expiration timestamp; users cannot join after this time
- is_owner (boolean): Whether the token grants owner privileges
- user_name (string): User's display name in the meeting
- user_id (string): Unique identifier for the user (36 character limit)
- enable_screenshare (boolean): Whether the user can share their screen
- start_video_off (boolean): Whether to join with video off
- start_audio_off (boolean): Whether to join with audio off
- enable_recording (string): Recording settings ("cloud", "local", or "raw-tracks")
- enable_prejoin_ui (boolean): Whether to show the prejoin UI
- start_cloud_recording (boolean): Whether to start cloud recording when the user joins
- permissions (dict): Initial default permissions for a non-meeting-owner participant

DailyMeetingTokenParams

Parameters for creating a Daily meeting token.

- properties (DailyMeetingTokenProperties): Token configuration properties

```python
from pipecat.transports.services.helpers.daily_rest import (
    DailyMeetingTokenParams,
    DailyMeetingTokenProperties,
)

token_params = DailyMeetingTokenParams(
    properties=DailyMeetingTokenProperties(
        user_name="John Doe",
        enable_screenshare=True,
        start_video_off=True,
        permissions={"canSend": ["video", "audio"]},
    ),
)
```

Initialize DailyRESTHelper

Create a new instance of the Daily REST helper.

- daily_api_key (string, required): Your Daily API key
- daily_api_url (string, default: "https://api.daily.co/v1"): The Daily API base URL
- aiohttp_session (aiohttp.ClientSession, required): An aiohttp client session for making HTTP requests

```python
helper = DailyRESTHelper(
    daily_api_key="your-api-key",
    aiohttp_session=session,
)
```

Create Room

Creates a new Daily room with the specified parameters.

- params (DailyRoomParams, required): Room configuration parameters including name, privacy, and properties

```python
# Create a room that expires in 1 hour
params = DailyRoomParams(
    name="my-room",
    privacy="private",
    properties=DailyRoomProperties(
        exp=time.time() + 3600,
        enable_chat=True,
    ),
)
room = await helper.create_room(params)
print(f"Room URL: {room.url}")
```

Get Room From URL

Retrieves room information using a Daily room URL.

- room_url (string, required): The complete Daily room URL

```python
room = await helper.get_room_from_url("https://your-domain.daily.co/my-room")
print(f"Room name: {room.name}")
```

Get Token

Generates a meeting token for a specific room.

- room_url (string, required): The complete Daily room URL
- expiry_time (float, default: 3600): Token expiration time in seconds
- eject_at_token_exp (bool, default: False): Whether to eject the user when the token expires
- owner (bool, default: True): Whether the token should have owner privileges (overrides any setting in params)
- params (DailyMeetingTokenParams): Additional token configuration. Note that room_name, exp, eject_at_token_exp, and is_owner will be set based on the other function parameters.

```python
# Basic token generation
token = await helper.get_token(
    room_url="https://your-domain.daily.co/my-room",
    expiry_time=1800,  # 30 minutes
    owner=True,
    eject_at_token_exp=True,
)

# Advanced token generation with additional properties
token_params = DailyMeetingTokenParams(
    properties=DailyMeetingTokenProperties(
        user_name="John Doe",
        start_video_off=True,
    ),
)
token = await helper.get_token(
    room_url="https://your-domain.daily.co/my-room",
    expiry_time=1800,
    owner=False,
    eject_at_token_exp=True,
    params=token_params,
)
```

Delete Room By URL

Deletes a room using its URL.

- room_url (string, required): The complete Daily room URL

```python
success = await helper.delete_room_by_url("https://your-domain.daily.co/my-room")
if success:
    print("Room deleted successfully")
```

Delete Room By Name

Deletes a room using its name.

- room_name (string, required): The name of the Daily room

```python
success = await helper.delete_room_by_name("my-room")
if success:
    print("Room deleted successfully")
```

Get Name From URL

Extracts the room name from a Daily room URL.

- room_url (string, required): The complete Daily room URL

```python
room_name = helper.get_name_from_url("https://your-domain.daily.co/my-room")
print(f"Room name: {room_name}")  # Outputs: "my-room"
```
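The URL parsing behavior described for Get Name From URL can be illustrated with the standard library alone. This is a minimal sketch of the documented behavior (the room name is the last path segment of the room URL), not the helper's actual implementation:

```python
from urllib.parse import urlparse


def name_from_room_url(room_url: str) -> str:
    """Extract the room name from a Daily room URL.

    Mirrors the behavior described for DailyRESTHelper.get_name_from_url:
    the room name is the last path segment of the URL.
    """
    return urlparse(room_url).path.rstrip("/").split("/")[-1]


print(name_from_room_url("https://your-domain.daily.co/my-room"))  # my-room
```

This is handy when you only have a room URL (for example, from a webhook payload) and need the name for a call like delete_room_by_name.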
deployment_cerebrium_31600fa3.txt
ADDED
URL: https://docs.pipecat.ai/guides/deployment/cerebrium#install-the-cerebrium-cli
Title: Example: Cerebrium - Pipecat
==================================================
Example: Cerebrium

Cerebrium is a serverless infrastructure platform that makes it easy for companies to build, deploy and scale AI applications. Cerebrium offers both CPUs and GPUs (H100s, A100s, etc.) with extremely low cold start times, allowing you to create highly performant applications in the most cost-efficient manner.

Install the Cerebrium CLI

To get started, run the following commands:

1. Run `pip install cerebrium` to install the Python package.
2. Run `cerebrium login` to authenticate yourself.

If you don't have a Cerebrium account, you can create one and get started with $30 in free credits.

Create a Cerebrium project

Create a new Cerebrium project:

```shell
cerebrium init pipecat-agent
```

This will create two key files:

- main.py - Your application entrypoint
- cerebrium.toml - Configuration for build and environment settings

Update your cerebrium.toml with the necessary configuration:

```toml
[cerebrium.hardware]
region = "us-east-1"
provider = "aws"
compute = "CPU"
cpu = 4
memory = 18.0

[cerebrium.dependencies.pip]
torch = ">=2.0.0"
"pipecat-ai[silero, daily, openai, cartesia]" = "latest"
aiohttp = "latest"
torchaudio = "latest"
```

In order for our application to work, we need our API keys from the various platforms. Navigate to the Secrets section in your Cerebrium dashboard to store them:

- OPENAI_API_KEY - We use OpenAI for the LLM.
- DAILY_TOKEN - For WebRTC communication.
- CARTESIA_API_KEY - For text-to-speech services.

We access these secrets in our code as if they were normal environment variables. You can swap in any LLM or TTS service you wish to use.

Agent setup

We create a basic pipeline setup in our main.py that combines our LLM, TTS and Daily WebRTC transport layer.

```python
import asyncio
import os
import sys

import aiohttp
from loguru import logger

from pipecat.frames.frames import EndFrame, LLMMessagesFrame
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.cartesia.tts import CartesiaTTSService
from pipecat.services.openai.llm import OpenAILLMService
from pipecat.transports.services.daily import DailyParams, DailyTransport
from pipecat.vad.silero import SileroVADAnalyzer
from pipecat.vad.vad_analyzer import VADParams

logger.remove(0)
logger.add(sys.stderr, level="DEBUG")


async def main(room_url: str, token: str):
    async with aiohttp.ClientSession() as session:
        transport = DailyTransport(
            room_url,
            token,
            "Friendly bot",
            DailyParams(
                audio_in_enabled=True,
                audio_out_enabled=True,
                transcription_enabled=True,
                vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.6)),
            ),
        )

        messages = [
            {
                "role": "system",
                "content": "You are a helpful AI assistant that can switch between two services to showcase the difference in performance and cost: 'openai_realtime' and 'custom'. Respond to user queries and switch services when asked.",
            },
        ]

        llm = OpenAILLMService(
            name="LLM",
            api_key=os.environ.get("OPENAI_API_KEY"),
            model="gpt-4",
        )

        tts = CartesiaTTSService(
            api_key=os.getenv("CARTESIA_API_KEY"),
            voice_id="79a125e8-cd45-4c13-8a67-188112f4dd22",  # British Lady
        )

        custom_context = OpenAILLMContext(messages=messages)
        context_aggregator_custom = llm.create_context_aggregator(custom_context)

        pipeline = Pipeline(
            [
                transport.input(),  # Transport user input
                context_aggregator_custom.user(),
                llm,
                tts,
                context_aggregator_custom.assistant(),
                transport.output(),  # Transport bot output
            ]
        )

        task = PipelineTask(
            pipeline,
            params=PipelineParams(
                allow_interruptions=True,
                enable_metrics=True,
                enable_usage_metrics=True,
            ),
        )

        @transport.event_handler("on_first_participant_joined")
        async def on_first_participant_joined(transport, participant):
            transport.capture_participant_transcription(participant["id"])
            # Use asyncio.sleep so we don't block the event loop
            await asyncio.sleep(1.5)
            messages.append(
                {
                    "role": "system",
                    "content": "Introduce yourself.",
                }
            )
            await task.queue_frame(LLMMessagesFrame(messages))

        @transport.event_handler("on_participant_left")
        async def on_participant_left(transport, participant, reason):
            await task.queue_frame(EndFrame())

        @transport.event_handler("on_call_state_updated")
        async def on_call_state_updated(transport, state):
            if state == "left":
                await task.queue_frame(EndFrame())

        runner = PipelineRunner()
        await runner.run(task)
        await session.close()
```

First, in our main function, we initialize the Daily transport layer to receive/send the audio/video data from the Daily room we will connect to. You can see we pass the room_url we would like to join as well as a token that authenticates us when joining programmatically. We also set our VAD stop seconds, which is the amount of time we wait for a pause before our bot will respond; in this example, we set it to 600 milliseconds. Next we connect to our LLM (OpenAI) as well as our TTS model (Cartesia). By setting transcription_enabled=True we are using the STT from Daily itself. This is where the Pipecat framework helps convert audio data to text and vice versa. We then put this all together as a PipelineTask, which is what Pipecat runs. The makeup of a task is completely customizable and has support for image and vision use cases. Lastly, we have some event handlers for when a user joins/leaves the room.

Deploy bot

Deploy your application to Cerebrium:

```shell
cerebrium deploy
```

You will then see that an endpoint is created for your bot at POST \<BASE_URL\>/main that you can call with your room_url and token. Let's test it.

Test it out

```python
import asyncio
import os
import time

import requests
from loguru import logger


def create_room():
    url = "https://api.daily.co/v1/rooms/"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('DAILY_TOKEN')}",
    }
    data = {
        "properties": {
            "exp": int(time.time()) + 60 * 5,  # 5 mins
            "eject_at_room_exp": True,
        }
    }

    response = requests.post(url, headers=headers, json=data)
    if response.status_code == 200:
        room_info = response.json()
        token = create_token(room_info["name"])
        if token and "token" in token:
            room_info["token"] = token["token"]
        else:
            logger.error("Failed to create token")
            return {
                "message": "There was an error creating your room",
                "status_code": 500,
            }
        return room_info
    else:
        logger.error(f"Failed to create room: {response.status_code}")
        return {"message": "There was an error creating your room", "status_code": 500}


def create_token(room_name: str):
    url = "https://api.daily.co/v1/meeting-tokens"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('DAILY_TOKEN')}",
    }
    data = {"properties": {"room_name": room_name, "is_owner": True}}

    response = requests.post(url, headers=headers, json=data)
    if response.status_code == 200:
        token_info = response.json()
        return token_info
    else:
        logger.error(f"Failed to create token: {response.status_code}")
        return None


if __name__ == "__main__":
    room_info = create_room()
    print(f"Join room: {room_info['url']}")
    asyncio.run(main(room_info["url"], room_info["token"]))
```

Future Considerations

Since Cerebrium supports both CPU and GPU workloads, the best way to lower the latency of your application is to get model weights from various providers and run them locally. You can do this for:

- LLM: Run any open-source model using a framework such as vLLM
- TTS: Both PlayHt and Deepgram offer TTS models that can be run locally
- STT: Deepgram offers an STT model that can be run locally

If you implement all three models locally, you should have much better performance. We have been able to get ~300ms voice-to-voice responses.

Examples

- Fastest voice agent: local-only implementation
- RAG voice agent: create a voice agent that can do RAG using Cerebrium + OpenAI + Pinecone
- Twilio voice agent: create a voice agent that can receive phone calls via Twilio
- OpenAI Realtime API implementation: create a voice agent that can receive phone calls via the OpenAI Realtime API
deployment_cerebrium_53a507d6.txt
ADDED
URL: https://docs.pipecat.ai/deployment/cerebrium#next-steps
Title: Overview - Pipecat
==================================================
Overview

Pipecat is an open source Python framework that handles the complex orchestration of AI services, network transport, audio processing, and multimodal interactions. "Multimodal" means you can use any combination of audio, video, images, and/or text in your interactions. And "real-time" means that things are happening quickly enough that it feels conversational: a "back-and-forth" with a bot, not submitting a query and waiting for results.

What You Can Build

- Voice Assistants: natural, real-time conversations with AI using speech recognition and synthesis
- Interactive Agents: personal coaches and meeting assistants that can understand context and provide guidance
- Multimodal Apps: applications that combine voice, video, images, and text for rich interactions
- Creative Tools: storytelling experiences and social companions that engage users
- Business Solutions: customer intake flows and support bots for automated business processes
- Complex Flows: structured conversations using Pipecat Flows for managing complex interactions

How It Works

The flow of interactions in a Pipecat application is typically straightforward:

1. The bot says something
2. The user says something
3. The bot says something
4. The user says something

This continues until the conversation naturally ends. While this flow seems simple, making it feel natural requires sophisticated real-time processing.

Real-time Processing

Pipecat's pipeline architecture handles both simple voice interactions and complex multimodal processing. Let's look at how data flows through the system.

Voice app:

1. Send Audio: transmit and capture streamed audio from the user
2. Transcribe Speech: convert speech to text as the user is talking
3. Process with LLM: generate responses using a large language model
4. Convert to Speech: transform text responses into natural speech
5. Play Audio: stream the audio response back to the user

Multimodal app:

1. Send Audio and Video: transmit and capture audio, video, and image inputs simultaneously
2. Process Streams: handle multiple input streams in parallel
3. Model Processing: send combined inputs to multimodal models (like GPT-4V)
4. Generate Outputs: create various outputs (text, images, audio, etc.)
5. Coordinate Presentation: synchronize and present multiple output types

In both cases, Pipecat:

- Processes responses as they stream in
- Handles multiple input/output modalities concurrently
- Manages resource allocation and synchronization
- Coordinates parallel processing tasks

This architecture creates fluid, natural interactions without noticeable delays, whether you're building a simple voice assistant or a complex multimodal application. Pipecat's pipeline architecture is particularly valuable for managing the complexity of real-time, multimodal interactions, ensuring smooth data flow and proper synchronization regardless of the input/output types involved. Pipecat handles all this complexity for you, letting you focus on building your application rather than managing the underlying infrastructure.

Next Steps

Ready to build your first Pipecat application?

- Installation & Setup: prepare your environment and install required dependencies
- Quickstart: build and run your first Pipecat application
- Core Concepts: learn about pipelines, frames, and real-time processing
- Use Cases: explore example implementations and patterns

Join Our Community

Discord Community: connect with other developers, share your projects, and get support from the Pipecat team.
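The five-step voice loop described above is, conceptually, a chain of streaming stages where each stage emits results as soon as they are available. The toy sketch below illustrates that composition in plain Python generators; it is a conceptual model only, not the Pipecat API:

```python
# Conceptual sketch of a staged streaming pipeline -- NOT the Pipecat API.
# Each stage consumes items from the previous stage and yields results
# immediately, which is what keeps end-to-end latency low.

def transcribe(audio_chunks):
    for chunk in audio_chunks:      # speech-to-text stage, stubbed
        yield f"text({chunk})"

def llm(texts):
    for text in texts:              # language-model stage, stubbed
        yield f"reply({text})"

def tts(replies):
    for reply in replies:           # text-to-speech stage, stubbed
        yield f"audio({reply})"

# Compose the stages: audio in -> transcript -> reply -> audio out
pipeline = tts(llm(transcribe(["chunk1", "chunk2"])))
print(list(pipeline))
# ['audio(reply(text(chunk1)))', 'audio(reply(text(chunk2)))']
```

Because each stage is a generator, "chunk2" can enter transcription while the reply for "chunk1" is already being spoken; real Pipecat pipelines apply the same idea with frames flowing through processors.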
deployment_modal_f35ace44.txt
ADDED
URL: https://docs.pipecat.ai/guides/deployment/modal#deploy-a-self-serve-llm
Title: Example: Modal - Pipecat
==================================================
Example: Modal

Modal is well-suited for Pipecat deployments because it handles container orchestration, scaling, and cold starts efficiently. This makes it a good choice for production Pipecat bots that need reliable performance. This guide walks through the Modal example included in the Pipecat repository, which follows the same deployment pattern.

Modal example: view the complete Modal deployment example in our GitHub repository.

Install the Modal CLI

Follow Modal's official instructions for creating an account and setting up the CLI.

Deploy a self-serve LLM

Deploy Modal's OpenAI-compatible LLM service:

```shell
git clone https://github.com/modal-labs/modal-examples
cd modal-examples
modal deploy 06_gpu_and_ml/llm-serving/vllm_inference.py
```

Refer to Modal's guide and example for deploying an OpenAI-compatible LLM service with vLLM for more details. Take note of the endpoint URL from the previous step, which will look like:

```
https://{your-workspace}--example-vllm-openai-compatible-serve.modal.run
```

You'll need this for the bot_vllm.py file in the next section.

The default Modal LLM example uses Llama-3.1 and will shut down after 15 minutes of inactivity. Cold starts take 5-10 minutes. To prepare the service, we recommend visiting the /docs endpoint (https://<Modal workspace>--example-vllm-openai-compatible-serve.modal.run/docs) for your deployed LLM and waiting for it to fully load before connecting your client.

Deploy FastAPI App and Pipecat pipeline to Modal

Set up environment variables:

```shell
cd server
cp env.example .env
# Modify .env to provide your service API keys
```

Alternatively, you can configure your Modal app to use secrets.

Update the modal_url in server/src/bot_vllm.py to point to the URL you received from the self-serve LLM deployment in the previous step.

From within the server directory, test the app locally:

```shell
modal serve app.py
```

Deploy to production:

```shell
modal deploy app.py
```

Note the endpoint URL produced from this deployment. It will look like:

```
https://{your-workspace}--pipecat-modal-fastapi-app.modal.run
```

You'll need this URL for the client's app.js configuration mentioned in its README.

Launch your bots on Modal

Option 1: Direct Link. Simply click on the URL displayed after running the serve or deploy step to launch an agent and be redirected to a Daily room to talk with the launched bot. This will use the OpenAI pipeline.

Option 2: Connect via an RTVI Client. Follow the instructions provided in the client folder's README for building and running a custom client that connects to your Modal endpoint. The provided client includes a dropdown for choosing which bot pipeline to run.

Navigating your LLM, server, and Pipecat logs

On your Modal dashboard, you should have two Apps listed under Live Apps:

- example-vllm-openai-compatible: this App contains the containers and logs used to run your self-hosted LLM. There will be just one App Function listed: serve. Click on this function to view logs for your LLM.
- pipecat-modal: this App contains the containers and logs used to run your connect endpoints and Pipecat pipelines. It will list two App Functions:
  - fastapi_app: this function runs the endpoints that your client will interact with to initiate a new pipeline (/, /connect, /status). Click on this function to see logs for each endpoint hit.
  - bot_runner: this function handles launching and running a bot pipeline. Click on this function to get a list of all pipeline runs and access each run's logs.

Modal & Pipecat Tips

- In most other Pipecat examples, we use Popen to launch the pipeline process from the /connect endpoint. In this example, we use a Modal function instead. This allows us to run the pipelines using a separately defined Modal image as well as run each pipeline in an isolated container.
- For the FastAPI app and most common Pipecat pipeline containers, a default debian_slim CPU-only image should be all that's required. GPU containers are needed for self-hosted services.
- To minimize cold starts of the pipeline and reduce latency for users, set min_containers=1 on the Modal Function that launches the pipeline to ensure at least one warm instance of your function is always available.

Next steps

Explore Modal's LLM examples: for next steps on running a self-hosted LLM and reducing latency, check out all of Modal's LLM examples.
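The guide above recommends hitting the deployed LLM's /docs endpoint and waiting for it to fully load before connecting a client. A small stdlib-only polling helper for that readiness check might look like this; the function name is our own, and the URL comes from your Modal deployment:

```python
import time
import urllib.request


def wait_until_ready(docs_url: str, timeout_s: float = 600, interval_s: float = 10) -> bool:
    """Poll a /docs (or any health) URL until it responds with HTTP 200.

    Illustrative helper for waiting out a cold start; returns True once
    the endpoint answers, False if timeout_s elapses first.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(docs_url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # server still cold-starting or unreachable; retry
        time.sleep(interval_s)
    return False
```

With a 5-10 minute cold start, polling every 10 seconds with a generous timeout (as in the defaults) is a reasonable way to gate your client's connection attempt.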
deployment_pattern_786babb2.txt
ADDED
URL: https://docs.pipecat.ai/guides/deployment/pattern#bot-runner
Title: Deployment pattern - Pipecat
==================================================
Deployment pattern - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation Deploying your bot Deployment pattern Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Guides Fundamentals Context Management Custom FrameProcessor Detecting Idle Users Ending a Pipeline Function Calling Muting User Input Recording Audio Recording Transcripts Features Gemini Multimodal Live Metrics Noise cancellation with Krisp OpenAI Audio Models and APIs Pipecat Flows Telephony Overview Dial-in: WebRTC (Daily) Dial-in: WebRTC (Twilio + Daily) Dial-in: Twilio (Media Streams) Dialout: WebRTC (Daily) Deploying your bot Overview Deployment pattern Example: Pipecat Cloud Example: Fly.io Example: Cerebrium Example: Modal Project structure A Pipecat project will often consist of the following: 1. Bot file E.g. bot.py . Your Pipecat bot / agent, containing all the pipelines that you want to run in order to communicate with an end-user. A bot file may take some command line arguments, such as a transport URL and configuration. 2. Bot runner E.g. bot_runner.py . Typically a basic HTTP service that listens for incoming user requests and spawns the relevant bot file in response. You can call these files whatever you like! We use bot.py and bot_runner.py for simplicity. Typical user / bot flow There are many ways to approach connecting users to bots. Pipecat is unopinionated about how exactly you should do this, but it’s helpful to put an idea forward. At a very basic level, it may look something like this: 1 User requests to join session via client / app Client initiates a HTTP request to a hosted bot runner service. 2 Bot runner handles the request Authenticates, configures and instantiates everything necessary for the session to commence (e.g. a new WebSocket channel, or WebRTC room, etc.) 3 Bot runner spawns bot / agent A new bot process / VM is created for the user to connect with (passing across any necessary configuration.) 
Your project may load just one bot file, contextually swap between multiple, or launch many at once. 4 Bot instantiates and joins session via specified transport credentials Bot initializes, connects to the session (e.g. locally or via WebSockets, WebRTC etc) and runs your bot code. 5 Bot runner returns status to client Once the bot is ready, the runner resolves the HTTP request with details for the client to connect. Bot runner The majority of use-cases require a way to trigger and manage a bot session over the internet. We call these bot runners; a HTTP service that provides a gateway for spawning bots on-demand. The anatomy of a bot runner service is entirery arbitrary, but at very least will have a method that spawns a new bot process, for example: Copy Ask AI import uvicorn from fastapi import FastAPI, Request, HTTPException from fastapi.responses import JSONResponse app = FastAPI() @app.post ( "/start_bot" ) async def start_bot ( request : Request) -> JSONResponse: # ... handle / authenticate the request # ... setup the transport session # Spawn a new bot process try : #... create a new bot instance except Exception as e: raise HTTPException( status_code = 500 , detail = f "Failed to start bot: { e } " ) # Return a URL for the user to join return JSONResponse({ ... }) if __name__ == "__main__" : uvicorn.run( "bot_runner:app" , host = "0.0.0.0" , port = 7860 ) This pseudo code defines a /start_bot/ endpoint which listens for incoming user POST requests or webhooks, then configures the session (such as creating rooms on your transport provider) and instantiates a new bot process. A client will typically require some information regarding the newly spawned bot, such as a web address, so we also return some JSON with the necessary details. Data transport Your transport layer is responsible for sending and receiving audio and video data over the internet. You will have implemented a transport layer as part of your bot.py pipeline. 
This may be a service that you want to host and include in your deployment, or it may be a third-party service waiting for peers to connect (such as Daily , or a websocket.) For this example, we will make use of Daily’s WebRTC transport. This will mean that our bot_runner.py will need to do some configuration when it spawns a new bot: Create and configure a new Daily room for the session to take place in. Issue both the bot and the user an authentication token to join the session. Whatever you use for your transport layer, you’ll likely need to setup some environmental variables and run some custom code before spawning the agent. Best practice for bot files A good pattern to work to is the assumption that your bot.py is an encapsulated entity and does not have any knowledge of the bot_runner.py . You should provide the bot everything it needs to operate during instantiation. Sticking to this approach helps keep things simple and makes it easier to step through debugging (if the bot launched and something goes wrong, you know to look for errors in your bot file.) Example Let’s assume we have a fully service-driven bot.py that connects to a WebRTC session, passes audio transcription to GPT4 and returns audio text-to-speech with ElevenLabs. We’ll also use Silero voice activity detection, to better know when the user has stopped talking. 
bot.py Copy Ask AI import asyncio import aiohttp import os import sys import argparse from pipecat.pipeline.pipeline import Pipeline from pipecat.pipeline.runner import PipelineRunner from pipecat.pipeline.task import PipelineParams, PipelineTask from pipecat.processors.aggregators.llm_response import LLMAssistantResponseAggregator, LLMUserResponseAggregator from pipecat.frames.frames import LLMMessagesFrame, EndFrame from pipecat.services.openai.llm import OpenAILLMService from pipecat.services.elevenlabs.tts import ElevenLabsTTSService from pipecat.transports.services.daily import DailyParams, DailyTransport from pipecat.vad.silero import SileroVADAnalyzer from loguru import logger from dotenv import load_dotenv load_dotenv( override = True ) logger.remove( 0 ) logger.add(sys.stderr, level = "DEBUG" ) daily_api_key = os.getenv( "DAILY_API_KEY" , "" ) daily_api_url = os.getenv( "DAILY_API_URL" , "https://api.daily.co/v1" ) async def main ( room_url : str , token : str ): async with aiohttp.ClientSession() as session: transport = DailyTransport( room_url, token, "Chatbot" , DailyParams( api_url = daily_api_url, api_key = daily_api_key, audio_in_enabled = True , audio_out_enabled = True , video_out_enabled = False , vad_analyzer = SileroVADAnalyzer(), transcription_enabled = True , ) ) tts = ElevenLabsTTSService( aiohttp_session = session, api_key = os.getenv( "ELEVENLABS_API_KEY" , "" ), voice_id = os.getenv( "ELEVENLABS_VOICE_ID" , "" ), ) llm = OpenAILLMService( api_key = os.getenv( "OPENAI_API_KEY" ), model = "gpt-4o" ) messages = [ { "role" : "system" , "content" : "You are Chatbot, a friendly, helpful robot. Your output will be converted to audio so don't include special characters other than '!' or '?' in your answers. Respond to what the user said in a creative and helpful way, but keep your responses brief. Start by saying hello." 
, }, ] tma_in = LLMUserResponseAggregator(messages) tma_out = LLMAssistantResponseAggregator(messages) pipeline = Pipeline([ transport.input(), tma_in, llm, tts, transport.output(), tma_out, ]) task = PipelineTask(pipeline, params = PipelineParams( allow_interruptions = True )) @transport.event_handler ( "on_first_participant_joined" ) async def on_first_participant_joined ( transport , participant ): transport.capture_participant_transcription(participant[ "id" ]) await task.queue_frames([LLMMessagesFrame(messages)]) @transport.event_handler ( "on_participant_left" ) async def on_participant_left ( transport , participant , reason ): await task.queue_frame(EndFrame()) @transport.event_handler ( "on_call_state_updated" ) async def on_call_state_updated ( transport , state ): if state == "left" : await task.queue_frame(EndFrame()) runner = PipelineRunner() await runner.run(task) if __name__ == "__main__" : parser = argparse.ArgumentParser( description = "Pipecat Bot" ) parser.add_argument( "-u" , type = str , help = "Room URL" ) parser.add_argument( "-t" , type = str , help = "Token" ) config = parser.parse_args() asyncio.run(main(config.u, config.t)) HTTP API To launch this bot, let’s create a bot_runner.py that: Creates an API for users to send requests to. Launches a bot as a subprocess. 
bot_runner.py

import os
import argparse
import subprocess

from pipecat.transports.services.helpers.daily_rest import (
    DailyRESTHelper,
    DailyRoomObject,
    DailyRoomProperties,
    DailyRoomParams,
)

from fastapi import FastAPI, Request, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse

# Load API keys from env
from dotenv import load_dotenv

load_dotenv(override=True)

# ------------ Configuration ------------ #

MAX_SESSION_TIME = 5 * 60  # 5 minutes

# List of env vars our bot requires
REQUIRED_ENV_VARS = [
    "DAILY_API_KEY",
    "OPENAI_API_KEY",
    "ELEVENLABS_API_KEY",
    "ELEVENLABS_VOICE_ID",
]

daily_rest_helper = DailyRESTHelper(
    os.getenv("DAILY_API_KEY", ""),
    os.getenv("DAILY_API_URL", "https://api.daily.co/v1"),
)

# ----------------- API ----------------- #

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# ----------------- Main ----------------- #


@app.post("/start_bot")
async def start_bot(request: Request) -> JSONResponse:
    try:
        # Grab any data included in the post request
        data = await request.json()
    except Exception:
        pass

    # Create a new Daily WebRTC room for the session to take place in
    try:
        params = DailyRoomParams(properties=DailyRoomProperties())
        room: DailyRoomObject = daily_rest_helper.create_room(params=params)
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Unable to provision room {e}")

    # Give the agent a token to join the session
    token = daily_rest_helper.get_token(room.url, MAX_SESSION_TIME)

    # Return an error if we were unable to create a room or a token
    if not room or not token:
        raise HTTPException(status_code=500, detail=f"Failed to get token for room: {room.url}")

    try:
        # Start a new subprocess, passing the room and token to the bot file
        subprocess.Popen(
            [f"python3 -m bot -u {room.url} -t {token}"],
            shell=True,
            bufsize=1,
            cwd=os.path.dirname(os.path.abspath(__file__)),
        )
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to start subprocess: {e}")

    # Grab a token for the user to join with
    user_token = daily_rest_helper.get_token(room.url, MAX_SESSION_TIME)

    # Return the room url and user token back to the user
    return JSONResponse({
        "room_url": room.url,
        "token": user_token,
    })


if __name__ == "__main__":
    # Check for required environment variables
    for env_var in REQUIRED_ENV_VARS:
        if env_var not in os.environ:
            raise Exception(f"Missing environment variable: {env_var}.")

    parser = argparse.ArgumentParser(description="Pipecat Bot Runner")
    parser.add_argument("--host", type=str, default=os.getenv("HOST", "0.0.0.0"), help="Host address")
    parser.add_argument("--port", type=int, default=int(os.getenv("PORT", "7860")), help="Port number")
    parser.add_argument("--reload", action="store_true", default=False, help="Reload code on change")

    config = parser.parse_args()

    try:
        import uvicorn

        uvicorn.run("bot_runner:app", host=config.host, port=config.port, reload=config.reload)
    except KeyboardInterrupt:
        print("Pipecat runner shutting down...")

Dockerfile

Since our bot is just using Python, our Dockerfile can be quite simple:

FROM python:3.11-bullseye

# Open port 7860 for http service
ENV FAST_API_PORT=7860
EXPOSE 7860

# Install Python dependencies
COPY *.py .
COPY ./requirements.txt requirements.txt
RUN pip3 install --no-cache-dir --upgrade -r requirements.txt

# Install models
RUN python3 install_deps.py

# Start the FastAPI server
CMD python3 bot_runner.py --port ${FAST_API_PORT}

The bot runner and bot requirements.txt:

pipecat-ai[daily,openai,silero]
fastapi
uvicorn
python-dotenv

And finally, let’s create a .env file with our service keys:

DAILY_API_KEY=...
OPENAI_API_KEY=...
ELEVENLABS_API_KEY=...
ELEVENLABS_VOICE_ID=...

How it works

Right now, this runner spawns bot.py as a subprocess. When spawning the process, we pass the transport room and token through as system arguments to our bot, so it knows where to connect.

Subprocesses are a great way to test your bot in the cloud without too much hassle, but depending on the size of the host machine, they will likely not hold up well under load. While some bots are just simple operators between the transport and third-party AI services (such as OpenAI), others have somewhat CPU-intensive operations, such as loading and running VAD models, so you may find you’re only able to scale this to support 5-10 concurrent bots. Scaling your setup further would require virtualizing your bot with its own set of system resources, the process of which depends on your cloud provider.

Best practices

Ideally, we’d recommend containerizing your bot and bot runner independently so you can deploy each without any unnecessary dependencies or models. Most cloud providers offer a way to deploy various images programmatically, which we explore in the provider examples in these docs. For simplicity in defining this pattern, we’re just using one container for everything.

Build and run

We should now have a project that contains the following files:

- bot.py
- bot_runner.py
- requirements.txt
- .env
- Dockerfile

You can now docker build ... and deploy your container.
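As an aside, the runner above launches the bot through a shell string with shell=True. A hedged alternative sketch (our suggestion, not what the guide ships) is to build an argv list and skip the shell entirely, which avoids quoting problems if a token ever contains shell metacharacters:

```python
def bot_argv(room_url: str, token: str) -> list:
    # Mirrors `python3 -m bot -u <room_url> -t <token>` from bot_runner.py,
    # but as an argv list so no shell parsing is involved.
    return ["python3", "-m", "bot", "-u", room_url, "-t", token]

# Usage sketch: subprocess.Popen(bot_argv(room.url, token), bufsize=1)
print(bot_argv("https://example.daily.co/room", "abc123"))
```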
Of course, you can still work with your bot in local development too:

# Install and activate a virtual env
python -m venv venv
source venv/bin/activate # or OS equivalent

pip install -r requirements.txt
python bot_runner.py --host localhost --reload
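A client consuming this runner only needs the two fields returned by /start_bot. A minimal sketch of turning that response into a joinable link (the helper name is ours, and the `t` query parameter follows Daily’s prebuilt join-link convention):

```python
from urllib.parse import urlencode

def build_join_url(response: dict) -> str:
    # `room_url` and `token` match the JSON returned by /start_bot above;
    # Daily's hosted UI accepts the meeting token as a `t` query parameter.
    return f"{response['room_url']}?{urlencode({'t': response['token']})}"

print(build_join_url({"room_url": "https://example.daily.co/abc123", "token": "user-jwt"}))
```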
|
deployment_pattern_81874897.txt
ADDED
|
@@ -0,0 +1,5 @@
|
| 1 |
+
URL: https://docs.pipecat.ai/deployment/pattern#how-it-works
|
| 2 |
+
Title: Overview - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
Overview - Pipecat

Pipecat is an open source Python framework that handles the complex orchestration of AI services, network transport, audio processing, and multimodal interactions. “Multimodal” means you can use any combination of audio, video, images, and/or text in your interactions. And “real-time” means that things are happening quickly enough that it feels conversational—a “back-and-forth” with a bot, not submitting a query and waiting for results.

What You Can Build

- Voice Assistants: Natural, real-time conversations with AI using speech recognition and synthesis
- Interactive Agents: Personal coaches and meeting assistants that can understand context and provide guidance
- Multimodal Apps: Applications that combine voice, video, images, and text for rich interactions
- Creative Tools: Storytelling experiences and social companions that engage users
- Business Solutions: Customer intake flows and support bots for automated business processes
- Complex Flows: Structured conversations using Pipecat Flows for managing complex interactions

How It Works

The flow of interactions in a Pipecat application is typically straightforward:

1. The bot says something
2. The user says something
3. The bot says something
4. The user says something

This continues until the conversation naturally ends. While this flow seems simple, making it feel natural requires sophisticated real-time processing.

Real-time Processing

Pipecat’s pipeline architecture handles both simple voice interactions and complex multimodal processing.
Let’s look at how data flows through the system:

Voice app:

1. Send Audio: Transmit and capture streamed audio from the user
2. Transcribe Speech: Convert speech to text as the user is talking
3. Process with LLM: Generate responses using a large language model
4. Convert to Speech: Transform text responses into natural speech
5. Play Audio: Stream the audio response back to the user

Multimodal app:

1. Send Audio and Video: Transmit and capture audio, video, and image inputs simultaneously
2. Process Streams: Handle multiple input streams in parallel
3. Model Processing: Send combined inputs to multimodal models (like GPT-4V)
4. Generate Outputs: Create various outputs (text, images, audio, etc.)
5. Coordinate Presentation: Synchronize and present multiple output types

In both cases, Pipecat:

- Processes responses as they stream in
- Handles multiple input/output modalities concurrently
- Manages resource allocation and synchronization
- Coordinates parallel processing tasks

This architecture creates fluid, natural interactions without noticeable delays, whether you’re building a simple voice assistant or a complex multimodal application. Pipecat’s pipeline architecture is particularly valuable for managing the complexity of real-time, multimodal interactions, ensuring smooth data flow and proper synchronization regardless of the input/output types involved. Pipecat handles all this complexity for you, letting you focus on building your application rather than managing the underlying infrastructure.

Next Steps

Ready to build your first Pipecat application?
- Installation & Setup: Prepare your environment and install required dependencies
- Quickstart: Build and run your first Pipecat application
- Core Concepts: Learn about pipelines, frames, and real-time processing
- Use Cases: Explore example implementations and patterns

Join Our Community

Discord Community: Connect with other developers, share your projects, and get support from the Pipecat team.
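For intuition, the five-step voice flow described on this page can be simulated with plain Python generators. This is a toy sketch of the streaming idea only, not Pipecat’s actual API; each stage is a stand-in for a real service:

```python
def transcribe(audio_frames):
    # Step 2: speech-to-text (stand-in: uppercase the "audio")
    for frame in audio_frames:
        yield frame.upper()

def llm(words):
    # Step 3: generate a response per chunk of text (stand-in: echo)
    for word in words:
        yield f"echo:{word}"

def tts(texts):
    # Step 4: text-to-speech (stand-in: encode to bytes)
    for text in texts:
        yield text.encode()

def run_pipeline(audio_frames):
    # Steps 1 and 5 (send / play audio) are the two ends of the chain;
    # every stage consumes upstream output as it streams in.
    return list(tts(llm(transcribe(audio_frames))))

print(run_pipeline(["hi", "there"]))
```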
|
deployment_pattern_a1fae09a.txt
ADDED
|
@@ -0,0 +1,5 @@
|
| 1 |
+
URL: https://docs.pipecat.ai/guides/deployment/pattern#project-structure
|
| 2 |
+
Title: Deployment pattern - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
Deployment pattern - Pipecat

Project structure

A Pipecat project will often consist of the following:

1. Bot file

E.g. bot.py. Your Pipecat bot / agent, containing all the pipelines that you want to run in order to communicate with an end-user. A bot file may take some command line arguments, such as a transport URL and configuration.

2. Bot runner

E.g. bot_runner.py. Typically a basic HTTP service that listens for incoming user requests and spawns the relevant bot file in response.

You can call these files whatever you like! We use bot.py and bot_runner.py for simplicity.

Typical user / bot flow

There are many ways to approach connecting users to bots. Pipecat is unopinionated about how exactly you should do this, but it’s helpful to put an idea forward. At a very basic level, it may look something like this:

1. User requests to join a session via client / app: Client initiates a HTTP request to a hosted bot runner service.
2. Bot runner handles the request: Authenticates, configures and instantiates everything necessary for the session to commence (e.g. a new WebSocket channel, or WebRTC room, etc.)
3. Bot runner spawns bot / agent: A new bot process / VM is created for the user to connect with (passing across any necessary configuration.)
Your project may load just one bot file, contextually swap between multiple, or launch many at once.

4. Bot instantiates and joins the session via the specified transport credentials: Bot initializes, connects to the session (e.g. locally or via WebSockets, WebRTC, etc.) and runs your bot code.
5. Bot runner returns status to client: Once the bot is ready, the runner resolves the HTTP request with details for the client to connect.

Bot runner

The majority of use-cases require a way to trigger and manage a bot session over the internet. We call these bot runners: an HTTP service that provides a gateway for spawning bots on-demand. The anatomy of a bot runner service is entirely arbitrary, but at the very least it will have a method that spawns a new bot process, for example:

import uvicorn
from fastapi import FastAPI, Request, HTTPException
from fastapi.responses import JSONResponse

app = FastAPI()


@app.post("/start_bot")
async def start_bot(request: Request) -> JSONResponse:
    # ... handle / authenticate the request
    # ... setup the transport session

    # Spawn a new bot process
    try:
        ...  # create a new bot instance
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to start bot: {e}")

    # Return a URL for the user to join
    return JSONResponse({...})


if __name__ == "__main__":
    uvicorn.run("bot_runner:app", host="0.0.0.0", port=7860)

This pseudo code defines a /start_bot endpoint which listens for incoming user POST requests or webhooks, then configures the session (such as creating rooms on your transport provider) and instantiates a new bot process. A client will typically require some information regarding the newly spawned bot, such as a web address, so we also return some JSON with the necessary details.

Data transport

Your transport layer is responsible for sending and receiving audio and video data over the internet. You will have implemented a transport layer as part of your bot.py pipeline.
This may be a service that you want to host and include in your deployment, or it may be a third-party service waiting for peers to connect (such as Daily, or a websocket.)

For this example, we will make use of Daily’s WebRTC transport. This means that our bot_runner.py will need to do some configuration when it spawns a new bot:

- Create and configure a new Daily room for the session to take place in.
- Issue both the bot and the user an authentication token to join the session.

Whatever you use for your transport layer, you’ll likely need to set up some environment variables and run some custom code before spawning the agent.

Best practice for bot files

A good pattern to work to is the assumption that your bot.py is an encapsulated entity and does not have any knowledge of the bot_runner.py. You should provide the bot everything it needs to operate during instantiation. Sticking to this approach helps keep things simple and makes it easier to step through debugging (if the bot launched and something goes wrong, you know to look for errors in your bot file.)

Example

Let’s assume we have a fully service-driven bot.py that connects to a WebRTC session, passes audio transcription to GPT-4 and returns audio text-to-speech with ElevenLabs. We’ll also use Silero voice activity detection, to better know when the user has stopped talking.
bot.py

import asyncio
import aiohttp
import os
import sys
import argparse

from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.processors.aggregators.llm_response import (
    LLMAssistantResponseAggregator,
    LLMUserResponseAggregator,
)
from pipecat.frames.frames import LLMMessagesFrame, EndFrame
from pipecat.services.openai.llm import OpenAILLMService
from pipecat.services.elevenlabs.tts import ElevenLabsTTSService
from pipecat.transports.services.daily import DailyParams, DailyTransport
from pipecat.vad.silero import SileroVADAnalyzer

from loguru import logger
from dotenv import load_dotenv

load_dotenv(override=True)

logger.remove(0)
logger.add(sys.stderr, level="DEBUG")

daily_api_key = os.getenv("DAILY_API_KEY", "")
daily_api_url = os.getenv("DAILY_API_URL", "https://api.daily.co/v1")


async def main(room_url: str, token: str):
    async with aiohttp.ClientSession() as session:
        transport = DailyTransport(
            room_url,
            token,
            "Chatbot",
            DailyParams(
                api_url=daily_api_url,
                api_key=daily_api_key,
                audio_in_enabled=True,
                audio_out_enabled=True,
                video_out_enabled=False,
                vad_analyzer=SileroVADAnalyzer(),
                transcription_enabled=True,
            ),
        )

        tts = ElevenLabsTTSService(
            aiohttp_session=session,
            api_key=os.getenv("ELEVENLABS_API_KEY", ""),
            voice_id=os.getenv("ELEVENLABS_VOICE_ID", ""),
        )

        llm = OpenAILLMService(api_key=os.getenv("OPENAI_API_KEY"), model="gpt-4o")

        messages = [
            {
                "role": "system",
                "content": "You are Chatbot, a friendly, helpful robot. Your output will be converted to audio so don't include special characters other than '!' or '?' in your answers. Respond to what the user said in a creative and helpful way, but keep your responses brief. Start by saying hello.",
            },
        ]

        tma_in = LLMUserResponseAggregator(messages)
        tma_out = LLMAssistantResponseAggregator(messages)

        pipeline = Pipeline([
            transport.input(),
            tma_in,
            llm,
            tts,
            transport.output(),
            tma_out,
        ])

        task = PipelineTask(pipeline, params=PipelineParams(allow_interruptions=True))

        @transport.event_handler("on_first_participant_joined")
        async def on_first_participant_joined(transport, participant):
            transport.capture_participant_transcription(participant["id"])
            await task.queue_frames([LLMMessagesFrame(messages)])

        @transport.event_handler("on_participant_left")
        async def on_participant_left(transport, participant, reason):
            await task.queue_frame(EndFrame())

        @transport.event_handler("on_call_state_updated")
        async def on_call_state_updated(transport, state):
            if state == "left":
                await task.queue_frame(EndFrame())

        runner = PipelineRunner()
        await runner.run(task)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Pipecat Bot")
    parser.add_argument("-u", type=str, help="Room URL")
    parser.add_argument("-t", type=str, help="Token")
    config = parser.parse_args()

    asyncio.run(main(config.u, config.t))

HTTP API

To launch this bot, let’s create a bot_runner.py that:

- Creates an API for users to send requests to.
- Launches a bot as a subprocess.
bot_runner.py

import os
import argparse
import subprocess

from pipecat.transports.services.helpers.daily_rest import (
    DailyRESTHelper,
    DailyRoomObject,
    DailyRoomProperties,
    DailyRoomParams,
)

from fastapi import FastAPI, Request, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse

# Load API keys from env
from dotenv import load_dotenv

load_dotenv(override=True)

# ------------ Configuration ------------ #

MAX_SESSION_TIME = 5 * 60  # 5 minutes

# List of env vars our bot requires
REQUIRED_ENV_VARS = [
    "DAILY_API_KEY",
    "OPENAI_API_KEY",
    "ELEVENLABS_API_KEY",
    "ELEVENLABS_VOICE_ID",
]

daily_rest_helper = DailyRESTHelper(
    os.getenv("DAILY_API_KEY", ""),
    os.getenv("DAILY_API_URL", "https://api.daily.co/v1"),
)

# ----------------- API ----------------- #

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# ----------------- Main ----------------- #


@app.post("/start_bot")
async def start_bot(request: Request) -> JSONResponse:
    try:
        # Grab any data included in the post request
        data = await request.json()
    except Exception:
        pass

    # Create a new Daily WebRTC room for the session to take place in
    try:
        params = DailyRoomParams(properties=DailyRoomProperties())
        room: DailyRoomObject = daily_rest_helper.create_room(params=params)
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Unable to provision room {e}")

    # Give the agent a token to join the session
    token = daily_rest_helper.get_token(room.url, MAX_SESSION_TIME)

    # Return an error if we were unable to create a room or a token
    if not room or not token:
        raise HTTPException(status_code=500, detail=f"Failed to get token for room: {room.url}")

    try:
        # Start a new subprocess, passing the room and token to the bot file
        subprocess.Popen(
            [f"python3 -m bot -u {room.url} -t {token}"],
            shell=True,
            bufsize=1,
            cwd=os.path.dirname(os.path.abspath(__file__)),
        )
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to start subprocess: {e}")

    # Grab a token for the user to join with
    user_token = daily_rest_helper.get_token(room.url, MAX_SESSION_TIME)

    # Return the room url and user token back to the user
    return JSONResponse({
        "room_url": room.url,
        "token": user_token,
    })


if __name__ == "__main__":
    # Check for required environment variables
    for env_var in REQUIRED_ENV_VARS:
        if env_var not in os.environ:
            raise Exception(f"Missing environment variable: {env_var}.")

    parser = argparse.ArgumentParser(description="Pipecat Bot Runner")
    parser.add_argument("--host", type=str, default=os.getenv("HOST", "0.0.0.0"), help="Host address")
    parser.add_argument("--port", type=int, default=int(os.getenv("PORT", "7860")), help="Port number")
    parser.add_argument("--reload", action="store_true", default=False, help="Reload code on change")

    config = parser.parse_args()

    try:
        import uvicorn

        uvicorn.run("bot_runner:app", host=config.host, port=config.port, reload=config.reload)
    except KeyboardInterrupt:
        print("Pipecat runner shutting down...")

Dockerfile

Since our bot is just using Python, our Dockerfile can be quite simple:

FROM python:3.11-bullseye

# Open port 7860 for http service
ENV FAST_API_PORT=7860
EXPOSE 7860

# Install Python dependencies
COPY *.py .
COPY ./requirements.txt requirements.txt
RUN pip3 install --no-cache-dir --upgrade -r requirements.txt

# Install models
RUN python3 install_deps.py

# Start the FastAPI server
CMD python3 bot_runner.py --port ${FAST_API_PORT}

The bot runner and bot requirements.txt:

pipecat-ai[daily,openai,silero]
fastapi
uvicorn
python-dotenv

And finally, let’s create a .env file with our service keys:

DAILY_API_KEY=...
OPENAI_API_KEY=...
ELEVENLABS_API_KEY=...
ELEVENLABS_VOICE_ID=...

How it works

Right now, this runner spawns bot.py as a subprocess. When spawning the process, we pass the transport room and token through as system arguments to our bot, so it knows where to connect.

Subprocesses are a great way to test your bot in the cloud without too much hassle, but depending on the size of the host machine, they will likely not hold up well under load. While some bots are just simple operators between the transport and third-party AI services (such as OpenAI), others have somewhat CPU-intensive operations, such as loading and running VAD models, so you may find you’re only able to scale this to support 5-10 concurrent bots. Scaling your setup further would require virtualizing your bot with its own set of system resources, the process of which depends on your cloud provider.

Best practices

Ideally, we’d recommend containerizing your bot and bot runner independently so you can deploy each without any unnecessary dependencies or models. Most cloud providers offer a way to deploy various images programmatically, which we explore in the provider examples in these docs. For simplicity in defining this pattern, we’re just using one container for everything.

Build and run

We should now have a project that contains the following files:

- bot.py
- bot_runner.py
- requirements.txt
- .env
- Dockerfile

You can now docker build ... and deploy your container.
Of course, you can still work with your bot in local development too:

# Install and activate a virtual env
python -m venv venv
source venv/bin/activate # or OS equivalent

pip install -r requirements.txt
python bot_runner.py --host localhost --reload
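The startup check in bot_runner.py above can also be factored into a pure helper that is easy to unit test; the function name here is ours, not part of the guide:

```python
import os

REQUIRED_ENV_VARS = ["DAILY_API_KEY", "OPENAI_API_KEY", "ELEVENLABS_API_KEY", "ELEVENLABS_VOICE_ID"]

def missing_env_vars(required, environ=None):
    # Same check bot_runner.py performs at startup, but returning all the
    # missing names instead of raising on the first one.
    environ = os.environ if environ is None else environ
    return [name for name in required if name not in environ]

print(missing_env_vars(REQUIRED_ENV_VARS, environ={"DAILY_API_KEY": "set"}))
```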
|
deployment_pipecat-cloud_0dc09447.txt
ADDED
|
@@ -0,0 +1,5 @@
|
| 1 |
+
URL: https://docs.pipecat.ai/guides/deployment/pipecat-cloud#prerequisites
|
| 2 |
+
Title: Example: Pipecat Cloud - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
Example: Pipecat Cloud - Pipecat

Pipecat Cloud is a managed platform for hosting and scaling Pipecat agents in production.

Prerequisites

Before you begin, you’ll need:

- A Pipecat Cloud account
- Docker installed
- Python 3.10+
- The Pipecat Cloud CLI: pip install pipecatcloud

Quickstart Guide: Follow a step-by-step guided experience to deploy your first agent.

Choosing a starting point

Pipecat Cloud offers several ways to get started:

- Use a starter template: Pre-built agent configurations for common use cases
- Build from the base image: Create a custom agent using the official base image
- Clone the starter project: A bare-bones project template to customize

Starter templates

Pipecat Cloud provides several ready-made templates for common agent types:

- voice: Voice conversation agent with STT, LLM and TTS
- twilio: Telephony agent that works with Twilio
- natural_conversation: Agent focused on natural dialogue flow, allowing a user time to think
- openai_realtime: Agent using OpenAI’s Realtime API
- gemini_multimodal_live: Multimodal agent using Google’s Gemini Multimodal Live API
- vision: Computer vision agent that can analyze images

These templates include a functioning implementation and Dockerfile.
You can use them directly:

# Clone the repository
git clone https://github.com/daily-co/pipecat-cloud-images.git

# Navigate to a starter template
cd pipecat-cloud-images/pipecat-starters/voice

# Customize the agent for your needs

Project structure

Whether using a starter template or building from scratch, a basic Pipecat Cloud project typically includes:

my-agent/
├── bot.py            # Your Pipecat pipeline
├── Dockerfile        # Container definition
├── requirements.txt  # Python dependencies
└── pcc-deploy.toml   # Deployment config (optional)

Agent implementation with bot.py

Your agent’s bot.py code must include a specific bot() function that serves as the entry point for Pipecat Cloud. This function has different signatures depending on the transport method:

For WebRTC/Daily transports:

async def bot(args: DailySessionArguments):
    """Main bot entry point compatible with the FastAPI route handler.

    Args:
        config: The configuration object from the request body
        room_url: The Daily room URL
        token: The Daily room token
        session_id: The session ID for logging
    """
    logger.info(f"Bot process initialized {args.room_url} {args.token}")

    try:
        await main(args.room_url, args.token)
        logger.info("Bot process completed")
    except Exception as e:
        logger.exception(f"Error in bot process: {str(e)}")
        raise

For WebSocket transports (e.g., Twilio):

async def bot(args: WebSocketSessionArguments):
    """Main bot entry point for WebSocket connections.
    Args:
        ws: The WebSocket connection
        session_id: The session ID for logging
    """
    logger.info("WebSocket bot process initialized")

    try:
        await main(args.websocket)
        logger.info("WebSocket bot process completed")
    except Exception as e:
        logger.exception(f"Error in WebSocket bot process: {str(e)}")
        raise

Complete Example: Voice Bot - View a complete WebRTC voice bot implementation
Complete Example: Twilio Bot - View a complete Twilio WebSocket bot implementation

Example Dockerfile

Pipecat Cloud provides base images that include common dependencies for Pipecat agents:

FROM dailyco/pipecat-base:latest
COPY ./requirements.txt requirements.txt
RUN pip install --no-cache-dir --upgrade -r requirements.txt
COPY ./bot.py bot.py

This Dockerfile:

- Uses the official Pipecat base image
- Installs Python dependencies from requirements.txt
- Copies your bot.py file to the container

The base image (dailyco/pipecat-base) includes the HTTP API server, session management, and platform integration required to run on Pipecat Cloud. See the base image source code for details.

Building and pushing your Docker image

With your project structure in place, build and push your Docker image:

# Build the image
docker build --platform=linux/arm64 -t my-first-agent:latest .

# Tag the image for your repository
docker tag my-first-agent:latest your-username/my-first-agent:0.1

# Push the tagged image to the repository
docker push your-username/my-first-agent:0.1

Pipecat Cloud requires ARM64 images. Make sure to specify the --platform=linux/arm64 flag when building.

Managing secrets

Your agent likely requires API keys and other credentials.
Create a secret set to store them securely:

# Create an .env file with your credentials
touch .env

# Create a secret set from the file
pcc secrets set my-first-agent-secrets --file .env

Deploying your agent

Deploy your agent with the CLI:

pcc deploy my-first-agent your-username/my-first-agent:0.1 --secret-set my-first-agent-secrets

For a more maintainable approach, create a pcc-deploy.toml file:

agent_name = "my-first-agent"
image = "your-username/my-first-agent:0.1"
secret_set = "my-first-agent-secrets"
image_credentials = "my-first-agent-image-credentials" # For private repos

[scaling]
min_instances = 0

Then deploy using:

pcc deploy

Starting a session

Once deployed, you can start a session with your agent:

# Create and set a public access key if needed
pcc organizations keys create
pcc organizations keys use

# Start a session using Daily for WebRTC
pcc agent start my-first-agent --use-daily

This will open a Daily room where you can interact with your agent.

Checking deployment status

Monitor your agent deployment:

# Check deployment status
pcc agent status my-first-agent

# View deployment logs
pcc agent logs my-first-agent

Next steps

- Agent Images: Learn about containerizing your agent
- Secrets: Managing sensitive information
- Scaling: Configure scaling for production workloads
- Active Sessions: Understand how sessions work
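The build/tag/push sequence above is easy to get wrong by hand. A small sketch that assembles the three docker commands as argv lists (the helper name is ours; running them via subprocess is left to the reader):

```python
def docker_release_commands(local_tag: str, repo: str, version: str, platform: str = "linux/arm64"):
    # Mirrors the commands from "Building and pushing your Docker image",
    # including the --platform=linux/arm64 flag Pipecat Cloud requires.
    remote = f"{repo}:{version}"
    return [
        ["docker", "build", f"--platform={platform}", "-t", local_tag, "."],
        ["docker", "tag", local_tag, remote],
        ["docker", "push", remote],
    ]

for cmd in docker_release_commands("my-first-agent:latest", "your-username/my-first-agent", "0.1"):
    print(" ".join(cmd))
```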
|
deployment_pipecat-cloud_90216d23.txt
ADDED
|
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
URL: https://docs.pipecat.ai/guides/deployment/pipecat-cloud#for-webrtc%2Fdaily-transports
|
| 2 |
+
Title: Example: Pipecat Cloud - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
Example: Pipecat Cloud - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation Deploying your bot Example: Pipecat Cloud Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Guides Fundamentals Context Management Custom FrameProcessor Detecting Idle Users Ending a Pipeline Function Calling Muting User Input Recording Audio Recording Transcripts Features Gemini Multimodal Live Metrics Noise cancellation with Krisp OpenAI Audio Models and APIs Pipecat Flows Telephony Overview Dial-in: WebRTC (Daily) Dial-in: WebRTC (Twilio + Daily) Dial-in: Twilio (Media Streams) Dialout: WebRTC (Daily) Deploying your bot Overview Deployment pattern Example: Pipecat Cloud Example: Fly.io Example: Cerebrium Example: Modal Pipecat Cloud is a managed platform for hosting and scaling Pipecat agents in production. Prerequisites Before you begin, you’ll need: A Pipecat Cloud account Docker installed Python 3.10+ The Pipecat Cloud CLI: pip install pipecatcloud Quickstart Guide Follow a step-by-step guided experience to deploy your first agent Choosing a starting point Pipecat Cloud offers several ways to get started: Use a starter template - Pre-built agent configurations for common use cases Build from the base image - Create a custom agent using the official base image Clone the starter project - A bare-bones project template to customize Starter templates Pipecat Cloud provides several ready-made templates for common agent types: Template Description voice Voice conversation agent with STT, LLM and TTS twilio Telephony agent that works with Twilio natural_conversation Agent focused on natural dialogue flow, allowing a user time to think openai_realtime Agent using OpenAI’s Realtime API gemini_multimodal_live Multimodal agent using Google’s Gemini Multimodal Live API vision Computer vision agent that can analyze images These templates include a functioning implementation and Dockerfile. 
You can use them directly: Copy Ask AI # Clone the repository git clone https://github.com/daily-co/pipecat-cloud-images.git # Navigate to a starter template cd pipecat-cloud-images/pipecat-starters/voice # Customize the agent for your needs Project structure Whether using a starter template or building from scratch, a basic Pipecat Cloud project typically includes: Copy Ask AI my-agent/ ├── bot.py # Your Pipecat pipeline ├── Dockerfile # Container definition ├── requirements.txt # Python dependencies └── pcc-deploy.toml # Deployment config (optional) Agent implementation with bot.py Your agent’s bot.py code must include a specific bot() function that serves as the entry point for Pipecat Cloud. This function has different signatures depending on the transport method: For WebRTC/Daily transports Copy Ask AI async def bot ( args : DailySessionArguments): """Main bot entry point compatible with the FastAPI route handler. Args: config: The configuration object from the request body room_url: The Daily room URL token: The Daily room token session_id: The session ID for logging """ logger.info( f "Bot process initialized { args.room_url } { args.token } " ) try : await main(args.room_url, args.token) logger.info( "Bot process completed" ) except Exception as e: logger.exception( f "Error in bot process: { str (e) } " ) raise For WebSocket transports (e.g., Twilio) Copy Ask AI async def bot ( args : WebSocketSessionArguments): """Main bot entry point for WebSocket connections. 
Args: ws: The WebSocket connection session_id: The session ID for logging """ logger.info( "WebSocket bot process initialized" ) try : await main(args.websocket) logger.info( "WebSocket bot process completed" ) except Exception as e: logger.exception( f "Error in WebSocket bot process: { str (e) } " ) raise Complete Example: Voice Bot View a complete WebRTC voice bot implementation Complete Example: Twilio Bot View a complete Twilio WebSocket bot implementation Example Dockerfile Pipecat Cloud provides base images that include common dependencies for Pipecat agents: Copy Ask AI FROM dailyco/pipecat-base:latest COPY ./requirements.txt requirements.txt RUN pip install --no-cache-dir --upgrade -r requirements.txt COPY ./bot.py bot.py This Dockerfile: Uses the official Pipecat base image Installs Python dependencies from requirements.txt Copies your bot.py file to the container The base image (dailyco/pipecat-base) includes the HTTP API server, session management, and platform integration required to run on Pipecat Cloud. See the base image source code for details. Building and pushing your Docker image With your project structure in place, build and push your Docker image: Copy Ask AI # Build the image docker build --platform=linux/arm64 -t my-first-agent:latest . # Tag the image for your repository docker tag my-first-agent:latest your-username/my-first-agent:0.1 # Push the tagged image to the repository docker push your-username/my-first-agent:0.1 Pipecat Cloud requires ARM64 images. Make sure to specify the --platform=linux/arm64 flag when building. Managing secrets Your agent likely requires API keys and other credentials. 
Create a secret set to store them securely: Copy Ask AI # Create an .env file with your credentials touch .env # Create a secret set from the file pcc secrets set my-first-agent-secrets --file .env Deploying your agent Deploy your agent with the CLI: Copy Ask AI pcc deploy my-first-agent your-username/my-first-agent:0.1 --secret-set my-first-agent-secrets For a more maintainable approach, create a pcc-deploy.toml file: Copy Ask AI agent_name = "my-first-agent" image = "your-username/my-first-agent:0.1" secret_set = "my-first-agent-secrets" image_credentials = "my-first-agent-image-credentials" # For private repos [ scaling ] min_instances = 0 Then deploy using: Copy Ask AI pcc deploy Starting a session Once deployed, you can start a session with your agent: Copy Ask AI # Create and set a public access key if needed pcc organizations keys create pcc organizations keys use # Start a session using Daily for WebRTC pcc agent start my-first-agent --use-daily This will open a Daily room where you can interact with your agent. Checking deployment status Monitor your agent deployment: Copy Ask AI # Check deployment status pcc agent status my-first-agent # View deployment logs pcc agent logs my-first-agent Next steps Agent Images Learn about containerizing your agent Secrets Managing sensitive information Scaling Configure scaling for production workloads Active Sessions Understand how sessions work Deployment pattern Example: Fly.io On this page Prerequisites Choosing a starting point Starter templates Project structure Agent implementation with bot.py For WebRTC/Daily transports For WebSocket transports (e.g., Twilio) Example Dockerfile Building and pushing your Docker image Managing secrets Deploying your agent Starting a session Checking deployment status Next steps Assistant Responses are generated using AI and may contain mistakes.
|
deployment_pipecat-cloud_fb17bfdb.txt
ADDED
|
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
URL: https://docs.pipecat.ai/guides/deployment/pipecat-cloud#starter-templates
|
| 2 |
+
Title: Example: Pipecat Cloud - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
Example: Pipecat Cloud - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation Deploying your bot Example: Pipecat Cloud Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Guides Fundamentals Context Management Custom FrameProcessor Detecting Idle Users Ending a Pipeline Function Calling Muting User Input Recording Audio Recording Transcripts Features Gemini Multimodal Live Metrics Noise cancellation with Krisp OpenAI Audio Models and APIs Pipecat Flows Telephony Overview Dial-in: WebRTC (Daily) Dial-in: WebRTC (Twilio + Daily) Dial-in: Twilio (Media Streams) Dialout: WebRTC (Daily) Deploying your bot Overview Deployment pattern Example: Pipecat Cloud Example: Fly.io Example: Cerebrium Example: Modal Pipecat Cloud is a managed platform for hosting and scaling Pipecat agents in production. Prerequisites Before you begin, you’ll need: A Pipecat Cloud account Docker installed Python 3.10+ The Pipecat Cloud CLI: pip install pipecatcloud Quickstart Guide Follow a step-by-step guided experience to deploy your first agent Choosing a starting point Pipecat Cloud offers several ways to get started: Use a starter template - Pre-built agent configurations for common use cases Build from the base image - Create a custom agent using the official base image Clone the starter project - A bare-bones project template to customize Starter templates Pipecat Cloud provides several ready-made templates for common agent types: Template Description voice Voice conversation agent with STT, LLM and TTS twilio Telephony agent that works with Twilio natural_conversation Agent focused on natural dialogue flow, allowing a user time to think openai_realtime Agent using OpenAI’s Realtime API gemini_multimodal_live Multimodal agent using Google’s Gemini Multimodal Live API vision Computer vision agent that can analyze images These templates include a functioning implementation and Dockerfile. 
You can use them directly: Copy Ask AI # Clone the repository git clone https://github.com/daily-co/pipecat-cloud-images.git # Navigate to a starter template cd pipecat-cloud-images/pipecat-starters/voice # Customize the agent for your needs Project structure Whether using a starter template or building from scratch, a basic Pipecat Cloud project typically includes: Copy Ask AI my-agent/ ├── bot.py # Your Pipecat pipeline ├── Dockerfile # Container definition ├── requirements.txt # Python dependencies └── pcc-deploy.toml # Deployment config (optional) Agent implementation with bot.py Your agent’s bot.py code must include a specific bot() function that serves as the entry point for Pipecat Cloud. This function has different signatures depending on the transport method: For WebRTC/Daily transports Copy Ask AI async def bot ( args : DailySessionArguments): """Main bot entry point compatible with the FastAPI route handler. Args: config: The configuration object from the request body room_url: The Daily room URL token: The Daily room token session_id: The session ID for logging """ logger.info( f "Bot process initialized { args.room_url } { args.token } " ) try : await main(args.room_url, args.token) logger.info( "Bot process completed" ) except Exception as e: logger.exception( f "Error in bot process: { str (e) } " ) raise For WebSocket transports (e.g., Twilio) Copy Ask AI async def bot ( args : WebSocketSessionArguments): """Main bot entry point for WebSocket connections. 
Args: ws: The WebSocket connection session_id: The session ID for logging """ logger.info( "WebSocket bot process initialized" ) try : await main(args.websocket) logger.info( "WebSocket bot process completed" ) except Exception as e: logger.exception( f "Error in WebSocket bot process: { str (e) } " ) raise Complete Example: Voice Bot View a complete WebRTC voice bot implementation Complete Example: Twilio Bot View a complete Twilio WebSocket bot implementation Example Dockerfile Pipecat Cloud provides base images that include common dependencies for Pipecat agents: Copy Ask AI FROM dailyco/pipecat-base:latest COPY ./requirements.txt requirements.txt RUN pip install --no-cache-dir --upgrade -r requirements.txt COPY ./bot.py bot.py This Dockerfile: Uses the official Pipecat base image Installs Python dependencies from requirements.txt Copies your bot.py file to the container The base image (dailyco/pipecat-base) includes the HTTP API server, session management, and platform integration required to run on Pipecat Cloud. See the base image source code for details. Building and pushing your Docker image With your project structure in place, build and push your Docker image: Copy Ask AI # Build the image docker build --platform=linux/arm64 -t my-first-agent:latest . # Tag the image for your repository docker tag my-first-agent:latest your-username/my-first-agent:0.1 # Push the tagged image to the repository docker push your-username/my-first-agent:0.1 Pipecat Cloud requires ARM64 images. Make sure to specify the --platform=linux/arm64 flag when building. Managing secrets Your agent likely requires API keys and other credentials. 
Create a secret set to store them securely: Copy Ask AI # Create an .env file with your credentials touch .env # Create a secret set from the file pcc secrets set my-first-agent-secrets --file .env Deploying your agent Deploy your agent with the CLI: Copy Ask AI pcc deploy my-first-agent your-username/my-first-agent:0.1 --secret-set my-first-agent-secrets For a more maintainable approach, create a pcc-deploy.toml file: Copy Ask AI agent_name = "my-first-agent" image = "your-username/my-first-agent:0.1" secret_set = "my-first-agent-secrets" image_credentials = "my-first-agent-image-credentials" # For private repos [ scaling ] min_instances = 0 Then deploy using: Copy Ask AI pcc deploy Starting a session Once deployed, you can start a session with your agent: Copy Ask AI # Create and set a public access key if needed pcc organizations keys create pcc organizations keys use # Start a session using Daily for WebRTC pcc agent start my-first-agent --use-daily This will open a Daily room where you can interact with your agent. Checking deployment status Monitor your agent deployment: Copy Ask AI # Check deployment status pcc agent status my-first-agent # View deployment logs pcc agent logs my-first-agent Next steps Agent Images Learn about containerizing your agent Secrets Managing sensitive information Scaling Configure scaling for production workloads Active Sessions Understand how sessions work Deployment pattern Example: Fly.io On this page Prerequisites Choosing a starting point Starter templates Project structure Agent implementation with bot.py For WebRTC/Daily transports For WebSocket transports (e.g., Twilio) Example Dockerfile Building and pushing your Docker image Managing secrets Deploying your agent Starting a session Checking deployment status Next steps Assistant Responses are generated using AI and may contain mistakes.
|
deployment_wwwflyio_db5d82f0.txt
ADDED
|
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
URL: https://docs.pipecat.ai/guides/deployment/www.fly.io
|
| 2 |
+
Title: Overview - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
Overview - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation Get Started Overview Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Get Started Overview Installation & Setup Quickstart Core Concepts Next Steps & Examples Pipecat is an open source Python framework that handles the complex orchestration of AI services, network transport, audio processing, and multimodal interactions. “Multimodal” means you can use any combination of audio, video, images, and/or text in your interactions. And “real-time” means that things are happening quickly enough that it feels conversational—a “back-and-forth” with a bot, not submitting a query and waiting for results. What You Can Build Voice Assistants Natural, real-time conversations with AI using speech recognition and synthesis Interactive Agents Personal coaches and meeting assistants that can understand context and provide guidance Multimodal Apps Applications that combine voice, video, images, and text for rich interactions Creative Tools Storytelling experiences and social companions that engage users Business Solutions Customer intake flows and support bots for automated business processes Complex Flows Structured conversations using Pipecat Flows for managing complex interactions How It Works The flow of interactions in a Pipecat application is typically straightforward: The bot says something The user says something The bot says something The user says something This continues until the conversation naturally ends. While this flow seems simple, making it feel natural requires sophisticated real-time processing. Real-time Processing Pipecat’s pipeline architecture handles both simple voice interactions and complex multimodal processing. 
Let’s look at how data flows through the system: Voice app Multimodal app 1 Send Audio Transmit and capture streamed audio from the user 2 Transcribe Speech Convert speech to text as the user is talking 3 Process with LLM Generate responses using a large language model 4 Convert to Speech Transform text responses into natural speech 5 Play Audio Stream the audio response back to the user 1 Send Audio Transmit and capture streamed audio from the user 2 Transcribe Speech Convert speech to text as the user is talking 3 Process with LLM Generate responses using a large language model 4 Convert to Speech Transform text responses into natural speech 5 Play Audio Stream the audio response back to the user 1 Send Audio and Video Transmit and capture audio, video, and image inputs simultaneously 2 Process Streams Handle multiple input streams in parallel 3 Model Processing Send combined inputs to multimodal models (like GPT-4V) 4 Generate Outputs Create various outputs (text, images, audio, etc.) 5 Coordinate Presentation Synchronize and present multiple output types In both cases, Pipecat: Processes responses as they stream in Handles multiple input/output modalities concurrently Manages resource allocation and synchronization Coordinates parallel processing tasks This architecture creates fluid, natural interactions without noticeable delays, whether you’re building a simple voice assistant or a complex multimodal application. Pipecat’s pipeline architecture is particularly valuable for managing the complexity of real-time, multimodal interactions, ensuring smooth data flow and proper synchronization regardless of the input/output types involved. Pipecat handles all this complexity for you, letting you focus on building your application rather than managing the underlying infrastructure. Next Steps Ready to build your first Pipecat application? 
Installation & Setup: prepare your environment and install required dependencies
Quickstart: build and run your first Pipecat application
Core Concepts: learn about pipelines, frames, and real-time processing
Use Cases: explore example implementations and patterns

Join Our Community
Discord Community: connect with other developers, share your projects, and get support from the Pipecat team.
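The real-time voice flow described above (send audio, transcribe, process with an LLM, convert to speech, play audio) can be sketched as data passing through a chain of stages. Everything here is a toy stand-in for illustration, not Pipecat's actual frame pipeline:

```python
from typing import Callable, List


# Toy stand-ins for each stage of the voice pipeline
def transcribe(frame: str) -> str:       # speech -> text
    return frame.removeprefix("audio:")


def llm(frame: str) -> str:              # text -> response text
    return f"echo({frame})"


def synthesize(frame: str) -> str:       # text -> speech
    return f"audio:{frame}"


def run_pipeline(stages: List[Callable[[str], str]], frame: str) -> str:
    # Each stage consumes the previous stage's output, in order
    for stage in stages:
        frame = stage(frame)
    return frame


out = run_pipeline([transcribe, llm, synthesize], "audio:hello")
print(out)  # audio:echo(hello)
```

The point of the sketch is the shape, not the stages: real pipelines stream partial results between stages rather than waiting for each to finish, which is what makes the interaction feel conversational.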
|
features_gemini-multimodal-live_3e9df7b7.txt
ADDED
|
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
URL: https://docs.pipecat.ai/guides/features/gemini-multimodal-live#gemini-integration
|
| 2 |
+
Title: Building with Gemini Multimodal Live - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
Building with Gemini Multimodal Live - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation Features Building with Gemini Multimodal Live Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Guides Fundamentals Context Management Custom FrameProcessor Detecting Idle Users Ending a Pipeline Function Calling Muting User Input Recording Audio Recording Transcripts Features Gemini Multimodal Live Metrics Noise cancellation with Krisp OpenAI Audio Models and APIs Pipecat Flows Telephony Overview Dial-in: WebRTC (Daily) Dial-in: WebRTC (Twilio + Daily) Dial-in: Twilio (Media Streams) Dialout: WebRTC (Daily) Deploying your bot Overview Deployment pattern Example: Pipecat Cloud Example: Fly.io Example: Cerebrium Example: Modal This guide will walk you through building a real-time AI chatbot using Gemini Multimodal Live and Pipecat. We’ll create a complete application with a Pipecat server and a Pipecat React client that enables natural conversations with an AI assistant. API Reference Gemini Multimodal Live API documentation Example Code Find the complete client and server code in Github Client SDK Pipecat React SDK documentation What We’ll Build In this guide, you’ll create: A FastAPI server that manages bot instances A Gemini-powered conversational AI bot A React client with real-time audio/video A complete pipeline for speech-to-speech interaction Key Concepts Before we dive into implementation, let’s cover some important concepts that will help you understand how Pipecat and Gemini work together. Understanding Pipelines At the heart of Pipecat is the pipeline system. A pipeline is a sequence of processors that handle different aspects of the conversation flow. Think of it like an assembly line where each station (processor) performs a specific task. 
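The assembly-line analogy, and why processor order matters, can be illustrated with a toy chain in which plain functions stand in for Pipecat processors (names here are hypothetical):

```python
# Two toy "processors": one tags a frame, one uppercases it
def add_speaker_tag(frame: str) -> str:
    return "bot: " + frame


def shout(frame: str) -> str:
    return frame.upper()


def run(processors, frame):
    # Frames flow through the pipeline in sequence
    for processor in processors:
        frame = processor(frame)
    return frame


a = run([add_speaker_tag, shout], "hi")  # tag first, then uppercase
b = run([shout, add_speaker_tag], "hi")  # uppercase first, then tag
print(a, "|", b)  # BOT: HI | bot: HI
```

Swapping the two stations changes the output, which is exactly why, in the real pipeline, processors like the RTVI handler must sit early enough to observe the frames they care about.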
For our chatbot, the pipeline looks like this: Copy Ask AI pipeline = Pipeline([ transport.input(), # Receives audio/video from the user via WebRTC rtvi, # Handles client/server messaging and events context_aggregator.user(), # Manages user message history llm, # Processes speech through Gemini talking_animation, # Controls bot's avatar transport.output(), # Sends audio/video back to the user via WebRTC context_aggregator.assistant(), # Manages bot message history ]) Processors Each processor in the pipeline handles a specific task: Transport transport.input() and transport.output() handle media streaming with Daily Context context_aggregator maintains conversation history for natural dialogue Speech Processing rtvi_user_transcription and rtvi_bot_transcription handle speech-to-text Animation talking_animation controls the bot’s visual state based on speaking activity The order of processors matters! Data flows through the pipeline in sequence, so each processor should receive the data it needs from previous processors. Learn more about the Core Concepts to Pipecat server. Gemini Integration The GeminiMultimodalLiveLLMService is a speech-to-speech LLM service that interfaces with the Gemini Multimodal Live API. It provides: Real-time speech-to-speech conversation Context management Voice activity detection Tool use Pipecat manages two types of connections: A WebRTC connection between the Pipecat client and server for reliable audio/video streaming A WebSocket connection between the Pipecat server and Gemini for real-time AI processing This architecture ensures stable media streaming while maintaining responsive AI interactions. Prerequisites Before we begin, you’ll need: Python 3.10 or higher Node.js 16 or higher A Daily API key A Google API key with Gemini Multimodal Live access Clone the Pipecat repo: Copy Ask AI git clone git@github.com:pipecat-ai/pipecat.git Server Implementation Let’s start by setting up the server components. 
Our server will handle bot management, room creation, and client connections. Environment Setup Navigate to the simple-chatbot’s server directory: Copy Ask AI cd examples/simple-chatbot/server Set up a python virtual environment: Copy Ask AI python -m venv venv source venv/bin/activate # On Windows: venv\Scripts\activate Install requirements: Copy Ask AI pip install -r requirements.txt Copy env.example to .env and make a few changes: Copy Ask AI # Remove the hard-coded example room URL DAILY_SAMPLE_ROOM_URL = # Add your Daily and Gemini API keys DAILY_API_KEY = [your key here] GEMINI_API_KEY = [your key here] # Use Gemini implementation BOT_IMPLEMENTATION = gemini Server Setup (server.py) server.py is a FastAPI server that creates the meeting room where clients and bots interact, manages bot instances, and handles client connections. It’s the orchestrator that brings everything on the server-side together. Creating Meeting Room The server uses Daily’s API via a REST API helper to create rooms where clients and bots can meet. Each room is a secure space for audio/video communication: server/server.py Copy Ask AI async def create_room_and_token (): """Create a Daily room and generate access credentials.""" room = await daily_helpers[ "rest" ].create_room(DailyRoomParams()) token = await daily_helpers[ "rest" ].get_token(room.url) return room.url, token Managing Bot Instances When a client connects, the server starts a new bot instance configured specifically for that room. It keeps track of running bots and ensures there’s only one bot per room: server/server.py Copy Ask AI # Start the bot process for a specific room bot_file = "bot-gemini.py" proc = subprocess.Popen([ f "python3 -m { bot_file } -u { room_url } -t { token } " ]) bot_procs[proc.pid] = (proc, room_url) Connection Endpoints The server provides two ways to connect: Browser Access (/) Creates a room, starts a bot, and redirects the browser to the Daily meeting URL. 
Perfect for quick testing and development. RTVI Client (/connect) Creates a room, starts a bot, and returns connection credentials. Used by RTVI clients for custom implementations. Bot Implementation (bot-gemini.py) The bot implementation connects all the pieces: Daily transport, Gemini service, conversation context, and processors. Let’s break down each component: Transport Setup First, we configure the Daily transport, which handles WebRTC communication between the client and server. server/bot-gemini.py Copy Ask AI transport = DailyTransport( room_url, token, "Chatbot" , DailyParams( audio_in_enabled = True , # Enable audio input audio_out_enabled = True , # Enable audio output video_out_enabled = True , # Enable video output vad_analyzer = SileroVADAnalyzer( params = VADParams( stop_secs = 0.5 )), ), ) Gemini Multimodal Live audio requirements: Input: 16 kHz sample rate Output: 24 kHz sample rate Gemini Service Configuration Next, we initialize the Gemini service which will provide speech-to-speech inference and communication: server/bot-gemini.py Copy Ask AI llm = GeminiMultimodalLiveLLMService( api_key = os.getenv( "GEMINI_API_KEY" ), voice_id = "Puck" , # Choose your bot's voice params = InputParams( temperature = 0.7 ) # Set model input params ) Conversation Context We give our bot its personality and initial instructions: server/bot-gemini.py Copy Ask AI messages = [{ "role" : "user" , "content" : """You are Chatbot, a friendly, helpful robot. Keep responses brief and avoid special characters since output will be converted to audio.""" }] context = OpenAILLMContext(messages) context_aggregator = llm.create_context_aggregator(context) OpenAILLMContext is used as a common LLM base service for context management. In the future, we may add a specific context manager for Gemini. The context aggregator automatically maintains conversation history, helping the bot remember previous interactions. 
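Gemini's audio requirements noted above (16 kHz input, 24 kHz output) imply a 2:3 sample-rate ratio between what you send and what you receive. The toy linear-interpolation resampler below is purely illustrative of that ratio; Pipecat's transports handle the actual conversion for you:

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampler (illustrative only)."""
    if not samples:
        return []
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        # Map output index back to a fractional position in the input
        pos = i * (len(samples) - 1) / max(n_out - 1, 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out


# A 10 ms frame at 16 kHz is 160 samples; at 24 kHz the same 10 ms is 240
frame_16k = [0.0, 1.0] * 80
frame_24k = resample_linear(frame_16k, 16_000, 24_000)
print(len(frame_16k), "->", len(frame_24k))  # 160 -> 240
```

Production code would use a proper polyphase resampler (e.g. from soxr or scipy) rather than linear interpolation, which distorts high frequencies.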
Processor Setup We initialize two additional processors in our pipeline to handle different aspects of the interaction: RTVI Processors RTVIProcessor : Handles all client communication events including transcriptions, speaking states, and performance metrics Animation TalkingAnimation : Controls the bot’s visual state, switching between static and animated frames based on speaking status Learn more about the RTVI framework and available processors. Pipeline Assembly Finally, we bring everything together in a pipeline: server/bot-gemini.py Copy Ask AI pipeline = Pipeline([ transport.input(), # Receive media rtvi, # Client UI events context_aggregator.user(), # Process user context llm, # Gemini processing ta, # Animation (talking/quiet states) transport.output(), # Send media context_aggregator.assistant() # Process bot context ]) task = PipelineTask( pipeline, params = PipelineParams( allow_interruptions = True , enable_metrics = True , enable_usage_metrics = True , ), observers = [RTVIObserver(rtvi)], ) The order of processors is crucial! For example, the RTVI processor should be early in the pipeline to capture all relevant events. The RTVIObserver monitors the entire pipeline and automatically collects relevant events to send to the client. Client Implementation Our React client uses the Pipecat React SDK to communicate with the bot. Let’s explore how the client connects and interacts with our Pipecat server. Connection Setup The client needs to connect to our bot server using the same transport type (Daily WebRTC) that we configured on the server: examples/react/src/providers/PipecatProvider.tsx Copy Ask AI const client = new PipecatClient ({ transport: new DailyTransport (), enableMic: true , // Enable audio input enableCam: false , // Disable video input enableScreenShare: false , // Disable screen sharing }); client . 
connect ({ endpoint: "http://localhost:7860/connect" , // Your bot connection endpoint }); The connection configuration must match your server: DailyTransport : Matches the WebRTC transport used in bot-gemini.py connect endpoint: Matches the /connect route in server.py Media settings: Controls which devices are enabled on join Media Handling Pipecat’s React components handle all the complex media stream management for you: Copy Ask AI function App () { return ( < PipecatClientProvider client = { client } > < div className = "app" > < PipecatClientVideo participant = "bot" /> { /* Bot's video feed */ } < PipecatClientAudio /> { /* Audio input/output */ } </ div > </ PipecatClientProvider > ); } The PipecatClientProvider is the root component for providing Pipecat client context to your application. By wrapping your PipecatClientAudio and PipecatClientVideo components in this provider, they can access the client instance and receive and process the streams received from the Pipecat server. Real-time Events The RTVI processors we configured in the pipeline emit events that we can handle in our client: Copy Ask AI // Listen for transcription events useRTVIClientEvent ( RTVIEvent . UserTranscript , ( data : TranscriptData ) => { if ( data . final ) { console . log ( `User said: ${ data . text } ` ); } }); // Listen for bot responses useRTVIClientEvent ( RTVIEvent . BotTranscript , ( data : BotLLMTextData ) => { console . log ( `Bot responded: ${ data . text } ` ); }); Available Events Speaking state changes Transcription updates Bot responses Connection status Performance metrics Event Usage Use these events to: Show speaking indicators Display transcripts Update UI state Monitor performance Optionally, uses callbacks to handle events in your application. Learn more in the Pipecat client docs. 
Complete Example Here’s a basic client implementation with connection status and transcription display: Copy Ask AI function ChatApp () { return ( < PipecatClientProvider client = { client } > < div className = "app" > { /* Connection UI */ } < StatusDisplay /> < ConnectButton /> { /* Media Components */ } < BotVideo /> < PipecatClientAudio /> { /* Debug/Transcript Display */ } < DebugDisplay /> </ div > </ PipecatClientProvider > ); } Check out the example repository for a complete client implementation with styling and error handling. Running the Application From the simple-chatbot directory, start the server and client to test the chatbot: 1. Start the Server In one terminal: Copy Ask AI python server/server.py 2. Start the Client In another terminal: Copy Ask AI cd examples/react npm install npm run dev 3. Testing the Connection Open http://localhost:5173 in your browser Click “Connect” to join a room Allow microphone access when prompted Start talking with your AI assistant Troubleshooting: Check that all API keys are properly configured in .env Grant your browser access to your microphone, so it can receive your audio input Verify WebRTC ports aren’t blocked by firewalls Next Steps Now that you have a working chatbot, consider these enhancements: Add custom avatar animations Implement function calling for external integrations Add support for multiple languages Enhance error recovery and reconnection logic Examples Foundational Example A basic implementation demonstrating core Gemini Multimodal Live features and transcription capabilities Simple Chatbot A complete client/server implementation showing how to build a Pipecat JS or React client that connects to a Gemini Live Pipecat bot Learn More Gemini Multimodal Live API Reference React Client SDK Documentation Recording Transcripts Metrics On this page What We’ll Build Key Concepts Understanding Pipelines Processors Gemini Integration Prerequisites Server Implementation Environment Setup Server Setup 
Building with Gemini Multimodal Live - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation Features Building with Gemini Multimodal Live Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Guides Fundamentals Context Management Custom FrameProcessor Detecting Idle Users Ending a Pipeline Function Calling Muting User Input Recording Audio Recording Transcripts Features Gemini Multimodal Live Metrics Noise cancellation with Krisp OpenAI Audio Models and APIs Pipecat Flows Telephony Overview Dial-in: WebRTC (Daily) Dial-in: WebRTC (Twilio + Daily) Dial-in: Twilio (Media Streams) Dialout: WebRTC (Daily) Deploying your bot Overview Deployment pattern Example: Pipecat Cloud Example: Fly.io Example: Cerebrium Example: Modal This guide will walk you through building a real-time AI chatbot using Gemini Multimodal Live and Pipecat. We’ll create a complete application with a Pipecat server and a Pipecat React client that enables natural conversations with an AI assistant. API Reference Gemini Multimodal Live API documentation Example Code Find the complete client and server code in Github Client SDK Pipecat React SDK documentation What We’ll Build In this guide, you’ll create: A FastAPI server that manages bot instances A Gemini-powered conversational AI bot A React client with real-time audio/video A complete pipeline for speech-to-speech interaction Key Concepts Before we dive into implementation, let’s cover some important concepts that will help you understand how Pipecat and Gemini work together. Understanding Pipelines At the heart of Pipecat is the pipeline system. A pipeline is a sequence of processors that handle different aspects of the conversation flow. Think of it like an assembly line where each station (processor) performs a specific task. 
For our chatbot, the pipeline looks like this: Copy Ask AI pipeline = Pipeline([ transport.input(), # Receives audio/video from the user via WebRTC rtvi, # Handles client/server messaging and events context_aggregator.user(), # Manages user message history llm, # Processes speech through Gemini talking_animation, # Controls bot's avatar transport.output(), # Sends audio/video back to the user via WebRTC context_aggregator.assistant(), # Manages bot message history ]) Processors Each processor in the pipeline handles a specific task: Transport transport.input() and transport.output() handle media streaming with Daily Context context_aggregator maintains conversation history for natural dialogue Speech Processing rtvi_user_transcription and rtvi_bot_transcription handle speech-to-text Animation talking_animation controls the bot’s visual state based on speaking activity The order of processors matters! Data flows through the pipeline in sequence, so each processor should receive the data it needs from previous processors. Learn more about the Core Concepts to Pipecat server. Gemini Integration The GeminiMultimodalLiveLLMService is a speech-to-speech LLM service that interfaces with the Gemini Multimodal Live API. It provides: Real-time speech-to-speech conversation Context management Voice activity detection Tool use Pipecat manages two types of connections: A WebRTC connection between the Pipecat client and server for reliable audio/video streaming A WebSocket connection between the Pipecat server and Gemini for real-time AI processing This architecture ensures stable media streaming while maintaining responsive AI interactions. Prerequisites Before we begin, you’ll need: Python 3.10 or higher Node.js 16 or higher A Daily API key A Google API key with Gemini Multimodal Live access Clone the Pipecat repo: Copy Ask AI git clone git@github.com:pipecat-ai/pipecat.git Server Implementation Let’s start by setting up the server components. 
Our server will handle bot management, room creation, and client connections. Environment Setup Navigate to the simple-chatbot’s server directory: Copy Ask AI cd examples/simple-chatbot/server Set up a python virtual environment: Copy Ask AI python -m venv venv source venv/bin/activate # On Windows: venv\Scripts\activate Install requirements: Copy Ask AI pip install -r requirements.txt Copy env.example to .env and make a few changes: Copy Ask AI # Remove the hard-coded example room URL DAILY_SAMPLE_ROOM_URL = # Add your Daily and Gemini API keys DAILY_API_KEY = [your key here] GEMINI_API_KEY = [your key here] # Use Gemini implementation BOT_IMPLEMENTATION = gemini Server Setup (server.py) server.py is a FastAPI server that creates the meeting room where clients and bots interact, manages bot instances, and handles client connections. It’s the orchestrator that brings everything on the server-side together. Creating Meeting Room The server uses Daily’s API via a REST API helper to create rooms where clients and bots can meet. Each room is a secure space for audio/video communication: server/server.py Copy Ask AI async def create_room_and_token (): """Create a Daily room and generate access credentials.""" room = await daily_helpers[ "rest" ].create_room(DailyRoomParams()) token = await daily_helpers[ "rest" ].get_token(room.url) return room.url, token Managing Bot Instances When a client connects, the server starts a new bot instance configured specifically for that room. It keeps track of running bots and ensures there’s only one bot per room: server/server.py Copy Ask AI # Start the bot process for a specific room bot_file = "bot-gemini.py" proc = subprocess.Popen([ f "python3 -m { bot_file } -u { room_url } -t { token } " ]) bot_procs[proc.pid] = (proc, room_url) Connection Endpoints The server provides two ways to connect: Browser Access (/) Creates a room, starts a bot, and redirects the browser to the Daily meeting URL. 
Perfect for quick testing and development. RTVI Client (/connect) Creates a room, starts a bot, and returns connection credentials. Used by RTVI clients for custom implementations. Bot Implementation (bot-gemini.py) The bot implementation connects all the pieces: Daily transport, Gemini service, conversation context, and processors. Let’s break down each component: Transport Setup First, we configure the Daily transport, which handles WebRTC communication between the client and server. server/bot-gemini.py Copy Ask AI transport = DailyTransport( room_url, token, "Chatbot" , DailyParams( audio_in_enabled = True , # Enable audio input audio_out_enabled = True , # Enable audio output video_out_enabled = True , # Enable video output vad_analyzer = SileroVADAnalyzer( params = VADParams( stop_secs = 0.5 )), ), ) Gemini Multimodal Live audio requirements: Input: 16 kHz sample rate Output: 24 kHz sample rate Gemini Service Configuration Next, we initialize the Gemini service which will provide speech-to-speech inference and communication: server/bot-gemini.py Copy Ask AI llm = GeminiMultimodalLiveLLMService( api_key = os.getenv( "GEMINI_API_KEY" ), voice_id = "Puck" , # Choose your bot's voice params = InputParams( temperature = 0.7 ) # Set model input params ) Conversation Context We give our bot its personality and initial instructions: server/bot-gemini.py Copy Ask AI messages = [{ "role" : "user" , "content" : """You are Chatbot, a friendly, helpful robot. Keep responses brief and avoid special characters since output will be converted to audio.""" }] context = OpenAILLMContext(messages) context_aggregator = llm.create_context_aggregator(context) OpenAILLMContext is used as a common LLM base service for context management. In the future, we may add a specific context manager for Gemini. The context aggregator automatically maintains conversation history, helping the bot remember previous interactions. 
Processor Setup We initialize two additional processors in our pipeline to handle different aspects of the interaction: RTVI Processors RTVIProcessor : Handles all client communication events including transcriptions, speaking states, and performance metrics Animation TalkingAnimation : Controls the bot’s visual state, switching between static and animated frames based on speaking status Learn more about the RTVI framework and available processors. Pipeline Assembly Finally, we bring everything together in a pipeline: server/bot-gemini.py Copy Ask AI pipeline = Pipeline([ transport.input(), # Receive media rtvi, # Client UI events context_aggregator.user(), # Process user context llm, # Gemini processing ta, # Animation (talking/quiet states) transport.output(), # Send media context_aggregator.assistant() # Process bot context ]) task = PipelineTask( pipeline, params = PipelineParams( allow_interruptions = True , enable_metrics = True , enable_usage_metrics = True , ), observers = [RTVIObserver(rtvi)], ) The order of processors is crucial! For example, the RTVI processor should be early in the pipeline to capture all relevant events. The RTVIObserver monitors the entire pipeline and automatically collects relevant events to send to the client. Client Implementation Our React client uses the Pipecat React SDK to communicate with the bot. Let’s explore how the client connects and interacts with our Pipecat server. Connection Setup The client needs to connect to our bot server using the same transport type (Daily WebRTC) that we configured on the server: examples/react/src/providers/PipecatProvider.tsx Copy Ask AI const client = new PipecatClient ({ transport: new DailyTransport (), enableMic: true , // Enable audio input enableCam: false , // Disable video input enableScreenShare: false , // Disable screen sharing }); client . 
connect ({ endpoint: "http://localhost:7860/connect" , // Your bot connection endpoint }); The connection configuration must match your server: DailyTransport : Matches the WebRTC transport used in bot-gemini.py connect endpoint: Matches the /connect route in server.py Media settings: Controls which devices are enabled on join Media Handling Pipecat’s React components handle all the complex media stream management for you: Copy Ask AI function App () { return ( < PipecatClientProvider client = { client } > < div className = "app" > < PipecatClientVideo participant = "bot" /> { /* Bot's video feed */ } < PipecatClientAudio /> { /* Audio input/output */ } </ div > </ PipecatClientProvider > ); } The PipecatClientProvider is the root component for providing Pipecat client context to your application. By wrapping your PipecatClientAudio and PipecatClientVideo components in this provider, they can access the client instance and receive and process the streams received from the Pipecat server. Real-time Events The RTVI processors we configured in the pipeline emit events that we can handle in our client: Copy Ask AI // Listen for transcription events useRTVIClientEvent ( RTVIEvent . UserTranscript , ( data : TranscriptData ) => { if ( data . final ) { console . log ( `User said: ${ data . text } ` ); } }); // Listen for bot responses useRTVIClientEvent ( RTVIEvent . BotTranscript , ( data : BotLLMTextData ) => { console . log ( `Bot responded: ${ data . text } ` ); }); Available Events Speaking state changes Transcription updates Bot responses Connection status Performance metrics Event Usage Use these events to: Show speaking indicators Display transcripts Update UI state Monitor performance Optionally, uses callbacks to handle events in your application. Learn more in the Pipecat client docs. 
Complete Example Here’s a basic client implementation with connection status and transcription display: Copy Ask AI function ChatApp () { return ( < PipecatClientProvider client = { client } > < div className = "app" > { /* Connection UI */ } < StatusDisplay /> < ConnectButton /> { /* Media Components */ } < BotVideo /> < PipecatClientAudio /> { /* Debug/Transcript Display */ } < DebugDisplay /> </ div > </ PipecatClientProvider > ); } Check out the example repository for a complete client implementation with styling and error handling. Running the Application From the simple-chatbot directory, start the server and client to test the chatbot: 1. Start the Server In one terminal: Copy Ask AI python server/server.py 2. Start the Client In another terminal: Copy Ask AI cd examples/react npm install npm run dev 3. Testing the Connection Open http://localhost:5173 in your browser Click “Connect” to join a room Allow microphone access when prompted Start talking with your AI assistant Troubleshooting: Check that all API keys are properly configured in .env Grant your browser access to your microphone, so it can receive your audio input Verify WebRTC ports aren’t blocked by firewalls Next Steps Now that you have a working chatbot, consider these enhancements: Add custom avatar animations Implement function calling for external integrations Add support for multiple languages Enhance error recovery and reconnection logic Examples Foundational Example A basic implementation demonstrating core Gemini Multimodal Live features and transcription capabilities Simple Chatbot A complete client/server implementation showing how to build a Pipecat JS or React client that connects to a Gemini Live Pipecat bot Learn More Gemini Multimodal Live API Reference React Client SDK Documentation Recording Transcripts Metrics On this page What We’ll Build Key Concepts Understanding Pipelines Processors Gemini Integration Prerequisites Server Implementation Environment Setup Server Setup 
(server.py) Creating Meeting Room Managing Bot Instances Connection Endpoints Bot Implementation (bot-gemini.py) Transport Setup Gemini Service Configuration Conversation Context Processor Setup Pipeline Assembly Client Implementation Connection Setup Media Handling Real-time Events Complete Example Running the Application 1. Start the Server 2. Start the Client 3. Testing the Connection Next Steps Examples Learn More Assistant Responses are generated using AI and may contain mistakes.
|
features_gemini-multimodal-live_bef94131.txt
ADDED
|
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
URL: https://docs.pipecat.ai/guides/features/gemini-multimodal-live#complete-example
|
| 2 |
+
Title: Building with Gemini Multimodal Live - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
Building with Gemini Multimodal Live - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation Features Building with Gemini Multimodal Live Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Guides Fundamentals Context Management Custom FrameProcessor Detecting Idle Users Ending a Pipeline Function Calling Muting User Input Recording Audio Recording Transcripts Features Gemini Multimodal Live Metrics Noise cancellation with Krisp OpenAI Audio Models and APIs Pipecat Flows Telephony Overview Dial-in: WebRTC (Daily) Dial-in: WebRTC (Twilio + Daily) Dial-in: Twilio (Media Streams) Dialout: WebRTC (Daily) Deploying your bot Overview Deployment pattern Example: Pipecat Cloud Example: Fly.io Example: Cerebrium Example: Modal This guide will walk you through building a real-time AI chatbot using Gemini Multimodal Live and Pipecat. We’ll create a complete application with a Pipecat server and a Pipecat React client that enables natural conversations with an AI assistant. API Reference Gemini Multimodal Live API documentation Example Code Find the complete client and server code in Github Client SDK Pipecat React SDK documentation What We’ll Build In this guide, you’ll create: A FastAPI server that manages bot instances A Gemini-powered conversational AI bot A React client with real-time audio/video A complete pipeline for speech-to-speech interaction Key Concepts Before we dive into implementation, let’s cover some important concepts that will help you understand how Pipecat and Gemini work together. Understanding Pipelines At the heart of Pipecat is the pipeline system. A pipeline is a sequence of processors that handle different aspects of the conversation flow. Think of it like an assembly line where each station (processor) performs a specific task. 
For our chatbot, the pipeline looks like this: Copy Ask AI pipeline = Pipeline([ transport.input(), # Receives audio/video from the user via WebRTC rtvi, # Handles client/server messaging and events context_aggregator.user(), # Manages user message history llm, # Processes speech through Gemini talking_animation, # Controls bot's avatar transport.output(), # Sends audio/video back to the user via WebRTC context_aggregator.assistant(), # Manages bot message history ]) Processors Each processor in the pipeline handles a specific task: Transport transport.input() and transport.output() handle media streaming with Daily Context context_aggregator maintains conversation history for natural dialogue Speech Processing rtvi_user_transcription and rtvi_bot_transcription handle speech-to-text Animation talking_animation controls the bot’s visual state based on speaking activity The order of processors matters! Data flows through the pipeline in sequence, so each processor should receive the data it needs from previous processors. Learn more about the Core Concepts to Pipecat server. Gemini Integration The GeminiMultimodalLiveLLMService is a speech-to-speech LLM service that interfaces with the Gemini Multimodal Live API. It provides: Real-time speech-to-speech conversation Context management Voice activity detection Tool use Pipecat manages two types of connections: A WebRTC connection between the Pipecat client and server for reliable audio/video streaming A WebSocket connection between the Pipecat server and Gemini for real-time AI processing This architecture ensures stable media streaming while maintaining responsive AI interactions. Prerequisites Before we begin, you’ll need: Python 3.10 or higher Node.js 16 or higher A Daily API key A Google API key with Gemini Multimodal Live access Clone the Pipecat repo: Copy Ask AI git clone git@github.com:pipecat-ai/pipecat.git Server Implementation Let’s start by setting up the server components. 
Our server will handle bot management, room creation, and client connections. Environment Setup Navigate to the simple-chatbot’s server directory: Copy Ask AI cd examples/simple-chatbot/server Set up a python virtual environment: Copy Ask AI python -m venv venv source venv/bin/activate # On Windows: venv\Scripts\activate Install requirements: Copy Ask AI pip install -r requirements.txt Copy env.example to .env and make a few changes: Copy Ask AI # Remove the hard-coded example room URL DAILY_SAMPLE_ROOM_URL = # Add your Daily and Gemini API keys DAILY_API_KEY = [your key here] GEMINI_API_KEY = [your key here] # Use Gemini implementation BOT_IMPLEMENTATION = gemini Server Setup (server.py) server.py is a FastAPI server that creates the meeting room where clients and bots interact, manages bot instances, and handles client connections. It’s the orchestrator that brings everything on the server-side together. Creating Meeting Room The server uses Daily’s API via a REST API helper to create rooms where clients and bots can meet. Each room is a secure space for audio/video communication: server/server.py Copy Ask AI async def create_room_and_token (): """Create a Daily room and generate access credentials.""" room = await daily_helpers[ "rest" ].create_room(DailyRoomParams()) token = await daily_helpers[ "rest" ].get_token(room.url) return room.url, token Managing Bot Instances When a client connects, the server starts a new bot instance configured specifically for that room. It keeps track of running bots and ensures there’s only one bot per room: server/server.py Copy Ask AI # Start the bot process for a specific room bot_file = "bot-gemini.py" proc = subprocess.Popen([ f "python3 -m { bot_file } -u { room_url } -t { token } " ]) bot_procs[proc.pid] = (proc, room_url) Connection Endpoints The server provides two ways to connect: Browser Access (/) Creates a room, starts a bot, and redirects the browser to the Daily meeting URL. 
Perfect for quick testing and development. RTVI Client (/connect) Creates a room, starts a bot, and returns connection credentials. Used by RTVI clients for custom implementations. Bot Implementation (bot-gemini.py) The bot implementation connects all the pieces: Daily transport, Gemini service, conversation context, and processors. Let’s break down each component: Transport Setup First, we configure the Daily transport, which handles WebRTC communication between the client and server. server/bot-gemini.py Copy Ask AI transport = DailyTransport( room_url, token, "Chatbot" , DailyParams( audio_in_enabled = True , # Enable audio input audio_out_enabled = True , # Enable audio output video_out_enabled = True , # Enable video output vad_analyzer = SileroVADAnalyzer( params = VADParams( stop_secs = 0.5 )), ), ) Gemini Multimodal Live audio requirements: Input: 16 kHz sample rate Output: 24 kHz sample rate Gemini Service Configuration Next, we initialize the Gemini service which will provide speech-to-speech inference and communication: server/bot-gemini.py Copy Ask AI llm = GeminiMultimodalLiveLLMService( api_key = os.getenv( "GEMINI_API_KEY" ), voice_id = "Puck" , # Choose your bot's voice params = InputParams( temperature = 0.7 ) # Set model input params ) Conversation Context We give our bot its personality and initial instructions: server/bot-gemini.py Copy Ask AI messages = [{ "role" : "user" , "content" : """You are Chatbot, a friendly, helpful robot. Keep responses brief and avoid special characters since output will be converted to audio.""" }] context = OpenAILLMContext(messages) context_aggregator = llm.create_context_aggregator(context) OpenAILLMContext is used as a common LLM base service for context management. In the future, we may add a specific context manager for Gemini. The context aggregator automatically maintains conversation history, helping the bot remember previous interactions. 
Processor Setup We initialize two additional processors in our pipeline to handle different aspects of the interaction: RTVI Processors RTVIProcessor : Handles all client communication events including transcriptions, speaking states, and performance metrics Animation TalkingAnimation : Controls the bot’s visual state, switching between static and animated frames based on speaking status Learn more about the RTVI framework and available processors. Pipeline Assembly Finally, we bring everything together in a pipeline: server/bot-gemini.py Copy Ask AI pipeline = Pipeline([ transport.input(), # Receive media rtvi, # Client UI events context_aggregator.user(), # Process user context llm, # Gemini processing ta, # Animation (talking/quiet states) transport.output(), # Send media context_aggregator.assistant() # Process bot context ]) task = PipelineTask( pipeline, params = PipelineParams( allow_interruptions = True , enable_metrics = True , enable_usage_metrics = True , ), observers = [RTVIObserver(rtvi)], ) The order of processors is crucial! For example, the RTVI processor should be early in the pipeline to capture all relevant events. The RTVIObserver monitors the entire pipeline and automatically collects relevant events to send to the client. Client Implementation Our React client uses the Pipecat React SDK to communicate with the bot. Let’s explore how the client connects and interacts with our Pipecat server. Connection Setup The client needs to connect to our bot server using the same transport type (Daily WebRTC) that we configured on the server: examples/react/src/providers/PipecatProvider.tsx Copy Ask AI const client = new PipecatClient ({ transport: new DailyTransport (), enableMic: true , // Enable audio input enableCam: false , // Disable video input enableScreenShare: false , // Disable screen sharing }); client . 
connect ({ endpoint: "http://localhost:7860/connect" , // Your bot connection endpoint }); The connection configuration must match your server: DailyTransport : Matches the WebRTC transport used in bot-gemini.py connect endpoint: Matches the /connect route in server.py Media settings: Controls which devices are enabled on join Media Handling Pipecat’s React components handle all the complex media stream management for you: Copy Ask AI function App () { return ( < PipecatClientProvider client = { client } > < div className = "app" > < PipecatClientVideo participant = "bot" /> { /* Bot's video feed */ } < PipecatClientAudio /> { /* Audio input/output */ } </ div > </ PipecatClientProvider > ); } The PipecatClientProvider is the root component for providing Pipecat client context to your application. By wrapping your PipecatClientAudio and PipecatClientVideo components in this provider, they can access the client instance and receive and process the streams received from the Pipecat server. Real-time Events The RTVI processors we configured in the pipeline emit events that we can handle in our client: Copy Ask AI // Listen for transcription events useRTVIClientEvent ( RTVIEvent . UserTranscript , ( data : TranscriptData ) => { if ( data . final ) { console . log ( `User said: ${ data . text } ` ); } }); // Listen for bot responses useRTVIClientEvent ( RTVIEvent . BotTranscript , ( data : BotLLMTextData ) => { console . log ( `Bot responded: ${ data . text } ` ); }); Available Events Speaking state changes Transcription updates Bot responses Connection status Performance metrics Event Usage Use these events to: Show speaking indicators Display transcripts Update UI state Monitor performance Optionally, uses callbacks to handle events in your application. Learn more in the Pipecat client docs. 
Complete Example Here’s a basic client implementation with connection status and transcription display: Copy Ask AI function ChatApp () { return ( < PipecatClientProvider client = { client } > < div className = "app" > { /* Connection UI */ } < StatusDisplay /> < ConnectButton /> { /* Media Components */ } < BotVideo /> < PipecatClientAudio /> { /* Debug/Transcript Display */ } < DebugDisplay /> </ div > </ PipecatClientProvider > ); } Check out the example repository for a complete client implementation with styling and error handling. Running the Application From the simple-chatbot directory, start the server and client to test the chatbot: 1. Start the Server In one terminal: Copy Ask AI python server/server.py 2. Start the Client In another terminal: Copy Ask AI cd examples/react npm install npm run dev 3. Testing the Connection Open http://localhost:5173 in your browser Click “Connect” to join a room Allow microphone access when prompted Start talking with your AI assistant Troubleshooting: Check that all API keys are properly configured in .env Grant your browser access to your microphone, so it can receive your audio input Verify WebRTC ports aren’t blocked by firewalls Next Steps Now that you have a working chatbot, consider these enhancements: Add custom avatar animations Implement function calling for external integrations Add support for multiple languages Enhance error recovery and reconnection logic Examples Foundational Example A basic implementation demonstrating core Gemini Multimodal Live features and transcription capabilities Simple Chatbot A complete client/server implementation showing how to build a Pipecat JS or React client that connects to a Gemini Live Pipecat bot Learn More Gemini Multimodal Live API Reference React Client SDK Documentation Recording Transcripts Metrics On this page What We’ll Build Key Concepts Understanding Pipelines Processors Gemini Integration Prerequisites Server Implementation Environment Setup Server Setup 
(server.py) Creating Meeting Room Managing Bot Instances Connection Endpoints Bot Implementation (bot-gemini.py) Transport Setup Gemini Service Configuration Conversation Context Processor Setup Pipeline Assembly Client Implementation Connection Setup Media Handling Real-time Events Complete Example Running the Application 1. Start the Server 2. Start the Client 3. Testing the Connection Next Steps Examples Learn More Assistant Responses are generated using AI and may contain mistakes.
|
features_gemini-multimodal-live_daa6fbbb.txt
ADDED
|
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
URL: https://docs.pipecat.ai/guides/features/gemini-multimodal-live#pipeline-assembly
|
| 2 |
+
Title: Building with Gemini Multimodal Live - Pipecat
|
| 3 |
+
==================================================
|
| 4 |
+
|
| 5 |
+
Building with Gemini Multimodal Live - Pipecat Pipecat home page Search... ⌘ K Ask AI Search... Navigation Features Building with Gemini Multimodal Live Getting Started Guides Server APIs Client SDKs Community GitHub Examples Changelog Guides Fundamentals Context Management Custom FrameProcessor Detecting Idle Users Ending a Pipeline Function Calling Muting User Input Recording Audio Recording Transcripts Features Gemini Multimodal Live Metrics Noise cancellation with Krisp OpenAI Audio Models and APIs Pipecat Flows Telephony Overview Dial-in: WebRTC (Daily) Dial-in: WebRTC (Twilio + Daily) Dial-in: Twilio (Media Streams) Dialout: WebRTC (Daily) Deploying your bot Overview Deployment pattern Example: Pipecat Cloud Example: Fly.io Example: Cerebrium Example: Modal This guide will walk you through building a real-time AI chatbot using Gemini Multimodal Live and Pipecat. We’ll create a complete application with a Pipecat server and a Pipecat React client that enables natural conversations with an AI assistant. API Reference Gemini Multimodal Live API documentation Example Code Find the complete client and server code in Github Client SDK Pipecat React SDK documentation What We’ll Build In this guide, you’ll create: A FastAPI server that manages bot instances A Gemini-powered conversational AI bot A React client with real-time audio/video A complete pipeline for speech-to-speech interaction Key Concepts Before we dive into implementation, let’s cover some important concepts that will help you understand how Pipecat and Gemini work together. Understanding Pipelines At the heart of Pipecat is the pipeline system. A pipeline is a sequence of processors that handle different aspects of the conversation flow. Think of it like an assembly line where each station (processor) performs a specific task. 
For our chatbot, the pipeline looks like this: Copy Ask AI pipeline = Pipeline([ transport.input(), # Receives audio/video from the user via WebRTC rtvi, # Handles client/server messaging and events context_aggregator.user(), # Manages user message history llm, # Processes speech through Gemini talking_animation, # Controls bot's avatar transport.output(), # Sends audio/video back to the user via WebRTC context_aggregator.assistant(), # Manages bot message history ]) Processors Each processor in the pipeline handles a specific task: Transport transport.input() and transport.output() handle media streaming with Daily Context context_aggregator maintains conversation history for natural dialogue Speech Processing rtvi_user_transcription and rtvi_bot_transcription handle speech-to-text Animation talking_animation controls the bot’s visual state based on speaking activity The order of processors matters! Data flows through the pipeline in sequence, so each processor should receive the data it needs from previous processors. Learn more about the Core Concepts to Pipecat server. Gemini Integration The GeminiMultimodalLiveLLMService is a speech-to-speech LLM service that interfaces with the Gemini Multimodal Live API. It provides: Real-time speech-to-speech conversation Context management Voice activity detection Tool use Pipecat manages two types of connections: A WebRTC connection between the Pipecat client and server for reliable audio/video streaming A WebSocket connection between the Pipecat server and Gemini for real-time AI processing This architecture ensures stable media streaming while maintaining responsive AI interactions. Prerequisites Before we begin, you’ll need: Python 3.10 or higher Node.js 16 or higher A Daily API key A Google API key with Gemini Multimodal Live access Clone the Pipecat repo: Copy Ask AI git clone git@github.com:pipecat-ai/pipecat.git Server Implementation Let’s start by setting up the server components. 
Our server will handle bot management, room creation, and client connections.

Environment Setup

Navigate to the simple-chatbot's server directory:

    cd examples/simple-chatbot/server

Set up a Python virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate

Install requirements:

    pip install -r requirements.txt

Copy env.example to .env and make a few changes:

    # Remove the hard-coded example room URL
    DAILY_SAMPLE_ROOM_URL=
    # Add your Daily and Gemini API keys
    DAILY_API_KEY=[your key here]
    GEMINI_API_KEY=[your key here]
    # Use the Gemini implementation
    BOT_IMPLEMENTATION=gemini

Server Setup (server.py)

server.py is a FastAPI server that creates the meeting room where clients and bots interact, manages bot instances, and handles client connections. It's the orchestrator that brings everything on the server side together.

Creating the Meeting Room

The server uses Daily's API via a REST API helper to create rooms where clients and bots can meet. Each room is a secure space for audio/video communication:

    # server/server.py
    async def create_room_and_token():
        """Create a Daily room and generate access credentials."""
        room = await daily_helpers["rest"].create_room(DailyRoomParams())
        token = await daily_helpers["rest"].get_token(room.url)
        return room.url, token

Managing Bot Instances

When a client connects, the server starts a new bot instance configured specifically for that room. It keeps track of running bots and ensures there's only one bot per room:

    # server/server.py
    # Start the bot process for a specific room
    bot_file = "bot-gemini.py"
    proc = subprocess.Popen([f"python3 -m {bot_file} -u {room_url} -t {token}"])
    bot_procs[proc.pid] = (proc, room_url)

Connection Endpoints

The server provides two ways to connect:

- Browser Access (/): Creates a room, starts a bot, and redirects the browser to the Daily meeting URL.
Perfect for quick testing and development.
- RTVI Client (/connect): Creates a room, starts a bot, and returns connection credentials. Used by RTVI clients for custom implementations.

Bot Implementation (bot-gemini.py)

The bot implementation connects all the pieces: Daily transport, Gemini service, conversation context, and processors. Let's break down each component.

Transport Setup

First, we configure the Daily transport, which handles WebRTC communication between the client and server:

    # server/bot-gemini.py
    transport = DailyTransport(
        room_url,
        token,
        "Chatbot",
        DailyParams(
            audio_in_enabled=True,   # Enable audio input
            audio_out_enabled=True,  # Enable audio output
            video_out_enabled=True,  # Enable video output
            vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.5)),
        ),
    )

Gemini Multimodal Live audio requirements:

- Input: 16 kHz sample rate
- Output: 24 kHz sample rate

Gemini Service Configuration

Next, we initialize the Gemini service, which provides speech-to-speech inference and communication:

    # server/bot-gemini.py
    llm = GeminiMultimodalLiveLLMService(
        api_key=os.getenv("GEMINI_API_KEY"),
        voice_id="Puck",                     # Choose your bot's voice
        params=InputParams(temperature=0.7)  # Set model input params
    )

Conversation Context

We give our bot its personality and initial instructions:

    # server/bot-gemini.py
    messages = [{
        "role": "user",
        "content": """You are Chatbot, a friendly, helpful robot.
    Keep responses brief and avoid special characters since output
    will be converted to audio.""",
    }]
    context = OpenAILLMContext(messages)
    context_aggregator = llm.create_context_aggregator(context)

OpenAILLMContext is used as a common LLM base service for context management. In the future, we may add a Gemini-specific context manager. The context aggregator automatically maintains conversation history, helping the bot remember previous interactions.
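Conceptually, the aggregator appends each completed turn to a shared message list, so every later inference call sees the full history. Here is a toy model of that idea (a sketch for intuition only, not the actual OpenAILLMContext implementation):

```python
class ToyContext:
    """Toy stand-in for an LLM context: an ordered list of messages."""

    def __init__(self, messages=None):
        self.messages = list(messages or [])

    def add(self, role, content):
        # Each turn is appended, so the model always sees prior turns.
        self.messages.append({"role": role, "content": content})

ctx = ToyContext([{"role": "user", "content": "You are Chatbot, a friendly robot."}])
ctx.add("user", "What can you do?")           # aggregated from user speech
ctx.add("assistant", "I can chat with you!")  # aggregated from bot output
print(len(ctx.messages))  # -> 3
```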
Processor Setup

We initialize two additional processors in our pipeline to handle different aspects of the interaction:

- RTVIProcessor: Handles all client communication events, including transcriptions, speaking states, and performance metrics
- TalkingAnimation: Controls the bot's visual state, switching between static and animated frames based on speaking status

Learn more about the RTVI framework and available processors.

Pipeline Assembly

Finally, we bring everything together in a pipeline:

    # server/bot-gemini.py
    pipeline = Pipeline([
        transport.input(),              # Receive media
        rtvi,                           # Client UI events
        context_aggregator.user(),      # Process user context
        llm,                            # Gemini processing
        ta,                             # Animation (talking/quiet states)
        transport.output(),             # Send media
        context_aggregator.assistant(), # Process bot context
    ])

    task = PipelineTask(
        pipeline,
        params=PipelineParams(
            allow_interruptions=True,
            enable_metrics=True,
            enable_usage_metrics=True,
        ),
        observers=[RTVIObserver(rtvi)],
    )

The order of processors is crucial! For example, the RTVI processor should come early in the pipeline so it captures all relevant events. The RTVIObserver monitors the entire pipeline and automatically collects relevant events to send to the client.

Client Implementation

Our React client uses the Pipecat React SDK to communicate with the bot. Let's explore how the client connects and interacts with our Pipecat server.

Connection Setup

The client needs to connect to our bot server using the same transport type (Daily WebRTC) that we configured on the server:

    // examples/react/src/providers/PipecatProvider.tsx
    const client = new PipecatClient({
      transport: new DailyTransport(),
      enableMic: true,          // Enable audio input
      enableCam: false,         // Disable video input
      enableScreenShare: false, // Disable screen sharing
    });

    client.
connect({
      endpoint: "http://localhost:7860/connect", // Your bot connection endpoint
    });

The connection configuration must match your server:

- DailyTransport: Matches the WebRTC transport used in bot-gemini.py
- connect endpoint: Matches the /connect route in server.py
- Media settings: Control which devices are enabled on join

Media Handling

Pipecat's React components handle all the complex media stream management for you:

    function App() {
      return (
        <PipecatClientProvider client={client}>
          <div className="app">
            <PipecatClientVideo participant="bot" /> {/* Bot's video feed */}
            <PipecatClientAudio />                   {/* Audio input/output */}
          </div>
        </PipecatClientProvider>
      );
    }

The PipecatClientProvider is the root component for providing Pipecat client context to your application. By wrapping your PipecatClientAudio and PipecatClientVideo components in this provider, they can access the client instance and process the streams received from the Pipecat server.

Real-time Events

The RTVI processors we configured in the pipeline emit events that we can handle in our client:

    // Listen for transcription events
    useRTVIClientEvent(RTVIEvent.UserTranscript, (data: TranscriptData) => {
      if (data.final) {
        console.log(`User said: ${data.text}`);
      }
    });

    // Listen for bot responses
    useRTVIClientEvent(RTVIEvent.BotTranscript, (data: BotLLMTextData) => {
      console.log(`Bot responded: ${data.text}`);
    });

Available events:

- Speaking state changes
- Transcription updates
- Bot responses
- Connection status
- Performance metrics

Use these events to:

- Show speaking indicators
- Display transcripts
- Update UI state
- Monitor performance

Optionally, use callbacks to handle events in your application. Learn more in the Pipecat client docs.
Complete Example

Here's a basic client implementation with connection status and transcription display:

    function ChatApp() {
      return (
        <PipecatClientProvider client={client}>
          <div className="app">
            {/* Connection UI */}
            <StatusDisplay />
            <ConnectButton />
            {/* Media Components */}
            <BotVideo />
            <PipecatClientAudio />
            {/* Debug/Transcript Display */}
            <DebugDisplay />
          </div>
        </PipecatClientProvider>
      );
    }

Check out the example repository for a complete client implementation with styling and error handling.

Running the Application

From the simple-chatbot directory, start the server and client to test the chatbot:

1. Start the Server

In one terminal:

    python server/server.py

2. Start the Client

In another terminal:

    cd examples/react
    npm install
    npm run dev

3. Testing the Connection

- Open http://localhost:5173 in your browser
- Click "Connect" to join a room
- Allow microphone access when prompted
- Start talking with your AI assistant

Troubleshooting:

- Check that all API keys are properly configured in .env
- Grant your browser access to your microphone so it can capture your audio input
- Verify WebRTC ports aren't blocked by firewalls

Next Steps

Now that you have a working chatbot, consider these enhancements:

- Add custom avatar animations
- Implement function calling for external integrations
- Add support for multiple languages
- Enhance error recovery and reconnection logic

Examples

- Foundational Example: A basic implementation demonstrating core Gemini Multimodal Live features and transcription capabilities
- Simple Chatbot: A complete client/server implementation showing how to build a Pipecat JS or React client that connects to a Gemini Live Pipecat bot

Learn More

- Gemini Multimodal Live API Reference
- React Client SDK Documentation
- Recording Transcripts
- Metrics