The SDK / API deployment exposes your assistant through the Rapida React SDK and REST API. Unlike the Web Widget (which provides a pre-built UI), this deployment gives you full control over the user interface while Rapida handles the audio pipeline, LLM orchestration, and conversation state behind the scenes.
Voice capabilities (microphone input and spoken responses) are optional. You can build a text-only integration by skipping the Voice Input and Voice Output steps during configuration.

Creating an SDK / API Deployment

Navigate to your assistant, click Configure Assistant, then select Deployments from the sidebar. Click Add Deployment and choose SDK / API. The SDK deployment wizard walks you through three steps:
Step 1: General Experience

Define how the assistant greets users and handles session lifecycle.

Required fields:
  • Greeting — Opening message sent when a session starts. Supports {{variable}} syntax for dynamic content passed as query parameters
Advanced settings (expand to configure):
  • Error Message — Fallback message sent when an unexpected error occurs
  • Idle Silence Timeout — Duration of user silence before Rapida sends a prompt (15-120 seconds, default: 30s)
  • Idle Timeout Backoff — How many times the idle timeout multiplies before ending the session (0-5, default: 2)
  • Idle Message — Message sent when the user hasn’t responded (default: “Are you there?”)
  • Maximum Session Duration — Hard limit before the session is automatically ended (180-600 seconds, default: 300s)
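The {{variable}} syntax in the greeting resolves against values supplied at session start. Purely as an illustration of that substitution (the real resolution happens inside Rapida, not in your app), a minimal interpolation helper might look like this:

```typescript
// Illustrative only: shows how {{variable}} placeholders in a greeting
// could resolve against supplied values. The actual substitution is
// performed server-side by Rapida.
function renderGreeting(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? vars[name] : match // leave unknown placeholders untouched
  );
}

const greeting = renderGreeting("Hi {{name}}, welcome back!", { name: "John" });
// greeting === "Hi John, welcome back!"
```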
Step 2: Voice Input (Speech-to-Text) — Optional

Enable microphone-based voice input. When configured, the SDK streams browser audio to Rapida for real-time transcription.

If enabled:
  • STT Provider — Deepgram, AssemblyAI, Google, Azure, OpenAI Whisper, AWS Transcribe, Cartesia, Rev.ai, Speechmatics, Sarvam, Groq, or Nvidia
  • Model — Provider-specific transcription model
  • Language — Primary transcription language
  • Encoding — Audio encoding format
  • Sample Rate — Audio sample rate
Advanced settings (expand to configure):
  • Voice Activity Detection (VAD) — Silero VAD with configurable threshold (0.0-1.0, default: 0.8)
  • Background Noise Removal — RNNoise for ambient noise removal
  • End of Speech Detection — Silence-based EOS with configurable timeout (default: 1000ms)
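To see what the VAD threshold controls: Silero VAD emits a speech probability for each audio frame, and frames below the threshold are treated as silence. The sketch below illustrates that gating under this assumption; it is not Rapida's pipeline code:

```typescript
// Illustration of VAD threshold gating (not Rapida's actual pipeline):
// given per-frame speech probabilities from a VAD model, mark only
// frames whose probability meets the configured threshold as speech.
function gateFrames(speechProbs: number[], threshold = 0.8): boolean[] {
  return speechProbs.map((p) => p >= threshold);
}

// With the default 0.8 threshold, only clearly voiced frames pass.
const flags = gateFrames([0.95, 0.4, 0.82, 0.1]);
// flags === [true, false, true, false]
```

Raising the threshold makes detection stricter (fewer false triggers from background noise); lowering it makes the assistant more sensitive to quiet speech.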
Click Skip to deploy without voice input. Your app can still send text messages via the SDK. You can enable voice input later by editing the deployment.
Step 3: Voice Output (Text-to-Speech) — Optional

Enable spoken audio responses. When configured, the SDK receives audio streams from Rapida and plays them through the browser.

If enabled:
  • TTS Provider — ElevenLabs, Deepgram, Azure, Google, OpenAI, AWS Polly, Cartesia, Resemble, Rime, Sarvam, Neuphonic, MiniMax, Groq, Speechmatics, or Nvidia
  • Model — Provider-specific voice model
  • Language — Output speech language
  • Voice ID — The specific voice from your TTS provider
Advanced settings (expand to configure):
  • Pronunciation Dictionaries — Custom pronunciation for domain-specific terms
  • Conjunction Boundaries — Natural pause points for more human-like speech
  • Pause Duration — Length of pause at conjunction boundaries (100-300ms, default: 240ms)
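Conjunction boundaries insert brief pauses at natural joining words so synthesized speech sounds less rushed. As a rough illustration of the idea only (the actual boundary detection is handled provider-side, and this naive word match is an assumption, not the real algorithm):

```typescript
// Naive illustration of conjunction-boundary pauses (not the real
// detection logic): find common conjunctions in a sentence and attach
// the configured pause duration (default 240 ms) to each.
const CONJUNCTIONS = new Set(["and", "but", "or", "so"]);

function pausePoints(
  text: string,
  pauseMs = 240
): { word: string; pauseMs: number }[] {
  return text
    .split(/\s+/)
    .filter((w) => CONJUNCTIONS.has(w.toLowerCase()))
    .map((word) => ({ word, pauseMs }));
}

pausePoints("Check your balance and recent transfers, or ask me anything");
// → a 240 ms pause at "and" and at "or"
```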
Click Deploy API to save and activate the deployment, or click Skip to deploy without voice output.

Integration Methods

After deployment, you can integrate via the React SDK or a public URL.

React SDK

Install the Rapida React SDK:
npm install @rapidaai/react
# or
yarn add @rapidaai/react
The SDK provides hooks and components for managing sessions, streaming audio, and receiving transcripts. Audio streams over WebRTC directly between the browser and Rapida — no relay servers or iframes required.

Public URL

Every SDK / API deployment generates a public URL you can share or embed in an iframe:
https://app.rapida.ai/preview/public/assistant/{ASSISTANT_ID}?token={PROJECT_CREDENTIAL_KEY}
Pass agent arguments as query parameters:
?token={KEY}&name=John&account_id=12345
These variables are available in your assistant’s greeting and prompt via the {{variable}} syntax.
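The full URL can be assembled with standard URL utilities; in the sketch below, the assistant ID, token, and argument values are placeholders, not real credentials:

```typescript
// Build the public deployment URL with agent arguments as query
// parameters. The assistant ID and token below are placeholders.
function buildPublicUrl(
  assistantId: string,
  token: string,
  args: Record<string, string> = {}
): string {
  const url = new URL(`https://app.rapida.ai/preview/public/assistant/${assistantId}`);
  url.searchParams.set("token", token);
  for (const [key, value] of Object.entries(args)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

buildPublicUrl("asst_123", "key_abc", { name: "John", account_id: "12345" });
// → "https://app.rapida.ai/preview/public/assistant/asst_123?token=key_abc&name=John&account_id=12345"
```

Using URLSearchParams (rather than string concatenation) ensures argument values are properly percent-encoded.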

Input and Output Modes

Voice Input | Voice Output | Integration Style
Disabled    | Disabled     | Text-only chat via SDK
Enabled     | Disabled     | Users speak, assistant replies with text
Disabled    | Enabled      | Users type, assistant replies with voice + text
Enabled     | Enabled      | Full voice conversation with real-time transcripts

Use Cases

Custom Voice Interface

Build a fully branded voice experience inside your own React application with complete UI control.

In-App Support

Add voice-powered support directly inside your SaaS product without redirecting users.

Voice Search

Implement voice-activated search for content-rich applications.

Accessibility

Enhance web accessibility with voice navigation for visually impaired users.