
Choose your setup

Rapida supports two deployment modes. Pick the one that fits your use case before you start.
Not sure which to pick? Start with Voice Only (make up-all). You can add the knowledge base at any time later — just run make up-all-with-knowledge. No existing data is lost.

What is the knowledge base?

The knowledge base lets your voice assistant answer questions grounded in your own documents. When a caller asks something, the assistant searches your uploaded content and injects the most relevant passages into the LLM prompt before generating a response. This is called Retrieval-Augmented Generation (RAG).
| Feature | Without knowledge base | With knowledge base |
| --- | --- | --- |
| Voice calls (STT → LLM → TTS) | ✓ | ✓ |
| Inbound / outbound telephony | ✓ | ✓ |
| Webhooks & callbacks | ✓ | ✓ |
| LLM provider integrations | ✓ | ✓ |
| Upload documents (PDF, DOCX, CSV…) | — | ✓ |
| Answer questions from your documents | — | ✓ |
| Semantic / vector search | — | ✓ |
| OpenSearch required | — | ✓ |
| RAM requirement | ~4 GB | ~8–16 GB |

What you are deploying

Rapida is a microservices platform. Self-hosting means running all services on your own infrastructure — no data leaves your environment.
| Service | Port | Role |
| --- | --- | --- |
| nginx | 8080 | Reverse proxy, WebSocket upgrade, SSL/TLS termination |
| web-api | 9001 | Auth, organizations, credential vault, gRPC proxy |
| assistant-api | 9007 · 4573 | Voice orchestration, STT/LLM/TTS pipeline, telephony |
| integration-api | 9004 | LLM, STT, TTS provider integrations |
| endpoint-api | 9005 | Webhook delivery and event routing |
| document-api | 9010 | Document ingestion, embeddings, RAG search (optional — knowledge base only) |
| ui | 3000 | React dashboard |
| postgres | 5432 | Relational data (4 databases) |
| redis | 6379 | Cache, sessions, job queue |
| opensearch | 9200 | Conversation search, document indexing (optional — knowledge base only) |
The recommended path for self-hosting is Docker Compose. All services, infrastructure, and Nginx are wired together in docker-compose.yml. No manual network or volume configuration is needed.

Prerequisites

| Requirement | Version | Notes |
| --- | --- | --- |
| Docker Engine | 20.10+ | Install Docker |
| Docker Compose | v2.0+ | Included with Docker Desktop |
| RAM | 4 GB minimum | 8 GB recommended; 16 GB if running with knowledge base (OpenSearch is memory-intensive) |
| Disk | 10 GB free | For images, volumes, and uploaded assets |
| OS | macOS · Linux · Windows WSL2 | |
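Before building, you can sanity-check the toolchain with a short preflight script. This is a sketch, not part of the Makefile; only the version minimums above come from this guide:

```shell
#!/bin/sh
# Preflight: confirm the Docker toolchain is available before building.
preflight() {
  if ! command -v docker >/dev/null 2>&1; then
    echo "missing: docker"
    return 1
  fi
  docker --version          # should report Docker Engine 20.10+
  docker compose version    # should report Compose v2.0+
}
preflight || true           # report problems without aborting the shell
```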

Quickstart

1. Clone the repository

git clone https://github.com/rapidaai/voice-ai.git
cd voice-ai
2. Create data directories

make setup-local
Creates ~/rapida-data/assets/{db,redis,opensearch} with the correct ownership for Docker volume mounts. The opensearch directory is only used when running with the knowledge base profile.
macOS users: make setup-local uses setfacl, which is a Linux utility. If the command fails, create the directories manually:
mkdir -p ~/rapida-data/assets/opensearch ~/rapida-data/assets/db ~/rapida-data/assets/redis
sudo chown -R 1000:1000 ~/rapida-data/assets/opensearch
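To confirm the mount will be writable by the container, you can check the directory's numeric ownership with a small helper. This is a sketch; the 1000:1000 expectation comes from the chown command above:

```shell
#!/bin/sh
# Print numeric owner:group for a data directory, or flag it as missing.
check_owner() {
  dir="$1"
  if [ -d "$dir" ]; then
    # ls -n shows numeric UID/GID; OpenSearch expects 1000:1000
    ls -ldn "$dir" | awk '{print $3 ":" $4}'
  else
    echo "missing: $dir"
  fi
}
check_owner "$HOME/rapida-data/assets/opensearch"
```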
3. Configure environment files

Each service reads from an env file in docker/<service>/. The defaults work for a local deployment with no external providers. You do not need any API keys to start the platform.
docker/web-api/.web.env
docker/assistant-api/.assistant.env
docker/integration-api/.integration.env
docker/endpoint-api/.endpoint.env
docker/document-api/config.yaml
Before going to production, change SECRET=rpd_pks to a strong random value in every env file. All services share the same SECRET for JWT signing. Generate one with openssl rand -hex 32.
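As one way to do this, the snippet below generates a fresh secret and writes it into the four env files listed above (document-api uses config.yaml and is not covered here). This is a sketch: run it from the repo root, and note the sed pattern assumes each file contains a line of the form SECRET=...:

```shell
#!/bin/sh
# Generate a 64-character hex secret and apply it to every env file.
SECRET=$(openssl rand -hex 32)
for f in docker/web-api/.web.env \
         docker/assistant-api/.assistant.env \
         docker/integration-api/.integration.env \
         docker/endpoint-api/.endpoint.env; do
  [ -f "$f" ] || continue   # skip silently when run outside the repo root
  # Portable in-place edit (sed -i flags differ between GNU and BSD sed)
  sed "s/^SECRET=.*/SECRET=$SECRET/" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
echo "New SECRET: $SECRET"
```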
4. Build all service images

# Without knowledge base (recommended for most users)
make build-all

# With knowledge base (includes document-api and opensearch)
make build-all-with-knowledge
Builds Docker images for all Go services. Initial build takes 5–10 minutes depending on your network.
5. Start all services

# Without knowledge base — voice assistants only
make up-all

# With knowledge base — includes document-api and opensearch
make up-all-with-knowledge
The default make up-all starts all core services (web-api, assistant-api, integration-api, endpoint-api, UI, PostgreSQL, Redis, Nginx). document-api and opensearch are optional and only started with the with-knowledge variant. You can always add knowledge base support later by running make up-all-with-knowledge.
6. Verify all services are healthy

make status
All containers should show Up or Up (healthy). If any container is in Exited state, check its logs:
make logs-all
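To narrow the search before tailing every log, you can list only the containers that exited. This is a sketch, not a Makefile target; it assumes the Docker CLI is on your PATH:

```shell
#!/bin/sh
# List exited containers with their names and exit status.
check_exited() {
  if command -v docker >/dev/null 2>&1; then
    docker ps -a --filter "status=exited" --format '{{.Names}}\t{{.Status}}'
  else
    echo "docker CLI not found"
  fi
}
check_exited || true
```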
7. Open the dashboard

Navigate to http://localhost:3000 in your browser. You should see the Rapida login page. The Nginx gateway is accessible at http://localhost:8080.

Service Endpoints

| Service | URL | Health Check | Required |
| --- | --- | --- | --- |
| Dashboard | http://localhost:3000 | — | Yes |
| Nginx Gateway | http://localhost:8080 | — | Yes |
| Web API | http://localhost:9001 | GET /readiness/ | Yes |
| Assistant API | http://localhost:9007 | GET /readiness/ | Yes |
| Integration API | http://localhost:9004 | GET /readiness/ | Yes |
| Endpoint API | http://localhost:9005 | GET /readiness/ | Yes |
| Document API | http://localhost:9010 | GET /readiness/ | Optional (knowledge base only) |
| OpenSearch | http://localhost:9200 | GET /_cluster/health | Optional (knowledge base only) |
# Verify a specific service
curl http://localhost:9001/readiness/
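The same check can be run as a loop over all core readiness endpoints. This is a sketch using the ports from the table above; curl is assumed to be installed:

```shell
#!/bin/sh
# Probe each core API's readiness endpoint and print a per-service verdict.
probe_all() {
  for pair in web-api:9001 assistant-api:9007 integration-api:9004 endpoint-api:9005; do
    svc=${pair%%:*}     # text before the colon: service name
    port=${pair##*:}    # text after the colon: port number
    if curl -fsS "http://localhost:$port/readiness/" >/dev/null 2>&1; then
      echo "$svc: ready"
    else
      echo "$svc: not responding on port $port"
    fi
  done
}
probe_all
```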

Working with Individual Services

Start / stop specific services

# Infrastructure only (postgres, redis, opensearch)
make deps

# Individual services
make up-web
make up-assistant
make up-integration
make up-endpoint
make up-document
make up-ui

Logs

make logs-all           # All services
make logs-web           # web-api only
make logs-assistant     # assistant-api only
make logs-integration   # integration-api only
make logs-endpoint      # endpoint-api only
make logs-document      # document-api only

Rebuild after code changes

make rebuild-web        # Rebuild web-api (no cache)
make rebuild-assistant  # Rebuild assistant-api (no cache)
make rebuild-all        # Rebuild all services (no cache)

Shell access

make shell-web          # Shell into web-api container
make shell-assistant    # Shell into assistant-api container
make shell-db           # psql shell → PostgreSQL (rapida_user / web_db)

Connecting Your First Provider

The platform runs without any external provider keys. To place a voice call, you need to add at least one LLM, one STT, and one TTS provider credential through the dashboard.
1. Open the dashboard

Navigate to http://localhost:3000 and create an account.
2. Create an organization and project

Every resource in Rapida is scoped to an organization and project.
3. Add provider credentials

Go to Settings → Integrations and add API keys for your LLM (e.g., OpenAI), STT (e.g., Deepgram), and TTS (e.g., ElevenLabs) providers. Credentials are encrypted with AES-256-GCM before storage. See Integration API for details.
4. Create an assistant

Go to Assistants → New Assistant and configure the system prompt, LLM, STT, TTS, and (optionally) a knowledge base.
5. Test the call

Use the built-in call tester in the dashboard or connect via the rapida-react SDK.

Make Command Reference

| Command | Description |
| --- | --- |
| make setup-local | Create data directories with correct permissions |
| make build-all | Build all Docker images (without knowledge base) |
| make build-all-with-knowledge | Build all Docker images including document-api |
| make rebuild-all | Rebuild all images (no cache, without knowledge base) |
| make rebuild-all-with-knowledge | Rebuild all images including document-api (no cache) |
| make rebuild-<service> | Rebuild a single service (no cache) |
| make up-all | Start all services (without knowledge base) |
| make up-all-with-knowledge | Start all services including document-api and opensearch |
| make down-all | Stop all services |
| make restart-all | Restart all services |
| make deps | Start infrastructure only (postgres, redis) |
| make status | Show container status and port mappings |
| make logs-all | Tail all service logs |
| make logs-<service> | Tail logs for a specific service |
| make shell-<service> | Open a shell inside a service container |
| make shell-db | Open psql shell in the PostgreSQL container |
| make clean | Remove all containers, volumes, and images |
| make ps-all | List container status |

Stopping and Resetting

# Stop all services (preserve volumes)
make down-all

# Full reset — removes containers, volumes, and images
make clean
make clean deletes all Docker volumes, including the PostgreSQL data directory and (if used) the OpenSearch data directory. All database content and indexed documents will be lost.
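If you want a safety net first, one option is to dump the main database before running make clean. This is a sketch under stated assumptions: the compose service is named postgres, and the user/database names follow the make shell-db target above (rapida_user / web_db):

```shell
#!/bin/sh
# Dump the web database to a local SQL file before a destructive reset.
backup_db() {
  if ! command -v docker >/dev/null 2>&1; then
    echo "docker CLI not found"
    return 1
  fi
  # -T disables TTY allocation so the dump can be redirected to a file
  docker compose exec -T postgres pg_dump -U rapida_user web_db > backup-web_db.sql
}
backup_db || true
```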

Next Steps

Architecture

System topology, service communication, and data flow diagrams.

Configuration Reference

Complete environment variable reference for all services.

Services Overview

Per-service documentation — components, routing, configuration.

Troubleshooting

Common issues and solutions for Docker and local setup.