Choose your setup
Rapida supports two deployment modes. Pick the one that fits your use case before you start.

- Voice Assistant Only (Recommended)
- Voice Assistant + Knowledge Base
Voice Assistant Only (Recommended)

Best for most users. Build and run AI voice assistants — inbound/outbound calls, real-time STT/LLM/TTS, webhooks, and telephony integrations. No OpenSearch or Python services required. Lower RAM, simpler setup, faster to get started.

Services started: web-api · assistant-api · integration-api · endpoint-api · UI · PostgreSQL · Redis · Nginx
Services skipped: document-api · OpenSearch
RAM needed: 4 GB minimum

That’s it — open http://localhost:3000 and create your first assistant.
Not sure which to pick? Start with Voice Only (`make up-all`). You can add the knowledge base at any time later — just run `make up-all-with-knowledge`. No existing data is lost.

What is the knowledge base?
The knowledge base lets your voice assistant answer questions grounded in your own documents. When a caller asks something, the assistant searches your uploaded content and injects the most relevant passages into the LLM prompt before generating a response. This is called Retrieval-Augmented Generation (RAG).

| Feature | Without knowledge base | With knowledge base |
|---|---|---|
| Voice calls (STT → LLM → TTS) | ✅ | ✅ |
| Inbound / outbound telephony | ✅ | ✅ |
| Webhooks & callbacks | ✅ | ✅ |
| LLM provider integrations | ✅ | ✅ |
| Upload documents (PDF, DOCX, CSV…) | ✗ | ✅ |
| Answer questions from your documents | ✗ | ✅ |
| Semantic / vector search | ✗ | ✅ |
| OpenSearch required | ✗ | ✅ |
| RAM requirement | ~4 GB | ~8–16 GB |
What you are deploying
Rapida is a microservices platform. Self-hosting means running all services on your own infrastructure — no data leaves your environment.

| Service | Port | Role |
|---|---|---|
| nginx | 8080 | Reverse proxy, WebSocket upgrade, SSL/TLS termination |
| web-api | 9001 | Auth, organizations, credential vault, gRPC proxy |
| assistant-api | 9007 · 4573 | Voice orchestration, STT/LLM/TTS pipeline, telephony |
| integration-api | 9004 | LLM, STT, TTS provider integrations |
| endpoint-api | 9005 | Webhook delivery and event routing |
| document-api | 9010 | Document ingestion, embeddings, RAG search (optional — knowledge base only) |
| ui | 3000 | React dashboard |
| postgres | 5432 | Relational data (4 databases) |
| redis | 6379 | Cache, sessions, job queue |
| opensearch | 9200 | Conversation search, document indexing (optional — knowledge base only) |
The recommended path for self-hosting is Docker Compose. All services, infrastructure, and Nginx are wired together in `docker-compose.yml`. No manual network or volume configuration is needed.

Prerequisites
| Requirement | Version | Notes |
|---|---|---|
| Docker Engine | 20.10+ | Install Docker |
| Docker Compose | v2.0+ | Included with Docker Desktop |
| RAM | 4 GB minimum | 8 GB recommended; 16 GB if running with knowledge base (OpenSearch is memory-intensive) |
| Disk | 10 GB free | For images, volumes, and uploaded assets |
| OS | macOS · Linux · Windows WSL2 | — |
Quickstart
Create data directories
Run `make setup-local` to create `~/rapida-data/assets/{db,redis,opensearch}` with the correct ownership for Docker volume mounts. The `opensearch` directory is only used when running with the knowledge base profile.

macOS users: `make setup-local` uses `setfacl`, which is a Linux utility. If the command fails, create the directories manually (for example with `mkdir -p ~/rapida-data/assets/{db,redis,opensearch}`).

Configure environment files
Each service reads from an env file in `docker/<service>/`. The defaults work for a local deployment with no external providers. You do not need any API keys to start the platform.

Build all service images
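Both build variants map to Make targets from the command reference later on this page:

```shell
# Build images for the core services (voice-only mode)
make build-all

# Or include document-api if you plan to use the knowledge base
make build-all-with-knowledge
```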
Start all services
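With the images built, bring the stack up using one of the two variants (targets from the Make command reference below):

```shell
# Core services only (voice assistant mode)
make up-all

# Or also start document-api and opensearch for the knowledge base
make up-all-with-knowledge
```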
The default `make up-all` starts all core services (web-api, assistant-api, integration-api, endpoint-api, UI, PostgreSQL, Redis, Nginx). `document-api` and `opensearch` are optional and only started with the `with-knowledge` variant. You can always add knowledge base support later by running `make up-all-with-knowledge`.

Verify all services are healthy
Run `make status`; every container should show Up or Up (healthy). If any container is in an Exited state, check its logs with `make logs-<service>`.

Open the dashboard
Navigate to http://localhost:3000 in your browser. You should see the Rapida login page. The Nginx gateway is accessible at http://localhost:8080.
Service Endpoints
| Service | URL | Health Check | Required |
|---|---|---|---|
| Dashboard | http://localhost:3000 | — | Yes |
| Nginx Gateway | http://localhost:8080 | — | Yes |
| Web API | http://localhost:9001 | GET /readiness/ | Yes |
| Assistant API | http://localhost:9007 | GET /readiness/ | Yes |
| Integration API | http://localhost:9004 | GET /readiness/ | Yes |
| Endpoint API | http://localhost:9005 | GET /readiness/ | Yes |
| Document API | http://localhost:9010 | GET /readiness/ | Optional (knowledge base only) |
| OpenSearch | http://localhost:9200 | GET /_cluster/health | Optional (knowledge base only) |
Working with Individual Services
Start / stop specific services
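The Make command reference below has no per-service start/stop target, so this sketch falls back to plain Docker Compose — service names are the ones listed in the service table above:

```shell
# Start (or restart) a single service and its dependencies
docker compose up -d assistant-api

# Stop just that service, leaving the rest of the stack running
docker compose stop assistant-api
```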
Logs
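Using the log targets from the Make command reference (substitute any service name from the table above):

```shell
# Tail logs for every service at once
make logs-all

# Tail logs for a single service, e.g. the assistant API
make logs-assistant-api
```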
Rebuild after code changes
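A typical edit-rebuild cycle, using targets from the Make command reference (web-api here is just an example service name):

```shell
# Rebuild one service image from scratch (no build cache)
make rebuild-web-api

# Restart the stack so containers pick up the new image
make restart-all
```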
Shell access
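Using the shell targets from the Make command reference (web-api is an example; `shell-db` opens psql directly):

```shell
# Open a shell inside a running service container
make shell-web-api

# Open a psql shell in the PostgreSQL container
make shell-db
```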
Connecting Your First Provider
The platform runs without any external provider keys. To place a voice call, you need to add at least one LLM, one STT, and one TTS provider credential through the dashboard.

Open the dashboard
Navigate to http://localhost:3000 and create an account.
Create an organization and project
Every resource in Rapida is scoped to an organization and project.
Add provider credentials
Go to Settings → Integrations and add API keys for your LLM (e.g., OpenAI), STT (e.g., Deepgram), and TTS (e.g., ElevenLabs) providers. Credentials are encrypted with AES-256-GCM before storage. See Integration API for details.
Create an assistant
Go to Assistants → New Assistant and configure the system prompt, LLM, STT, TTS, and (optionally) a knowledge base.
Test the call
Use the built-in call tester in the dashboard or connect via the rapida-react SDK.
Make Command Reference
| Command | Description |
|---|---|
| make setup-local | Create data directories with correct permissions |
| make build-all | Build all Docker images (without knowledge base) |
| make build-all-with-knowledge | Build all Docker images including document-api |
| make rebuild-all | Rebuild all images (no cache, without knowledge base) |
| make rebuild-all-with-knowledge | Rebuild all images including document-api (no cache) |
| make rebuild-<service> | Rebuild a single service (no cache) |
| make up-all | Start all services (without knowledge base) |
| make up-all-with-knowledge | Start all services including document-api and opensearch |
| make down-all | Stop all services |
| make restart-all | Restart all services |
| make deps | Start infrastructure only (postgres, redis) |
| make status | Show container status and port mappings |
| make logs-all | Tail all service logs |
| make logs-<service> | Tail logs for a specific service |
| make shell-<service> | Open a shell inside a service container |
| make shell-db | Open psql shell in the PostgreSQL container |
| make clean | Remove all containers, volumes, and images |
| make ps-all | List container status |
Stopping and Resetting
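The relevant targets are in the Make command reference above; note that `make clean` is destructive — it removes volumes along with containers and images:

```shell
# Stop all services
make down-all

# Full reset: remove all containers, volumes, and images
make clean
```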
Next Steps
Architecture
System topology, service communication, and data flow diagrams.
Configuration Reference
Complete environment variable reference for all services.
Services Overview
Per-service documentation — components, routing, configuration.
Troubleshooting
Common issues and solutions for Docker and local setup.