# Docker Deployment
Deploy Airbeeps with Docker Compose for production use. The stack includes all required services.
## Services
| Service | Image | Description |
|---|---|---|
| airbeeps | airbeeps/app | Main application (FastAPI + bundled frontend) |
| celery-worker | airbeeps/app | Background task worker (document ingestion) |
| celery-beat | airbeeps/app | Scheduled tasks (cleanup, analytics) |
| postgres | postgres:16 | PostgreSQL database |
| redis | redis:7-alpine | Cache + Celery broker |
| qdrant | qdrant/qdrant | Vector store |
| minio | minio/minio | S3-compatible object storage |
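The service wiring might look like the following `docker-compose.yml` sketch. This is illustrative only — service names come from the table above, but the port mapping and dependency details are assumptions, not the project's actual compose file:

```yaml
# Illustrative sketch of the service wiring -- not the project's actual compose file
services:
  airbeeps:
    image: airbeeps/app
    ports:
      - "8080:8080"        # assumed host port, based on the quick start below
    env_file: .env
    depends_on:
      - postgres
      - redis
      - qdrant
      - minio
  celery-worker:
    image: airbeeps/app
    command: celery -A airbeeps.tasks.celery_app worker --loglevel=info
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:16
  redis:
    image: redis:7-alpine
  qdrant:
    image: qdrant/qdrant
  minio:
    image: minio/minio
```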
## Quick start
```shell
git clone https://github.com/airbeeps/airbeeps.git
cd airbeeps
```

Create a `.env` file with required secrets:
```shell
# Required secrets
AIRBEEPS_SECRET_KEY=your-secret-key-here  # generate with: openssl rand -hex 32
POSTGRES_PASSWORD=your-postgres-password

# MinIO (S3) -- change these defaults for production
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=minioadmin
MINIO_BUCKET=airbeeps
```

Start the stack:
```shell
docker compose up -d
```

Open http://localhost:8080 and sign up. The first user becomes admin.
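The secret key can be generated with `openssl`, as noted in the `.env` comment above. A quick sketch:

```shell
# Generate a 32-byte (64 hex character) secret for AIRBEEPS_SECRET_KEY
key=$(openssl rand -hex 32)
echo "AIRBEEPS_SECRET_KEY=$key"
echo "length: ${#key}"   # 64
```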
## Environment variables
The Docker Compose file sets these automatically:
| Variable | Docker Value | Description |
|---|---|---|
| AIRBEEPS_CONFIG_ENV | docker | Loads settings.docker.yaml |
| AIRBEEPS_ENVIRONMENT | production | Production mode |
| AIRBEEPS_DATABASE_URL | postgresql+asyncpg://... | PostgreSQL connection |
| AIRBEEPS_FILE_STORAGE_BACKEND | s3 | MinIO storage |
| AIRBEEPS_REDIS_ENABLED | true | Redis caching |
| AIRBEEPS_CELERY_ENABLED | true | Background workers |
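In the compose file, these would typically sit under the app service's `environment` block. A hedged sketch — the database connection string in particular is an assumption, not copied from the project:

```yaml
# Illustrative environment block for the airbeeps service
environment:
  AIRBEEPS_CONFIG_ENV: docker
  AIRBEEPS_ENVIRONMENT: production
  # Assumed connection string shape; the real compose file may differ
  AIRBEEPS_DATABASE_URL: postgresql+asyncpg://airbeeps:${POSTGRES_PASSWORD}@postgres:5432/airbeeps
  AIRBEEPS_FILE_STORAGE_BACKEND: s3
  AIRBEEPS_REDIS_ENABLED: "true"
  AIRBEEPS_CELERY_ENABLED: "true"
```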
LLM provider keys should be added to your `.env`:

```shell
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```

## Health checks
The airbeeps service includes a health check at `GET /api/v1/health/ready`. The service is healthy when the API responds, the database is connected, and core services are reachable. PostgreSQL and Redis also have their own health checks; the app waits for these before starting.
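The readiness endpoint and startup ordering described above can be expressed with compose `healthcheck` and `depends_on` conditions. A sketch, assuming `curl` is available in the image (intervals and retries are illustrative):

```yaml
# Illustrative healthcheck and startup-ordering fragment
airbeeps:
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8080/api/v1/health/ready"]
    interval: 30s
    timeout: 5s
    retries: 5
  depends_on:
    postgres:
      condition: service_healthy
    redis:
      condition: service_healthy
```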
## Volumes
All data is persisted via Docker volumes:
| Volume | Service | Data |
|---|---|---|
| airbeeps-data | airbeeps, celery-worker | Application data |
| postgres-data | postgres | Database |
| redis-data | redis | Cache and queues |
| qdrant-data | qdrant | Vector store index |
| minio-data | minio | Uploaded files |
## Celery workers
The Docker stack includes two Celery containers:
### Worker
Processes background tasks (document ingestion, memory compaction):
```shell
celery -A airbeeps.tasks.celery_app worker --loglevel=info --concurrency=4
```

Resource limits: 2 CPUs, 2 GB RAM.
### Beat scheduler
Runs periodic tasks (cleanup, analytics):
```shell
celery -A airbeeps.tasks.celery_app beat --loglevel=info
```

Resource limits: 0.5 CPU, 256 MB RAM.
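Both containers reuse the app image with different commands, and the resource limits above map to compose `deploy.resources`. A hedged sketch of how the two service definitions might look:

```yaml
# Illustrative definitions for the two Celery services
celery-worker:
  image: airbeeps/app
  command: celery -A airbeeps.tasks.celery_app worker --loglevel=info --concurrency=4
  deploy:
    resources:
      limits:
        cpus: "2"
        memory: 2G
celery-beat:
  image: airbeeps/app
  command: celery -A airbeeps.tasks.celery_app beat --loglevel=info
  deploy:
    resources:
      limits:
        cpus: "0.5"
        memory: 256M
```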
## MinIO setup
The minio-init service automatically creates the storage bucket on first run. It:
- Waits for MinIO to be ready
- Creates the configured bucket
- Sets public download access for the bucket
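Init steps like these are commonly implemented with MinIO's `mc` client; a sketch of what such an init service might run (the commands here are illustrative, not the project's actual script):

```yaml
# Illustrative mc-based init service
minio-init:
  image: minio/mc
  depends_on:
    - minio
  entrypoint: >
    /bin/sh -c "
    until mc alias set local http://minio:9000 $$MINIO_ROOT_USER $$MINIO_ROOT_PASSWORD; do sleep 1; done;
    mc mb --ignore-existing local/$$MINIO_BUCKET;
    mc anonymous set download local/$$MINIO_BUCKET;
    "
```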
Access the MinIO console at http://localhost:9001 (username and password from your .env).
## Scaling

### Horizontal scaling
You can scale Celery workers for faster ingestion:
```shell
docker compose up -d --scale celery-worker=3
```

### Resource tuning
Adjust deploy.resources in docker-compose.yml for your hardware.
## Updating
```shell
git pull
docker compose build
docker compose up -d
```

Database migrations run automatically on startup.
## Backup
Back up data volumes regularly:
```shell
# Database
docker compose exec postgres pg_dump -U airbeeps airbeeps > backup.sql

# All volumes
docker run --rm -v airbeeps_postgres-data:/data -v $(pwd):/backup alpine tar czf /backup/postgres.tar.gz /data
```

For a consistent volume snapshot, stop the stack first (`docker compose stop`), or rely on `pg_dump` for the database, since archiving a running PostgreSQL data directory can produce an inconsistent copy.
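It is worth verifying that an archive lists cleanly before trusting it as a backup. A self-contained sketch, using a throwaway directory as a stand-in for the real volume:

```shell
set -e
# Stand-in for a data volume
workdir=$(mktemp -d)
mkdir -p "$workdir/data"
echo "hello" > "$workdir/data/file.txt"

# Create the archive, then verify its listing before relying on it
tar czf "$workdir/postgres.tar.gz" -C "$workdir" data
tar tzf "$workdir/postgres.tar.gz" | grep -q "data/file.txt" && echo "backup OK"

rm -rf "$workdir"
```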