Deployment Guide¶
This guide covers deploying the Trump Speeches NLP Chatbot API to various platforms.
Table of Contents¶
- Local Development
- Docker Deployment
- Render Deployment
- Azure App Service Deployment
- CI/CD with GitHub Actions
- Environment Variables
Local Development¶
Running Without Docker¶
- Create a virtual environment (optional but recommended):
# Create venv with default Python version
uv venv
# Create venv with specific Python version
uv venv --python 3.12
uv venv --python 3.11
# Create venv in custom directory
uv venv .venv-dev
# Activate the virtual environment (Windows PowerShell)
.venv\Scripts\Activate.ps1
# Deactivate when done
deactivate
- Install dependencies:
# Install all production dependencies
uv sync
# Install with specific dependency groups
uv sync --group dev # Include dev tools
uv sync --group docs # Include docs tools
uv sync --group notebooks # Include notebook tools
uv sync --all-groups # Include all optional groups
# Install without optional groups
uv sync --no-group dev --no-group docs --no-group notebooks
# Upgrade all dependencies to latest compatible versions
uv sync --upgrade
# Install from scratch (ignore lock file)
uv sync --reinstall
- Add or remove packages:
# Add a new package
uv add requests
uv add "fastapi>=0.100.0"
# Add to a specific group
uv add --group dev pytest
uv add --group docs mkdocs-material
# Remove a package
uv remove requests
# Update lock file without installing
uv lock
# Update specific package
uv lock --upgrade-package fastapi
- Run commands:
# Run the API server
uv run uvicorn speech_nlp.app:app --reload
# Run with custom host and port
uv run uvicorn speech_nlp.app:app --host 0.0.0.0 --port 8001 --reload
# Run tests
uv run pytest
uv run pytest -v --cov=src
# Run code formatter
uv run ruff format src/
# Run linter
uv run ruff check src/
uv run ruff check src/ --fix # Auto-fix issues
uv run mypy src/
# Run Python scripts
uv run python scripts/migrate_rag_embeddings.py
# Run Python REPL with dependencies
uv run python
- Manage Python versions:
# List available Python versions
uv python list
# Install a specific Python version
uv python install 3.12
uv python install 3.11.8
# Pin Python version for project
uv python pin 3.12
# Show current Python version
uv python find
- Export dependencies:
# Export to requirements.txt
uv pip compile pyproject.toml -o requirements.txt
# Export with specific groups excluded
uv export --no-group dev --no-group docs -o requirements.txt
# Export in different formats
uv export --format requirements-txt
- Access the application:
  - Frontend: http://localhost:8000
  - API Docs: http://localhost:8000/docs
  - ReDoc: http://localhost:8000/redoc
Docker Deployment¶
Build and Run with Docker¶
- Build the Docker image:
docker build -t trump-speeches-nlp-chatbot .
# Build with no cache (clean build)
docker build --no-cache -t trump-speeches-nlp-chatbot:latest .
- Run the container:
# Basic run
docker run --rm -it -p 8000:8000 --env-file .env --name nlp-chatbot trump-speeches-nlp-chatbot
# Run in detached mode (background)
docker run -d -p 8000:8000 --env-file .env --name nlp-chatbot trump-speeches-nlp-chatbot
# Run with persistent ChromaDB volume
docker run --rm -it -p 8000:8000 -v "${PWD}/data/chromadb:/app/data/chromadb" --env-file .env --name nlp-chatbot trump-speeches-nlp-chatbot
- View logs:
# Follow logs (real-time)
docker logs -f nlp-chatbot
# View last 100 lines
docker logs --tail 100 nlp-chatbot
# View logs with timestamps
docker logs -t nlp-chatbot
- Manage containers:
# Stop the container
docker stop nlp-chatbot
# Start a stopped container
docker start nlp-chatbot
# Restart the container
docker restart nlp-chatbot
# Remove the container
docker rm nlp-chatbot
# Remove with force (even if running)
docker rm -f nlp-chatbot
- Tag and push to Docker Hub:
# Tag for Docker Hub
docker tag trump-speeches-nlp-chatbot yourusername/trump-speeches-nlp-chatbot:latest
docker tag trump-speeches-nlp-chatbot yourusername/trump-speeches-nlp-chatbot:v1.0.0
# Login to Docker Hub
docker login
# Push to Docker Hub
docker push yourusername/trump-speeches-nlp-chatbot:latest
docker push yourusername/trump-speeches-nlp-chatbot:v1.0.0
# Push all tags
docker push --all-tags yourusername/trump-speeches-nlp-chatbot
- Clean up Docker resources:
# Remove unused images
docker image prune
# Remove all stopped containers
docker container prune
# Remove all unused containers, networks, images
docker system prune
# Remove everything (including volumes)
docker system prune -a --volumes
Using Docker Compose¶
Production mode:
Development mode (with hot reload):
Stop services:
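The compose invocations that belong under the three labels above are roughly the following (a sketch; the override filename docker-compose.dev.yml is an assumption about this repo's layout):

```shell
# Production mode
docker compose up -d --build

# Development mode (with hot reload, via an override file)
docker compose -f docker-compose.yml -f docker-compose.dev.yml up --build

# Stop services (add -v to also remove named volumes)
docker compose down
```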
Render Deployment¶
Render offers free hosting with automatic deployments using Docker images.
Deployment Strategy¶
This project uses a Docker-based deployment approach for Render:
- GitHub Actions builds and pushes Docker images to Docker Hub
- Render pulls and deploys the pre-built images
- Benefits: Faster deployments, consistent environments, easier rollbacks
Prerequisites¶
- Docker Hub Account (free)
  - Sign up at https://hub.docker.com
  - Create a repository named trump-speeches-nlp-chatbot
- Render Account (free)
  - Sign up at https://render.com
- GitHub Repository connected to your account
Step 1: Configure GitHub Secrets¶
Add these secrets to your GitHub repository (Settings → Secrets and variables → Actions):
- DOCKERHUB_USERNAME - Your Docker Hub username
- DOCKERHUB_TOKEN - Docker Hub access token
  - Go to Docker Hub → Account Settings → Security → New Access Token
  - Copy the token and save it as a GitHub secret
Step 2: Update render.yaml¶
Edit render.yaml and replace <your-dockerhub-username> with your actual Docker Hub username:
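A minimal render.yaml for the image-based approach might look like the following (a sketch only; field values other than the image URL are assumptions inferred from the manual configuration in Step 3):

```yaml
services:
  - type: web
    name: trump-speeches-nlp-chatbot
    runtime: image
    image:
      url: docker.io/<your-dockerhub-username>/trump-speeches-nlp-chatbot:latest
    plan: free
    healthCheckPath: /health
    envVars:
      - key: PORT
        value: 8000
```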
Step 3: Deploy to Render¶
Option A: Using Blueprint (Recommended)¶
- Go to Render Dashboard → "Blueprints"
- Click "New Blueprint Instance"
- Connect your GitHub repository
- Render will detect .render/render.yaml and configure everything automatically
- Click "Apply" to create the service
Option B: Manual Configuration¶
- Go to Render Dashboard → "New +" → "Web Service"
- Choose "Deploy an existing image from a registry"
- Configure:
  - Image URL: docker.io/your-username/trump-speeches-nlp-chatbot:latest
  - Name: trump-speeches-nlp-chatbot
  - Plan: Free
- Add environment variable: PORT=8000
- Set Health Check Path: /health
- Click "Create Web Service"
Step 4: Trigger Deployment¶
Push to main branch or manually trigger the workflow:
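Either of the following kicks off the pipeline (the workflow filename matches the deploy-render.yml listed under Workflow Files; the gh CLI invocation assumes you have the GitHub CLI installed):

```shell
# Deploy by pushing to main
git push origin main

# Or trigger the workflow manually with the GitHub CLI
gh workflow run deploy-render.yml
```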
This will:
- Build the Docker image
- Push it to Docker Hub
- Render automatically detects the new image and deploys it
Accessing Your Render App¶
Your API will be available at: https://trump-speeches-nlp-chatbot.onrender.com
Note: Free tier apps spin down after 15 minutes of inactivity. First request may take 30-60 seconds to wake the service.
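Clients can absorb the cold start with a simple retry loop. A minimal sketch with an injectable probe callable (names here are illustrative, not part of the project's API):

```python
import time


def wait_until_awake(probe, attempts: int = 10, delay: float = 5.0) -> bool:
    """Call probe() until it reports the service awake, or give up.

    probe: a zero-argument callable returning True once the service responds.
    """
    for i in range(attempts):
        try:
            if probe():
                return True
        except OSError:
            # Connection errors are expected while the instance is waking up
            pass
        if i < attempts - 1:
            time.sleep(delay)
    return False
```

In practice, probe would issue a GET against the /health endpoint (e.g. via urllib or requests) and return True on a 200 response.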
Monitoring & Logs¶
- Logs: Render Dashboard → Your Service → Logs tab
- Metrics: Render Dashboard → Your Service → Metrics tab
- Manual Deploy: Render Dashboard → Your Service → Manual Deploy button
Upgrading from Free Tier¶
For better performance:
- Starter Plan: $7/month (512 MB RAM, 0.5 CPU)
- No cold starts
- Faster response times
- Better for production use
Azure App Service Deployment¶
Deployment Strategy (UI)¶
This project supports two Azure deployment approaches:
- Azure Container Registry (ACR) - Recommended for enterprise/production (default)
- Docker Hub - Alternative for simpler setups or cross-platform deployments
Both use the same GitHub Actions workflow with conditional logic.
Prerequisites (Azure)¶
- Azure account (free tier available)
- Azure CLI installed: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli
- Docker Hub account (if using Docker Hub approach)
Option 1: Deploy with Azure Container Registry (Recommended)¶
Step 1: Create Azure Resources¶
# Login to Azure
az login
# Create a resource group
az group create --name trump-nlp-rg --location eastus
# Create Azure Container Registry
az acr create `
--resource-group trump-nlp-rg `
--name trumpnlpacr `
--sku Basic `
--admin-enabled true
# Create an App Service plan (B1 or F1 tier)
az appservice plan create `
--name trump-nlp-plan `
--resource-group trump-nlp-rg `
--sku B1 `
--is-linux
# Create a web app for containers
az webapp create `
--resource-group trump-nlp-rg `
--plan trump-nlp-plan `
--name trump-speeches-nlp-chatbot `
--deployment-container-image-name trumpnlpacr.azurecr.io/trump-speeches-nlp-chatbot:latest
# Configure port
az webapp config appsettings set `
--resource-group trump-nlp-rg `
--name trump-speeches-nlp-chatbot `
--settings WEBSITES_PORT=8000
# Enable ACR integration
az webapp config container set `
--resource-group trump-nlp-rg `
--name trump-speeches-nlp-chatbot `
--docker-custom-image-name trumpnlpacr.azurecr.io/trump-speeches-nlp-chatbot:latest `
--docker-registry-server-url https://trumpnlpacr.azurecr.io
Step 2: Configure GitHub Secrets for ACR¶
Add these secrets to your GitHub repository:
- AZURE_CREDENTIALS - Service principal credentials:
az ad sp create-for-rbac --name "github-actions-trump-nlp" --sdk-auth --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/trump-nlp-rg
Copy the entire JSON output and save as secret.
- AZURE_REGISTRY_LOGIN_SERVER - e.g., trumpnlpacr.azurecr.io
- AZURE_REGISTRY_USERNAME and AZURE_REGISTRY_PASSWORD - ACR admin credentials
- AZURE_WEBAPP_NAME - e.g., trump-speeches-nlp-chatbot
Step 3: Deploy¶
Push to main branch:
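A plain push is all that is needed to start the workflow:

```shell
git push origin main
```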
GitHub Actions will automatically:
- Build Docker image
- Push to Azure Container Registry
- Deploy to Azure Web App
- Run health checks
Option 2: Deploy with Docker Hub¶
This approach shares the same Docker images used for Render deployment.
Step 1: Create Azure Resources (CLI)¶
# Login to Azure
az login
# Create a resource group
az group create --name trump-nlp-rg --location eastus
# Create an App Service plan
az appservice plan create `
--name trump-nlp-plan `
--resource-group trump-nlp-rg `
--sku B1 `
--is-linux
# Create a web app using Docker Hub image
az webapp create `
--resource-group trump-nlp-rg `
--plan trump-nlp-plan `
--name trump-speeches-nlp-chatbot `
--deployment-container-image-name docker.io/your-username/trump-speeches-nlp-chatbot:latest
# Configure port
az webapp config appsettings set `
--resource-group trump-nlp-rg `
--name trump-speeches-nlp-chatbot `
--settings WEBSITES_PORT=8000
Step 2: Configure GitHub Secrets for Docker Hub¶
You'll need the same Docker Hub secrets as for Render:
- DOCKERHUB_USERNAME
- DOCKERHUB_TOKEN
- AZURE_CREDENTIALS (same as Option 1)
- AZURE_WEBAPP_NAME
Step 3: Deploy with Docker Hub¶
Manually trigger the workflow:
- Go to GitHub → Actions → "Deploy to Azure"
- Click "Run workflow"
- Check "Use Docker Hub instead of ACR"
- Click "Run workflow"
Accessing Your Azure App¶
Your API will be available at: https://trump-speeches-nlp-chatbot.azurewebsites.net
⚠️ Performance Expectations for Free Tier:
This deployment uses Azure Free Tier (F1) hosting. Due to the ML-heavy nature of the app:
- Container size: ~4-5GB (includes FinBERT, RoBERTa, MPNet models)
- Cold start time: 1-5 minutes after inactivity (Azure stops free apps when idle)
- First access: the app may not respond right away; refresh every ~30 seconds for a few minutes until the container warms up
- AI response times: 30 seconds to 2 minutes for RAG/sentiment analysis
- Warmed up performance: 2-5 seconds once active
For production deployments, consider:
- Basic (B1) tier or higher for always-on instances (no cold starts)
- Premium tiers for auto-scaling under load
- Application Insights for detailed performance monitoring
Monitoring and Logs¶
# Stream logs
az webapp log tail --resource-group trump-nlp-rg --name trump-speeches-nlp-chatbot
# View metrics
az monitor metrics list `
--resource /subscriptions/{subscription-id}/resourceGroups/trump-nlp-rg/providers/Microsoft.Web/sites/trump-speeches-nlp-chatbot `
--metric-names Requests,ResponseTime,Http5xx
# Open in Azure Portal
az webapp browse --resource-group trump-nlp-rg --name trump-speeches-nlp-chatbot
Updating the Deployment¶
Deployments are automatic on push to main. To manually update:
# Trigger GitHub Actions workflow manually
# Or restart the web app to pull latest image
az webapp restart --resource-group trump-nlp-rg --name trump-speeches-nlp-chatbot
CI/CD with GitHub Actions¶
This project includes automated CI/CD pipelines using GitHub Actions. The workflows are split into multiple files for better organization:
Workflow Files¶
All workflows are located in .github/workflows/:
- python-tests.yml - Python Tests
  - Runs on: All pushes and PRs
  - Jobs: Unit tests with coverage (pytest)
  - Python versions tested: 3.11, 3.12
  - Coverage requirement: 60%
- python-lint.yml - Python Linting
  - Runs on: All pushes and PRs affecting Python files
  - Jobs: Ruff linting and format checking
  - Python versions tested: 3.11, 3.12
- python-typecheck.yml - Type Checking
  - Runs on: All pushes and PRs affecting Python files
  - Jobs: Mypy type checking
  - Python versions tested: 3.11, 3.12
  - Status: Allowed to fail (informational)
- security-audit.yml - Security Scans
  - Runs on: All pushes and PRs, plus weekly schedule (Mondays 9 AM UTC)
  - Jobs: Bandit security scan, pip-audit dependency vulnerability checks
  - Status: Allowed to fail (informational)
- markdown-lint.yml - Documentation Linting
  - Runs on: All pushes and PRs affecting markdown files
  - Jobs: pymarkdownlnt documentation quality checks
  - Status: Allowed to fail (informational)
- deploy-render.yml - Render Deployment
  - Runs on: Push to main branch only
  - Jobs: Build Docker image, push to Docker Hub, test container
  - Render auto-deploys when new images are detected
- deploy-azure.yml - Azure Deployment
  - Runs on: Push to main branch (manual trigger also available)
  - Jobs: Build & push to ACR/Docker Hub, deploy to Azure Web App, health check
  - Supports both ACR and Docker Hub registries
- deploy-docs.yml - Documentation Deployment
  - Runs on: Push to main affecting docs
  - Jobs: Build MkDocs site, deploy to GitHub Pages
- build-push-docker.yml - Docker Build
  - Runs on: Push to main branch
  - Jobs: Build and push Docker images to registry
Deployment Architecture¶
┌─────────────────────────────────────────────────────────────┐
│ Push to main branch │
└────────────┬────────────────────────────────────────────────┘
│
├──────────────────┐
│ │
▼ ▼
┌────────────────┐ ┌──────────────────┐
│ Build Docker │ │ Build Docker │
│ Image │ │ Image │
└────────┬───────┘ └────────┬─────────┘
│ │
▼ ▼
┌────────────────┐ ┌──────────────────┐
│ Push to │ │ Push to ACR │
│ Docker Hub │ │ (or Docker Hub) │
└────────┬───────┘ └────────┬─────────┘
│ │
▼ ▼
┌────────────────┐ ┌──────────────────┐
│ Render │ │ Azure Web App │
│ Auto-Deploy │ │ Deploy │
└────────────────┘ └──────────────────┘
Setting Up GitHub Secrets¶
Required for All Deployments¶
- DOCKERHUB_USERNAME - Your Docker Hub username
- DOCKERHUB_TOKEN - Docker Hub access token
  - Create at: Docker Hub → Account Settings → Security → New Access Token
Required for Azure Deployment¶
- AZURE_CREDENTIALS - Service principal credentials (JSON format):
az ad sp create-for-rbac --name "github-actions-trump-nlp" --sdk-auth --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/trump-nlp-rg
- AZURE_WEBAPP_NAME - Your Azure Web App name (e.g., trump-speeches-nlp)
Required for Azure Container Registry (Optional)¶
- AZURE_REGISTRY_LOGIN_SERVER - ACR login server (e.g., trumpnlpacr.azurecr.io)
- AZURE_REGISTRY_USERNAME - ACR admin username
- AZURE_REGISTRY_PASSWORD - ACR admin password
Get ACR credentials:
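The admin credentials can be read back with the Azure CLI (requires admin access enabled on the registry, as done in Step 1):

```shell
az acr credential show --name trumpnlpacr --query "{username:username, password:passwords[0].value}"
```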
Workflow Triggers¶
| Workflow | Automatic Trigger | Manual Trigger | When to Use |
|---|---|---|---|
| CI | Push/PR to main, develop, feature/* | ✅ Yes | Testing code changes |
| Security | Push/PR + Weekly (Mon 9AM) | ✅ Yes | Regular security audits |
| Deploy Render | Push to main | ✅ Yes | Deploy to Render |
| Deploy Azure | Push to main | ✅ Yes (with options) | Deploy to Azure |
Manual Deployment Triggers¶
For Azure:
- Go to GitHub → Actions → "Deploy to Azure"
- Click "Run workflow"
- Select options:
- Environment: production/staging
- Use Docker Hub: true/false (ACR is default)
- Click "Run workflow"
For Render:
- Go to GitHub → Actions → "Deploy to Render"
- Click "Run workflow"
- Click "Run workflow" to confirm
Viewing Workflow Results¶
- Navigate to your repository → Actions tab
- Click on any workflow run to see detailed logs
- Failed workflows will show which step failed and why
- Deployment summaries are available in each workflow run
Best Practices¶
- Always test locally before pushing to main
- Use feature branches for development
- Review CI results before merging PRs
- Monitor deployments after pushing to main
- Check security scans weekly
- Use manual triggers for testing deployment changes
Environment Variables¶
Required Variables¶
| Variable | Description |
|---|---|
| GEMINI_API_KEY | Google Gemini API key (required) |
| PORT | Port to run the application - Default: 8000 |
| PYTHONUNBUFFERED | Disable Python output buffering - Default: 1 |
LLM Provider Configuration¶
| Variable | Description |
|---|---|
| LLM_PROVIDER | LLM provider (gemini/openai/anthropic) - Default: gemini |
| LLM_API_KEY | API key for the selected provider (defaults to GEMINI_API_KEY) |
| LLM_MODEL_NAME | Model name for the LLM - Default: gemini-2.0-flash-exp |
| LLM_TEMPERATURE | Temperature for LLM responses - Default: 0.7 |
| LLM_MAX_OUTPUT_TOKENS | Max tokens for LLM responses - Default: 2048 |
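As a sanity check, the provider settings above can be resolved with the standard library alone. This is a hedged sketch mirroring the table's names and defaults — the project's actual Settings class is pydantic-based and may differ:

```python
import os

# Defaults taken from the LLM Provider Configuration table
DEFAULTS = {
    "LLM_PROVIDER": "gemini",
    "LLM_MODEL_NAME": "gemini-2.0-flash-exp",
    "LLM_TEMPERATURE": "0.7",
    "LLM_MAX_OUTPUT_TOKENS": "2048",
}


def load_llm_config(env=None) -> dict:
    """Resolve LLM settings from an environment mapping, applying table defaults."""
    env = os.environ if env is None else env
    cfg = {k: env.get(k, v) for k, v in DEFAULTS.items()}
    # LLM_API_KEY falls back to GEMINI_API_KEY, per the table
    cfg["LLM_API_KEY"] = env.get("LLM_API_KEY") or env.get("GEMINI_API_KEY", "")
    cfg["LLM_TEMPERATURE"] = float(cfg["LLM_TEMPERATURE"])
    cfg["LLM_MAX_OUTPUT_TOKENS"] = int(cfg["LLM_MAX_OUTPUT_TOKENS"])
    return cfg
```

Passing an explicit mapping (instead of reading os.environ) keeps the helper easy to test.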
Optional Variables¶
| Variable | Description | Default |
|---|---|---|
| ENVIRONMENT | Environment (production/staging/development) | development |
| LOG_LEVEL | Logging level (INFO/DEBUG/WARNING/ERROR) | INFO |
| PYTHON_VERSION | Python version (for Render) | 3.12.0 |
| WEBSITES_PORT | Port for Azure App Service | 8000 |
RAG-Specific Configuration¶
| Variable | Description |
|---|---|
| CHROMADB_PERSIST_DIR | Directory for ChromaDB persistence - Default: ./data/chromadb |
| RAG_COLLECTION_NAME | Name of the vector collection - Default: speeches |
| RAG_CHUNK_SIZE | Text chunk size for embeddings - Default: 2048 |
| RAG_CHUNK_OVERLAP | Overlap between chunks - Default: 150 |
| EMBEDDING_MODEL | sentence-transformers model - Default: all-mpnet-base-v2 |
Note: ChromaDB data is persisted to disk. Ensure the persist directory is included in volume mounts for Docker deployments to maintain indexed documents across restarts.
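To get a feel for how RAG_CHUNK_SIZE and RAG_CHUNK_OVERLAP interact, here is a rough sketch of overlapping-window chunking (a hypothetical helper for illustration, not the project's actual chunker):

```python
def chunk_spans(text_len: int, chunk_size: int = 2048, overlap: int = 150):
    """Return (start, end) character spans for overlapping chunks.

    Consecutive chunks share `overlap` characters; the step between
    chunk starts is therefore chunk_size - overlap.
    """
    stride = max(1, chunk_size - overlap)
    spans = []
    start = 0
    while start < text_len:
        end = min(start + chunk_size, text_len)
        spans.append((start, end))
        if end == text_len:
            break
        start += stride
    return spans


# A 5,000-character speech at the default settings yields 3 chunks
spans = chunk_spans(5000)
```

With the defaults, each additional 1,898 characters of text adds one chunk, which is the figure to use when estimating the ChromaDB index size noted under Storage Requirements below.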
Performance Tips¶
For Free Tiers¶
- Render Free Tier:
  - Apps sleep after 15 min of inactivity
  - Use a health check service like cron-job.org to keep it awake
  - Expect 30-60s cold start time
  - RAG impact: the first request will also load the embedding model (~80MB) and index documents
- Azure Free Tier (F1):
  - Limited to 60 CPU minutes/day
  - 1 GB RAM limit
  - Consider the B1 tier ($13/month) for better performance
  - RAG requirements: minimum 2GB RAM recommended with RAG enabled
Optimizing Docker Image¶
- Use multi-stage builds (already implemented)
- Remove unnecessary data files
- Use .dockerignore to exclude notebooks and docs
- RAG Models: Pre-download models during build to reduce startup time
RAG Performance Considerations¶
Memory Requirements:
- Without RAG: ~1.5GB RAM
- With RAG: ~2.5GB RAM (includes embeddings model + vector store)
- Recommended: 4GB RAM for production with concurrent users
Storage Requirements:
- Base application: ~1GB
- ChromaDB index: ~50-100MB per 10,000 chunks
- Embedding model: ~80MB
Startup Time:
- Without RAG: 10-15 seconds
- With RAG: 30-60 seconds (model loading + document indexing)
- Tip: Documents are auto-indexed on first startup if collection is empty
Query Performance:
- Semantic search: ~50-200ms per query
- Question answering: ~200-500ms (includes search + answer generation)
- Tip: Results improve with more indexed documents
Troubleshooting¶
Environment Variable Parsing Errors in Docker¶
Error:
pydantic_core._pydantic_core.ValidationError: 13 validation errors for Settings
environment
Input should be 'development', 'staging' or 'production' [type=literal_error, input_value='"development" # develop...', input_type=str]
log_level
Value error, Invalid log level. Must be one of: DEBUG, INFO, WARNING, ERROR, CRITICAL [type=value_error, input_value='"INFO" # DEBUG, INFO, WARNING, ERROR, CRITICAL', input_type=str]
...
Root Cause:
Your .env file has quoted values and/or inline comments. When Python-dotenv loads the file, it reads the entire value including quotes and comments as a literal string. So "development" becomes a string with quotes, which doesn't match the expected literal value development.
Solution:
Edit your .env file and follow these rules:
# ✅ CORRECT Format
# Variables without quotes
ENVIRONMENT=development
LOG_LEVEL=INFO
API_PORT=8000
# Booleans as lowercase true/false (no quotes)
LLM_ENABLED=true
USE_RERANKING=true
API_RELOAD=false
# Numbers without quotes
CHUNK_SIZE=2048
GEMINI_TEMPERATURE=0.3
# Comments on separate lines only
# Get free key at https://ai.google.dev/
GEMINI_API_KEY=your-api-key-here
# Paths with spaces are fine (no quotes needed)
SPEECHES_DIRECTORY=./data/Donald Trump Rally Speeches
# ❌ INCORRECT Format (DO NOT USE)
# Quoted values - Pydantic will see the quotes as part of the value
ENVIRONMENT="development"
LOG_LEVEL="INFO"
# Inline comments - Will be read as part of the value
CHUNK_SIZE="2048" # Maximum characters per chunk
USE_RERANKING="true" # Enable reranking
# Booleans with quotes - Can't parse as boolean
LLM_ENABLED="true"
Key Rules for .env files:
- ❌ No quotes around values (not needed and causes parsing errors)
- ❌ No inline comments (use separate lines or comment blocks)
- ✅ Booleans as true or false (lowercase, no quotes)
- ✅ Numbers without quotes
- ✅ Strings can contain spaces without quotes
- ✅ Comments on their own lines with a # prefix
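To see why quotes and inline comments leak into values, here is a deliberately naive loader that splits only on the first = — a simplification of the real dotenv behavior, for illustration only:

```python
def naive_parse(line: str):
    """Split a .env line on the first '=' with no quote or comment handling."""
    line = line.rstrip("\n")
    if not line or line.lstrip().startswith("#") or "=" not in line:
        return None  # blank line, full-line comment, or not an assignment
    key, _, value = line.partition("=")
    # Quotes and trailing inline comments remain part of the value
    return key.strip(), value


good = naive_parse("ENVIRONMENT=development")
bad = naive_parse('ENVIRONMENT="development"  # develop, staging, production')
```

The second call yields a value that still contains the quote characters and the comment text, which is exactly the literal string Pydantic then rejects in the error above.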
Test your .env file:
After fixing, verify it parses correctly:
# Run without Docker to test locally first
uv run uvicorn speech_nlp.app:app --reload
# Or test the config in Python
uv run python -c "from speech_nlp.config.settings import get_settings; s = get_settings(); print(f'Environment: {s.environment}, Log Level: {s.log_level}')"
Then try Docker again:
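Using the same build and run commands as in the Docker Deployment section:

```shell
docker build -t trump-speeches-nlp-chatbot .
docker run --rm -it -p 8000:8000 --env-file .env --name nlp-chatbot trump-speeches-nlp-chatbot
```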
Docker Build Fails¶
# Clean Docker cache and rebuild
docker system prune -a
docker build --no-cache -t trump-speeches-nlp-api .
API Returns 503¶
- Model may still be loading (can take 30-60s on first request)
- Check logs for errors
- Ensure sufficient memory (minimum 1GB recommended)
Azure Deployment Issues¶
# Check app logs
az webapp log tail --resource-group trump-nlp-rg --name trump-speeches-nlp
# Restart the app
az webapp restart --resource-group trump-nlp-rg --name trump-speeches-nlp
Render Deployment Issues¶
- Check build logs in Render dashboard
- Verify the .render/render.yaml configuration
- Ensure all dependencies are listed in pyproject.toml
Cost Comparison¶
| Platform | Free Tier | Paid Tier | Best For |
|---|---|---|---|
| Render | ✅ Yes (with limitations) | $7/month starter | Quick demos, small projects |
| Azure | ✅ Yes (F1: 60 min/day) | $13/month (B1) | Enterprise, Azure ecosystem |
| Railway | ✅ $5 free credit | Pay-as-you-go | Simple projects |
| Fly.io | ✅ Free allowance | Pay-as-you-go | Global deployment |
Next Steps¶
After deployment:
- Test the API: use the /health endpoint to verify the deployment
- Update README: add your live demo link
- Monitor performance: check logs and metrics
- Set up alerts: configure uptime monitoring
- Add custom domain (optional): configure DNS
Support¶
For issues or questions:
- Check the main README
- Review the API documentation at /docs
- Open an issue on GitHub