Deployment Guide

This guide covers deploying the Trump Speeches NLP Chatbot API to various platforms.

Local Development

Running Without Docker

  1. Create a virtual environment (optional but recommended):
# Create venv with default Python version
uv venv

# Create venv with specific Python version
uv venv --python 3.12
uv venv --python 3.11

# Create venv in custom directory
uv venv .venv-dev

# Activate the virtual environment (Windows PowerShell)
.venv\Scripts\Activate.ps1

# Deactivate when done
deactivate
  2. Install dependencies:
# Install all production dependencies
uv sync

# Install with specific dependency groups
uv sync --group dev              # Include dev tools
uv sync --group docs             # Include docs tools
uv sync --group notebooks        # Include notebook tools
uv sync --all-groups             # Include all optional groups

# Install without optional groups
uv sync --no-group dev --no-group docs --no-group notebooks

# Upgrade all dependencies to latest compatible versions
uv sync --upgrade

# Force reinstall of all packages
uv sync --reinstall
  3. Add or remove packages:
# Add a new package
uv add requests
uv add "fastapi>=0.100.0"

# Add to a specific group
uv add --group dev pytest
uv add --group docs mkdocs-material

# Remove a package
uv remove requests

# Update lock file without installing
uv lock

# Update specific package
uv lock --upgrade-package fastapi
  4. Run commands:
# Run the API server
uv run uvicorn speech_nlp.app:app --reload

# Run with custom host and port
uv run uvicorn speech_nlp.app:app --host 0.0.0.0 --port 8001 --reload

# Run tests
uv run pytest
uv run pytest -v --cov=src

# Run code formatter
uv run ruff format src/

# Run linter
uv run ruff check src/
uv run ruff check src/ --fix  # Auto-fix issues
uv run mypy src/

# Run Python scripts
uv run python scripts/migrate_rag_embeddings.py

# Run Python REPL with dependencies
uv run python
  5. Manage Python versions:
# List available Python versions
uv python list

# Install a specific Python version
uv python install 3.12
uv python install 3.11.8

# Pin Python version for project
uv python pin 3.12

# Show current Python version
uv python find
  6. Export dependencies:
# Export to requirements.txt
uv pip compile pyproject.toml -o requirements.txt

# Export with specific groups excluded
uv export --no-group dev --no-group docs -o requirements.txt

# Export in different formats
uv export --format requirements-txt
  7. Access the application:

  • Frontend: http://localhost:8000
  • API Docs: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc
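Once the server is up, the health check can be scripted instead of refreshing the browser. A minimal sketch, assuming the API exposes a /health endpoint that returns HTTP 200 when ready (the same endpoint used later for deployment health checks):

```python
import time
import urllib.error
import urllib.request

def wait_for_health(url: str = "http://localhost:8000/health",
                    timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll a health endpoint until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet (or still loading models); keep polling
        time.sleep(interval)
    return False
```

Call `wait_for_health()` after starting uvicorn; it returns True as soon as the endpoint answers 200, or False if the timeout elapses first.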

Docker Deployment

Build and Run with Docker

  1. Build the Docker image:
docker build -t trump-speeches-nlp-chatbot .

# Build with no cache (clean build)
docker build --no-cache -t trump-speeches-nlp-chatbot:latest .
  2. Run the container:
# Basic run
docker run --rm -it -p 8000:8000 --env-file .env --name nlp-chatbot trump-speeches-nlp-chatbot

# Run in detached mode (background)
docker run -d -p 8000:8000 --env-file .env --name nlp-chatbot trump-speeches-nlp-chatbot

# Run with persistent ChromaDB volume
docker run --rm -it -p 8000:8000 -v "${PWD}/data/chromadb:/app/data/chromadb" --env-file .env --name nlp-chatbot trump-speeches-nlp-chatbot
  3. View logs:
# Follow logs (real-time)
docker logs -f nlp-chatbot

# View last 100 lines
docker logs --tail 100 nlp-chatbot

# View logs with timestamps
docker logs -t nlp-chatbot
  4. Manage containers:
# Stop the container
docker stop nlp-chatbot

# Start a stopped container
docker start nlp-chatbot

# Restart the container
docker restart nlp-chatbot

# Remove the container
docker rm nlp-chatbot

# Remove with force (even if running)
docker rm -f nlp-chatbot
  5. Tag and push to Docker Hub:
# Tag for Docker Hub
docker tag trump-speeches-nlp-chatbot yourusername/trump-speeches-nlp-chatbot:latest
docker tag trump-speeches-nlp-chatbot yourusername/trump-speeches-nlp-chatbot:v1.0.0

# Login to Docker Hub
docker login

# Push to Docker Hub
docker push yourusername/trump-speeches-nlp-chatbot:latest
docker push yourusername/trump-speeches-nlp-chatbot:v1.0.0

# Push all tags
docker push --all-tags yourusername/trump-speeches-nlp-chatbot
  6. Clean up Docker resources:
# Remove unused images
docker image prune

# Remove all stopped containers
docker container prune

# Remove all unused containers, networks, images
docker system prune

# Remove everything (including volumes)
docker system prune -a --volumes

Using Docker Compose

Production mode:

docker-compose up -d

Development mode (with hot reload):

# Adjust the -f paths to your project's compose files
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

Stop services:

docker-compose down

Render Deployment

Render offers free hosting with automatic deployments using Docker images.

Deployment Strategy

This project uses a Docker-based deployment approach for Render:

  1. GitHub Actions builds and pushes Docker images to Docker Hub
  2. Render pulls and deploys the pre-built images

Benefits: Faster deployments, consistent environments, easier rollbacks

Prerequisites

  1. Docker Hub Account (free)
  • Sign up at https://hub.docker.com
  • Create a repository named trump-speeches-nlp-chatbot

  2. Render Account (free)
  • Sign up at https://render.com

  3. GitHub Repository connected to your account

Step 1: Configure GitHub Secrets

Add these secrets to your GitHub repository (Settings → Secrets and variables → Actions):

  1. DOCKERHUB_USERNAME - Your Docker Hub username
  2. DOCKERHUB_TOKEN - Docker Hub access token
  • Go to Docker Hub → Account Settings → Security → New Access Token
  • Copy the token and save it as a GitHub secret

Step 2: Update render.yaml

Edit render.yaml and replace <your-dockerhub-username> with your actual Docker Hub username:

image:
  url: docker.io/your-username/trump-speeches-nlp-chatbot:latest

Step 3: Deploy to Render

Option A: Blueprint Deployment

  1. Go to Render Dashboard → "Blueprints"
  2. Click "New Blueprint Instance"
  3. Connect your GitHub repository
  4. Render will detect .render/render.yaml and configure everything automatically
  5. Click "Apply" to create the service

Option B: Manual Configuration

  1. Go to Render Dashboard → "New +" → "Web Service"
  2. Choose "Deploy an existing image from a registry"
  3. Configure:
  • Image URL: docker.io/your-username/trump-speeches-nlp-chatbot:latest
  • Name: trump-speeches-nlp-chatbot
  • Plan: Free
  4. Add environment variable:
  • PORT = 8000
  5. Set Health Check Path: /health
  6. Click "Create Web Service"

Step 4: Trigger Deployment

Push to main branch or manually trigger the workflow:

git push origin main

This will:

  1. Build the Docker image
  2. Push it to Docker Hub
  3. Render automatically detects the new image and deploys it

Accessing Your Render App

Your API will be available at: https://trump-speeches-nlp-chatbot.onrender.com

Note: Free tier apps spin down after 15 minutes of inactivity. First request may take 30-60 seconds to wake the service.

Monitoring & Logs

  • Logs: Render Dashboard → Your Service → Logs tab
  • Metrics: Render Dashboard → Your Service → Metrics tab
  • Manual Deploy: Render Dashboard → Your Service → Manual Deploy button

Upgrading from Free Tier

For better performance:

  • Starter Plan: $7/month (512 MB RAM, 0.5 CPU)
    • No cold starts
    • Faster response times
    • Better for production use

Azure App Service Deployment

Deployment Strategy (Azure)

This project supports two Azure deployment approaches:

  1. Azure Container Registry (ACR) - Recommended for enterprise/production (default)
  2. Docker Hub - Alternative for simpler setups or cross-platform deployments

Both use the same GitHub Actions workflow with conditional logic.

Option 1: Deploy with Azure Container Registry (Recommended)

Step 1: Create Azure Resources

# Login to Azure
az login

# Create a resource group
az group create --name trump-nlp-rg --location eastus

# Create Azure Container Registry
az acr create `
  --resource-group trump-nlp-rg `
  --name trumpnlpacr `
  --sku Basic `
  --admin-enabled true

# Create an App Service plan (B1 or F1 tier)
az appservice plan create `
  --name trump-nlp-plan `
  --resource-group trump-nlp-rg `
  --sku B1 `
  --is-linux

# Create a web app for containers
az webapp create `
  --resource-group trump-nlp-rg `
  --plan trump-nlp-plan `
  --name trump-speeches-nlp-chatbot `
  --deployment-container-image-name trumpnlpacr.azurecr.io/trump-speeches-nlp-chatbot:latest

# Configure port
az webapp config appsettings set `
  --resource-group trump-nlp-rg `
  --name trump-speeches-nlp-chatbot `
  --settings WEBSITES_PORT=8000

# Enable ACR integration
az webapp config container set `
  --resource-group trump-nlp-rg `
  --name trump-speeches-nlp-chatbot `
  --docker-custom-image-name trumpnlpacr.azurecr.io/trump-speeches-nlp-chatbot:latest `
  --docker-registry-server-url https://trumpnlpacr.azurecr.io

Step 2: Configure GitHub Secrets for ACR

Add these secrets to your GitHub repository:

  1. AZURE_CREDENTIALS - Service principal credentials
az ad sp create-for-rbac --name "github-actions-trump-nlp" --sdk-auth --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/trump-nlp-rg

Copy the entire JSON output and save as secret.

  2. AZURE_REGISTRY_LOGIN_SERVER - e.g., trumpnlpacr.azurecr.io

  3. AZURE_REGISTRY_USERNAME and AZURE_REGISTRY_PASSWORD

az acr credential show --name trumpnlpacr

  4. AZURE_WEBAPP_NAME - e.g., trump-speeches-nlp-chatbot

Step 3: Deploy

Push to main branch:

git push origin main

GitHub Actions will automatically:

  1. Build Docker image
  2. Push to Azure Container Registry
  3. Deploy to Azure Web App
  4. Run health checks

Option 2: Deploy with Docker Hub

This approach shares the same Docker images used for Render deployment.

Step 1: Create Azure Resources (CLI)

# Login to Azure
az login

# Create a resource group
az group create --name trump-nlp-rg --location eastus

# Create an App Service plan
az appservice plan create `
  --name trump-nlp-plan `
  --resource-group trump-nlp-rg `
  --sku B1 `
  --is-linux

# Create a web app using Docker Hub image
az webapp create `
  --resource-group trump-nlp-rg `
  --plan trump-nlp-plan `
  --name trump-speeches-nlp-chatbot `
  --deployment-container-image-name docker.io/your-username/trump-speeches-nlp-chatbot:latest

# Configure port
az webapp config appsettings set `
  --resource-group trump-nlp-rg `
  --name trump-speeches-nlp-chatbot `
  --settings WEBSITES_PORT=8000

Step 2: Configure GitHub Secrets for Docker Hub

You'll need the same Docker Hub secrets as for Render:

  1. DOCKERHUB_USERNAME
  2. DOCKERHUB_TOKEN
  3. AZURE_CREDENTIALS (same as Option 1)
  4. AZURE_WEBAPP_NAME

Step 3: Deploy with Docker Hub

Manually trigger the workflow:

  1. Go to GitHub → Actions → "Deploy to Azure"
  2. Click "Run workflow"
  3. Check "Use Docker Hub instead of ACR"
  4. Click "Run workflow"

Accessing Your Azure App

Your API will be available at: https://trump-speeches-nlp-chatbot.azurewebsites.net

⚠️ Performance Expectations for Free Tier:

This deployment uses Azure Free Tier (F1) hosting. Due to the ML-heavy nature of the app:

  • Container size: ~4-5GB (includes FinBERT, RoBERTa, MPNet models)
  • Cold start time: 1-5 minutes after inactivity (Azure stops free apps when idle)
  • First access: Refresh the page every 30 seconds for a few minutes
  • AI response times: 30 seconds to 2 minutes for RAG/sentiment analysis
  • Warmed up performance: 2-5 seconds once active

For production deployments, consider:

  • Basic (B1) tier or higher for always-on instances (no cold starts)
  • Premium tiers for auto-scaling under load
  • Application Insights for detailed performance monitoring

Monitoring and Logs

# Stream logs
az webapp log tail --resource-group trump-nlp-rg --name trump-speeches-nlp-chatbot

# View metrics
az monitor metrics list `
  --resource /subscriptions/{subscription-id}/resourceGroups/trump-nlp-rg/providers/Microsoft.Web/sites/trump-speeches-nlp-chatbot `
  --metric-names Requests,ResponseTime,Http5xx

# Open in Azure Portal
az webapp browse --resource-group trump-nlp-rg --name trump-speeches-nlp-chatbot

Updating the Deployment

Deployments are automatic on push to main. To manually update:

# Trigger GitHub Actions workflow manually
# Or restart the web app to pull latest image
az webapp restart --resource-group trump-nlp-rg --name trump-speeches-nlp-chatbot

CI/CD with GitHub Actions

This project includes automated CI/CD pipelines using GitHub Actions. The workflows are split into multiple files for better organization:

Workflow Files

All workflows are located in .github/workflows/:

  • python-tests.yml - Python Tests
    • Runs on: All pushes and PRs
    • Jobs: Unit tests with coverage (pytest)
    • Python versions tested: 3.11, 3.12
    • Coverage requirement: 60%

  • python-lint.yml - Python Linting
    • Runs on: All pushes and PRs affecting Python files
    • Jobs: Ruff linting and format checking
    • Python versions tested: 3.11, 3.12

  • python-typecheck.yml - Type Checking
    • Runs on: All pushes and PRs affecting Python files
    • Jobs: Mypy type checking
    • Python versions tested: 3.11, 3.12
    • Status: Allowed to fail (informational)

  • security-audit.yml - Security Scans
    • Runs on: All pushes and PRs, plus weekly schedule (Mondays 9 AM UTC)
    • Jobs: Bandit security scan, pip-audit dependency vulnerabilities
    • Status: Allowed to fail (informational)

  • markdown-lint.yml - Documentation Linting
    • Runs on: All pushes and PRs affecting markdown files
    • Jobs: pymarkdownlnt documentation quality checks
    • Status: Allowed to fail (informational)

  • deploy-render.yml - Render Deployment
    • Runs on: Push to main branch only
    • Jobs: Build Docker image, push to Docker Hub, test container
    • Render auto-deploys when new images are detected

  • deploy-azure.yml - Azure Deployment
    • Runs on: Push to main branch (manual trigger also available)
    • Jobs: Build & push to ACR/Docker Hub, deploy to Azure Web App, health check
    • Supports both ACR and Docker Hub registries

  • deploy-docs.yml - Documentation Deployment
    • Runs on: Push to main affecting docs
    • Jobs: Build MkDocs site, deploy to GitHub Pages

  • build-push-docker.yml - Docker Build
    • Runs on: Push to main branch
    • Jobs: Build and push Docker images to registry

Deployment Architecture

┌─────────────────────────────────────────────────────────────┐
│                     Push to main branch                     │
└────────────┬────────────────────────────────────────────────┘
             ├──────────────────┐
             │                  │
             ▼                  ▼
    ┌────────────────┐   ┌──────────────────┐
    │  Build Docker  │   │  Build Docker    │
    │  Image         │   │  Image           │
    └────────┬───────┘   └────────┬─────────┘
             │                    │
             ▼                    ▼
    ┌────────────────┐   ┌──────────────────┐
    │  Push to       │   │  Push to ACR     │
    │  Docker Hub    │   │  (or Docker Hub) │
    └────────┬───────┘   └────────┬─────────┘
             │                    │
             ▼                    ▼
    ┌────────────────┐   ┌──────────────────┐
    │  Render        │   │  Azure Web App   │
    │  Auto-Deploy   │   │  Deploy          │
    └────────────────┘   └──────────────────┘

Setting Up GitHub Secrets

Required for All Deployments

  1. DOCKERHUB_USERNAME - Your Docker Hub username
  2. DOCKERHUB_TOKEN - Docker Hub access token
    • Create at: Docker Hub → Account Settings → Security → New Access Token

Required for Azure Deployment

  1. AZURE_CREDENTIALS - Service principal credentials (JSON format)

az ad sp create-for-rbac --name "github-actions-trump-nlp" --sdk-auth --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/trump-nlp-rg

  2. AZURE_WEBAPP_NAME - Your Azure Web App name (e.g., trump-speeches-nlp-chatbot)

Required for Azure Container Registry (Optional)

  1. AZURE_REGISTRY_LOGIN_SERVER - ACR login server (e.g., trumpnlpacr.azurecr.io)
  2. AZURE_REGISTRY_USERNAME - ACR admin username
  3. AZURE_REGISTRY_PASSWORD - ACR admin password

Get ACR credentials:

az acr credential show --name trumpnlpacr

Workflow Triggers

| Workflow | Automatic Trigger | Manual Trigger | When to Use |
| --- | --- | --- | --- |
| CI | Push/PR to main, develop, feature/* | ✅ Yes | Testing code changes |
| Security | Push/PR + Weekly (Mon 9AM) | ✅ Yes | Regular security audits |
| Deploy Render | Push to main | ✅ Yes | Deploy to Render |
| Deploy Azure | Push to main | ✅ Yes (with options) | Deploy to Azure |

Manual Deployment Triggers

For Azure:

  1. Go to GitHub → Actions → "Deploy to Azure"
  2. Click "Run workflow"
  3. Select options:
    • Environment: production/staging
    • Use Docker Hub: true/false (ACR is default)
  4. Click "Run workflow"

For Render:

  1. Go to GitHub → Actions → "Deploy to Render"
  2. Click "Run workflow"
  3. Click "Run workflow" to confirm

Viewing Workflow Results

  • Navigate to your repository → Actions tab
  • Click on any workflow run to see detailed logs
  • Failed workflows will show which step failed and why
  • Deployment summaries are available in each workflow run

Best Practices

  1. Always test locally before pushing to main
  2. Use feature branches for development
  3. Review CI results before merging PRs
  4. Monitor deployments after pushing to main
  5. Check security scans weekly
  6. Use manual triggers for testing deployment changes

Environment Variables

Required Variables

| Variable | Description |
| --- | --- |
| GEMINI_API_KEY | Google Gemini API key (required) |
| PORT | Port to run the application - Default: 8000 |
| PYTHONUNBUFFERED | Disable Python output buffering - Default: 1 |

LLM Provider Configuration

| Variable | Description |
| --- | --- |
| LLM_PROVIDER | LLM provider (gemini/openai/anthropic) - Default: gemini |
| LLM_API_KEY | API key for the selected provider (defaults to GEMINI_API_KEY) |
| LLM_MODEL_NAME | Model name for the LLM - Default: gemini-2.0-flash-exp |
| LLM_TEMPERATURE | Temperature for LLM responses - Default: 0.7 |
| LLM_MAX_OUTPUT_TOKENS | Max tokens for LLM responses - Default: 2048 |
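The fallback behaviour described above (LLM_API_KEY defaulting to GEMINI_API_KEY) can be sketched as follows. This is an illustration of the resolution rules, not the app's actual settings loader; the function name and return shape are made up for the example:

```python
import os

def resolve_llm_config(env=None):
    """Illustrate the fallback rules for the LLM provider variables (sketch only)."""
    env = os.environ if env is None else env
    return {
        "provider": env.get("LLM_PROVIDER", "gemini"),
        # LLM_API_KEY falls back to GEMINI_API_KEY when unset
        "api_key": env.get("LLM_API_KEY") or env.get("GEMINI_API_KEY", ""),
        "model": env.get("LLM_MODEL_NAME", "gemini-2.0-flash-exp"),
        "temperature": float(env.get("LLM_TEMPERATURE", "0.7")),
        "max_output_tokens": int(env.get("LLM_MAX_OUTPUT_TOKENS", "2048")),
    }
```

With only GEMINI_API_KEY set, the resolved api_key is the Gemini key; setting LLM_API_KEY overrides it for any provider.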

Optional Variables

| Variable | Description | Default |
| --- | --- | --- |
| ENVIRONMENT | Environment (production/staging/development) | development |
| LOG_LEVEL | Logging level (INFO/DEBUG/WARNING/ERROR) | INFO |
| PYTHON_VERSION | Python version (for Render) | 3.12.0 |
| WEBSITES_PORT | Port for Azure App Service | 8000 |

RAG-Specific Configuration

| Variable | Description |
| --- | --- |
| CHROMADB_PERSIST_DIR | Directory for ChromaDB persistence - Default: ./data/chromadb |
| RAG_COLLECTION_NAME | Name of the vector collection - Default: speeches |
| RAG_CHUNK_SIZE | Text chunk size for embeddings - Default: 2048 |
| RAG_CHUNK_OVERLAP | Overlap between chunks - Default: 150 |
| EMBEDDING_MODEL | sentence-transformers model - Default: all-mpnet-base-v2 |

Note: ChromaDB data is persisted to disk. Ensure the persist directory is included in volume mounts for Docker deployments to maintain indexed documents across restarts.
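To see how RAG_CHUNK_SIZE and RAG_CHUNK_OVERLAP interact, here is a minimal character-based splitter. The app's real splitter may differ (e.g. it could be sentence-aware), so treat this purely as an illustration of the two parameters:

```python
def chunk_text(text: str, chunk_size: int = 2048, overlap: int = 150) -> list[str]:
    """Split text into overlapping chunks, mirroring RAG_CHUNK_SIZE / RAG_CHUNK_OVERLAP."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    step = chunk_size - overlap  # each chunk re-covers the last `overlap` chars
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

With the defaults, consecutive chunks share their last/first 150 characters, which keeps sentences that straddle a chunk boundary retrievable from both sides.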


Performance Tips

For Free Tiers

  1. Render Free Tier:
  • Apps sleep after 15 min of inactivity
  • Use a health check service like cron-job.org to keep it awake
  • Expect 30-60s cold start time
  • RAG Impact: First request will also load the embedding model (~80MB) + index documents

  2. Azure Free Tier (F1):
  • Limited to 60 CPU minutes/day
  • 1 GB RAM limit
  • Consider B1 tier ($13/month) for better performance
  • RAG Requirements: Minimum 2GB RAM recommended with RAG enabled

Optimizing Docker Image

  • Use multi-stage builds (already implemented)
  • Remove unnecessary data files
  • Use .dockerignore to exclude notebooks and docs
  • RAG Models: Pre-download models during build to reduce startup time

RAG Performance Considerations

Memory Requirements:

  • Without RAG: ~1.5GB RAM
  • With RAG: ~2.5GB RAM (includes embeddings model + vector store)
  • Recommended: 4GB RAM for production with concurrent users

Storage Requirements:

  • Base application: ~1GB
  • ChromaDB index: ~50-100MB per 10,000 chunks
  • Embedding model: ~80MB

Startup Time:

  • Without RAG: 10-15 seconds
  • With RAG: 30-60 seconds (model loading + document indexing)
  • Tip: Documents are auto-indexed on first startup if collection is empty
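The "auto-indexed on first startup if collection is empty" behaviour boils down to a check like the sketch below. `ensure_indexed` and `load_documents` are hypothetical names for this example; the collection is assumed to expose ChromaDB-style count() and add():

```python
def ensure_indexed(collection, load_documents):
    """Index documents only when the vector collection is empty (startup sketch).

    `collection` is assumed to expose ChromaDB-style count() and add();
    `load_documents` yields (doc_id, text) pairs. Returns how many were indexed.
    """
    if collection.count() > 0:
        return 0  # already indexed; skip the expensive startup step
    docs = list(load_documents())
    collection.add(
        ids=[doc_id for doc_id, _ in docs],
        documents=[text for _, text in docs],
    )
    return len(docs)
```

This is why a persistent ChromaDB volume matters: with the index on disk, restarts hit the fast path and skip re-indexing.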

Query Performance:

  • Semantic search: ~50-200ms per query
  • Question answering: ~200-500ms (includes search + answer generation)
  • Tip: Results improve with more indexed documents

Troubleshooting

Environment Variable Parsing Errors in Docker

Error:

pydantic_core._pydantic_core.ValidationError: 13 validation errors for Settings
  environment
    Input should be 'development', 'staging' or 'production' [type=literal_error, input_value='"development"  # develop...', input_type=str]
  log_level
    Value error, Invalid log level. Must be one of: DEBUG, INFO, WARNING, ERROR, CRITICAL [type=value_error, input_value='"INFO"  # DEBUG, INFO, WARNING, ERROR, CRITICAL', input_type=str]
  ...

Root Cause: Your .env file has quoted values and/or inline comments. When python-dotenv loads the file, it reads the entire value, including quotes and comments, as a literal string. So "development" becomes a string that contains the quote characters, which doesn't match the expected literal value development.

Solution:

Edit your .env file and follow these rules:

# ✅ CORRECT Format

# Variables without quotes
ENVIRONMENT=development
LOG_LEVEL=INFO
API_PORT=8000

# Booleans as lowercase true/false (no quotes)
LLM_ENABLED=true
USE_RERANKING=true
API_RELOAD=false

# Numbers without quotes
CHUNK_SIZE=2048
GEMINI_TEMPERATURE=0.3

# Comments on separate lines only
# Get free key at https://ai.google.dev/
GEMINI_API_KEY=your-api-key-here

# Paths with spaces are fine (no quotes needed)
SPEECHES_DIRECTORY=./data/Donald Trump Rally Speeches

# ❌ INCORRECT Format (DO NOT USE)

# Quoted values - Pydantic will see the quotes as part of the value
ENVIRONMENT="development"
LOG_LEVEL="INFO"

# Inline comments - Will be read as part of the value
CHUNK_SIZE="2048"  # Maximum characters per chunk
USE_RERANKING="true"  # Enable reranking

# Booleans with quotes - Can't parse as boolean
LLM_ENABLED="true"

Key Rules for .env files:

  • ❌ No quotes around values (not needed and causes parsing errors)
  • ❌ No inline comments (use separate lines or comment blocks)
  • ✅ Booleans as true or false (lowercase, no quotes)
  • ✅ Numbers without quotes
  • ✅ Strings can contain spaces without quotes
  • ✅ Comments on their own lines with # prefix
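A quick heuristic check for the problem patterns above can be scripted. `lint_env_line` is a hypothetical helper written for this guide, not part of the project:

```python
def lint_env_line(line: str) -> list[str]:
    """Flag .env patterns that break pydantic parsing (heuristic sketch)."""
    stripped = line.strip()
    if not stripped or stripped.startswith("#") or "=" not in stripped:
        return []  # blank lines and full-line comments are fine
    key, _, value = stripped.partition("=")
    problems = []
    if value.lstrip().startswith(('"', "'")):
        problems.append(f"{key}: quoted value")
    if "#" in value:
        problems.append(f"{key}: inline comment")
    return problems
```

Run each line of your .env through it; any non-empty result points at a line that pydantic will likely reject.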

Test your .env file:

After fixing, verify it parses correctly:

# Run without Docker to test locally first
uv run uvicorn speech_nlp.app:app --reload

# Or test the config in Python
uv run python -c "from speech_nlp.config.settings import get_settings; s = get_settings(); print(f'Environment: {s.environment}, Log Level: {s.log_level}')"

Then try Docker again:

docker run --rm -it -p 8000:8000 --env-file .env --name nlp-chatbot trump-speeches-nlp-chatbot

Docker Build Fails

# Clean Docker cache and rebuild
docker system prune -a
docker build --no-cache -t trump-speeches-nlp-chatbot .

API Returns 503

  • Model may still be loading (can take 30-60s on first request)
  • Check logs for errors
  • Ensure sufficient memory (minimum 1GB recommended)
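A client-side retry loop helps ride out the 503s while models load. This is a sketch: `send` is an injected callable returning (status_code, body) so it works with any HTTP client; tune the attempt count and delays for your setup:

```python
import time

def retry_request(send, attempts: int = 5, base_delay: float = 2.0):
    """Retry while the service answers 503 (models still loading).

    `send` is any callable returning (status_code, body). Delays grow
    exponentially: base_delay, 2x, 4x, ... Sketch only, not a full client.
    """
    status, body = send()
    for attempt in range(1, attempts):
        if status != 503:
            break
        time.sleep(base_delay * 2 ** (attempt - 1))
        status, body = send()
    return status, body
```

With the defaults this waits up to 2+4+8+16 seconds, which comfortably covers the 30-60s model-loading window when combined with a larger `attempts`.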

Azure Deployment Issues

# Check app logs
az webapp log tail --resource-group trump-nlp-rg --name trump-speeches-nlp-chatbot

# Restart the app
az webapp restart --resource-group trump-nlp-rg --name trump-speeches-nlp-chatbot

Render Deployment Issues

  • Check build logs in Render dashboard
  • Verify .render/render.yaml configuration
  • Ensure all dependencies are in pyproject.toml

Cost Comparison

| Platform | Free Tier | Paid Tier | Best For |
| --- | --- | --- | --- |
| Render | ✅ Yes (with limitations) | $7/month starter | Quick demos, small projects |
| Azure | ✅ Yes (F1: 60 min/day) | $13/month (B1) | Enterprise, Azure ecosystem |
| Railway | ✅ $5 free credit | Pay-as-you-go | Simple projects |
| Fly.io | ✅ Free allowance | Pay-as-you-go | Global deployment |

Next Steps

After deployment:

  1. Test the API: Use the /health endpoint to verify
  2. Update README: Add your live demo link
  3. Monitor performance: Check logs and metrics
  4. Set up alerts: Configure uptime monitoring
  5. Add custom domain: (optional) Configure DNS

Support

For issues or questions:

  • Check the main README
  • Review API documentation at /docs
  • Open an issue on GitHub