Deployment Models

Verity supports three deployment models designed to cover the full spectrum from local development to globally distributed enterprise environments. This guide compares each model and helps you choose the right one.


Overview

```mermaid
graph LR
    DC["Docker Compose<br/><b>Development</b>"]
    K8S["Kubernetes + Helm<br/><b>Production</b>"]
    HYB["Hybrid Edge + Cloud<br/><b>Distributed</b>"]

    DC -->|"Promote"| K8S
    K8S -->|"Extend"| HYB

    style DC fill:#6366f1,color:#fff,stroke:none
    style K8S fill:#10b981,color:#fff,stroke:none
    style HYB fill:#f59e0b,color:#000,stroke:none
```
|  | Docker Compose | Kubernetes + Helm | Hybrid (Edge + Cloud) |
|---|---|---|---|
| Target | Local dev, demos, testing | Staging & production | Distributed enterprises |
| Services | All 19 on one machine | All 19 across cluster nodes | Split: connectors at edge, core in cloud |
| Scaling | Vertical only | Horizontal (HPA) | Horizontal + geographic |
| Infrastructure | Local containers | Managed cloud services | Mixed on-prem + cloud |
| High Availability | None | Multi-replica, pod disruption budgets | Per-region HA |
| Data Residency | Single machine | Single region | Multi-region capable |
| Management | docker compose CLI | Helm + GitOps | Helm + fleet management |
| Setup Time | ~5 minutes | ~30 minutes | ~2 hours |

Model 1 — Docker Compose (Development)

A single-machine deployment that runs all 19 Verity microservices alongside infrastructure dependencies. Ideal for local development, demos, and integration testing.

When to Use

  • Local feature development and debugging
  • Running the full integration test suite
  • Customer or stakeholder demos
  • Evaluating the platform before committing to production infrastructure

Architecture

```mermaid
graph TB
    subgraph Host["Developer Machine"]
        subgraph Infra["Infrastructure Containers"]
            PG["PostgreSQL<br/>(TimescaleDB)<br/>:5432"]
            CH["ClickHouse<br/>:8123"]
            KAFKA["Kafka (KRaft)<br/>:9092"]
            REDIS["Redis<br/>:6379"]
            TEMPORAL["Temporal<br/>:7233"]
            TEMPORAL_UI["Temporal UI<br/>:8233"]
        end

        subgraph App["Application Containers"]
            API["API Gateway :8000"]
            UI["Dashboard UI :3000"]
            CONNECTORS["Connectors ×6"]
            INGESTION["Ingestion Pipeline ×3"]
            ANALYTICS["Analytics ×3"]
            DECISION["Decision ×2"]
            REMEDIATION["Remediation"]
            AUDIT["Audit ×2"]
        end
    end

    UI --> API
    API --> PG & REDIS
    CONNECTORS --> KAFKA & PG
    INGESTION --> KAFKA & PG & REDIS
    ANALYTICS --> PG & KAFKA & REDIS
    DECISION --> PG & TEMPORAL & KAFKA
    REMEDIATION --> PG & KAFKA
    AUDIT --> KAFKA & CH

    style Host fill:#1e1e2e,color:#cdd6f4,stroke:#6366f1
    style Infra fill:#2a2a3e,color:#cdd6f4,stroke:#6366f1
    style App fill:#2a2a3e,color:#cdd6f4,stroke:#6366f1
```

Infrastructure Requirements

| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8 cores |
| RAM | 8 GB | 16 GB |
| Storage | 10 GB | 20 GB (SSD) |
| Docker | v24+ | Latest stable |
| Docker Compose | v2.20+ | Latest stable |

Quick Start

```bash
# Clone and start
git clone https://github.com/mjtpena/verity.git
cd verity
cp .env.example .env

# Start all services
docker compose up -d --wait

# Load sample data
docker compose --profile tools run --rm seed-data

# Open the dashboard
open http://localhost:3000
```

Selective startup

Use Docker Compose profiles to start only the services you need:

```bash
# Core only (no connectors)
docker compose up -d

# With connectors
docker compose --profile connectors up -d
```
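Profiles are assigned per service in the compose file. A minimal sketch of how a connector can be gated behind the `connectors` profile (service and image names here are illustrative, not the actual Verity definitions):

```yaml
services:
  connector-ldap:              # illustrative service name
    image: verity/connector-ldap:latest
    profiles: ["connectors"]   # started only when --profile connectors is passed
  api-gateway:
    image: verity/api-gateway:latest
    # no profiles key: always started by a plain `docker compose up`
```

Services without a `profiles` key form the core set; profiled services start only when their profile is explicitly enabled.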

Detailed guide: Docker Compose Deployment


Model 2 — Kubernetes with Helm (Production)

A fully orchestrated deployment using the Verity Helm chart (infra/helm/verity/). Designed for staging and production environments on any conformant Kubernetes cluster, with first-class support for Azure Kubernetes Service (AKS).

When to Use

  • Staging and production environments
  • Workloads requiring horizontal auto-scaling
  • Environments with compliance or security requirements (network policies, RBAC, secrets management)
  • Teams operating with GitOps workflows (Flux, ArgoCD)

Architecture

```mermaid
graph TB
    subgraph Internet
        USERS["Users / API Clients"]
    end

    subgraph Azure["Azure Cloud"]
        subgraph AKS["Azure Kubernetes Service"]
            INGRESS["Ingress Controller<br/>(NGINX)"]

            subgraph AppPods["Application Pods"]
                API["API Gateway ×4"]
                UI["Dashboard UI ×3"]
                CONN["Connectors ×4"]
                ING["Ingestion ×4"]
                ANALYTICS["Analytics ×2"]
                DECAY["Decay Engine ×4"]
                DECISION["Decision ×3"]
                REMED["Remediation ×3"]
                AUDIT["Audit ×3"]
            end

            TEMPORAL_W["Temporal<br/>Workers"]
        end

        subgraph Managed["Managed Services"]
            AZ_PG["Azure Database<br/>for PostgreSQL"]
            AZ_EH["Azure Event Hubs<br/>(Kafka)"]
            AZ_REDIS["Azure Cache<br/>for Redis"]
            AZ_CH["ClickHouse Cloud"]
            AZ_KV["Azure Key Vault"]
        end

        subgraph Observability["Observability"]
            PROM["Prometheus"]
            GRAF["Grafana"]
            OTEL["OpenTelemetry<br/>Collector"]
        end
    end

    USERS --> INGRESS
    INGRESS --> API & UI
    API --> AZ_PG & AZ_REDIS
    CONN --> AZ_EH & AZ_PG
    ING --> AZ_EH & AZ_PG & AZ_REDIS
    ANALYTICS --> AZ_PG & AZ_EH & AZ_REDIS
    DECAY --> AZ_PG & AZ_EH & AZ_REDIS
    DECISION --> AZ_PG & AZ_EH & TEMPORAL_W
    REMED --> AZ_PG & TEMPORAL_W
    AUDIT --> AZ_EH & AZ_CH
    AppPods -.->|metrics| PROM
    AppPods -.->|traces| OTEL

    style AKS fill:#1e3a5f,color:#fff,stroke:#10b981
    style Managed fill:#1a2e1a,color:#fff,stroke:#10b981
    style Observability fill:#2a2a1a,color:#fff,stroke:#f59e0b
```

Infrastructure Requirements

| Resource | Staging | Production |
|---|---|---|
| Kubernetes | v1.28+ | v1.28+ |
| Nodes | 3 nodes (4 vCPU, 16 GB each) | 5+ nodes (8 vCPU, 32 GB each) |
| Node Pool CPU | 12 vCPU total | 40+ vCPU total |
| Node Pool RAM | 48 GB total | 160+ GB total |
| Storage | 100 GB (managed disks) | 500+ GB (Premium SSD) |
| PostgreSQL | General Purpose (2 vCores) | Memory Optimised (8+ vCores) |
| Redis | Basic C1 | Premium P1+ |
| Kafka / Event Hubs | Standard (2 TU) | Premium (4+ PU) |
| ClickHouse | 2 shards | 4+ shards |

Key Features

  • Horizontal Pod Autoscaler — CPU/memory-based scaling for all services
  • Network Policies — 21 least-privilege policies restricting service-to-service traffic
  • RBAC — Namespace-scoped roles with Azure Workload Identity
  • Secret Management — Azure Key Vault CSI driver integration
  • Observability — Prometheus rules, Grafana dashboards, OpenTelemetry traces
  • Rolling Updates — Zero-downtime deployments with configurable strategies
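As one example of the HPA behaviour above, a rendered HorizontalPodAutoscaler object takes roughly this shape (the Deployment name and thresholds here are illustrative; the chart's actual values keys govern the real settings):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: verity-api-gateway     # illustrative name
  namespace: verity
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-gateway          # assumed Deployment name
  minReplicas: 4
  maxReplicas: 12
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```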

Quick Start

```bash
# Connect to cluster
az aks get-credentials --resource-group rg-verity --name aks-verity

# Deploy to staging
helm upgrade --install verity infra/helm/verity/ \
  --namespace verity --create-namespace \
  --values infra/helm/verity/values.yaml \
  --values infra/helm/verity/values-staging.yaml \
  --wait --timeout 10m
```
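A staging overrides file typically reduces replica counts and points the ingress at a staging host. A hypothetical fragment (none of these keys are confirmed from the chart; check infra/helm/verity/values.yaml for the real schema):

```yaml
# values-staging.yaml — hypothetical keys, for illustration only
apiGateway:
  replicaCount: 2
ingress:
  enabled: true
  host: verity-staging.example.com   # placeholder host
resources:
  requests:
    cpu: 250m
    memory: 512Mi
```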

Detailed guide: Kubernetes & Helm Deployment


Model 3 — Hybrid (Edge + Cloud)

A distributed deployment where connectors run near data sources (on-premises or in satellite regions) while core scoring, review, and remediation services run in the cloud. Designed for enterprises with data residency requirements or geographically distributed identity systems.

When to Use

  • Data sources are spread across regions or on-premises data centres
  • Regulatory constraints prevent raw identity data from leaving certain jurisdictions
  • Network latency to centralised cloud services is unacceptable for real-time sync
  • Organisation operates a hub-and-spoke IT model

Architecture

```mermaid
graph TB
    subgraph Edge1["Edge Site — Region A (On-Premises)"]
        DS_A["Identity Sources<br/>(AD, HR, DBs)"]
        CONN_A["Connectors"]
        KAFKA_A["Kafka<br/>(Local Relay)"]
    end

    subgraph Edge2["Edge Site — Region B (Cloud Region)"]
        DS_B["Identity Sources<br/>(Azure AD, Synapse)"]
        CONN_B["Connectors"]
        KAFKA_B["Kafka<br/>(Local Relay)"]
    end

    subgraph Cloud["Central Cloud (AKS)"]
        KAFKA_C["Kafka<br/>(Central Broker)"]
        subgraph CoreServices["Core Services"]
            ING["Ingestion Pipeline"]
            DECAY["Decay Engine"]
            REVIEW["Review Generator"]
            WORKFLOW["Workflow Engine"]
            REMED["Remediation"]
            API["API Gateway"]
            UI["Dashboard UI"]
            AUDIT["Audit Writer"]
        end
        PG["PostgreSQL"]
        CH["ClickHouse"]
        REDIS["Redis"]
        TEMPORAL["Temporal"]
    end

    DS_A --> CONN_A --> KAFKA_A
    DS_B --> CONN_B --> KAFKA_B
    KAFKA_A -->|"MirrorMaker /<br/>Event Hubs"| KAFKA_C
    KAFKA_B -->|"MirrorMaker /<br/>Event Hubs"| KAFKA_C
    KAFKA_C --> ING
    ING --> PG & REDIS
    DECAY --> PG & KAFKA_C & REDIS
    REVIEW --> PG & TEMPORAL
    WORKFLOW --> PG & TEMPORAL
    REMED --> PG & KAFKA_C
    AUDIT --> KAFKA_C & CH
    API --> PG & REDIS & CH

    style Edge1 fill:#3b1a1a,color:#fff,stroke:#ef4444
    style Edge2 fill:#1a2e3b,color:#fff,stroke:#3b82f6
    style Cloud fill:#1a2e1a,color:#fff,stroke:#10b981
```

Infrastructure Requirements

| Component | Edge Site (each) | Central Cloud |
|---|---|---|
| Compute | 2 vCPU, 4 GB RAM | Same as Kubernetes Production |
| Network | 10 Mbps to cloud | Standard AKS networking |
| Kafka | 1 broker (relay) | Standard/Premium Event Hubs |
| Storage | 20 GB (local buffer) | Same as Kubernetes Production |
| Kubernetes | Optional (Docker Compose acceptable) | v1.28+ |

Data Flow

  1. Edge connectors poll local identity sources and produce normalised events to a local Kafka relay
  2. Kafka MirrorMaker 2 (or Azure Event Hubs geo-replication) replicates events to the central broker
  3. Central services process events identically to the standard Kubernetes model
  4. Remediation commands flow back through Kafka to edge connectors for execution against local systems
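Step 2 can be sketched with a MirrorMaker 2 properties file (the cluster aliases, broker addresses, and topic pattern are illustrative, not Verity's actual configuration):

```properties
# mm2.properties — illustrative sketch of edge -> central replication
clusters = edge-a, central
edge-a.bootstrap.servers = kafka-edge-a.example.internal:9092
central.bootstrap.servers = kafka-central.example.internal:9092

# replicate only Verity event topics from the edge relay to the central broker
edge-a->central.enabled = true
edge-a->central.topics = verity\..*
replication.factor = 3
```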

Key Considerations

Network Connectivity

Edge sites require reliable outbound connectivity to the central Kafka broker. Configure dead-letter queues and retry policies to handle intermittent disconnections.
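The retry-and-dead-letter pattern can be sketched as follows (a minimal in-process illustration; Verity's actual connector settings and Kafka client are not shown):

```python
import time


def publish_with_retry(send, event, dead_letter, max_attempts=5, base_delay=0.5):
    """Try to deliver an event to the central broker with exponential backoff.

    `send` is any callable that raises ConnectionError while the link is
    down; after `max_attempts` failures the event is parked in the
    `dead_letter` list for later replay instead of being dropped.
    """
    for attempt in range(max_attempts):
        try:
            send(event)
            return True
        except ConnectionError:
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...
    dead_letter.append(event)
    return False
```

On reconnection, a companion process can drain the dead-letter buffer in order, preserving the event sequence the central ingestion pipeline expects.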

Data Residency

Raw identity events can be filtered or anonymised at the edge before replication. Only normalised, pseudonymised events need to reach the central cloud. Configure connector-level filtering with CONNECTOR_EXPORT_FILTER rules.
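A sketch of edge-side filtering (the field names, and the idea of mapping them from CONNECTOR_EXPORT_FILTER rules, are illustrative rather than Verity's actual schema):

```python
import hashlib

DROP_FIELDS = {"email", "phone"}   # raw PII: never leaves the edge site
HASH_FIELDS = {"employee_id"}      # replaced by a stable pseudonym


def pseudonymise(event: dict, salt: str = "site-a") -> dict:
    """Return a copy of an identity event that is safe to replicate centrally."""
    out = {}
    for key, value in event.items():
        if key in DROP_FIELDS:
            continue  # dropped entirely at the edge
        if key in HASH_FIELDS:
            # salted hash gives a stable pseudonym without exposing the raw ID
            out[key] = hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out
```

Because the hash is deterministic per site, the central services can still correlate events for the same identity without ever seeing the raw identifier.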

Edge Deployment

Edge sites can run connectors via Docker Compose even when the central cloud uses Kubernetes:

```bash
# On edge server
docker compose --profile connectors \
  -f docker-compose.edge.yml up -d
```

Detailed guide: Contact your Verity solutions architect for hybrid deployment planning.


Choosing a Model

Use this decision flowchart to select the right deployment model for your environment:

```mermaid
flowchart TD
    START(["Which deployment<br/>model should I use?"]) --> Q1{"Is this for<br/>development or<br/>testing?"}

    Q1 -->|Yes| DC["✅ <b>Docker Compose</b><br/>Single machine, fast setup"]
    Q1 -->|No| Q2{"Are data sources<br/>distributed across<br/>regions or on-prem?"}

    Q2 -->|No| Q3{"Do you need<br/>horizontal scaling<br/>or HA?"}
    Q2 -->|Yes| Q4{"Do data residency<br/>rules prevent<br/>centralisation?"}

    Q3 -->|No| DC
    Q3 -->|Yes| K8S["✅ <b>Kubernetes + Helm</b><br/>Scalable, production-grade"]

    Q4 -->|No| K8S
    Q4 -->|Yes| HYB["✅ <b>Hybrid Edge + Cloud</b><br/>Distributed with data locality"]

    style DC fill:#6366f1,color:#fff,stroke:none
    style K8S fill:#10b981,color:#fff,stroke:none
    style HYB fill:#f59e0b,color:#000,stroke:none
    style START fill:#7c4dff,color:#fff,stroke:none
```

Summary Matrix

| Criterion | Docker Compose | Kubernetes + Helm | Hybrid |
|---|---|---|---|
| Setup complexity | Low | Medium | High |
| Operational overhead | Minimal | Moderate | Significant |
| Horizontal scaling | No | Yes | Yes |
| High availability | No | Yes | Yes |
| Data residency controls | No | Partial | Yes |
| Air-gapped support | No | With mirrored registry | Yes (edge) |
| Cost (small workload) | | | |
| Cost (large workload) | N/A | | |

Migrating Between Models

Docker Compose → Kubernetes

  1. Export your .env configuration
  2. Map environment variables to Helm values.yaml (see Configuration Reference)
  3. Provision managed infrastructure (PostgreSQL, Redis, Kafka, ClickHouse)
  4. Deploy with Helm using your values file
  5. Migrate data using pg_dump / pg_restore and ClickHouse backup tools

Configuration parity

Verity uses identical environment variable names across Docker Compose and Kubernetes. A .env file translates directly to a Kubernetes ConfigMap.
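That translation can be mechanised. A minimal sketch (the variable names in the example are placeholders, not Verity's real configuration keys):

```python
def env_to_configmap(env_text: str, name: str = "verity-config") -> str:
    """Render a .env file's KEY=VALUE pairs as a Kubernetes ConfigMap manifest."""
    lines = ["apiVersion: v1", "kind: ConfigMap",
             "metadata:", f"  name: {name}", "data:"]
    for raw in env_text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        lines.append(f'  {key.strip()}: "{value.strip()}"')
    return "\n".join(lines)
```

In practice `kubectl create configmap verity-config --from-env-file=.env --dry-run=client -o yaml` produces the same manifest without any custom code.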

Kubernetes → Hybrid

  1. Deploy the central cloud cluster as a standard Kubernetes deployment
  2. Provision Kafka MirrorMaker 2 or configure Azure Event Hubs geo-replication
  3. Deploy connector-only stacks at each edge site
  4. Configure edge connectors to produce to the local Kafka relay
  5. Validate end-to-end event flow from edge to central scoring

Next Steps