Architecture

Verity is built as a distributed, event-driven microservices platform designed for enterprise-grade identity security. This section provides a comprehensive technical reference for engineers building, operating, and extending the platform.


High-Level Architecture

graph TB
    subgraph Ingest["① Ingest Plane"]
        C1[Azure AD Connector]
        C2[Fabric Connector]
        C3[Synapse Connector]
        C4[Databricks Connector]
        C5[PostgreSQL Connector]
        C6[HR Connector]
    end

    subgraph Normalise["② Normalise Plane"]
        IW[Ingest Worker]
        IR[Identity Resolver]
        AC[Asset Classifier]
    end

    subgraph Score["③ Score Plane"]
        DE[Decay Engine]
    end

    subgraph Review["④ Review Plane"]
        RG[Review Generator]
        WE[Workflow Engine]
    end

    subgraph Remediate["⑤ Remediate Plane"]
        RS[Remediation Service]
        AW[Audit Writer]
    end

    subgraph Infrastructure
        PG[(PostgreSQL/<br/>TimescaleDB)]
        CH[(ClickHouse)]
        KF[Apache Kafka]
        RD[(Redis)]
        TP[Temporal]
    end

    subgraph Interface
        API[API Gateway]
        UI[Dashboard UI]
        CR[Compliance Reporter]
    end

    C1 & C2 & C3 & C4 & C5 & C6 --> KF
    KF --> IW --> IR & AC
    IR & AC --> KF
    KF --> DE --> PG
    DE --> KF
    KF --> RG --> WE
    WE --> TP
    KF --> RS --> AW --> CH
    API --> PG & CH & RD
    UI --> API
    CR --> CH

    style Ingest fill:#7c4dff,color:#fff,stroke:none
    style Normalise fill:#651fff,color:#fff,stroke:none
    style Score fill:#536dfe,color:#fff,stroke:none
    style Review fill:#448aff,color:#fff,stroke:none
    style Remediate fill:#40c4ff,color:#000,stroke:none
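
Every edge into or out of Kafka in the diagram above carries a serialised event. A minimal sketch of what such a plane-to-plane event envelope might look like follows; the field names, event types, and JSON encoding here are illustrative assumptions, not Verity's actual wire schema:

```python
import json
import uuid
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class PlaneEvent:
    """Hypothetical envelope for events exchanged between processing planes."""

    event_type: str  # e.g. "identity.resolved", "score.decayed" (assumed names)
    payload: dict
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    produced_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialise for publishing to a Kafka topic."""
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, raw: str) -> "PlaneEvent":
        """Deserialise on the consuming side."""
        return cls(**json.loads(raw))


# Round-trip: what the Identity Resolver might publish and the Decay Engine consume.
evt = PlaneEvent(event_type="identity.resolved", payload={"identity_id": "u-123"})
same = PlaneEvent.from_json(evt.to_json())
assert same.event_id == evt.event_id
```

A stable `event_id` on every envelope is what makes the deduplication described under Design Principles possible on the consuming side.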

Documentation

  • System Overview: High-level architecture, 19-service inventory, technology stack, and the five processing planes explained.

  • Data Flow: End-to-end data pipeline, Kafka topic catalogue, workflow state machine, and dead-letter queue patterns.

  • Database Schema: PostgreSQL/TimescaleDB operational schema, ClickHouse audit schema, indexes, and retention policies.

  • Security Model: Authentication, RBAC, network policies, secrets management, encryption, and compliance controls.

Design Principles

  • Event-Driven: Every state change flows through Kafka; services are loosely coupled consumers and producers.
  • Separation of Planes: Ingest → Normalise → Score → Review → Remediate; each plane scales independently.
  • Immutable Audit: All actions are appended to ClickHouse with 7-year retention for regulatory compliance.
  • Defence in Depth: OAuth 2.0/OIDC, RBAC, Kubernetes NetworkPolicies, and encryption of data at rest and in transit.
  • Observable by Default: Prometheus metrics, structured JSON logging, and distributed tracing on every service.
  • Idempotent Operations: Every service handles duplicates gracefully; at-least-once Kafka delivery with deduplication.
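
The idempotency principle (at-least-once Kafka delivery plus consumer-side deduplication) can be sketched as a consumer that skips any event ID it has already processed. This is a minimal in-memory illustration; in production the seen-set would live in Redis or PostgreSQL so it survives restarts, and the names below are assumptions:

```python
from typing import Callable


def process_once(event_id: str, seen: set[str], handler: Callable[[str], None]) -> bool:
    """Run handler for an event only if its ID has not been processed before.

    Returns True if the handler ran, False if the event was a duplicate.
    With at-least-once delivery the broker may redeliver the same event;
    this check makes the consumer's *effect* happen once per event ID.
    """
    if event_id in seen:
        return False  # duplicate delivery: acknowledge and skip
    handler(event_id)
    seen.add(event_id)  # mark after the side effect: a crash in between
    # triggers a redelivery retry rather than silently losing the event
    return True


# Simulate a redelivered Kafka message.
handled: list[str] = []
seen: set[str] = set()
process_once("evt-1", seen, handled.append)  # processed
process_once("evt-1", seen, handled.append)  # duplicate, ignored
assert handled == ["evt-1"]
```

Marking the event as seen only after the handler succeeds keeps the failure mode on the safe side: a crash mid-processing leads to a retry, which the idempotent handler tolerates.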

Technology Stack

  • Language: Python 3.12, TypeScript (backend microservices, frontend SPA)
  • Framework: FastAPI, React (HTTP APIs, dashboard UI)
  • Database: PostgreSQL + TimescaleDB (operational data + time-series scores)
  • Analytics: ClickHouse (immutable audit trail, compliance queries)
  • Messaging: Apache Kafka (event streaming between planes)
  • Cache: Redis (score caching, rate limiting, session store)
  • Workflow: Temporal (review lifecycle, SLA tracking, escalation)
  • Orchestration: Kubernetes + Helm (production deployment)
  • CI/CD: GitHub Actions (lint, test, build, deploy pipeline)
  • Monitoring: Prometheus + Grafana (metrics, alerting, dashboards)