Getting Started with Verity¶
This tutorial teaches you how to use Verity by getting it running locally, then progressively walking you through the platform's capabilities. You'll stand up the infrastructure and seed sample data, then experience the full access‑decay lifecycle — from ingesting raw events to making a review decision.
Steps 1–4 give you a working Verity platform in under 15 minutes. Steps 5–8 explore the API, event pipeline, and review workflow.
| Step | What You'll Learn | Time |
|---|---|---|
| Step 1: Set Up Your Environment | Install prerequisites, configure Docker resources, clone the repo | 5 min |
| Step 2: Start Infrastructure | Launch Postgres, ClickHouse, Kafka, Redis, and Temporal; verify health | 3 min |
| Step 3: Start Application Services | Bring up all 19 microservices, verify each is healthy | 5 min |
| Step 4: Explore the Dashboard | Open the React UI, navigate the decay heatmap, review queue, and compliance views | 5 min |
| Step 5: Use the REST API | Make your first API calls — list principals, fetch scores, query reviews | 5 min |
| Step 6: Ingest Your First Events | Send raw access events via the API and watch them flow through the pipeline | 10 min |
| Step 7: Trigger a Review | Find a high‑decay grant, trigger manual review, inspect the evidence, make a decision | 5 min |
| Step 8: Clean Up | Tear down all containers and volumes | 1 min |
Architecture Overview¶
Before diving in, here's a high‑level view of what you're about to run:
graph TB
subgraph Infrastructure
PG["PostgreSQL / TimescaleDB :5432"]
CH["ClickHouse :8123"]
KF["Kafka :9092"]
RD["Redis :6379"]
TM["Temporal :7233"]
end
subgraph Ingestion
IR["Identity Resolver :8001"]
AC["Asset Classifier :8002"]
EE["Event Enricher"]
end
subgraph Analytics
DE["Decay Engine"]
PA["Peer Analyser"]
AD["Anomaly Detector"]
end
subgraph Decision
RG["Review Generator"]
WE["Workflow Engine"]
end
subgraph "Remediation & Audit"
RE["Remediation Executor :8005"]
AW["Audit Writer"]
CR["Compliance Reporter :8006"]
end
subgraph Frontend
API["API Gateway :8000"]
FE["Dashboard :3000"]
end
FE --> API
API --> PG
API --> CH
KF --> IR --> AC --> EE
EE --> DE --> PA --> AD
AD --> RG --> WE --> RE
RE --> AW
AW --> CH
Step 1: Set Up Your Environment¶
Prerequisites¶
Ensure the following tools are installed:
| Tool | Minimum Version | Check Command |
|---|---|---|
| Docker | 24.0+ | docker --version |
| Docker Compose | v2.20+ | docker compose version |
| Git | 2.40+ | git --version |
| curl | any | curl --version |
| Python | 3.12+ (for JSON formatting) | python3 --version |
Run all checks at once:
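One way to run them in a single pass — this loop reports any missing tool instead of aborting:

```shell
# Check each prerequisite; report anything missing instead of aborting
for tool in docker git curl python3; do
  command -v "$tool" >/dev/null 2>&1 && "$tool" --version | head -1 || echo "missing: $tool"
done
docker compose version 2>/dev/null || echo "missing: docker compose (v2)"
```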
Expected output (versions may vary):
Docker version 27.1.1, build 6312585
Docker Compose version v2.29.1
git version 2.45.2
curl 8.7.1 (x86_64-apple-darwin23.0)
Configure Docker Resources¶
Required: Increase Docker Memory
Verity runs 6 infrastructure services and 19 microservices. You must allocate sufficient resources.
Docker Desktop → Settings → Resources:
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 6 cores |
| Memory | 8 GB | 12 GB |
| Disk | 20 GB | 40 GB |
Clone the Repository¶
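A sketch of the clone step — the URL below is a placeholder; substitute the actual Verity repository URL:

```shell
# NOTE: placeholder URL -- replace with your actual Verity repository
git clone https://example.com/your-org/verity.git verity
cd verity
```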
Configure Environment Variables¶
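Create your local `.env` from the checked-in template (a standard compose-project pattern; this assumes the template is named `.env.example`, as described below):

```shell
# Create your local .env from the template (run from the repository root)
cp .env.example .env 2>/dev/null || echo "no .env.example found -- run this from the repo root"
```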
The defaults are tuned for local development — no changes required to get started.
What's inside .env.example?
The file contains connection strings and configuration for every service:
| Category | Variables |
|---|---|
| PostgreSQL | POSTGRES_HOST, POSTGRES_PORT, POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB |
| ClickHouse | CLICKHOUSE_HOST, CLICKHOUSE_PORT, CLICKHOUSE_DB |
| Kafka | KAFKA_BOOTSTRAP_SERVERS |
| Redis | REDIS_URL |
| Temporal | TEMPORAL_HOST, TEMPORAL_NAMESPACE |
| Auth | AUTH_DISABLED=true (disabled for local dev) |
| Azure AD | AZURE_TENANT_ID, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET |
See the Configuration reference for details on every variable.
✅ Checkpoint: You have Docker running with sufficient resources, the Verity repository cloned, and a .env file ready. You haven't started any containers yet.
Troubleshooting
- `docker compose version` not found? Ensure you have Docker Compose v2 (included with Docker Desktop 4.x+). The legacy `docker-compose` (with hyphen) is not supported.
- macOS: Docker Desktop using too much memory? Close other memory‑intensive apps before continuing, or increase the swap size in Docker Desktop settings.
- Windows: the WSL2 backend is recommended; the Hyper‑V backend may have slower file‑system performance.
Step 2: Start Infrastructure¶
Start only the infrastructure services first. This ensures databases and message brokers are fully healthy before application services try to connect.
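Assuming the compose service names match those shown in the output below:

```shell
# Start only the databases, broker, cache, and workflow engine
docker compose up -d postgres clickhouse kafka redis temporal temporal-ui
```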
Expected output:
[+] Running 7/7
✔ Network verity_default Created
✔ Container verity-postgres Started
✔ Container verity-clickhouse Started
✔ Container verity-kafka Started
✔ Container verity-redis Started
✔ Container verity-temporal Started
✔ Container verity-temporal-ui Started
Verify Health Checks¶
Wait for all infrastructure services to become healthy (this takes 30–60 seconds):
# Poll until all infrastructure containers report healthy
docker compose ps postgres clickhouse kafka redis temporal \
--format "table {{.Name}}\t{{.Status}}"
Expected output (all should show (healthy)):
NAME STATUS
verity-postgres Up 45 seconds (healthy)
verity-clickhouse Up 44 seconds (healthy)
verity-kafka Up 44 seconds (healthy)
verity-redis Up 44 seconds (healthy)
verity-temporal Up 43 seconds (healthy)
Verify Connectivity¶
Run quick connectivity checks against each service:
# PostgreSQL
docker compose exec postgres pg_isready -U verity
# ClickHouse
curl -s http://localhost:8123/ping
# Kafka
docker compose exec kafka kafka-broker-api-versions.sh \
--bootstrap-server localhost:9092 2>&1 | head -1
# Redis
docker compose exec redis redis-cli ping
Expected responses:
/var/run/postgresql:5432 - accepting connections # PostgreSQL
Ok. # ClickHouse
ApiVersion(...) # Kafka (any version info)
PONG # Redis
Initialize the Database¶
The init-db container runs automatically and creates schemas in both PostgreSQL and ClickHouse:
Verify it completed:
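One way to check, assuming the one-off compose service is named `init-db`:

```shell
# Show the last few log lines from the schema-initialisation container
docker compose logs init-db --tail 5
```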
Expected output:
verity-init-db | PostgreSQL schema initialized successfully
verity-init-db | ClickHouse schema initialized successfully
Seed Sample Data¶
Load realistic sample data for development and exploration:
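Assuming the seeder runs as a one-off compose service named `seed-data` (the same name the troubleshooting notes use):

```shell
# Run the seeder once and remove its container afterwards
docker compose run --rm seed-data
```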
This populates the database with:
| Data Type | Description | Example Range |
|---|---|---|
| Principals | Users, service accounts, groups | ~50 identities across departments |
| Assets | Databases, tables, workspaces | Fabric, Synapse, PostgreSQL platforms |
| Access Grants | Permission mappings | Fresh daily‑use to stale 6‑month‑old |
| Access Events | Historical audit trail | Varied activity patterns |
| Decay Scores | Pre‑computed scores | 5 (active) to 95 (nearly abandoned) |
✅ Checkpoint: All infrastructure services are healthy, the database schemas are initialised, and sample data is loaded. You're ready to start the application layer.
Troubleshooting
- Container stuck in `starting`? Check logs: `docker compose logs postgres`. Common cause: port 5432 is already in use by a local PostgreSQL installation.
- `init-db` failed? Re‑run it: `docker compose run --rm init-db`.
- Port conflicts? Ensure these ports are free: 5432, 6379, 8123, 9000, 9092, 7233, 8233. Check with `lsof -i :5432` (macOS/Linux).
- Kafka not ready? Kafka has a 30‑second start period. Wait and retry: `docker compose restart kafka`.
Step 3: Start Application Services¶
Now start the remaining application services, which depend on the infrastructure you just verified:
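With the infrastructure healthy, bring up everything else:

```shell
# Starts all remaining services defined in the compose file;
# already-running infrastructure containers are left untouched
docker compose up -d
```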
This brings up all 19 microservices alongside the already‑running infrastructure. Monitor startup:
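One way to monitor until everything is healthy (`watch` ships with most Linux distros and is available via `brew install watch` on macOS):

```shell
# Refresh the container status table every 5 seconds
watch -n 5 'docker compose ps --format "table {{.Name}}\t{{.Status}}"'
```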
Press Ctrl+C once all services show (healthy) status (typically 2–3 minutes).
Expected Service List¶
Run the following to see the complete list:
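A format string that matches the columns shown below:

```shell
# List every container with its health status and published ports
docker compose ps --format "table {{.Name}}\t{{.Status}}\t{{.Ports}}"
```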
You should see all services running:
NAME STATUS PORTS
verity-anomaly-detector Up 2 minutes (healthy)
verity-api-gateway Up 2 minutes (healthy) 0.0.0.0:8000->8000/tcp
verity-asset-classifier Up 2 minutes (healthy) 0.0.0.0:8002->8002/tcp
verity-audit-writer Up 2 minutes (healthy)
verity-clickhouse Up 5 minutes (healthy) 0.0.0.0:8123->8123/tcp, 0.0.0.0:9000->9000/tcp
verity-compliance-reporter Up 2 minutes (healthy) 0.0.0.0:8006->8006/tcp
verity-connector-aad Up 2 minutes (healthy)
verity-connector-databricks Up 2 minutes (healthy)
verity-connector-fabric Up 2 minutes (healthy)
verity-connector-hr Up 2 minutes (healthy)
verity-connector-postgres Up 2 minutes (healthy)
verity-connector-synapse Up 2 minutes (healthy)
verity-decay-engine Up 2 minutes (healthy)
verity-event-enricher Up 2 minutes (healthy)
verity-frontend Up 2 minutes (healthy) 0.0.0.0:3000->3000/tcp
verity-identity-resolver Up 2 minutes (healthy) 0.0.0.0:8001->8001/tcp
verity-kafka Up 5 minutes (healthy) 0.0.0.0:9092->9092/tcp
verity-peer-analyser Up 2 minutes (healthy)
verity-postgres Up 5 minutes (healthy) 0.0.0.0:5432->5432/tcp
verity-redis Up 5 minutes (healthy) 0.0.0.0:6379->6379/tcp
verity-remediation-executor Up 2 minutes (healthy) 0.0.0.0:8005->8005/tcp
verity-review-generator Up 2 minutes (healthy)
verity-temporal Up 5 minutes (healthy) 0.0.0.0:7233->7233/tcp
verity-temporal-ui Up 5 minutes (healthy) 0.0.0.0:8233->8233/tcp
verity-workflow-engine Up 2 minutes (healthy)
Service Port Reference¶
| Service | Port | URL |
|---|---|---|
| API Gateway | 8000 | http://localhost:8000 |
| Dashboard | 3000 | http://localhost:3000 |
| Identity Resolver | 8001 | http://localhost:8001 |
| Asset Classifier | 8002 | http://localhost:8002 |
| Remediation Executor | 8005 | http://localhost:8005 |
| Compliance Reporter | 8006 | http://localhost:8006 |
| Temporal UI | 8233 | http://localhost:8233 |
| PostgreSQL | 5432 | psql -h localhost -U verity -d verity |
| ClickHouse | 8123 | http://localhost:8123 |
| Kafka | 9092 | kafka-console-consumer.sh --bootstrap-server localhost:9092 |
| Redis | 6379 | redis-cli -h localhost |
Verify the API Gateway¶
The API Gateway is your primary interface. Confirm it's responding:
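A health-style check — the exact path is an assumption; if `/health` returns 404, consult http://localhost:8000/docs for the actual route:

```shell
# Hypothetical health endpoint on the API Gateway
curl -s http://localhost:8000/health | python3 -m json.tool
```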
You should receive a small JSON payload confirming the gateway is up.
Authentication is disabled in dev mode
The API Gateway runs with AUTH_DISABLED=true by default, which disables Azure AD authentication and returns a synthetic admin user. See Configuration for production auth setup.
✅ Checkpoint: All 19 microservices and 6 infrastructure services are running and healthy. The API Gateway is responding on port 8000. You have a fully operational Verity platform.
Troubleshooting
- Services keep restarting? Check logs: `docker compose logs <service-name> --tail 30`. Common cause: infrastructure wasn't healthy before app services started. Fix: `docker compose down && docker compose up -d postgres clickhouse kafka redis temporal temporal-ui`, wait for healthy, then `docker compose up -d`.
- API Gateway returns 502? It may still be starting. Wait 30 seconds and retry.
- Out of memory? Run `docker stats --no-stream` to check memory usage. Increase Docker Desktop's memory allocation if containers are being OOM‑killed.
- Temporal UI not loading? The Temporal UI depends on the Temporal server. Verify: `docker compose logs temporal --tail 10`.
Step 4: Explore the Dashboard¶
Open the Verity dashboard in your browser:
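From a terminal (or simply paste the URL into your browser):

```shell
open http://localhost:3000        # macOS
xdg-open http://localhost:3000    # Linux
```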
Main Dashboard¶
The landing page gives you an organisation‑wide overview of access health:
- Decay Heatmap — A colour‑coded grid showing decay scores across all principals and assets. Red cells indicate stale, unused access that should be reviewed; green cells indicate actively‑used permissions.
- Score Distribution — A histogram showing how decay scores are distributed across all active grants. A healthy organisation skews left (low scores); a right‑skewed distribution indicates widespread access sprawl.
- Platform Breakdown — Connector status and event volume by platform (Fabric, Synapse, PostgreSQL, Databricks), showing which connectors are actively ingesting events.
Review Queue¶
Navigate to Reviews in the sidebar to see:
- Pending reviews — Review packets awaiting data‑owner decisions
- Review details — Each review includes the grant, decay score, last‑used date, peer comparison, and anomaly flags
- Decision actions — Approve (keep access), Revoke (remove access), or Reassign (transfer to another reviewer)
Compliance Dashboard¶
Navigate to Compliance in the sidebar to explore:
- Compliance reports — Pre‑generated reports showing access posture over time
- Audit trail — Searchable log of all review decisions, remediations, and system events (backed by ClickHouse)
- Policy violations — Grants that exceed configured thresholds
Temporal Workflow UI¶
For deeper insight into the orchestration layer, open the Temporal UI:
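The UI is served on the Temporal UI port from the service table above:

```shell
open http://localhost:8233        # macOS
xdg-open http://localhost:8233    # Linux
```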
Here you can see active and completed workflows for review generation, remediation execution, and scheduled decay recalculations.
✅ Checkpoint: You've explored the three main areas of the Verity UI — the decay heatmap, the review queue, and the compliance dashboard. You've also seen the Temporal workflow UI. You understand what the platform visualises.
Step 5: Use the REST API¶
All Verity functionality is accessible through the REST API on port 8000. The API uses the /v1/ prefix for all endpoints.
Interactive API Docs
The API Gateway serves Swagger/OpenAPI documentation at http://localhost:8000/docs. Open it in your browser to explore all available endpoints interactively.
List Principals¶
Principals are the identities (users, service accounts, groups) whose access Verity tracks.
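For example, fetch the first three:

```shell
# List principals, capped at 3 results
curl -s 'http://localhost:8000/v1/principals?limit=3' | python3 -m json.tool
```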
Expected response:
{
"items": [
{
"id": "usr-001",
"display_name": "Alice Johnson",
"type": "user",
"department": "Engineering",
"platform": "azure_ad"
},
{
"id": "usr-002",
"display_name": "Bob Smith",
"type": "user",
"department": "Finance",
"platform": "azure_ad"
},
{
"id": "svc-001",
"display_name": "data-pipeline-prod",
"type": "service_account",
"department": "Platform",
"platform": "azure_ad"
}
],
"total": 50,
"limit": 3,
"offset": 0
}
List Assets¶
Assets are the resources (databases, tables, workspaces) that principals have access to.
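Assuming the assets endpoint follows the same query shape as the principals call:

```shell
# List assets, capped at 3 results
curl -s 'http://localhost:8000/v1/assets?limit=3' | python3 -m json.tool
```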
Expected response:
{
"items": [
{
"id": "ast-001",
"name": "sales_warehouse",
"type": "database",
"platform": "fabric",
"classification": "confidential"
},
{
"id": "ast-002",
"name": "customer_data",
"type": "table",
"platform": "synapse",
"classification": "pii"
},
{
"id": "ast-003",
"name": "analytics_workspace",
"type": "workspace",
"platform": "fabric",
"classification": "internal"
}
],
"total": 35,
"limit": 3,
"offset": 0
}
Fetch Decay Scores¶
Decay scores are the heart of Verity — they quantify how "stale" each access grant is. Higher scores mean the access hasn't been used recently and should be reviewed.
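Fetch a few scores — the `limit` parameter mirrors the earlier calls; default sort order may vary:

```shell
# List decay scores, capped at 5 results
curl -s 'http://localhost:8000/v1/scores?limit=5' | python3 -m json.tool
```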
Expected response:
{
"items": [
{
"principal_id": "usr-012",
"asset_id": "ast-007",
"score": 94.2,
"last_used": "2024-08-15T10:30:00Z",
"grant_type": "read_write",
"risk_level": "critical"
},
{
"principal_id": "svc-003",
"asset_id": "ast-015",
"score": 87.6,
"last_used": "2024-09-02T14:15:00Z",
"grant_type": "admin",
"risk_level": "high"
}
],
"total": 120,
"limit": 5,
"offset": 0
}
Understanding Decay Scores
| Score Range | Risk Level | Meaning |
|---|---|---|
| 0–20 | Low | Actively used — no action needed |
| 21–50 | Medium | Declining usage — monitor |
| 51–80 | High | Stale access — review recommended |
| 81–100 | Critical | Unused access — review urgently |
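The thresholds above can be captured in a tiny helper — a sketch in Python for local experimentation only (the API already returns `risk_level` for you):

```python
def risk_level(score: float) -> str:
    """Map a decay score (0-100) to the risk bands in the table above."""
    if not 0 <= score <= 100:
        raise ValueError(f"score out of range: {score}")
    if score <= 20:
        return "low"
    if score <= 50:
        return "medium"
    if score <= 80:
        return "high"
    return "critical"


print(risk_level(94.2))  # critical
print(risk_level(12.8))  # low
```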
List Reviews¶
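Assuming the reviews endpoint follows the same listing conventions as the others:

```shell
# List review packets, capped at 5 results
curl -s 'http://localhost:8000/v1/reviews?limit=5' | python3 -m json.tool
```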
View Audit Logs¶
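Assuming the audit endpoint is exposed at `/v1/audit` (a hypothetical path; check http://localhost:8000/docs for the exact route):

```shell
# List the most recent audit entries
curl -s 'http://localhost:8000/v1/audit?limit=10' | python3 -m json.tool
```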
✅ Checkpoint: You've successfully called the five core API endpoints — principals, assets, scores, reviews, and audit. You understand the data model: principals have grants to assets, each grant has a decay score, and high scores trigger reviews.
Troubleshooting
- `curl: (7) Failed to connect`? The API Gateway may not be running. Check: `docker compose ps api-gateway`.
- Empty `items` array? Seed data may not have loaded. Re‑run: `docker compose run --rm seed-data`.
- `401 Unauthorized`? Auth should be disabled in dev mode. Verify `AUTH_DISABLED=true` is set in your `.env` file and restart: `docker compose restart api-gateway`.
Step 6: Ingest Your First Events¶
This is the "aha moment" — you'll send raw access events into Verity and watch them flow through the full pipeline: resolve → classify → enrich → score → review.
Understanding the Event Pipeline¶
sequenceDiagram
participant You as You (curl)
participant API as API Gateway
participant KF as Kafka
participant IR as Identity Resolver
participant AC as Asset Classifier
participant EE as Event Enricher
participant DE as Decay Engine
participant RG as Review Generator
You->>API: POST /v1/events
API->>KF: Publish raw event
KF->>IR: Resolve identity
IR->>AC: Classify asset
AC->>EE: Enrich with context
EE->>DE: Recalculate decay score
DE->>RG: Trigger review (if threshold met)
Send a "Recent Access" Event¶
First, send an event showing a principal actively using an asset. This should lower the decay score for that grant:
curl -s -X POST http://localhost:8000/v1/events \
-H "Content-Type: application/json" \
-d '{
"principal_id": "usr-001",
"asset_id": "ast-001",
"action": "query",
"platform": "fabric",
"timestamp": "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'",
"metadata": {
"source_ip": "10.0.1.50",
"query_type": "SELECT",
"rows_returned": 1500
}
}' | python3 -m json.tool
The response echoes back a generated event ID and an accepted status; a rejected status means the principal or asset ID wasn't found (see Troubleshooting).
Watch the Pipeline Process the Event¶
Tail the logs from the ingestion and analytics services to see the event flow through the pipeline:
docker compose logs -f identity-resolver asset-classifier \
event-enricher decay-engine --since 1m --tail 0
You should see output like:
verity-identity-resolver | INFO: Resolved principal usr-001 → Alice Johnson (user, Engineering)
verity-asset-classifier | INFO: Classified asset ast-001 → sales_warehouse (database, confidential)
verity-event-enricher | INFO: Enriched event evt-a1b2c3d4 with peer context
verity-decay-engine | INFO: Recalculated score for usr-001/ast-001: 45.2 → 12.8
Press Ctrl+C to stop following logs.
Verify the Score Changed¶
Check the updated decay score for the principal/asset pair you just sent an event for:
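Assuming the scores endpoint accepts `principal_id`/`asset_id` filters (hypothetical parameters; verify in the API docs):

```shell
# Fetch the score for the grant we just exercised
curl -s 'http://localhost:8000/v1/scores?principal_id=usr-001&asset_id=ast-001' | python3 -m json.tool
```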
The score should have decreased (indicating recent activity):
{
"items": [
{
"principal_id": "usr-001",
"asset_id": "ast-001",
"score": 12.8,
"last_used": "2025-07-15T14:30:00Z",
"grant_type": "read_write",
"risk_level": "low"
}
]
}
Send a Batch of Events¶
Now send multiple events at once to simulate a burst of activity:
curl -s -X POST http://localhost:8000/v1/events/batch \
-H "Content-Type: application/json" \
-d '{
"events": [
{
"principal_id": "usr-005",
"asset_id": "ast-010",
"action": "query",
"platform": "synapse",
"timestamp": "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"
},
{
"principal_id": "usr-008",
"asset_id": "ast-003",
"action": "write",
"platform": "fabric",
"timestamp": "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"
},
{
"principal_id": "svc-002",
"asset_id": "ast-020",
"action": "admin_login",
"platform": "databricks",
"timestamp": "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"
}
]
}' | python3 -m json.tool
Or, using HTTPie instead of curl:
http POST http://localhost:8000/v1/events/batch \
events:='[
{"principal_id":"usr-005","asset_id":"ast-010","action":"query","platform":"synapse","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"},
{"principal_id":"usr-008","asset_id":"ast-003","action":"write","platform":"fabric","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"},
{"principal_id":"svc-002","asset_id":"ast-020","action":"admin_login","platform":"databricks","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"}
]'
The response reports how many of the submitted events were accepted.
View Score History¶
Check how a score has changed over time:
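Assuming a history sub-resource exists on the scores endpoint (hypothetical path; verify in the API docs):

```shell
# Fetch the score history for one principal/asset pair
curl -s 'http://localhost:8000/v1/scores/history?principal_id=usr-001&asset_id=ast-001' | python3 -m json.tool
```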
✅ Checkpoint: You've sent access events through the API and watched them flow through the full pipeline. You've seen how recent activity lowers decay scores and how the pipeline automatically processes events through identity resolution, asset classification, enrichment, and scoring.
Troubleshooting
- Event returns `status: rejected`? Check that `principal_id` and `asset_id` reference entities that exist in the seed data. List valid IDs with: `curl -s 'http://localhost:8000/v1/principals?limit=50' | python3 -m json.tool | grep id`.
- Score didn't change? The decay engine processes events asynchronously. Wait 10–15 seconds and query again. Check engine logs: `docker compose logs decay-engine --tail 20`.
- Kafka consumer lag? Check: `docker compose exec kafka kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --all-groups 2>&1 | head -20`.
Step 7: Trigger a Review¶
When a decay score crosses a threshold, Verity generates a review packet — an evidence bundle that helps a data owner decide whether to keep or revoke access. In this step, you'll find a high‑decay grant and trigger a review manually.
Find a High‑Decay Grant¶
Query for grants with critical decay scores:
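Assuming the scores endpoint supports a `risk_level` filter (a hypothetical parameter; alternatively, list scores and scan for values above 80):

```shell
# List grants in the critical band (score 81-100)
curl -s 'http://localhost:8000/v1/scores?risk_level=critical&limit=5' | python3 -m json.tool
```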
Pick a grant from the results. For this example, we'll use the first one returned (e.g., usr-012 / ast-007).
Trigger a Manual Review¶
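A plausible request shape — the exact route and body fields are assumptions; verify at http://localhost:8000/docs:

```shell
# Request a review for one principal/asset grant
curl -s -X POST http://localhost:8000/v1/reviews \
  -H "Content-Type: application/json" \
  -d '{"principal_id": "usr-012", "asset_id": "ast-007"}' | python3 -m json.tool
```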
Expected response:
{
"review_id": "rev-x1y2z3",
"status": "pending",
"principal_id": "usr-012",
"asset_id": "ast-007",
"decay_score": 94.2,
"created_at": "2025-07-15T14:35:00Z"
}
Inspect the Evidence Package¶
Each review includes a rich evidence package. Fetch the full review details:
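Substituting the `review_id` returned in the previous step:

```shell
# Fetch the full review, including its evidence package
curl -s http://localhost:8000/v1/reviews/rev-x1y2z3 | python3 -m json.tool
```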
The evidence package includes:
| Evidence | Description |
|---|---|
| Decay score | Current score and score history over time |
| Last used | When the principal last accessed the asset |
| Peer comparison | How this principal's usage compares to peers with similar access |
| Anomaly flags | Any unusual access patterns detected |
| Grant details | Permission type, when granted, by whom |
| Activity timeline | Recent access events for this grant |
Make a Review Decision¶
Submit a decision on the review — approve (keep access) or revoke (remove access):
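A plausible request shape — the `/decision` sub-route and field names are assumptions; verify in the API docs:

```shell
# Record a revoke decision on the review created above
curl -s -X POST http://localhost:8000/v1/reviews/rev-x1y2z3/decision \
  -H "Content-Type: application/json" \
  -d '{"decision": "revoke", "justification": "Access unused for months"}' | python3 -m json.tool
```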
Expected response:
{
"review_id": "rev-x1y2z3",
"status": "completed",
"decision": "revoke",
"decided_by": "admin (dev-mode)",
"decided_at": "2025-07-15T14:40:00Z"
}
Verify in the Audit Trail¶
The decision is recorded in the immutable audit log (stored in ClickHouse):
View the Workflow in Temporal¶
Open the Temporal UI to see the review workflow execution:
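The UI runs on port 8233, as listed in the service port reference:

```shell
open http://localhost:8233        # macOS
xdg-open http://localhost:8233    # Linux
```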
Search for the workflow associated with your review. You'll see the complete workflow history: review created → evidence gathered → notification sent → decision recorded → remediation executed.
✅ Checkpoint: You've completed the full access‑review lifecycle — from finding a stale grant, to triggering a review, inspecting the evidence package, making a decision, and verifying the audit trail. This is the core workflow that Verity automates at scale.
Troubleshooting
- Review not created? Ensure the principal/asset IDs exist. Check with: `curl -s 'http://localhost:8000/v1/principals/usr-012' | python3 -m json.tool`.
- Decision returns 404? Replace `rev-x1y2z3` with the actual `review_id` returned in the previous step.
- Workflow not visible in Temporal UI? The workflow engine processes asynchronously. Wait 10 seconds and refresh.
Step 8: Clean Up¶
When you're finished exploring, shut down the entire platform and remove all data:
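The full teardown (the `-v` flag also removes named volumes, i.e. all data):

```shell
# Stop and remove all containers, networks, and volumes
docker compose down -v
```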
Expected output:
[+] Running 26/26
✔ Container verity-frontend Removed
✔ Container verity-api-gateway Removed
✔ Container verity-anomaly-detector Removed
...
✔ Container verity-postgres Removed
✔ Volume verity_postgres_data Removed
✔ Volume verity_clickhouse_data Removed
✔ Volume verity_kafka_data Removed
✔ Network verity_default Removed
If you want to keep your data for next time, omit the -v flag:
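```shell
# Stop and remove containers and networks, but keep data volumes
docker compose down
```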
Quick Reference: Common Commands¶
# View logs for a specific service
docker compose logs -f decay-engine
# Restart a single service after config change
docker compose restart api-gateway
# Rebuild a service after code changes
docker compose build api-gateway && docker compose up -d api-gateway
# Full reset (removes all data)
docker compose down -v
# Check resource usage
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
✅ Checkpoint: Your environment is clean. All containers, networks, and volumes have been removed.
Next Steps¶
Now that you've experienced the full Verity lifecycle, here's where to go next:
| What | Link | Description |
|---|---|---|
| Configuration | Configuration Guide | Customise environment variables, enable Azure AD auth, tune decay parameters |
| First Connector | Build a Connector | Connect Verity to your own data platforms |
| Architecture | System Design | Deep‑dive into the microservice architecture, event flows, and data model |
| API Reference | http://localhost:8000/docs | Interactive Swagger documentation (when running locally) |
| Temporal Workflows | http://localhost:8233 | Explore workflow definitions and execution history |