# FAQ
Answers to the most common questions about Verity. Can't find what you're looking for? Open a discussion on GitHub.
## General

### What is access decay?
Access decay is the gradual increase in risk that occurs when an access grant remains in place but is no longer actively needed. Think of it like food expiry — a permission that was perfectly valid six months ago may now be a liability if the person hasn't used it, has changed roles, or has left the organisation.
Verity quantifies this decay as a score from 0 (fresh / actively used) to 100 (fully decayed / high risk).
### How is Verity different from PAM (Privileged Access Management)?
PAM tools focus on securing and vaulting privileged credentials — SSH keys, admin passwords, service-account tokens. Verity operates at a different layer: it analyses all access grants (not just privileged ones) across identity providers and data platforms, scores how stale each grant has become, and orchestrates reviews and remediation.
| Aspect | PAM | Verity |
|---|---|---|
| Scope | Privileged accounts only | All access grants |
| Primary function | Credential vaulting & session recording | Decay scoring & review orchestration |
| Access lifecycle | Check-out / check-in | Continuous scoring & automated remediation |
| Data-platform coverage | Limited | Databricks, Synapse, Fabric, PostgreSQL |
Verity complements PAM — it identifies which privileged accounts are decayed, so your PAM tool can enforce tighter controls on the right accounts.
### What connectors are supported?
Verity v1.0 ships with six connectors:
| Connector | Source System | Data Collected |
|---|---|---|
| Azure AD | Microsoft Entra ID | Users, groups, app-role assignments, service principals |
| Fabric | Microsoft Fabric | Workspace, lakehouse, warehouse permissions |
| Synapse | Azure Synapse Analytics | SQL/Spark pool permissions, workspace roles |
| Databricks | Databricks | Workspace, cluster, SQL warehouse, Unity Catalog grants |
| PostgreSQL | PostgreSQL databases | Role grants, schema-level permissions |
| HR | HR systems (CSV/SCIM) | Joiner/mover/leaver events, department, manager |
## Scoring

### How does the decay scoring work?
The Decay Engine evaluates six weighted factors for every access grant:
- Days since last use — How long since the principal actually exercised this permission?
- Peer comparison — Do peers in the same department/role have similar access?
- Privilege level — Higher privileges (admin, write) decay faster than read-only.
- Asset sensitivity — Access to a sensitivity-5 database scores higher than a dev sandbox.
- Login frequency — Has the principal logged in to the source system recently?
- HR signals — Has the principal changed roles, gone on leave, or been terminated?
Each factor produces a sub-score; these are combined using configurable weights into the final 0–100 decay score.
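The combination step can be sketched as a weighted average. The factor weights below are invented for illustration; the real defaults live in your scoring configuration.

```python
# Illustrative sketch of the weighted combination -- the weights here are
# assumptions for the example, not Verity's actual defaults.
DEFAULT_WEIGHTS = {
    "days_since_last_use": 0.30,
    "peer_comparison": 0.20,
    "privilege_level": 0.15,
    "asset_sensitivity": 0.15,
    "login_frequency": 0.10,
    "hr_signals": 0.10,
}

def decay_score(sub_scores: dict[str, float],
                weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Combine per-factor sub-scores (each 0-100) into one 0-100 decay score."""
    total = sum(weights.values())
    return round(sum(sub_scores[f] * w for f, w in weights.items()) / total, 1)

# A long-unused, high-privilege grant on a sensitive asset scores high:
score = decay_score({
    "days_since_last_use": 95,
    "peer_comparison": 60,
    "privilege_level": 80,
    "asset_sensitivity": 100,
    "login_frequency": 40,
    "hr_signals": 0,
})  # ≈ 71.5
```

Because the weights are normalised, adjusting them in configuration shifts how quickly each signal drags a grant toward Critical without changing the 0–100 scale.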
### How often are scores recalculated?
By default, every 6 hours. The schedule is configurable per connector and globally:
```yaml
# config/scoring.yaml
scoring:
  schedule: "0 */6 * * *"   # Every 6 hours (cron syntax)
  batch_size: 5000          # Grants processed per batch
```
!!! tip
    For high-sensitivity assets, you can configure more frequent scoring (e.g., hourly) by overriding the schedule at the asset or connector level.
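An override for a single connector might look like the following; the `overrides` key and its fields are illustrative, so check the scoring configuration reference for the exact schema.

```yaml
# config/scoring.yaml -- illustrative override, key names assumed
scoring:
  schedule: "0 */6 * * *"     # global default: every 6 hours
  overrides:
    - connector: databricks
      schedule: "0 * * * *"   # hourly for this connector
```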
### How does Verity handle false positives?
Verity uses several mechanisms to minimise false positives:
- Multi-factor scoring — A single factor (e.g., no recent login) alone won't push a score to Critical. Multiple signals must converge.
- Peer comparison — If everyone in the role has the same access, the score stays low even if individual usage is infrequent.
- Configurable thresholds — Adjust the score thresholds that trigger reviews to match your organisation's risk appetite.
- Dry-run mode — Run remediation in dry-run for weeks before enabling live revocation.
- Reviewer override — Data owners can approve grants they know are still needed, resetting the review clock.
## Connectors & Integration

### Can I write custom connectors?
Yes. Verity provides a Connector SDK that defines the interface every connector must implement:
```python
from verity.sdk.connector import BaseConnector, ConnectorResult
# The Principal, Asset, and Grant models are also provided by the SDK;
# see the Connector SDK guide for their import path.

class MyCustomConnector(BaseConnector):
    """Pull grants from a custom system."""

    async def fetch_principals(self) -> list[Principal]:
        ...

    async def fetch_assets(self) -> list[Asset]:
        ...

    async def fetch_grants(self) -> list[Grant]:
        ...

    async def revoke_grant(self, grant_id: str) -> ConnectorResult:
        ...
```
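To make the interface concrete, here is a self-contained sketch of a connector that reads grants from a CSV export. The `Grant` model is stubbed locally so the example runs standalone (in a real connector you would use the SDK's models and subclass `BaseConnector`), and only `fetch_grants` is shown.

```python
import asyncio
import csv
import io
from dataclasses import dataclass

@dataclass
class Grant:  # local stand-in for the SDK's Grant model
    grant_id: str
    principal_id: str
    asset_id: str

# Inline CSV standing in for an export from the custom system
CSV_EXPORT = """grant_id,principal_id,asset_id
g-1,alice,db-sales
g-2,bob,db-sales
"""

class CsvConnector:
    """Pull grants from a CSV export of a custom system."""

    async def fetch_grants(self) -> list[Grant]:
        reader = csv.DictReader(io.StringIO(CSV_EXPORT))
        return [Grant(r["grant_id"], r["principal_id"], r["asset_id"])
                for r in reader]

grants = asyncio.run(CsvConnector().fetch_grants())
```

The async interface lets the ingest workers fan out requests to slow source systems concurrently; a connector backed by a REST API would await HTTP calls where this sketch reads a string.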
See the Connector SDK guide for the full development workflow.
### Can Verity integrate with ServiceNow or Jira?
Yes. The Remediation Executor supports pluggable action handlers. Instead of (or in addition to) revoking access directly, it can open a ticket in an external ITSM system:
```yaml
# config/remediation.yaml
remediation:
  actions:
    - type: revoke          # Direct revocation via connector
    - type: ticket          # Open ITSM ticket
      provider: servicenow
      config:
        instance: mycompany.service-now.com
        assignment_group: "IAM Operations"
```
Jira integration works the same way by setting `provider: jira`.
## Infrastructure & Deployment

### What databases are required?
Verity requires four data-infrastructure components:
| Component | Technology | Purpose |
|---|---|---|
| Operational DB | PostgreSQL 16 + TimescaleDB | Principals, assets, grants, scores, reviews |
| Audit store | ClickHouse | Immutable event log, compliance reports |
| Event streaming | Kafka (KRaft mode) | Async communication between planes |
| Cache | Redis 7 | Score look-ups, session data, rate limiting |
Additionally, Temporal is required for durable workflow orchestration.
### What's the minimum infrastructure?
For a development / evaluation environment:
| Resource | Minimum |
|---|---|
| CPU | 4 cores |
| RAM | 16 GB |
| Disk | 40 GB SSD |
| Docker | Docker Desktop or equivalent |
For production (up to 100k grants):
| Resource | Recommended |
|---|---|
| Kubernetes nodes | 3 nodes, 8 CPU / 32 GB each |
| PostgreSQL | 4 CPU / 16 GB / 200 GB SSD |
| ClickHouse | 4 CPU / 16 GB / 500 GB SSD |
| Kafka | 3 brokers, 2 CPU / 8 GB each |
| Redis | 2 CPU / 4 GB |
| Temporal | 2 CPU / 4 GB |
### Can I use Verity without Kubernetes?
Yes. Verity ships with a Docker Compose configuration for local development and small-scale production deployments.
For production without Kubernetes, you can run the containers on any Docker-capable host(s) using Docker Compose or a similar orchestrator (e.g., Nomad, ECS).
!!! warning
    For production workloads above ~50k grants, Kubernetes with the Helm chart is recommended for horizontal scaling, health-checking, and rolling updates.
### Is there a SaaS version?
Not yet. Verity v1.0 is a self-hosted platform. A managed SaaS offering is on the roadmap — follow the GitHub repository for announcements.
## Operations

### What's the performance like?
Benchmark numbers on the recommended production infrastructure:
| Metric | Value |
|---|---|
| Grants scored per second | ~2,500 |
| End-to-end ingest-to-score latency | < 30 seconds |
| API p99 response time | < 120 ms |
| Dashboard page load | < 1.5 seconds |
| ClickHouse audit query (90-day range) | < 500 ms |
Performance scales linearly as you add Decay Engine and Ingest Worker replicas.
### How do I monitor Verity?
Every Verity microservice exposes:
- `/healthz` — liveness probe (HTTP 200 / 503)
- `/readyz` — readiness probe (HTTP 200 / 503)
- `/metrics` — Prometheus-compatible metrics
Pre-built Grafana dashboards are included in the Helm chart:
- Ingest throughput & error rates
- Scoring latency & queue depth
- Review SLA compliance
- Remediation success/failure rates
- Kafka consumer lag
See Monitoring & Alerting for setup instructions.
### How is data secured?
| Layer | Mechanism |
|---|---|
| In transit | TLS 1.3 between all services; mTLS optional for Kafka |
| At rest | Database-level encryption (PostgreSQL, ClickHouse); Kubernetes secrets for credentials |
| Authentication | OIDC / OAuth 2.0 for dashboard; service-principal auth for connectors |
| Authorisation | RBAC with four built-in roles: admin, reviewer, auditor, viewer |
| Audit | Every API call and state change logged to ClickHouse |
| Secrets | Kubernetes Secrets or external vault (HashiCorp Vault, Azure Key Vault) |
See Security Model for the full security architecture.
## Reviews & Remediation

### What happens if a review times out?
When a review exceeds its SLA, the Workflow Engine (Temporal) triggers the configured escalation chain:
```mermaid
flowchart LR
    A["Review Created"] --> B["Assigned to<br/>Data Owner"]
    B -->|SLA expires| C["Escalate to<br/>Manager"]
    C -->|SLA expires| D["Escalate to<br/>Security Team"]
    D -->|SLA expires| E["Auto-Remediate<br/>(if configured)"]
    style E fill:#f44336,color:#fff,stroke:none
```
Default escalation SLAs:
| Step | Default Timer |
|---|---|
| Primary data owner | 7 days (High) / 48 hours (Critical) |
| Manager escalation | +3 days |
| Security team escalation | +2 days |
| Auto-remediation fallback | +1 day |
!!! info
    Auto-remediation on escalation timeout is disabled by default and must be explicitly enabled in the policy configuration.
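Enabling it might look like the following; the key name is an assumption, so consult the policy configuration reference for the exact flag.

```yaml
# Illustrative only -- the exact key name may differ in your version
remediation:
  escalation:
    auto_remediate_on_timeout: true
```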
### Can I customise review routing rules?
Yes. Review routing is configured via policy rules:
```yaml
# config/review-policies.yaml
policies:
  - name: financial-systems
    match:
      asset_sensitivity: [4, 5]
      asset_tags: ["financial"]
    routing:
      primary: asset_owner
      escalation:
        - role: manager
          after: 7d
        - role: security_team
          after: 10d
    thresholds:
      review: 50            # Score ≥ 50 → generate review
      auto_remediate: 90    # Score ≥ 90 → skip review, auto-revoke
```
## Troubleshooting

### Where can I find logs?
All services emit structured JSON logs with correlation IDs. In Kubernetes:
```shell
# All Verity services
kubectl logs -n verity -l app.kubernetes.io/part-of=verity --tail=100

# Specific service
kubectl logs -n verity -l app.kubernetes.io/name=decay-engine --tail=100
```
See Troubleshooting for common issues and resolution steps.
Still have questions?
Open a GitHub Discussion or check the Glossary for term definitions.