# Access Decay Theory

## What Is Access Decay?
Access decay is the natural, inevitable process by which access permissions become stale, excessive, or misaligned with actual need.
Every access grant starts with a legitimate purpose: a developer needs write access to a repository, an analyst needs read access to a data warehouse, a service account needs API credentials to call an upstream service. Over time, however, the context that justified the grant changes:
- The developer moves to a different team.
- The analyst's project is completed.
- The upstream service is decommissioned.
- The contractor's engagement ends, but their account persists.
The permission itself does not expire — but its justification does. The gap between what a user has and what a user needs widens silently, creating an ever-growing attack surface.
!!! warning "The Decay Principle"

    Access that is not actively used is access that is actively dangerous. Every unused permission is a door that can be opened by an attacker, a misconfigured automation, or an honest mistake — but never by the person it was granted to.
## The Decay Curve
Access decay is not binary. It follows a curve: permissions start fresh and justified, gradually become stale, and eventually become dangerous.
Verity models this curve as a score from 0 to 100:
```mermaid
%%{init: {'theme': 'base', 'themeVariables': {'lineColor': '#536dfe'}}}%%
xychart-beta
    title "Access Decay Score Over Time"
    x-axis ["Week 1", "Week 4", "Week 8", "Week 12", "Week 16", "Week 20", "Week 24"]
    y-axis "Decay Score" 0 --> 100
    line [5, 12, 28, 45, 62, 78, 91]
```
| Phase | Score Range | Description |
|---|---|---|
| Fresh | 0 – 24 | Access was recently used and aligns with current role. No action needed. |
| Stale | 25 – 49 | Usage has declined. Access may still be needed but warrants monitoring. |
| Decayed | 50 – 74 | Significant indicators that access is no longer needed. Review recommended. |
| Dangerous | 75 – 100 | Strong evidence of complete decay. Immediate review and likely revocation. |
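The phase bands above map directly to score thresholds. A minimal sketch of that mapping (the function name is illustrative, not part of Verity's API):

```python
def decay_phase(score: float) -> str:
    """Map a 0-100 decay score to its phase band, mirroring the table above.

    Hypothetical helper for illustration; boundaries follow the documented
    ranges (Fresh 0-24, Stale 25-49, Decayed 50-74, Dangerous 75-100).
    """
    if score < 25:
        return "fresh"
    if score < 50:
        return "stale"
    if score < 75:
        return "decayed"
    return "dangerous"
```

For example, the Week 24 point on the curve (score 91) falls in the dangerous band, while Week 1 (score 5) is fresh.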
!!! note "The Score Is Not a Countdown"

    Decay scores are not a simple timer. A user who logs in and actively uses a resource will see their score decrease, even if it had been climbing. The score reflects current evidence, not elapsed calendar time.
## Signals of Decay
Verity's scoring model consumes multiple signals — observable, measurable data points that indicate whether access is still justified.
### Recency — When Was Access Last Used?
The most direct signal. If a user has not accessed a resource in 60 days, the probability that they need it drops sharply. Verity tracks:
- Last interactive login to the target system
- Last API call using the credential
- Last data access event (query, read, download)
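One plausible way to turn "days since last use" into a 0–100 component is exponential saturation. This is a sketch, not Verity's published formula; the 60-day half-life constant and the function name are assumptions drawn from the 60-day figure above:

```python
from datetime import datetime
from typing import Optional


def recency_score(last_used: datetime,
                  now: Optional[datetime] = None,
                  half_life_days: float = 60.0) -> float:
    """Illustrative recency component: 0 immediately after use,
    rising toward 100 as the last-used timestamp ages.

    Reaches 50 at one half-life (60 days, an assumed constant)
    and 75 at two half-lives.
    """
    now = now or datetime.utcnow()
    age_days = max((now - last_used).total_seconds() / 86400, 0.0)
    return 100.0 * (1.0 - 0.5 ** (age_days / half_life_days))
```

With this shape, a credential used yesterday scores near zero, while one untouched for 120 days scores 75 from recency alone.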
### Trend — Which Way Is the Score Moving?
A single snapshot is noisy. Verity computes a 30-day rolling trend to distinguish between:
- Steady fresh: Consistently low score — active, legitimate use.
- Climbing: Score has been increasing for weeks — likely decay in progress.
- Recovered: Score was high but recently dropped — user resumed access.
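The three trend states can be sketched as a comparison across the rolling window. The thresholds (a 5-point flat band, a 25-point "fresh" ceiling) are illustrative assumptions, and a fourth "steady" label covers a flat-but-elevated window that the list above does not enumerate:

```python
def classify_trend(scores: list,
                   flat_band: float = 5.0,
                   low_score: float = 25.0) -> str:
    """Label a window of decay scores (oldest first) with a trend state.

    Illustrative sketch: compares the window's endpoints rather than
    fitting a regression, and uses assumed thresholds.
    """
    delta = scores[-1] - scores[0]
    if delta > flat_band:
        return "climbing"        # decay likely in progress
    if delta < -flat_band:
        return "recovered"       # user resumed access
    if scores[-1] < low_score:
        return "steady fresh"    # active, legitimate use
    return "steady"              # flat but elevated; still worth review
```

A production model would likely smooth the window (e.g. a regression slope) rather than compare endpoints, but the classification logic is the same.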
### Organisational Context
Role changes, department transfers, and manager changes are strong decay signals:
| Event | Impact on Score |
|---|---|
| User changes department | +15 baseline increase |
| User changes manager | +10 baseline increase |
| User moves to a different cost centre | +10 baseline increase |
| User's role title changes | +5 to +15 depending on similarity |
| User's employment status changes (FTE → contractor) | +20 baseline increase |
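Applying these baseline increases is a straightforward lookup-and-cap. The event keys below are hypothetical names, not Verity's event schema, and the variable-impact role-title case (+5 to +15) is omitted for simplicity:

```python
# Hypothetical event keys mapped to the baseline increases in the table above.
ORG_EVENT_IMPACT = {
    "department_change": 15,
    "manager_change": 10,
    "cost_centre_change": 10,
    "employment_status_change": 20,  # e.g. FTE -> contractor
}


def apply_org_events(score: float, events: list) -> float:
    """Add each HR event's baseline increase, capping the score at 100."""
    for event in events:
        score += ORG_EVENT_IMPACT.get(event, 0)
    return min(score, 100.0)
```

So a grant sitting at 40 jumps to 55 when the user changes department, and multiple simultaneous events can push a score straight into the review thresholds.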
### Peer Comparison
If everyone in a user's role and team actively uses a resource but one user does not, that user's access is likely decayed. Conversely, if nobody in the team uses a resource, the entire group's access may be over-provisioned.
```mermaid
graph LR
    subgraph Team["Engineering Team — Repo X"]
        A["Alice<br/>Score: 8<br/>Last use: 2 days"]
        B["Bob<br/>Score: 12<br/>Last use: 5 days"]
        C["Charlie<br/>Score: 72<br/>Last use: 94 days"]
        D["Dana<br/>Score: 6<br/>Last use: 1 day"]
    end
    C -. "Peer outlier<br/>+15 peer factor" .-> C
    style C fill:#f44336,color:#fff,stroke:none
    style A fill:#4caf50,color:#fff,stroke:none
    style B fill:#4caf50,color:#fff,stroke:none
    style D fill:#4caf50,color:#fff,stroke:none
```
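Peer-outlier detection can be sketched as a median-gap check over the team's scores. The function name and the 40-point gap threshold are illustrative assumptions; in the diagram, each flagged user receives the +15 peer factor on top of their existing score:

```python
from statistics import median


def peer_outliers(scores: dict, gap: float = 40.0) -> list:
    """Return users whose decay score sits far above the team median.

    Illustrative sketch of the peer-comparison signal: {user: score}
    in, list of outlier usernames out. The 40-point gap is an assumed
    threshold, not a documented constant.
    """
    team_median = median(scores.values())
    return [user for user, score in scores.items()
            if score - team_median > gap]
```

Using the team from the diagram (Alice 8, Bob 12, Charlie 72, Dana 6), the median is 10 and only Charlie is flagged. The converse case, where the whole team's scores are high, would surface as an empty outlier list with a high median, suggesting group-level over-provisioning.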
### Review History
Previous review outcomes feed back into the model:
- Access that was explicitly approved in a recent review receives a temporary score reduction (the reviewer confirmed it is still needed).
- Access that was flagged but deferred ("approve for now, revisit later") receives a smaller reduction that decays faster.
- Access that required multiple review rounds before approval is weighted as higher-risk.
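The review-feedback rules above amount to a credit that decays over time, with explicit approvals earning a larger, longer-lived credit than deferrals. All constants and names in this sketch are assumptions; the source states only the relative behaviour:

```python
def review_credit(outcome: str, days_since_review: int) -> float:
    """Illustrative score reduction earned by a past review outcome.

    An explicit approval gives a large credit that fades linearly over
    an assumed 90 days; a deferred approval ("approve for now, revisit
    later") gives a smaller credit that fades over an assumed 30 days.
    """
    if outcome == "approved":
        base, lifetime = 30.0, 90
    elif outcome == "deferred":
        base, lifetime = 10.0, 30
    else:
        return 0.0
    remaining = max(lifetime - days_since_review, 0) / lifetime
    return base * remaining
```

The credit would be subtracted from the raw decay score, so a freshly approved grant starts well below the review thresholds and drifts back up as the approval ages.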
### Asset Sensitivity
Not all access is equal. A decayed permission on a public wiki is less dangerous than a decayed permission on a production database containing PII.
Verity applies a sensitivity multiplier based on asset classification:
| Classification | Multiplier | Example |
|---|---|---|
| Public | × 0.5 | Public documentation site |
| Internal | × 1.0 | Internal wiki, team chat |
| Confidential | × 1.5 | Customer database, financial reports |
| Restricted | × 2.0 | PCI-scoped systems, production secrets |
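Applying the multiplier is a lookup plus a clamp back into the 0–100 range. The dictionary keys mirror the classification column above; the function name is illustrative:

```python
# Multipliers from the asset-classification table above.
SENSITIVITY_MULTIPLIER = {
    "public": 0.5,
    "internal": 1.0,
    "confidential": 1.5,
    "restricted": 2.0,
}


def weighted_score(raw_score: float, classification: str) -> float:
    """Scale a raw decay score by the asset's sensitivity multiplier,
    clamping the result to the 0-100 range. Unknown classifications
    fall back to the internal (x1.0) multiplier."""
    factor = SENSITIVITY_MULTIPLIER.get(classification.lower(), 1.0)
    return min(raw_score * factor, 100.0)
```

The clamp matters at the top end: a raw score of 80 on a restricted system would otherwise compute to 160, but the model's scale stops at 100.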
## Why Traditional Approaches Fail

### Annual / Quarterly Reviews
The most common approach: once per quarter (or worse, once per year), managers receive a list of entitlements and are asked to approve or revoke each one.
Why it fails:
- Up to 364 days of blindness. Between reviews, no one is watching.
- Rubber-stamping. Managers approve 90–95% of entitlements without investigation because the volume is overwhelming and context is absent.
- No risk prioritisation. A decayed admin credential and an unused read-only wiki permission receive equal treatment.
- Reviewer fatigue. Campaigns generate thousands of line items. Quality degrades after the first dozen.
### Time-Based Expiry
Some organisations set automatic expiry dates on credentials. The user requests a renewal when the credential expires.
Why it fails:
- Arbitrary durations. 90 days is too long for a one-week project and too short for a core team member.
- Renewal is automatic. Users set calendar reminders and click "renew" reflexively — no genuine evaluation.
- No usage signal. A credential that has not been used for 89 days is renewed just as readily as one used daily.
### Manual Requests (Ticket-Based)
Access is granted via tickets and revoked via tickets. Revocation only happens when someone remembers to file one.
Why it fails:
- Revocation is nobody's job. The person who granted access moves on. The manager changes. The project ends. Nobody files the revocation ticket.
- No visibility. There is no inventory of who has access to what, so no one knows what to revoke.
## Verity's Approach
Verity replaces these point-in-time, human-memory-dependent processes with a continuous, evidence-driven pipeline:
```mermaid
graph TD
    S1["Signals arrive continuously<br/>(audit logs, HR events, login data)"]
    S2["Decay Engine scores every<br/>access grant in near-real-time"]
    S3["Threshold crossed → Review<br/>packet generated automatically"]
    S4["Evidence-based review routed<br/>to the right person with SLA"]
    S5["Decision executed safely via<br/>connector with dry-run & rollback"]
    S1 --> S2 --> S3 --> S4 --> S5
    S5 -.->|"Outcome feeds back<br/>into scoring model"| S2
    style S1 fill:#7c4dff,color:#fff,stroke:none
    style S2 fill:#651fff,color:#fff,stroke:none
    style S3 fill:#536dfe,color:#fff,stroke:none
    style S4 fill:#448aff,color:#fff,stroke:none
    style S5 fill:#40c4ff,color:#000,stroke:none
```
| Traditional | Verity |
|---|---|
| Review once per quarter | Score continuously, review when evidence warrants |
| Spreadsheet of entitlements | Rich evidence package with usage data and peer context |
| Manager reviews everything | Route to the person closest to the resource |
| Approve/Deny with no context | Approve, Revoke, Reduce, or Delegate with full context |
| No follow-through on decisions | Automated remediation with safety guardrails |
| No audit trail | Immutable audit log in ClickHouse |
## Next Steps

- Scoring Model: see how the six factors combine into a single 0–100 score.
- Review Lifecycle: follow a review from trigger to execution.