DORA Testing Requirements
DORA Articles 24-26 define a testing hierarchy:
- Basic testing (all entities): vulnerability assessments, network security scans, code reviews, configuration checks
- Advanced testing (significant entities): scenario-based testing simulating realistic threat scenarios
- Threat-led penetration testing (systemically important entities): TIBER-EU style red team exercises
For database teams, the first two levels map directly to automated tools and manual validation procedures.
Level 1: Automated Configuration Assessment
Run weekly to catch configuration drift and security degradation:
-- Full 14-check CIS-based security assessment
SELECT rule, category, severity, status, detail
FROM pgcomply.health_check()
ORDER BY
CASE status WHEN 'FAIL' THEN 1 WHEN 'WARN' THEN 2 ELSE 3 END,
CASE severity WHEN 'critical' THEN 1 WHEN 'high' THEN 2 WHEN 'medium' THEN 3 ELSE 4 END;
Example output for a partially hardened system:
      rule       |  category  | severity | status |                  detail
-----------------+------------+----------+--------+------------------------------------------
 SSL_ENFORCED    | encryption | critical | FAIL   | ssl = off
 SUPERUSER_COUNT | access     | high     | WARN   | 2 superusers (recommend ≤ 1)
 IDLE_TIMEOUT    | config     | medium   | WARN   | idle_in_transaction_session_timeout = 0
 PW_ENCRYPTION   | auth       | critical | PASS   | scram-sha-256
 AUDIT_INTEGRITY | pgcomply   | critical | PASS   | SHA-256 chain verified (1,247 entries)
Also run connection security analysis:
-- Six connection-level checks
SELECT * FROM pgcomply.connection_audit();
Schedule both automatically:
SELECT pgcomply.schedule_jobs(install := true);
-- Creates: health_check every Monday 06:00, session logging every 15 min
Level 2: Scenario-Based Testing
DORA requires testing realistic failure scenarios. For databases, the critical scenarios are:
Scenario A: Primary Database Failure
Test: Can you fail over to a replica and maintain operations?
# 1. Verify replica is synced
psql -h replica -c "SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();"
# 2. Promote replica
pg_ctl promote -D /var/lib/postgresql/data
# 3. Verify pgcomply functions still work on new primary
psql -h new-primary -c "SELECT * FROM pgcomply.health_check();"
psql -h new-primary -c "SELECT pgcomply.verify_audit();"
Document: Failover time, data loss window, and whether compliance controls survived the switch.
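Before promoting, capture the replica's WAL position so the data-loss window can be estimated afterwards. The sketch below uses standard PostgreSQL monitoring functions and runs on the standby; note that WAL the failed primary committed but never shipped cannot be measured from the standby and has to be reconciled separately.
-- Run on the standby just before promotion: how far behind is replay?
SELECT pg_last_wal_receive_lsn()                  AS last_received,
       pg_last_wal_replay_lsn()                   AS last_replayed,
       pg_wal_lsn_diff(pg_last_wal_receive_lsn(),
                       pg_last_wal_replay_lsn())  AS replay_lag_bytes,
       now() - pg_last_xact_replay_timestamp()    AS time_since_last_replay;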
Scenario B: Backup Restoration
Test: Can you actually restore from a backup, and is the data intact?
# 1. Take backup
pg_dump -Fc production_db > backup_$(date +%Y%m%d).dump
# 2. Restore to isolated instance
createdb test_restore
pg_restore -d test_restore backup_$(date +%Y%m%d).dump
# 3. Verify data integrity
psql test_restore -c "SELECT pgcomply.verify_audit();"
psql test_restore -c "SELECT count(*) FROM pgcomply.pii_registry;"
psql test_restore -c "SELECT * FROM pgcomply.health_check();"
# 4. Verify PII is intact and compliance state is preserved
psql test_restore -c "SELECT * FROM pgcomply.classification_map();"
# 5. Clean up
dropdb test_restore
Document: Restoration time, verification results, and any discrepancies found.
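To find discrepancies systematically, compare per-table row counts between production and the restored copy. The query below is one possible sketch: it uses the query_to_xml trick to get exact counts for every user table, since planner estimates are unreliable on a freshly restored database that has not been analyzed yet. Run it against both databases and diff the output.
-- Exact row count for every user table; run on production and on test_restore, then compare
SELECT format('%I.%I', schemaname, relname) AS table_name,
       (xpath('/row/cnt/text()',
              query_to_xml(format('SELECT count(*) AS cnt FROM %I.%I',
                                  schemaname, relname),
                           false, true, '')))[1]::text::bigint AS row_count
FROM pg_stat_user_tables
ORDER BY table_name;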
Scenario C: Audit Trail Corruption
Test: Can you detect if someone tampered with the audit trail?
-- Verify chain integrity
SELECT pgcomply.verify_audit();
-- Simulate tampering detection (on test instance only!)
-- UPDATE pgcomply.audit_log SET details = '{}' WHERE id = 100;
-- This should trigger: ERROR: audit log is immutable
Document: Whether the trigger prevented modification, and whether verify_audit() detected the attempt.
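It is also worth confirming the protection exists before relying on it. Assuming pgcomply enforces immutability with a trigger on its audit table (the error message above suggests this, but the mechanism is not documented here), the trigger can be listed from the system catalogs:
-- List non-internal triggers on the audit table; an UPDATE/DELETE-blocking trigger
-- should appear here if immutability is enforced inside the database
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'pgcomply.audit_log'::regclass
  AND NOT tgisinternal;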
Scenario D: Connection Exhaustion
Test: What happens when all connections are consumed?
-- Check current state
SELECT * FROM pgcomply.connection_audit();
-- Under load test: verify graceful degradation
-- Your application should return meaningful errors, not hang indefinitely
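While the load test runs, watch connection headroom directly so you can correlate application errors with actual exhaustion. A minimal sketch using pg_stat_activity and the standard server settings:
-- Connections in use versus the configured ceiling
SELECT count(*)                                                AS in_use,
       current_setting('max_connections')::int                 AS max_connections,
       current_setting('superuser_reserved_connections')::int  AS reserved_for_superusers,
       round(100.0 * count(*)
             / current_setting('max_connections')::int, 1)     AS pct_used
FROM pg_stat_activity;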
Scenario E: Schema Drift After Emergency Change
Test: Does an emergency schema change get caught?
-- Simulate emergency column addition
ALTER TABLE users ADD COLUMN emergency_phone TEXT;
-- Run drift detection
SELECT * FROM pgcomply.schema_drift();
-- Should flag: new_column, users, emergency_phone, phone, high
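If this runs against a shared test instance, roll the simulated change back once the drift report has been captured:
-- Remove the simulated emergency column after the test
ALTER TABLE users DROP COLUMN emergency_phone;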
Documenting Test Results
Every automated test execution is logged in the audit trail:
SELECT event_type, created_at, details
FROM pgcomply.audit_log
WHERE event_type IN ('health_check', 'audit_verification')
ORDER BY created_at DESC LIMIT 20;
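Auditors typically want evidence of regularity, not just individual results. A sketch that aggregates the same log by month, assuming only the event_type and created_at columns used above:
-- Monthly run counts per test type, as evidence the schedule was actually followed
SELECT date_trunc('month', created_at) AS month,
       event_type,
       count(*)                        AS runs
FROM pgcomply.audit_log
WHERE event_type IN ('health_check', 'audit_verification')
GROUP BY 1, 2
ORDER BY 1 DESC, 2;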
For formal DORA testing documentation, generate a structured report:
-- Pro: Full compliance report including test history
SELECT pgcomply.compliance_report('resilience_testing');
Testing Calendar
| Frequency | Test | Tool |
|-----------|------|------|
| Weekly | Configuration assessment | health_check() (automated) |
| Weekly | Schema drift detection | schema_drift() (automated) |
| Monthly | Backup restoration | Manual + verify_audit() |
| Quarterly | Failover drill | Manual + health_check() |
| Quarterly | Connection exhaustion | Load test + connection_audit() |
| Annually | Full scenario test | All of the above documented |
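A schedule is only useful if it is actually followed. One lightweight safeguard (a sketch, assuming health_check runs are logged with event_type 'health_check' as shown earlier; the 8-day threshold simply gives a weekly job one day of slack):
-- Flag a weekly health check that has not run within the last 8 days
SELECT max(created_at)                               AS last_health_check,
       now() - max(created_at) > interval '8 days'   AS overdue
FROM pgcomply.audit_log
WHERE event_type = 'health_check';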
Summary
DORA resilience testing at the database level means continuous automated checks, monthly backup validation, and quarterly scenario testing. pgcomply provides the automated assessment layer with health_check(), connection_audit(), and verify_audit(). The manual scenarios test what automation cannot: actual failover behavior, real backup integrity, and human response times. Document everything — DORA auditors want to see both the test results and the testing calendar.