Database Verification Requirements
Version: 1.0.0 Status: ACTIVE Last Updated: 2025-12-04
Purpose
This document defines the verification and testing requirements for the MongoDB to PostgreSQL conversion. It ensures that the conversion maintains data integrity, determinism, and functional correctness.
Module Verification Reports
| Module | Status | Report | Date |
|---|---|---|---|
| Authority | PASS | docs/db/reports/authority-verification-2025-12-03.md | 2025-12-03 |
| Notify | PASS | docs/db/reports/notify-verification-2025-12-02.md | 2025-12-02 |
| Scheduler | PENDING | TBD | — |
| Policy | PENDING | TBD | — |
| Concelier (Vuln) | PENDING | TBD | — |
| Excititor (VEX/Graph) | PENDING | TBD | — |
1. Verification Principles
1.1 Core Guarantees
The conversion MUST maintain these guarantees:
| Guarantee | Description | Verification Method |
|---|---|---|
| Data Integrity | No data loss during conversion | Record count comparison, checksum validation |
| Determinism | Same inputs produce identical outputs | Parallel pipeline comparison |
| Functional Equivalence | APIs behave identically | Integration test suite |
| Performance Parity | No significant degradation | Benchmark comparison |
| Tenant Isolation | Data remains properly isolated | Cross-tenant query tests |
1.2 Verification Levels
Level 1: Unit Tests
└── Individual repository method correctness
Level 2: Integration Tests
└── End-to-end repository operations with real PostgreSQL
Level 3: Comparison Tests
└── MongoDB vs PostgreSQL output comparison
Level 4: Load Tests
└── Performance and scalability verification
Level 5: Production Verification
└── Dual-write monitoring and validation
2. Test Infrastructure
2.1 Testcontainers Setup
All PostgreSQL integration tests MUST use Testcontainers:
public sealed class PostgresTestFixture : IAsyncLifetime
{
private readonly PostgreSqlContainer _container;
private NpgsqlDataSource? _dataSource;
public PostgresTestFixture()
{
_container = new PostgreSqlBuilder()
.WithImage("postgres:16-alpine")
.WithDatabase("stellaops_test")
.WithUsername("test")
.WithPassword("test")
.WithWaitStrategy(Wait.ForUnixContainer()
.UntilPortIsAvailable(5432))
.Build();
}
public string ConnectionString => _container.GetConnectionString();
public NpgsqlDataSource DataSource => _dataSource
?? throw new InvalidOperationException("Not initialized");
public async Task InitializeAsync()
{
await _container.StartAsync();
_dataSource = NpgsqlDataSource.Create(ConnectionString);
await RunMigrationsAsync();
}
public async Task DisposeAsync()
{
if (_dataSource is not null)
await _dataSource.DisposeAsync();
await _container.DisposeAsync();
}
private async Task RunMigrationsAsync()
{
await using var connection = await _dataSource!.OpenConnectionAsync();
var migrationRunner = new PostgresMigrationRunner(_dataSource, GetMigrations());
await migrationRunner.RunAsync();
}
}
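In xUnit, a fixture like this is usually shared through `IClassFixture<T>` or a collection fixture so a single container serves many test classes. A minimal wiring sketch (the collection and class names are illustrative, not from this codebase):

```csharp
// Hypothetical wiring: one Postgres container shared per test collection.
[CollectionDefinition("postgres")]
public sealed class PostgresCollection : ICollectionFixture<PostgresTestFixture>
{
    // Marker class; xUnit discovers it via the [CollectionDefinition] attribute.
}

[Collection("postgres")]
public sealed class ScheduleRepositoryTests
{
    private readonly PostgresTestFixture _fixture;

    public ScheduleRepositoryTests(PostgresTestFixture fixture) => _fixture = fixture;
}
```

A collection fixture keeps container startup cost to once per collection rather than once per class, which matters when many repository test classes share the same database.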
2.2 Test Database State Management
public abstract class PostgresRepositoryTestBase : IAsyncLifetime
{
protected readonly PostgresTestFixture Fixture;
protected NpgsqlConnection Connection = null!;
protected NpgsqlTransaction Transaction = null!;
protected PostgresRepositoryTestBase(PostgresTestFixture fixture)
{
Fixture = fixture;
}
public async Task InitializeAsync()
{
Connection = await Fixture.DataSource.OpenConnectionAsync();
Transaction = await Connection.BeginTransactionAsync();
// Set test tenant context
await using var cmd = Connection.CreateCommand();
cmd.CommandText = "SET app.tenant_id = 'test-tenant-id'";
await cmd.ExecuteNonQueryAsync();
}
public async Task DisposeAsync()
{
await Transaction.RollbackAsync();
await Transaction.DisposeAsync();
await Connection.DisposeAsync();
}
}
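The `SET app.tenant_id` call above only isolates anything if the schema enforces it. Assuming tenant-scoped tables carry a `tenant_id` column, the row-level-security setup it presupposes looks roughly like this (schema, table, and policy names are illustrative):

```sql
-- Hypothetical RLS policy keyed on the app.tenant_id session setting.
ALTER TABLE scheduler.schedules ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON scheduler.schedules
USING (tenant_id = current_setting('app.tenant_id', true));
```

The second argument to `current_setting` makes it return NULL instead of raising an error when the setting is absent, so a connection that never set a tenant simply sees no rows.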
2.3 Test Data Builders
public sealed class ScheduleBuilder
{
private Guid _id = Guid.NewGuid();
private string _tenantId = "test-tenant";
private string _name = "test-schedule";
private bool _enabled = true;
private string? _cronExpression = "0 * * * *";
public ScheduleBuilder WithId(Guid id) { _id = id; return this; }
public ScheduleBuilder WithTenant(string tenantId) { _tenantId = tenantId; return this; }
public ScheduleBuilder WithName(string name) { _name = name; return this; }
public ScheduleBuilder Enabled(bool enabled = true) { _enabled = enabled; return this; }
public ScheduleBuilder WithCron(string? cron) { _cronExpression = cron; return this; }
public Schedule Build() => new()
{
Id = _id,
TenantId = _tenantId,
Name = _name,
Enabled = _enabled,
CronExpression = _cronExpression,
Timezone = "UTC",
Mode = ScheduleMode.Scheduled,
CreatedAt = DateTimeOffset.UtcNow,
UpdatedAt = DateTimeOffset.UtcNow
};
}
3. Unit Test Requirements
3.1 Repository CRUD Tests
Every repository implementation MUST have tests for:
public class PostgresScheduleRepositoryTests : PostgresRepositoryTestBase
{
private readonly PostgresScheduleRepository _repository;
public PostgresScheduleRepositoryTests(PostgresTestFixture fixture)
: base(fixture)
{
_repository = new PostgresScheduleRepository(/* ... */);
}
// CREATE
[Fact]
public async Task UpsertAsync_CreatesNewSchedule_WhenNotExists()
{
var schedule = new ScheduleBuilder().Build();
await _repository.UpsertAsync(schedule, CancellationToken.None);
var retrieved = await _repository.GetAsync(
schedule.TenantId, schedule.Id.ToString(), CancellationToken.None);
retrieved.Should().BeEquivalentTo(schedule);
}
// READ
[Fact]
public async Task GetAsync_ReturnsNull_WhenNotExists()
{
var result = await _repository.GetAsync(
"tenant", Guid.NewGuid().ToString(), CancellationToken.None);
result.Should().BeNull();
}
[Fact]
public async Task GetAsync_ReturnsSchedule_WhenExists()
{
var schedule = new ScheduleBuilder().Build();
await _repository.UpsertAsync(schedule, CancellationToken.None);
var result = await _repository.GetAsync(
schedule.TenantId, schedule.Id.ToString(), CancellationToken.None);
result.Should().NotBeNull();
result!.Id.Should().Be(schedule.Id);
}
// UPDATE
[Fact]
public async Task UpsertAsync_UpdatesExisting_WhenExists()
{
var schedule = new ScheduleBuilder().Build();
await _repository.UpsertAsync(schedule, CancellationToken.None);
schedule = schedule with { Name = "updated-name" };
await _repository.UpsertAsync(schedule, CancellationToken.None);
var retrieved = await _repository.GetAsync(
schedule.TenantId, schedule.Id.ToString(), CancellationToken.None);
retrieved!.Name.Should().Be("updated-name");
}
// DELETE
[Fact]
public async Task SoftDeleteAsync_SetsDeletedAt_WhenExists()
{
var schedule = new ScheduleBuilder().Build();
await _repository.UpsertAsync(schedule, CancellationToken.None);
var result = await _repository.SoftDeleteAsync(
schedule.TenantId, schedule.Id.ToString(),
"test-user", DateTimeOffset.UtcNow, CancellationToken.None);
result.Should().BeTrue();
var retrieved = await _repository.GetAsync(
schedule.TenantId, schedule.Id.ToString(), CancellationToken.None);
retrieved.Should().BeNull(); // Soft-deleted not returned
}
// LIST
[Fact]
public async Task ListAsync_ReturnsAllForTenant()
{
var schedule1 = new ScheduleBuilder().WithName("schedule-1").Build();
var schedule2 = new ScheduleBuilder().WithName("schedule-2").Build();
await _repository.UpsertAsync(schedule1, CancellationToken.None);
await _repository.UpsertAsync(schedule2, CancellationToken.None);
var results = await _repository.ListAsync(
schedule1.TenantId, null, CancellationToken.None);
results.Should().HaveCount(2);
}
}
3.2 Tenant Isolation Tests
public class TenantIsolationTests : PostgresRepositoryTestBase
{
private readonly PostgresScheduleRepository _repository;
public TenantIsolationTests(PostgresTestFixture fixture)
: base(fixture)
{
_repository = new PostgresScheduleRepository(/* ... */);
}
[Fact]
public async Task GetAsync_DoesNotReturnOtherTenantData()
{
var tenant1Schedule = new ScheduleBuilder()
.WithTenant("tenant-1")
.WithName("tenant1-schedule")
.Build();
var tenant2Schedule = new ScheduleBuilder()
.WithTenant("tenant-2")
.WithName("tenant2-schedule")
.Build();
await _repository.UpsertAsync(tenant1Schedule, CancellationToken.None);
await _repository.UpsertAsync(tenant2Schedule, CancellationToken.None);
// Tenant 1 should not see Tenant 2's data
var result = await _repository.GetAsync(
"tenant-1", tenant2Schedule.Id.ToString(), CancellationToken.None);
result.Should().BeNull();
}
[Fact]
public async Task ListAsync_OnlyReturnsTenantData()
{
// Create schedules for two tenants
for (int i = 0; i < 5; i++)
{
await _repository.UpsertAsync(
new ScheduleBuilder().WithTenant("tenant-1").Build(),
CancellationToken.None);
await _repository.UpsertAsync(
new ScheduleBuilder().WithTenant("tenant-2").Build(),
CancellationToken.None);
}
var tenant1Results = await _repository.ListAsync(
"tenant-1", null, CancellationToken.None);
var tenant2Results = await _repository.ListAsync(
"tenant-2", null, CancellationToken.None);
tenant1Results.Should().HaveCount(5);
tenant2Results.Should().HaveCount(5);
tenant1Results.Should().OnlyContain(s => s.TenantId == "tenant-1");
tenant2Results.Should().OnlyContain(s => s.TenantId == "tenant-2");
}
}
3.3 Determinism Tests
public class DeterminismTests : PostgresRepositoryTestBase
{
private readonly PostgresScheduleRepository _repository;
public DeterminismTests(PostgresTestFixture fixture)
: base(fixture)
{
_repository = new PostgresScheduleRepository(/* ... */);
}
[Fact]
public async Task ListAsync_ReturnsDeterministicOrder()
{
// Insert multiple schedules with same created_at
var baseTime = DateTimeOffset.UtcNow;
var schedules = Enumerable.Range(0, 10)
.Select(i => new ScheduleBuilder()
.WithName($"schedule-{i}")
.Build() with { CreatedAt = baseTime })
.ToList();
foreach (var schedule in schedules)
await _repository.UpsertAsync(schedule, CancellationToken.None);
// Multiple calls should return same order
var results1 = await _repository.ListAsync("test-tenant", null, CancellationToken.None);
var results2 = await _repository.ListAsync("test-tenant", null, CancellationToken.None);
var results3 = await _repository.ListAsync("test-tenant", null, CancellationToken.None);
results1.Select(s => s.Id).Should().Equal(results2.Select(s => s.Id));
results2.Select(s => s.Id).Should().Equal(results3.Select(s => s.Id));
}
[Fact]
public async Task JsonbSerialization_IsDeterministic()
{
var schedule = new ScheduleBuilder()
.Build() with
{
Selection = new ScheduleSelector
{
Tags = new[] { "z", "a", "m" },
Repositories = new[] { "repo-2", "repo-1" }
}
};
await _repository.UpsertAsync(schedule, CancellationToken.None);
// Retrieve and re-save multiple times
for (int i = 0; i < 3; i++)
{
var retrieved = await _repository.GetAsync(
schedule.TenantId, schedule.Id.ToString(), CancellationToken.None);
await _repository.UpsertAsync(retrieved!, CancellationToken.None);
}
// Final retrieval should have identical JSONB
var final = await _repository.GetAsync(
schedule.TenantId, schedule.Id.ToString(), CancellationToken.None);
// Arrays should be consistently ordered
final!.Selection.Tags.Should().BeInAscendingOrder();
}
}
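Stable ordering ultimately comes from the SQL, not the repository: `created_at` alone is not a total order when rows share a timestamp, so the query needs a unique tie-breaker. A sketch of the shape `ListAsync` would rely on (table name assumed):

```sql
SELECT *
FROM scheduler.schedules
WHERE tenant_id = current_setting('app.tenant_id', true)
  AND deleted_at IS NULL
ORDER BY created_at, id;  -- the primary key breaks timestamp ties deterministically
```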
4. Comparison Test Requirements
4.1 MongoDB vs PostgreSQL Comparison Framework
public abstract class ComparisonTestBase<TEntity, TRepository>
where TRepository : class
{
protected readonly TRepository MongoRepository;
protected readonly TRepository PostgresRepository;
protected abstract Task<TEntity?> GetFromMongo(string tenantId, string id);
protected abstract Task<TEntity?> GetFromPostgres(string tenantId, string id);
protected abstract Task<IReadOnlyList<TEntity>> ListFromMongo(string tenantId);
protected abstract Task<IReadOnlyList<TEntity>> ListFromPostgres(string tenantId);
[Fact]
public async Task Get_ReturnsSameEntity_FromBothBackends()
{
var entityId = GetTestEntityId();
var tenantId = GetTestTenantId();
var mongoResult = await GetFromMongo(tenantId, entityId);
var postgresResult = await GetFromPostgres(tenantId, entityId);
postgresResult.Should().BeEquivalentTo(mongoResult, options =>
options.Excluding(e => e.Path.Contains("Id"))); // IDs may differ
}
[Fact]
public async Task List_ReturnsSameEntities_FromBothBackends()
{
var tenantId = GetTestTenantId();
var mongoResults = await ListFromMongo(tenantId);
var postgresResults = await ListFromPostgres(tenantId);
postgresResults.Should().BeEquivalentTo(mongoResults, options =>
options
.Excluding(e => e.Path.Contains("Id"))
.WithStrictOrdering()); // Order must match
}
}
4.2 Advisory Matching Comparison
public class AdvisoryMatchingComparisonTests
{
[Theory]
[MemberData(nameof(GetSampleSboms))]
public async Task VulnerabilityMatching_ProducesSameResults(string sbomPath)
{
var sbom = await LoadSbomAsync(sbomPath);
// Configure Mongo backend
var mongoConfig = CreateConfig("Mongo");
var mongoScanner = CreateScanner(mongoConfig);
var mongoFindings = await mongoScanner.ScanAsync(sbom);
// Configure Postgres backend
var postgresConfig = CreateConfig("Postgres");
var postgresScanner = CreateScanner(postgresConfig);
var postgresFindings = await postgresScanner.ScanAsync(sbom);
// Compare findings
postgresFindings.Should().BeEquivalentTo(mongoFindings, options =>
options
.WithStrictOrdering()
.Using<DateTimeOffset>(ctx =>
ctx.Subject.Should().BeCloseTo(ctx.Expectation, TimeSpan.FromSeconds(1)))
.WhenTypeIs<DateTimeOffset>());
}
public static IEnumerable<object[]> GetSampleSboms()
{
yield return new object[] { "testdata/sbom-alpine-3.18.json" };
yield return new object[] { "testdata/sbom-debian-12.json" };
yield return new object[] { "testdata/sbom-nodejs-app.json" };
yield return new object[] { "testdata/sbom-python-app.json" };
}
}
4.3 VEX Graph Comparison
public class GraphRevisionComparisonTests
{
[Theory]
[MemberData(nameof(GetTestProjects))]
public async Task GraphComputation_ProducesIdenticalRevisionId(string projectId)
{
// Compute graph with Mongo backend
var mongoGraph = await ComputeGraphAsync(projectId, "Mongo");
// Compute graph with Postgres backend
var postgresGraph = await ComputeGraphAsync(projectId, "Postgres");
// Revision ID MUST be identical (hash-stable)
postgresGraph.RevisionId.Should().Be(mongoGraph.RevisionId);
// Node and edge counts should match
postgresGraph.NodeCount.Should().Be(mongoGraph.NodeCount);
postgresGraph.EdgeCount.Should().Be(mongoGraph.EdgeCount);
// VEX statements should match
var mongoStatements = await GetStatementsAsync(projectId, "Mongo");
var postgresStatements = await GetStatementsAsync(projectId, "Postgres");
postgresStatements.Should().BeEquivalentTo(mongoStatements, options =>
options
.Excluding(s => s.Id)
.WithStrictOrdering());
}
}
5. Performance Test Requirements
5.1 Benchmark Framework
[MemoryDiagnoser]
[SimpleJob(RuntimeMoniker.Net80)]
[GroupBenchmarksBy(BenchmarkLogicalGroupRule.ByCategory)]
public class RepositoryBenchmarks
{
private IScheduleRepository _mongoRepository = null!;
private IScheduleRepository _postgresRepository = null!;
private string _tenantId = null!;
private string _testScheduleId = null!;
[GlobalSetup]
public async Task Setup()
{
// Initialize both repositories and seed a known schedule to fetch
_mongoRepository = await CreateMongoRepositoryAsync();
_postgresRepository = await CreatePostgresRepositoryAsync();
(_tenantId, _testScheduleId) = await SeedTestDataAsync();
}
// BenchmarkDotNet allows only one baseline per logical group, so each
// Mongo/Postgres pair gets its own category.
[BenchmarkCategory("GetById")]
[Benchmark(Baseline = true)]
public async Task<Schedule?> Mongo_GetById()
{
return await _mongoRepository.GetAsync(_tenantId, _testScheduleId, CancellationToken.None);
}
[BenchmarkCategory("GetById")]
[Benchmark]
public async Task<Schedule?> Postgres_GetById()
{
return await _postgresRepository.GetAsync(_tenantId, _testScheduleId, CancellationToken.None);
}
[BenchmarkCategory("List100")]
[Benchmark(Baseline = true)]
public async Task<IReadOnlyList<Schedule>> Mongo_List100()
{
return await _mongoRepository.ListAsync(_tenantId,
new QueryOptions { PageSize = 100 }, CancellationToken.None);
}
[BenchmarkCategory("List100")]
[Benchmark]
public async Task<IReadOnlyList<Schedule>> Postgres_List100()
{
return await _postgresRepository.ListAsync(_tenantId,
new QueryOptions { PageSize = 100 }, CancellationToken.None);
}
}
5.2 Performance Acceptance Criteria
| Operation | Mongo Baseline | Postgres Target | Maximum Acceptable |
|---|---|---|---|
| Get by ID | X ms | ≤ X ms | ≤ 1.5X ms |
| List (100 items) | Y ms | ≤ Y ms | ≤ 1.5Y ms |
| Insert | Z ms | ≤ Z ms | ≤ 2Z ms |
| Update | W ms | ≤ W ms | ≤ 2W ms |
| Complex query | V ms | ≤ V ms | ≤ 2V ms |
5.3 Load Test Scenarios
# k6 load test configuration
scenarios:
constant_load:
executor: constant-arrival-rate
rate: 100
timeUnit: 1s
duration: 5m
preAllocatedVUs: 50
maxVUs: 100
spike_test:
executor: ramping-arrival-rate
startRate: 10
timeUnit: 1s
stages:
- duration: 1m
target: 10
- duration: 1m
target: 100
- duration: 2m
target: 100
- duration: 1m
target: 10
thresholds:
http_req_duration:
- p(95) < 200 # 95th percentile under 200ms
- p(99) < 500 # 99th percentile under 500ms
http_req_failed:
- rate < 0.01 # Error rate under 1%
6. Data Integrity Verification
6.1 Record Count Verification
public class DataIntegrityVerifier
{
public async Task<VerificationResult> VerifyCountsAsync(string module)
{
var results = new Dictionary<string, (long mongo, long postgres)>();
foreach (var collection in GetCollections(module))
{
var mongoCount = await _mongoDb.GetCollection<BsonDocument>(collection)
.CountDocumentsAsync(FilterDefinition<BsonDocument>.Empty);
var postgresCount = await GetPostgresCountAsync(collection);
results[collection] = (mongoCount, postgresCount);
}
return new VerificationResult
{
Module = module,
Counts = results,
AllMatch = results.All(r => r.Value.mongo == r.Value.postgres)
};
}
}
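Count comparison is only meaningful if both sides count the same population. If MongoDB hard-deletes documents while PostgreSQL soft-deletes rows (as the `SoftDeleteAsync` tests assume), `GetPostgresCountAsync` must pick one convention and apply it consistently; a sketch (table name assumed):

```sql
-- Live rows only, matching a Mongo collection that hard-deletes:
SELECT count(*) FROM scheduler.schedules WHERE deleted_at IS NULL;

-- All rows, if Mongo keeps tombstones too:
SELECT count(*) FROM scheduler.schedules;
```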
6.2 Checksum Verification
public class ChecksumVerifier
{
public async Task<bool> VerifyAdvisoryChecksumAsync(string advisoryKey)
{
var mongoAdvisory = await _mongoAdvisoryRepo.GetAsync(advisoryKey);
var postgresAdvisory = await _postgresAdvisoryRepo.GetAsync(advisoryKey);
if (mongoAdvisory is null || postgresAdvisory is null)
return mongoAdvisory is null && postgresAdvisory is null;
var mongoChecksum = ComputeChecksum(mongoAdvisory);
var postgresChecksum = ComputeChecksum(postgresAdvisory);
return mongoChecksum == postgresChecksum;
}
private string ComputeChecksum(Advisory advisory)
{
// Serialize with fixed options and hash. Note: System.Text.Json emits
// properties in declaration order, which is stable per CLR type but is
// not a canonical form across representations.
var json = JsonSerializer.Serialize(advisory, new JsonSerializerOptions
{
PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
WriteIndented = false,
DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
});
using var sha256 = SHA256.Create();
var hash = sha256.ComputeHash(Encoding.UTF8.GetBytes(json));
return Convert.ToHexString(hash);
}
}
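`ComputeChecksum` above relies on property order being identical on both sides. If either backend round-trips through a dynamic representation (BSON documents, JSONB), property order can drift and equal entities would hash differently. A hedged sketch of an order-insensitive canonicalization (not the project's actual helper; requires .NET 8 for `DeepClone`):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json.Nodes;

static class CanonicalChecksum
{
    // Hashes JSON after recursively sorting object properties, so two
    // semantically equal documents hash identically regardless of key order.
    public static string Compute(string json)
    {
        var canonical = Canonicalize(JsonNode.Parse(json));
        var bytes = SHA256.HashData(Encoding.UTF8.GetBytes(
            canonical?.ToJsonString() ?? "null"));
        return Convert.ToHexString(bytes);
    }

    private static JsonNode? Canonicalize(JsonNode? node) => node switch
    {
        JsonObject obj => new JsonObject(obj
            .OrderBy(p => p.Key, StringComparer.Ordinal)
            .Select(p => KeyValuePair.Create(p.Key, Canonicalize(p.Value)))),
        JsonArray arr => new JsonArray(arr.Select(Canonicalize).ToArray()),
        _ => node?.DeepClone()  // leaf values cloned so they can be re-parented
    };
}
```

Array order is deliberately preserved here: for fields like CVSS vectors, element order is semantically meaningful and should already be deterministic per the tests in section 3.3.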
6.3 Referential Integrity Verification
public class ReferentialIntegrityTests
{
[Fact]
public async Task AllForeignKeys_ReferenceExistingRecords()
{
await using var connection = await _dataSource.OpenConnectionAsync();
await using var cmd = connection.CreateCommand();
// Check for orphaned references
cmd.CommandText = """
SELECT 'advisory_aliases' as table_name, COUNT(*) as orphan_count
FROM vuln.advisory_aliases a
LEFT JOIN vuln.advisories adv ON a.advisory_id = adv.id
WHERE adv.id IS NULL
UNION ALL
SELECT 'advisory_cvss', COUNT(*)
FROM vuln.advisory_cvss c
LEFT JOIN vuln.advisories adv ON c.advisory_id = adv.id
WHERE adv.id IS NULL
-- Add more tables...
""";
await using var reader = await cmd.ExecuteReaderAsync();
while (await reader.ReadAsync())
{
var tableName = reader.GetString(0);
var orphanCount = reader.GetInt64(1);
orphanCount.Should().Be(0, $"Table {tableName} has orphaned references");
}
}
}
7. Production Verification
7.1 Dual-Write Monitoring
public class DualWriteMonitor
{
private readonly IMetrics _metrics;
private readonly ILogger<DualWriteMonitor> _logger;
public async Task RecordWriteAsync(
string module,
string operation,
bool mongoSuccess,
bool postgresSuccess,
TimeSpan mongoDuration,
TimeSpan postgresDuration)
{
_metrics.Counter("dual_write_total", new[]
{
("module", module),
("operation", operation),
("mongo_success", mongoSuccess.ToString()),
("postgres_success", postgresSuccess.ToString())
}).Inc();
_metrics.Histogram("dual_write_duration_ms", new[]
{
("module", module),
("operation", operation),
("backend", "mongo")
}).Observe(mongoDuration.TotalMilliseconds);
_metrics.Histogram("dual_write_duration_ms", new[]
{
("module", module),
("operation", operation),
("backend", "postgres")
}).Observe(postgresDuration.TotalMilliseconds);
if (mongoSuccess != postgresSuccess)
{
_metrics.Counter("dual_write_inconsistency", new[]
{
("module", module),
("operation", operation)
}).Inc();
_logger.LogWarning(
"Dual-write inconsistency: {Module}/{Operation} - Mongo: {Mongo}, Postgres: {Postgres}",
module, operation, mongoSuccess, postgresSuccess);
}
}
}
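The `dual_write_inconsistency` counter only helps if something watches it. A hedged Prometheus alerting-rule sketch (the metric name is assumed to match the counter emitted above; some exporters append a `_total` suffix):

```yaml
groups:
  - name: dual-write
    rules:
      - alert: DualWriteInconsistency
        expr: increase(dual_write_inconsistency[5m]) > 0
        labels:
          severity: warning
        annotations:
          summary: "Dual-write inconsistency in {{ $labels.module }}/{{ $labels.operation }}"
```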
7.2 Read Comparison Sampling
public class ReadComparisonSampler : BackgroundService
{
private readonly IOptions<SamplingOptions> _options;
private readonly Random _random = new();
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
if (_random.NextDouble() < _options.Value.SampleRate) // e.g., 1%
{
await CompareRandomRecordAsync(stoppingToken);
}
await Task.Delay(_options.Value.Interval, stoppingToken);
}
}
private async Task CompareRandomRecordAsync(CancellationToken ct)
{
var entityId = await GetRandomEntityIdAsync(ct);
var mongoEntity = await _mongoRepo.GetAsync(entityId, ct);
var postgresEntity = await _postgresRepo.GetAsync(entityId, ct);
if (!AreEquivalent(mongoEntity, postgresEntity))
{
_logger.LogError(
"Read comparison mismatch for entity {EntityId}",
entityId);
_metrics.Counter("read_comparison_mismatch").Inc();
}
}
}
7.3 Rollback Verification
public class RollbackVerificationTests
{
[Fact]
public async Task Rollback_RestoresMongoAsSource_WhenPostgresFails()
{
// Simulate Postgres failure
await _postgresDataSource.DisposeAsync();
// Verify system falls back to Mongo
var config = _configuration.GetSection("Persistence");
config["Scheduler"] = "Mongo"; // Simulate config change
// Operations should continue working
var schedule = await _scheduleRepository.GetAsync(
"tenant", "schedule-id", CancellationToken.None);
schedule.Should().NotBeNull();
}
}
8. Module-Specific Verification
8.1 Authority Verification
| Test | Description | Pass Criteria |
|---|---|---|
| User CRUD | Create, read, update, delete users | All operations succeed |
| Role assignment | Assign/revoke roles | Roles correctly applied |
| Token issuance | Issue OAuth tokens | Tokens valid and verifiable |
| Token verification | Verify issued tokens | Verification succeeds |
| Login tracking | Record login attempts | Attempts logged correctly |
| License validation | Check license validity | Same result both backends |
8.2 Scheduler Verification
| Test | Description | Pass Criteria |
|---|---|---|
| Schedule CRUD | All CRUD operations | Data integrity preserved |
| Trigger calculation | Next fire time calculation | Identical results |
| Run history | Run creation and completion | Correct state transitions |
| Impact snapshots | Finding aggregation | Same counts and severity |
| Worker registration | Worker heartbeats | Consistent status |
8.3 Vulnerability Verification
| Test | Description | Pass Criteria |
|---|---|---|
| Advisory ingest | Import from feed | All advisories imported |
| Alias resolution | CVE → Advisory lookup | Same advisory returned |
| CVSS lookup | Get CVSS scores | Identical scores |
| Affected package match | PURL matching | Same vulnerabilities found |
| KEV flag lookup | Check KEV status | Correct flag status |
8.4 VEX Verification
| Test | Description | Pass Criteria |
|---|---|---|
| Graph revision | Compute revision ID | Identical revision IDs |
| Node/edge counts | Graph structure | Same counts |
| VEX statements | Status determination | Same statuses |
| Consensus computation | Aggregate signals | Same consensus |
| Evidence manifest | Merkle root | Identical roots |
9. Verification Checklist
Per-Module Checklist
- [ ] All unit tests pass with PostgreSQL
- [ ] Tenant isolation tests pass
- [ ] Determinism tests pass
- [ ] Performance benchmarks within tolerance
- [ ] Record counts match between MongoDB and PostgreSQL
- [ ] Checksum verification passes for sample data
- [ ] Referential integrity verified
- [ ] Comparison tests pass for all scenarios
- [ ] Load tests pass with acceptable metrics
Pre-Production Checklist
- [ ] Dual-write monitoring in place
- [ ] Read comparison sampling enabled
- [ ] Rollback procedure tested
- [ ] Performance baselines established
- [ ] Alert thresholds configured
- [ ] Runbook documented
Post-Switch Checklist
- [ ] Dual-write window closed; no inconsistencies observed (retired post-cutover)
- [ ] Read comparison sampling shows 100% match
- [ ] Performance within acceptable range
- [ ] No data integrity alerts
- [ ] MongoDB reads disabled
- [ ] MongoDB backups archived
Note: Authority and Notify have completed cutover and verification; remaining modules pending.
10. Reporting
10.1 Verification Report Template
# Database Conversion Verification Report
## Module: [Module Name]
## Date: [YYYY-MM-DD]
## Status: [PASS/FAIL]
### Summary
- Total Tests: X
- Passed: Y
- Failed: Z
### Unit Tests
| Category | Passed | Failed | Notes |
|----------|--------|--------|-------|
| CRUD | | | |
| Isolation| | | |
| Determinism | | | |
### Comparison Tests
| Test | Status | Notes |
|------|--------|-------|
| | | |
### Performance
| Operation | Mongo | Postgres | Diff |
|-----------|-------|----------|------|
| | | | |
### Data Integrity
- Record count match: [YES/NO]
- Checksum verification: [PASS/FAIL]
- Referential integrity: [PASS/FAIL]
### Sign-off
- [ ] QA Engineer
- [ ] Tech Lead
- [ ] Product Owner
Document Version: 1.0.0 Last Updated: 2025-11-28