Source: git.stella-ops.org/docs/workflow/engine/13-backend-comparison-2026-03-17.md

Backend Comparison 2026-03-17

Purpose

This document compares the current Oracle, PostgreSQL, and MongoDB workflow-engine backends using the published normalized performance baselines and the currently implemented backend-specific validation suites.

This is a decision pack for the current local Docker benchmark set. It is not the final production recommendation pack yet, because hostile-condition and Bulstrad hardening depth is still strongest on Oracle.

The durable machine-readable companion is 13-backend-comparison-2026-03-17.json.

For the exact six-profile signal-driver matrix, including Oracle+Redis, PostgreSQL+Redis, and Mongo+Redis, see 14-signal-driver-backend-matrix-2026-03-17.md.

Source Baselines

Validation Status

Current comparison-relevant validation state:

  • Oracle performance suite: 12/12 passed
  • PostgreSQL performance suite: 11/11 passed
  • MongoDB performance suite: 14/14 passed
  • PostgreSQL focused backend parity suite: 9/9 passed
  • MongoDB focused backend parity suite: 23/23 passed
  • integration project build: 0 warnings, 0 errors

Important caveat:

  • Oracle still has the broadest hostile-condition and Bulstrad E2E matrix.
  • PostgreSQL and MongoDB now have backend-native signal/runtime/projection/performance coverage plus curated Bulstrad parity, but they do not yet match Oracle's full reliability surface.

Normalized Comparison

Synthetic Signal Round-Trip

| Metric | Oracle | PostgreSQL | MongoDB |
| --- | --- | --- | --- |
| Serial latency avg (ms) | 3104.85 | 3079.33 | 97.88 |
| Serial latency P95 (ms) | 3165.04 | 3094.94 | 149.20 |
| Throughput (ops/s) | 20.98 | 25.74 | 76.28 |
| Throughput avg (ms) | 4142.13 | 3603.54 | 1110.94 |
| Throughput P95 (ms) | 4215.64 | 3635.59 | 1121.22 |
| Soak (ops/s) | 3.91 | 4.30 | 47.62 |
| Soak avg (ms) | 4494.29 | 4164.52 | 322.40 |
| Soak P95 (ms) | 5589.33 | 4208.42 | 550.50 |
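The avg, P95, and ops/s columns above can be derived from raw per-operation timings with a small helper. This is a hedged sketch only: the actual benchmark harness lives in the StellaOps.Workflow test suite and is not shown here, and the `normalize` name and nearest-rank percentile choice are assumptions for illustration.

```python
# Sketch: derive normalized metrics (avg, P95, ops/s) from raw
# per-operation timings. Assumed inputs: a list of durations in ms
# and the wall-clock time of the whole run in seconds. This mirrors
# the shape of the table above, not the real harness.
import math

def normalize(durations_ms, wall_clock_s):
    ordered = sorted(durations_ms)
    avg_ms = sum(ordered) / len(ordered)
    # Nearest-rank P95: smallest sample at or above the 95% rank.
    rank = max(math.ceil(0.95 * len(ordered)) - 1, 0)
    p95_ms = ordered[rank]
    ops_per_s = len(ordered) / wall_clock_s
    return {"avg_ms": round(avg_ms, 2),
            "p95_ms": round(p95_ms, 2),
            "ops_per_s": round(ops_per_s, 2)}
```

For example, 20 operations finishing in 2 seconds yields 10 ops/s regardless of the latency shape, which is why the tables report latency and throughput separately.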

Capacity Ladder

| Concurrency | Oracle (ops/s) | PostgreSQL (ops/s) | MongoDB (ops/s) |
| --- | --- | --- | --- |
| c1 | 3.37 | 4.11 | 7.08 |
| c4 | 15.22 | 17.29 | 38.35 |
| c8 | 21.34 | 33.21 | 66.04 |
| c16 | 34.03 | 57.04 | 68.65 |
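A capacity ladder like the one above runs the same workload at stepwise-increasing concurrency and records throughput per rung. The following is a minimal sketch under stated assumptions: `op` is a hypothetical stand-in for one signal round-trip, and the real harness (which drives the actual backends) is not shown here.

```python
# Sketch: a minimal concurrency-ladder harness in the spirit of the
# c1/c4/c8/c16 table. Each rung executes the same total amount of
# work at a different worker count and reports ops/s for that rung.
import time
from concurrent.futures import ThreadPoolExecutor

def run_ladder(op, rungs=(1, 4, 8, 16), ops_per_rung=32):
    results = {}
    for c in rungs:
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=c) as pool:
            # Same workload per rung; only the concurrency changes.
            list(pool.map(lambda _: op(), range(ops_per_rung)))
        elapsed = time.perf_counter() - start
        results[f"c{c}"] = round(ops_per_rung / elapsed, 2)
    return results
```

Reading the table through this lens: a backend scales cleanly while ops/s roughly tracks the rung width, and a flattening curve (as MongoDB shows between c8 and c16) signals the first saturation point.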

Transport Baselines

| Scenario | Oracle (ops/s) | PostgreSQL (ops/s) | MongoDB (ops/s) |
| --- | --- | --- | --- |
| Immediate burst nightly | 50.18 | 70.10 | 46.18 |
| Delayed burst nightly | 10.71 | 19.60 | 16.66 |
| Immediate burst smoke | 56.94 | 70.21 | 32.11 |
| Delayed burst smoke | 2.86 | 5.59 | 4.97 |

Bulstrad Workloads

| Scenario | Oracle (ops/s) | PostgreSQL (ops/s) | MongoDB (ops/s) |
| --- | --- | --- | --- |
| QuoteOrAplCancel smoke | 19.69 | 59.88 | 23.48 |
| QuotationConfirm -> ConvertToPolicy nightly | 1.77 | 3.33 | 10.83 |

Backend-Specific Observations

Oracle

  • Strongest validation depth and strongest correctness story.
  • Main cost center is still commit pressure.
  • Dominant wait is log file sync in almost every measured scenario.
  • c8 is still the last comfortable rung on the local Oracle Free setup.

PostgreSQL

  • Best relational performance profile in the current measurements.
  • Immediate transport is the strongest of the three measured backends.
  • Dominant wait is Client:ClientRead, which points to queue-claim cadence and short-transaction wake behavior, not a clear storage stall.
  • c16 still scales cleanly on this benchmark set and does not yet show a hard saturation cliff.
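A Client:ClientRead-dominated wait profile generally means the server spends its time waiting for the client between short transactions, which is exactly what a poll-based queue-claim loop produces. A hedged sketch of that cadence, with a hypothetical `try_claim` callback standing in for the backend's actual queue-claim transaction:

```python
# Sketch: a poll-based queue-claim loop. Between claim attempts the
# connection sits idle, which a relational backend reports as waiting
# on the client (e.g. Client:ClientRead) rather than on storage.
# `try_claim` is a hypothetical stand-in for the real claim query;
# it returns a claimed item or None when the queue is empty.
import time

def claim_loop(try_claim, poll_interval_s=0.05, deadline_s=1.0):
    deadline = time.monotonic() + deadline_s
    while time.monotonic() < deadline:
        item = try_claim()           # short transaction against the queue
        if item is not None:
            return item
        time.sleep(poll_interval_s)  # idle gap -> client-side wait
    return None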
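A Client:ClientRead-dominated wait profile generally means the server spends its time waiting for the client between short transactions, which is exactly what a poll-based queue-claim loop produces. A hedged sketch of that cadence, with a hypothetical `try_claim` callback standing in for the backend's actual queue-claim transaction:

```python
# Sketch: a poll-based queue-claim loop. Between claim attempts the
# connection sits idle, which a relational backend reports as waiting
# on the client (e.g. Client:ClientRead) rather than on storage.
# `try_claim` is a hypothetical stand-in for the real claim query;
# it returns a claimed item or None when the queue is empty.
import time

def claim_loop(try_claim, poll_interval_s=0.05, deadline_s=1.0):
    deadline = time.monotonic() + deadline_s
    while time.monotonic() < deadline:
        item = try_claim()           # short transaction against the queue
        if item is not None:
            return item
        time.sleep(poll_interval_s)  # idle gap -> client-side wait
    return None
```

Tightening the poll interval or adding a wake hint reduces the idle gaps, which is why this wait pattern points at claim cadence rather than a storage stall.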

MongoDB

  • Fastest measured backend across the synthetic signal round-trip workloads and the medium Bulstrad nightly flow.
  • No stable top wait dominated the measured runs; current analysis is more meaningful through normalized metrics and Mongo counters than through wait classification.
  • The most significant findings were correctness issues discovered and fixed during the performance work:
    • bounded empty-queue receive
    • explicit collection bootstrap before transactional concurrency
  • c16 is the first rung where latency visibly expands even though throughput still rises.
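The bounded empty-queue receive fix above can be illustrated generically: a receive against an empty queue should return within a time bound instead of blocking indefinitely. This is a hedged sketch using a stdlib queue; the real fix targets the MongoDB-backed queue and is not reproduced here.

```python
# Sketch: bounded receive on an empty queue. Instead of blocking
# forever when no message is available, the call gives up after
# `timeout_s` and returns None, so callers can resume their loop.
# Illustrated with Python's stdlib queue, not the MongoDB store.
import queue

def bounded_receive(q, timeout_s=0.1):
    try:
        return q.get(timeout=timeout_s)
    except queue.Empty:
        return None
```

The design point is that the empty-queue path becomes a normal, bounded outcome rather than an unbounded wait, which matters under transactional concurrency tests where an empty queue is routine.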

Decision View

Raw Performance Ranking

For the current local benchmark set:

  1. MongoDB
  2. PostgreSQL
  3. Oracle

This order is stable for:

  • serial latency
  • steady-state synthetic throughput
  • soak throughput
  • medium Bulstrad nightly flow

Validation Maturity Ranking

For the current implementation state:

  1. Oracle
  2. PostgreSQL
  3. MongoDB

Reason:

  • Oracle has the deepest hostile-condition and Bulstrad E2E surface.
  • PostgreSQL now has a solid backend-native suite and competitive performance, but less reliability breadth than Oracle.
  • MongoDB now has good backend-native and performance coverage, but its operational model is still the most infrastructure-sensitive because it depends on replica-set transactions plus change-stream wake behavior.

Current Recommendation

If the next backend after Oracle must be chosen today:

  • choose PostgreSQL as the next default portability target

Reason:

  • it materially outperforms Oracle on the normalized workflow workloads
  • it preserves the relational operational model for runtime state and projections
  • its wake-hint model is simpler to reason about operationally than MongoDB change streams
  • it now has enough backend-native correctness and Bulstrad parity coverage to be a credible second backend

If the decision is based only on benchmark speed:

  • MongoDB is currently fastest

But raw benchmark speed is not yet the same thing as the safest operational recommendation.

What Remains Before A Final Production Recommendation

  • expand PostgreSQL hostile-condition coverage to match the broader Oracle matrix
  • expand MongoDB hostile-condition coverage to match the broader Oracle matrix
  • widen the curated Bulstrad parity pack further on PostgreSQL and MongoDB
  • rerun the shared parity pack on all three backends in one closeout pass
  • add environment-to-environment reruns before turning local Docker numbers into sizing guidance

Short Conclusion

The engine is now backend-comparable on normalized performance across Oracle, PostgreSQL, and MongoDB.

The current picture is:

  • Oracle is the most validated backend
  • PostgreSQL is the best performance-to-operability compromise
  • MongoDB is the fastest measured backend but not yet the safest backend recommendation