# Signal Driver / Backend Matrix 2026-03-17
## Purpose
This snapshot records the current synthetic signal round-trip comparison across six backend profiles:

- `Oracle`
- `PostgreSQL`
- `Mongo`
- `Oracle+Redis`
- `PostgreSQL+Redis`
- `Mongo+Redis`
The matrix is artifact-driven. Every value comes from measured JSON artifacts under `TestResults/workflow-performance/`. No hand-entered metric values are used.
The exact generated matrix artifacts are:

- `WorkflowPerfComparison/20260317T210643496-workflow-backend-signal-roundtrip-six-profile-matrix.md`
- `WorkflowPerfComparison/20260317T210643496-workflow-backend-signal-roundtrip-six-profile-matrix.json`
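As an illustration of the artifact-driven approach, here is a minimal sketch of how per-profile metrics might be collected from the measured JSON artifacts. The `Profile` and `Metrics` field names are assumptions for illustration, not the real artifact schema.

```python
import json
from pathlib import Path


def load_profile_metrics(artifact_dir: str) -> dict:
    """Collect per-profile metric dictionaries from measured JSON artifacts.

    Hypothetical sketch: the "Profile" and "Metrics" keys are assumed,
    not taken from the real artifact schema.
    """
    matrix = {}
    for path in sorted(Path(artifact_dir).glob("*.json")):
        doc = json.loads(path.read_text())
        # Index each artifact's metrics by its profile name so the
        # matrix columns can be generated without hand-entered values.
        matrix[doc["Profile"]] = doc["Metrics"]
    return matrix
```

The point of the sketch is the invariant, not the schema: every cell in the tables below is traceable to a measured artifact on disk.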
## Serial Latency
The primary comparison rows in this section are:

- `Signal to first completion avg`
- `Drain-to-idle overhang avg`
`Signal to completion avg` is a composite figure: it includes both the real resume work and the benchmark's drain policy.
| Metric | Unit | Oracle | PostgreSQL | Mongo | Oracle+Redis | PostgreSQL+Redis | Mongo+Redis |
| --- | --- | ---: | ---: | ---: | ---: | ---: | ---: |
| End-to-end avg | ms | 3091.73 | 3101.57 | 151.36 | 3223.22 | 3073.70 | 3099.51 |
| End-to-end p95 | ms | 3492.73 | 3143.39 | 308.90 | 3644.66 | 3090.75 | 3162.04 |
| Start avg | ms | 105.88 | 16.35 | 38.39 | 110.03 | 8.32 | 21.77 |
| Signal publish avg | ms | 23.39 | 11.47 | 14.30 | 23.90 | 7.55 | 10.43 |
| Signal to first completion avg | ms | 76.15 | 37.56 | 55.06 | 81.46 | 31.77 | 40.88 |
| Signal to completion avg | ms | 2985.81 | 3085.21 | 112.92 | 3113.11 | 3065.38 | 3077.73 |
| Drain-to-idle overhang avg | ms | 2909.65 | 3047.65 | 57.86 | 3031.66 | 3033.61 | 3036.85 |
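The avg and p95 rows are aggregate statistics over per-operation latency samples. As a hypothetical sketch (not the benchmark's actual code), a nearest-rank p95 could be computed like this:

```python
import math


def p95(samples: list[float]) -> float:
    """Nearest-rank 95th percentile over per-operation latency samples.

    Assumption for illustration only: the real benchmark may use a
    different percentile definition (e.g. linear interpolation).
    """
    ordered = sorted(samples)
    # Nearest-rank: take the ceil(0.95 * n)-th smallest sample.
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]
```

Whatever the exact percentile method, the p95/avg gap in the table (e.g. Oracle's 3492.73 vs 3091.73 ms end-to-end) is what signals tail-latency spread.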
## Parallel Throughput
| Metric | Unit | Oracle | PostgreSQL | Mongo | Oracle+Redis | PostgreSQL+Redis | Mongo+Redis |
| --- | --- | ---: | ---: | ---: | ---: | ---: | ---: |
| Throughput | ops/s | 24.17 | 26.28 | 119.51 | 21.88 | 25.51 | 25.14 |
| End-to-end avg | ms | 3740.84 | 3546.11 | 688.57 | 4147.82 | 3643.70 | 3701.72 |
| End-to-end p95 | ms | 3841.33 | 3554.13 | 701.92 | 4243.83 | 3675.15 | 3721.14 |
| Start avg | ms | 47.44 | 11.32 | 17.89 | 55.82 | 17.07 | 17.04 |
| Signal publish avg | ms | 15.62 | 15.11 | 10.85 | 23.80 | 10.53 | 12.27 |
| Signal to completion avg | ms | 3525.84 | 3469.46 | 590.78 | 3872.54 | 3564.43 | 3598.14 |
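The throughput row is the usual completed-operations-over-wall-clock ratio. A minimal sketch of that definition (an assumption; the benchmark's exact formula is not shown in this snapshot):

```python
def throughput_ops_per_sec(operation_count: int, elapsed_ms: float) -> float:
    """Throughput as completed operations divided by wall-clock time.

    Hypothetical definition for illustration; the real harness may
    measure elapsed time differently (e.g. first-start to last-drain).
    """
    return operation_count / (elapsed_ms / 1000.0)
```

Under this definition, Mongo's roughly 5x throughput advantage over the relational profiles follows directly from its much lower per-operation end-to-end latency.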
## Integrity
The comparison test required all six columns to pass these checks on both the serial-latency and parallel-throughput source artifacts:

- `Failures = 0`
- `DeadLetteredSignals = 0`
- `RuntimeConflicts = 0`
- `StuckInstances = 0`
- `WorkflowsStarted = OperationCount`
- `SignalsPublished = OperationCount`
- `SignalsProcessed = OperationCount`
All six columns passed.
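The seven checks amount to a single conjunction over each profile's metrics. A sketch of that gate, with field names assumed to mirror the check names above (not the real artifact schema):

```python
def integrity_ok(metrics: dict, operation_count: int) -> bool:
    """Gate mirroring the seven integrity checks.

    Hypothetical sketch: metric key names are assumed from the check
    list, not taken from the real artifact schema.
    """
    return (
        metrics.get("Failures") == 0
        and metrics.get("DeadLetteredSignals") == 0
        and metrics.get("RuntimeConflicts") == 0
        and metrics.get("StuckInstances") == 0
        # Conservation checks: every started workflow published and
        # processed exactly one signal in the synthetic round-trip.
        and metrics.get("WorkflowsStarted") == operation_count
        and metrics.get("SignalsPublished") == operation_count
        and metrics.get("SignalsProcessed") == operation_count
    )
```

Any profile failing this gate would disqualify its column from the latency and throughput comparisons above.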
## Interpretation
The main conclusions from this six-profile matrix are:
- Native Mongo is still the fastest measured profile for the synthetic signal round-trip.
- Native PostgreSQL remains the best-performing relational profile.
- Oracle+Redis is slower than native Oracle in this benchmark.
- PostgreSQL+Redis is very close to native PostgreSQL, but not clearly better.
- Mongo+Redis is dramatically worse than native Mongo because the Redis path reintroduces the empty-wait overhang that native change streams avoid.
The most useful row for actual resume speed is `Signal to first completion avg`, not the composite `Signal to completion avg`, because the latter still includes the drain-to-idle policy.
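That decomposition can be verified directly from the serial-latency table: within two-decimal rounding, `Signal to completion avg` equals `Signal to first completion avg` plus `Drain-to-idle overhang avg` for every profile. A quick check, with all values copied from the table above:

```python
# Serial-latency rows from the table above, in ms.
first = {"Oracle": 76.15, "PostgreSQL": 37.56, "Mongo": 55.06,
         "Oracle+Redis": 81.46, "PostgreSQL+Redis": 31.77,
         "Mongo+Redis": 40.88}
drain = {"Oracle": 2909.65, "PostgreSQL": 3047.65, "Mongo": 57.86,
         "Oracle+Redis": 3031.66, "PostgreSQL+Redis": 3033.61,
         "Mongo+Redis": 3036.85}
total = {"Oracle": 2985.81, "PostgreSQL": 3085.21, "Mongo": 112.92,
         "Oracle+Redis": 3113.11, "PostgreSQL+Redis": 3065.38,
         "Mongo+Redis": 3077.73}

for profile in total:
    # Sum of the two component rows reproduces the composite row to
    # within the 0.01 ms error introduced by two-decimal reporting.
    assert abs(first[profile] + drain[profile] - total[profile]) < 0.02
```

This is why, for example, Mongo's 112.92 ms composite figure is dominated by its 57.86 ms drain overhang rather than by resume work, while the relational profiles carry roughly 3 s of drain policy in theirs.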