feat(rate-limiting): Implement core rate limiting functionality with configuration, decision-making, metrics, middleware, and service registration
- Add RateLimitConfig for configuration management with YAML binding support.
- Introduce RateLimitDecision to encapsulate the result of rate limit checks.
- Implement RateLimitMetrics for OpenTelemetry metrics tracking.
- Create RateLimitMiddleware for enforcing rate limits on incoming requests.
- Develop RateLimitService to orchestrate instance and environment rate limit checks.
- Add RateLimitServiceCollectionExtensions for dependency injection registration.
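Taken together, the components read as config → decision → enforcement → registration. The sketch below illustrates one way the pieces could fit; only the type names come from the commit message, while every member, default value, and the `rateLimiting` section name are illustrative assumptions, and `RateLimitMetrics` is omitted for brevity.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;

// Bound from configuration; the section and property names are assumptions.
public sealed class RateLimitConfig
{
    public bool Enabled { get; set; } = true;
    public int InstanceRequestsPerMinute { get; set; } = 600;
    public int EnvironmentRequestsPerMinute { get; set; } = 6000;
}

// Outcome of a rate-limit check, with an optional retry hint for 429 responses.
public sealed record RateLimitDecision(bool Allowed, TimeSpan? RetryAfter = null);

// Orchestrates the instance- and environment-level checks (counters elided here).
public sealed class RateLimitService
{
    private readonly RateLimitConfig _config;
    public RateLimitService(IOptions<RateLimitConfig> options) => _config = options.Value;

    public RateLimitDecision Check(string instanceKey)
    {
        if (!_config.Enabled) return new RateLimitDecision(true);
        // Real logic would consult per-instance and per-environment counters here.
        return new RateLimitDecision(true);
    }
}

// Rejects over-limit requests with 429 before they reach the rest of the pipeline.
public sealed class RateLimitMiddleware
{
    private readonly RequestDelegate _next;
    public RateLimitMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context, RateLimitService service)
    {
        var decision = service.Check(context.Request.Host.Value);
        if (!decision.Allowed)
        {
            context.Response.StatusCode = StatusCodes.Status429TooManyRequests;
            if (decision.RetryAfter is { } retry)
                context.Response.Headers["Retry-After"] = ((int)retry.TotalSeconds).ToString();
            return;
        }
        await _next(context);
    }
}

public static class RateLimitServiceCollectionExtensions
{
    public static IServiceCollection AddRateLimiting(this IServiceCollection services, IConfiguration configuration)
    {
        services.Configure<RateLimitConfig>(configuration.GetSection("rateLimiting"));
        services.AddSingleton<RateLimitService>();
        return services;
    }
}
```

In an ASP.NET Core host this would typically be activated with `builder.Services.AddRateLimiting(builder.Configuration)` and `app.UseMiddleware<RateLimitMiddleware>()`.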
@@ -26,7 +26,7 @@ src/
├─ StellaOps.Scheduler.Worker/             # planners + runners (N replicas)
├─ StellaOps.Scheduler.ImpactIndex/        # purl→images inverted index (roaring bitmaps)
├─ StellaOps.Scheduler.Models/             # DTOs (Schedule, Run, ImpactSet, Deltas)
-├─ StellaOps.Scheduler.Storage.Mongo/      # schedules, runs, cursors, locks
+├─ StellaOps.Scheduler.Storage.Postgres/   # schedules, runs, cursors, locks
├─ StellaOps.Scheduler.Queue/              # Redis Streams / NATS abstraction
├─ StellaOps.Scheduler.Tests.*             # unit/integration/e2e
```
@@ -36,7 +36,7 @@ src/
* **Scheduler.WebService** (stateless)
* **Scheduler.Worker** (scale‑out; planners + executors)

-**Dependencies**: Authority (OpTok + DPoP/mTLS), Scanner.WebService, Conselier, Excitor, MongoDB, Redis/NATS, (optional) Notify.
+**Dependencies**: Authority (OpTok + DPoP/mTLS), Scanner.WebService, Conselier, Excitor, PostgreSQL, Redis/NATS, (optional) Notify.

---
@@ -52,7 +52,7 @@ src/

---

-## 3) Data model (Mongo)
+## 3) Data model (PostgreSQL)

**Database**: `scheduler`
@@ -111,7 +111,7 @@ Goal: translate **change keys** → **image sets** in **milliseconds**.
* `Contains[purl] → bitmap(imageIds)`
* `UsedBy[purl] → bitmap(imageIds)` (subset of Contains)
* Optionally keep **Owner maps**: `{imageId → {tenantId, namespaces[], repos[]}}` for selection filters.
-* Persist in RocksDB/LMDB or Redis‑modules; cache hot shards in memory; snapshot to Mongo for cold start.
+* Persist in RocksDB/LMDB or Redis‑modules; cache hot shards in memory; snapshot to PostgreSQL for cold start.
**Update paths**:
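To make the lookup shape above concrete, here is a minimal sketch of the `Contains`/`UsedBy` maps, with a plain `HashSet<int>` standing in for the roaring bitmaps the document describes; all type and member names are hypothetical.

```csharp
using System.Collections.Generic;

// Illustrative in-memory shape of the purl → imageIds maps described above.
// HashSet<int> stands in for a roaring bitmap; every name here is hypothetical.
public sealed class ImpactIndexSketch
{
    private readonly Dictionary<string, HashSet<int>> _contains = new();
    private readonly Dictionary<string, HashSet<int>> _usedBy = new();

    // Record that an image contains a package; usedAtRuntime keeps UsedBy ⊆ Contains.
    public void Add(string purl, int imageId, bool usedAtRuntime)
    {
        GetOrAdd(_contains, purl).Add(imageId);
        if (usedAtRuntime)
            GetOrAdd(_usedBy, purl).Add(imageId);
    }

    // Union the per-purl bitmaps to translate a set of change keys into impacted images.
    public IReadOnlySet<int> ResolveImpactedImages(IEnumerable<string> changedPurls, bool usageOnly = false)
    {
        var source = usageOnly ? _usedBy : _contains;
        var impacted = new HashSet<int>();
        foreach (var purl in changedPurls)
            if (source.TryGetValue(purl, out var images))
                impacted.UnionWith(images);
        return impacted;
    }

    private static HashSet<int> GetOrAdd(Dictionary<string, HashSet<int>> map, string key)
        => map.TryGetValue(key, out var set) ? set : map[key] = new HashSet<int>();
}
```

Resolving an impacted set is then just a union of per-purl bitmaps, which is what keeps the change-key → image-set translation in the millisecond range.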
@@ -298,8 +298,8 @@ scheduler:
  queue:
    kind: "redis"        # or "nats"
    url: "redis://redis:6379/4"
-  mongo:
-    uri: "mongodb://mongo/scheduler"
+  postgres:
+    connectionString: "Host=postgres;Port=5432;Database=scheduler;Username=stellaops;Password=stellaops"
  impactIndex:
    storage: "rocksdb"   # "rocksdb" | "redis" | "memory"
    warmOnStart: true
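For reference, a hedged sketch of binding the `scheduler` section above to a typed options class in .NET; the keys mirror the YAML shown, but the class names, the `AddSchedulerOptions` helper, and the assumption that a YAML configuration provider is registered are all illustrative.

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical options class mirroring the "scheduler" YAML section shown above.
public sealed class SchedulerOptions
{
    public QueueOptions Queue { get; set; } = new();
    public PostgresOptions Postgres { get; set; } = new();
    public ImpactIndexOptions ImpactIndex { get; set; } = new();

    public sealed class QueueOptions
    {
        public string Kind { get; set; } = "redis";        // "redis" | "nats"
        public string Url { get; set; } = string.Empty;
    }

    public sealed class PostgresOptions
    {
        public string ConnectionString { get; set; } = string.Empty;
    }

    public sealed class ImpactIndexOptions
    {
        public string Storage { get; set; } = "rocksdb";   // "rocksdb" | "redis" | "memory"
        public bool WarmOnStart { get; set; } = true;
    }
}

public static class SchedulerOptionsRegistration
{
    // Assumes a YAML configuration provider has already populated IConfiguration.
    public static IServiceCollection AddSchedulerOptions(
        this IServiceCollection services, IConfiguration configuration)
        => services.Configure<SchedulerOptions>(configuration.GetSection("scheduler"));
}
```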
@@ -335,7 +335,7 @@ scheduler:
| Scanner under load (429) | Backoff with jitter; respect per‑tenant/leaky bucket |
| Oversubscription (too many impacted) | Prioritize KEV/critical first; spillover to next window; UI banner shows backlog |
| Notify down | Buffer outbound events in queue (TTL 24h) |
-| Mongo slow | Cut batch sizes; sample‑log; alert ops; don’t drop runs unless critical |
+| PostgreSQL slow | Cut batch sizes; sample‑log; alert ops; don’t drop runs unless critical |

---
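The "Backoff with jitter" mitigation in the table above is the one row that maps directly onto code; a minimal full-jitter helper might look like the following (names, bounds, and defaults are assumptions, not the actual retry policy):

```csharp
using System;

// Full-jitter exponential backoff: each retry waits a random delay in
// [0, min(maxDelay, baseDelay * 2^attempt)), so concurrent retries spread out
// instead of stampeding the downstream service.
public static class RetryBackoff
{
    public static TimeSpan NextDelay(int attempt, TimeSpan baseDelay, TimeSpan maxDelay)
    {
        var capMs = Math.Min(maxDelay.TotalMilliseconds,
                             baseDelay.TotalMilliseconds * Math.Pow(2, attempt));
        return TimeSpan.FromMilliseconds(Random.Shared.NextDouble() * capMs);
    }
}
```

On a 429 from Scanner.WebService a worker would wait for something like `NextDelay(attempt, TimeSpan.FromMilliseconds(200), TimeSpan.FromSeconds(30))` (illustrative values) before retrying.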