Add comprehensive security tests for OWASP A02, A05, A07, and A08 categories
- Implemented tests for Cryptographic Failures (A02) to ensure proper handling of sensitive data, secure algorithms, and key management.
- Added tests for Security Misconfiguration (A05) to validate production configurations, security headers, CORS settings, and feature management.
- Developed tests for Authentication Failures (A07) to enforce strong password policies, rate limiting, session management, and MFA support.
- Created tests for Software and Data Integrity Failures (A08) to verify artifact signatures, SBOM integrity, attestation chains, and feed updates.

tests/load/README.md (new file, 88 lines added)
@@ -0,0 +1,88 @@

# Load Tests

This directory contains k6 load test suites for StellaOps performance testing.

## Prerequisites

- [k6](https://k6.io/docs/getting-started/installation/) installed
- Target environment accessible
- (Optional) Grafana k6 Cloud for distributed testing

## Test Suites

### TTFS Load Test (`ttfs-load-test.js`)

Tests the Time to First Signal endpoint under various load conditions.

**Scenarios:**

- **Sustained**: 50 RPS for 5 minutes (normal operation)
- **Spike**: Ramp from 50 to 200 RPS, hold, ramp down (CI burst simulation)
- **Soak**: 25 RPS for 15 minutes (stability test)

**Thresholds (per Advisory §12.4):**

- Cache-hit P95 ≤ 250ms
- Cold-path P95 ≤ 500ms
- Error rate < 0.1%

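In the script these limits are expressed as k6 `thresholds` keyed by the custom trend metrics; the sustained-scenario entries look like this (excerpted from `ttfs-load-test.js`):

```js
export const options = {
  thresholds: {
    'ttfs_cache_hit_latency_ms{scenario:sustained}': ['p(95)<250'], // cache-hit P95 ≤ 250ms
    'ttfs_cold_path_latency_ms{scenario:sustained}': ['p(95)<500'], // cold-path P95 ≤ 500ms
    'ttfs_error_rate': ['rate<0.001'],                              // error rate < 0.1%
  },
};
```
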
**Run locally:**

```bash
k6 run tests/load/ttfs-load-test.js
```

**Run against staging:**

```bash
k6 run --env BASE_URL=https://staging.stellaops.local \
  --env AUTH_TOKEN=$STAGING_TOKEN \
  tests/load/ttfs-load-test.js
```

**Run with custom run IDs:**

```bash
k6 run --env BASE_URL=http://localhost:5000 \
  --env RUN_IDS='["run-1","run-2","run-3"]' \
  tests/load/ttfs-load-test.js
```

## CI Integration

Load tests can be integrated into CI pipelines. See `.gitea/workflows/load-test.yml` for an example.

```yaml
load-test-ttfs:
  runs-on: ubuntu-latest
  needs: [deploy-staging]
  steps:
    - uses: grafana/k6-action@v0.3.1
      with:
        filename: tests/load/ttfs-load-test.js
      env:
        BASE_URL: ${{ secrets.STAGING_URL }}
        AUTH_TOKEN: ${{ secrets.STAGING_TOKEN }}
```

## Results

Test results are written to `results/ttfs-load-test-latest.json` and timestamped files.

Use Grafana Cloud or local Prometheus + Grafana to visualize results:

```bash
k6 run --out json=results/metrics.json tests/load/ttfs-load-test.js
```

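To stream metrics into Prometheus instead of post-processing the JSON, a sketch using k6's experimental Prometheus remote-write output (the server URL below is a placeholder for a Prometheus instance with the remote-write receiver enabled):

```bash
export K6_PROMETHEUS_RW_SERVER_URL=http://localhost:9090/api/v1/write  # placeholder URL
k6 run -o experimental-prometheus-rw tests/load/ttfs-load-test.js
```
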
## Writing New Load Tests

1. Create a new `.js` file in this directory
2. Define scenarios, thresholds, and the default function
3. Use custom metrics for domain-specific measurements
4. Add `handleSummary` for result export
5. Update this README

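A minimal skeleton following these steps might look like the sketch below (the file name, endpoint, and metric name are illustrative placeholders, not part of the existing suites):

```js
// tests/load/example-load-test.js -- illustrative skeleton only
import http from 'k6/http';
import { check } from 'k6';
import { Trend } from 'k6/metrics';
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.3/index.js';

// Custom metric for domain-specific measurements (step 3)
const exampleLatency = new Trend('example_latency_ms');

// Scenarios and thresholds (step 2)
export const options = {
  scenarios: {
    smoke: {
      executor: 'constant-arrival-rate',
      rate: 10,
      timeUnit: '1s',
      duration: '1m',
      preAllocatedVUs: 10,
    },
  },
  thresholds: {
    example_latency_ms: ['p(95)<500'],
  },
};

const BASE_URL = __ENV.BASE_URL || 'http://localhost:5000';

export default function () {
  const res = http.get(`${BASE_URL}/health`);
  exampleLatency.add(res.timings.duration);
  check(res, { 'status is 200': (r) => r.status === 200 });
}

// Export results (step 4)
export function handleSummary(data) {
  return {
    stdout: textSummary(data, { indent: ' ', enableColors: true }),
    'results/example-load-test-latest.json': JSON.stringify(data, null, 2),
  };
}
```
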
## Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `BASE_URL` | Target API base URL | `http://localhost:5000` |
| `RUN_IDS` | JSON array of run IDs to test | `["run-load-1",...,"run-load-5"]` |
| `TENANT_ID` | Tenant ID header value | `load-test-tenant` |
| `AUTH_TOKEN` | Bearer token for authentication | (none) |

tests/load/ttfs-load-test.js (new file, 226 lines added)
@@ -0,0 +1,226 @@

/**
 * TTFS (Time to First Signal) Load Test Suite
 * Reference: SPRINT_0341_0001_0001 Task T13
 *
 * Tests the /first-signal endpoint under various load scenarios.
 * Requirements from Advisory §12.4:
 * - Cache-hit P95 ≤ 250ms
 * - Cold-path P95 ≤ 500ms
 * - Error rate < 0.1%
 */

import http from 'k6/http';
import { check, sleep } from 'k6';
import { Counter, Rate, Trend } from 'k6/metrics';
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.3/index.js';

// Custom metrics
const cacheHitLatency = new Trend('ttfs_cache_hit_latency_ms');
const coldPathLatency = new Trend('ttfs_cold_path_latency_ms');
const errorRate = new Rate('ttfs_error_rate');
const signalKindCounter = new Counter('ttfs_signal_kind_distribution'); // Counter (not Rate) so per-kind tag counts are meaningful

// Configuration
export const options = {
  scenarios: {
    // Scenario 1: Sustained load - simulates normal operation
    sustained: {
      executor: 'constant-arrival-rate',
      rate: 50,
      timeUnit: '1s',
      duration: '5m',
      preAllocatedVUs: 50,
      maxVUs: 100,
      tags: { scenario: 'sustained' },
    },
    // Scenario 2: Spike test - simulates CI pipeline burst
    spike: {
      executor: 'ramping-arrival-rate',
      startRate: 50,
      timeUnit: '1s',
      stages: [
        { duration: '30s', target: 200 }, // Ramp to 200 RPS
        { duration: '1m', target: 200 },  // Hold
        { duration: '30s', target: 50 },  // Ramp down
      ],
      preAllocatedVUs: 100,
      maxVUs: 300,
      startTime: '5m30s',
      tags: { scenario: 'spike' },
    },
    // Scenario 3: Soak test - long running stability
    soak: {
      executor: 'constant-arrival-rate',
      rate: 25,
      timeUnit: '1s',
      duration: '15m',
      preAllocatedVUs: 30,
      maxVUs: 50,
      startTime: '8m',
      tags: { scenario: 'soak' },
    },
  },
  thresholds: {
    // Advisory requirements: §12.4
    'ttfs_cache_hit_latency_ms{scenario:sustained}': ['p(95)<250'], // P95 ≤ 250ms
    'ttfs_cache_hit_latency_ms{scenario:spike}': ['p(95)<350'],     // Allow slightly higher during spike
    'ttfs_cold_path_latency_ms{scenario:sustained}': ['p(95)<500'], // P95 ≤ 500ms
    'ttfs_cold_path_latency_ms{scenario:spike}': ['p(95)<750'],     // Allow slightly higher during spike
    'ttfs_error_rate': ['rate<0.001'],                              // < 0.1% errors
    'http_req_duration{scenario:sustained}': ['p(95)<300'],
    'http_req_duration{scenario:spike}': ['p(95)<500'],
    'http_req_failed': ['rate<0.01'],                               // HTTP failures < 1%
  },
};

// Environment configuration
const BASE_URL = __ENV.BASE_URL || 'http://localhost:5000';
const RUN_IDS = JSON.parse(__ENV.RUN_IDS || '["run-load-1","run-load-2","run-load-3","run-load-4","run-load-5"]');
const TENANT_ID = __ENV.TENANT_ID || 'load-test-tenant';
const AUTH_TOKEN = __ENV.AUTH_TOKEN || '';

/**
 * Main test function - called for each VU iteration
 */
export default function () {
  const runId = RUN_IDS[Math.floor(Math.random() * RUN_IDS.length)];
  const url = `${BASE_URL}/api/v1/orchestrator/runs/${runId}/first-signal`;

  const params = {
    headers: {
      'Accept': 'application/json',
      'X-Tenant-Id': TENANT_ID,
      'X-Correlation-Id': `load-test-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`,
    },
    tags: { endpoint: 'first-signal' },
  };

  // Add auth if provided
  if (AUTH_TOKEN) {
    params.headers['Authorization'] = `Bearer ${AUTH_TOKEN}`;
  }

  const start = Date.now();
  const response = http.get(url, params);
  const duration = Date.now() - start;

  // Track latency by cache status
  const cacheStatus = response.headers['Cache-Status'] || response.headers['X-Cache-Status'];
  if (cacheStatus && cacheStatus.toLowerCase().includes('hit')) {
    cacheHitLatency.add(duration);
  } else {
    coldPathLatency.add(duration);
  }

  // Validate response
  const checks = check(response, {
    'status is 200 or 204 or 304': (r) => [200, 204, 304].includes(r.status),
    'has ETag header': (r) => r.status === 200 ? !!r.headers['ETag'] : true,
    'has Cache-Status header': (r) => !!cacheStatus,
    'response time < 500ms': (r) => r.timings.duration < 500,
    'valid JSON response': (r) => {
      if (r.status !== 200) return true;
      try {
        const body = JSON.parse(r.body);
        return body.runId !== undefined;
      } catch {
        return false;
      }
    },
    'has signal kind': (r) => {
      if (r.status !== 200) return true;
      try {
        const body = JSON.parse(r.body);
        return !body.firstSignal || ['passed', 'failed', 'degraded', 'partial', 'pending'].includes(body.firstSignal.kind);
      } catch {
        return false;
      }
    },
  });

  errorRate.add(!checks);

  // Extract signal kind for distribution analysis
  if (response.status === 200) {
    try {
      const body = JSON.parse(response.body);
      if (body.firstSignal?.kind) {
        signalKindCounter.add(1, { kind: body.firstSignal.kind });
      }
    } catch {
      // Ignore parse errors
    }
  }

  // Minimal sleep to allow for realistic load patterns
  sleep(0.05 + Math.random() * 0.1); // 50-150ms between requests per VU
}

/**
 * Conditional request test - tests ETag/304 behavior.
 * Not referenced by the scenarios above; point a scenario's `exec` option
 * at this function to include it in a run.
 */
export function conditionalRequest() {
  const runId = RUN_IDS[0];
  const url = `${BASE_URL}/api/v1/orchestrator/runs/${runId}/first-signal`;

  // First request to get ETag
  const firstResponse = http.get(url, {
    headers: { 'Accept': 'application/json', 'X-Tenant-Id': TENANT_ID },
  });

  if (firstResponse.status !== 200) return;

  const etag = firstResponse.headers['ETag'];
  if (!etag) return;

  // Conditional request
  const conditionalResponse = http.get(url, {
    headers: {
      'Accept': 'application/json',
      'X-Tenant-Id': TENANT_ID,
      'If-None-Match': etag,
    },
    tags: { request_type: 'conditional' },
  });

  check(conditionalResponse, {
    'conditional request returns 304': (r) => r.status === 304,
  });
}

/**
 * Setup function - runs once before the test
 */
export function setup() {
  console.log(`Starting TTFS load test against ${BASE_URL}`);
  console.log(`Testing with ${RUN_IDS.length} run IDs`);

  // Verify endpoint is accessible
  const healthCheck = http.get(`${BASE_URL}/health`, { timeout: '5s' });
  if (healthCheck.status !== 200) {
    console.warn(`Health check returned ${healthCheck.status} - proceeding anyway`);
  }

  return { startTime: Date.now() };
}

/**
 * Teardown function - runs once after the test
 */
export function teardown(data) {
  const duration = (Date.now() - data.startTime) / 1000;
  console.log(`TTFS load test completed in ${duration.toFixed(1)}s`);
}

/**
 * Generate test summary
 */
export function handleSummary(data) {
  const timestamp = new Date().toISOString().replace(/[:.]/g, '-');

  return {
    'stdout': textSummary(data, { indent: ' ', enableColors: true }),
    [`results/ttfs-load-test-${timestamp}.json`]: JSON.stringify(data, null, 2),
    'results/ttfs-load-test-latest.json': JSON.stringify(data, null, 2),
  };
}