# Feedser GHSA Connector – Operations Runbook

_Last updated: 2025-10-12_

## 1. Overview

The GitHub Security Advisories (GHSA) connector pulls advisory metadata from the GitHub REST API `/security/advisories` endpoint. GitHub enforces both primary and secondary rate limits, so operators must monitor usage and configure retries to avoid throttling incidents.

## 2. Rate-limit telemetry

The connector now surfaces rate-limit headers on every fetch and exposes the following metrics via OpenTelemetry:

| Metric | Description | Tags |
|---|---|---|
| `ghsa.ratelimit.limit` (histogram) | Samples the reported request quota at fetch time. | `phase` = `list` or `detail`; `resource` (e.g., `core`). |
| `ghsa.ratelimit.remaining` (histogram) | Remaining requests returned by `X-RateLimit-Remaining`. | `phase`, `resource`. |
| `ghsa.ratelimit.reset_seconds` (histogram) | Seconds until `X-RateLimit-Reset`. | `phase`, `resource`. |
| `ghsa.ratelimit.headroom_pct` (histogram) | Percentage of the quota still available (`remaining / limit * 100`). | `phase`, `resource`. |
| `ghsa.ratelimit.headroom_pct_current` (observable gauge) | Latest headroom percentage reported per resource. | `phase`, `resource`. |
| `ghsa.ratelimit.exhausted` (counter) | Incremented whenever GitHub returns a zero remaining quota and the connector delays before retrying. | `phase`. |

### Dashboards & alerts

- Plot `ghsa.ratelimit.remaining` as the latest value to watch the runway. Alert when the value stays below `RateLimitWarningThreshold` (default `500`) for more than 5 minutes (see the rule sketch after this list).
- Use `ghsa.ratelimit.headroom_pct_current` to visualise the remaining quota percentage; paging once it sits below 10% for longer than a single reset window helps avoid secondary limits.
- Raise a separate alert on `increase(ghsa.ratelimit.exhausted[15m]) > 0` to catch hard throttles.
- Overlay `ghsa.fetch.attempts` vs `ghsa.fetch.failures` to confirm retries are effective.
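
A minimal Prometheus alerting-rules sketch for the paging conditions above. It assumes the OTel exporter publishes `ghsa.ratelimit.*` under Prometheus-safe names (`ghsa_ratelimit_*`, with a `_total` suffix on the counter), matching the names used in section 7; adjust metric names and thresholds to your pipeline:

```yaml
# Sketch only: metric names assume the OTel->Prometheus dot-to-underscore
# convention (and a _total suffix on counters); verify against /metrics.
groups:
  - name: feedser-ghsa-ratelimit
    rules:
      - alert: GhsaRateLimitLowHeadroom
        # Gauge carries the latest headroom %; page when it sits below 10%.
        expr: ghsa_ratelimit_headroom_pct_current < 10
        for: 15m   # tune to roughly one reset window
        labels:
          severity: page
        annotations:
          summary: "GHSA rate-limit headroom below 10% on {{ $labels.resource }}"
      - alert: GhsaRateLimitExhausted
        # Counter increments whenever GitHub reports a zero remaining quota.
        expr: increase(ghsa_ratelimit_exhausted_total[15m]) > 0
        labels:
          severity: warn
        annotations:
          summary: "GHSA connector hit a hard rate limit ({{ $labels.phase }})"
```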

## 3. Logging signals

When `X-RateLimit-Remaining` falls below `RateLimitWarningThreshold`, the connector emits:

```text
GHSA rate limit warning: remaining {Remaining}/{Limit} for {Phase} {Resource} (headroom {Headroom}%)
```

When GitHub reports zero remaining calls, the connector logs the exhaustion and sleeps for the reported `Retry-After`/`X-RateLimit-Reset` interval (falling back to `SecondaryRateLimitBackoff`). After the quota recovers above the warning threshold, the connector writes an informational log with the refreshed remaining/headroom values, letting operators clear alerts quickly.
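
For instance, with the default threshold of 500 against GitHub’s standard 5,000-request authenticated core quota, the warning renders as (illustrative values):

```text
GHSA rate limit warning: remaining 450/5000 for list core (headroom 9%)
```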

## 4. Configuration knobs (`feedser.yaml`)

```yaml
feedser:
  sources:
    ghsa:
      apiToken: "${GITHUB_PAT}"
      pageSize: 50
      requestDelay: "00:00:00.200"
      failureBackoff: "00:05:00"
      rateLimitWarningThreshold: 500        # warn below this many remaining calls
      secondaryRateLimitBackoff: "00:02:00" # fallback delay when GitHub omits Retry-After
```

### Recommendations

- Increase `requestDelay` in air-gapped or burst-heavy deployments to smooth token consumption.
- Lower `rateLimitWarningThreshold` only if your dashboards already page on the new histogram; never set it negative.
- For bots using a low-privilege PAT, keep `secondaryRateLimitBackoff` at ≥ 60 seconds to respect GitHub’s secondary-limit guidance.

### Default job schedule

| Job kind | Cron | Timeout | Lease |
|---|---|---|---|
| `source:ghsa:fetch` | `1,11,21,31,41,51 * * * *` | 6 minutes | 4 minutes |
| `source:ghsa:parse` | `3,13,23,33,43,53 * * * *` | 5 minutes | 4 minutes |
| `source:ghsa:map` | `5,15,25,35,45,55 * * * *` | 5 minutes | 4 minutes |

These defaults spread the GHSA stages across the hour so fetch completes before parse/map fire. Override them via `feedser.jobs.definitions[...]` when coordinating multiple connectors on the same runner.
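
For example, shifting the fetch cadence might look like the sketch below. The `cron`/`timeout`/`lease` key names are assumptions about the definitions schema, not confirmed syntax; check your deployment's reference before applying:

```yaml
feedser:
  jobs:
    definitions:
      "source:ghsa:fetch":
        cron: "2,12,22,32,42,52 * * * *"   # hypothetical offset to dodge another connector
        timeout: "00:06:00"                # assumed keys; verify against your schema
        lease: "00:04:00"
```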

## 5. Provisioning credentials

Feedser requires a GitHub personal access token (classic) with the `read:org` and `security_events` scopes to pull GHSA data. Store it as a secret and reference it via `feedser.sources.ghsa.apiToken`.

### Docker Compose (stack operators)

```yaml
services:
  feedser:
    environment:
      FEEDSER__SOURCES__GHSA__APITOKEN: /run/secrets/ghsa_pat
    secrets:
      - ghsa_pat

secrets:
  ghsa_pat:
    file: ./secrets/ghsa_pat.txt   # contains only the PAT value
```

### Helm values (cluster operators)

```yaml
feedser:
  extraEnv:
    - name: FEEDSER__SOURCES__GHSA__APITOKEN
      valueFrom:
        secretKeyRef:
          name: feedser-ghsa
          key: apiToken
  extraSecrets:
    feedser-ghsa:
      apiToken: "<paste PAT here or source from external secret store>"
```

After rotating the PAT, restart the Feedser workers (or run `kubectl rollout restart deployment/feedser`) to ensure the configuration reloads.

When enabling GHSA for the first time, run a staged backfill:

1. Trigger `source:ghsa:fetch` manually (CLI or API) outside of peak hours.
2. Watch `feedser.jobs.health` for the GHSA jobs until they report `healthy`.
3. Allow the scheduled cron cadence to resume once the initial backlog drains (typically < 30 minutes).

## 6. Runbook steps when throttled

1. Check `ghsa.ratelimit.exhausted` for the affected phase (`list` vs `detail`).
2. Confirm the connector is delaying: logs will show `GHSA rate limit exhausted...` with the chosen backoff.
3. If rate limits stay exhausted:
   - Verify no other jobs are sharing the PAT.
   - Temporarily reduce `MaxPagesPerFetch` or `PageSize` to shrink burst size (a config sketch follows this list).
   - Consider provisioning a dedicated PAT (GHSA permissions only) for Feedser.
4. After the quota resets, return `rateLimitWarningThreshold`/`requestDelay` to their normal values and monitor the histograms for at least one hour.
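
A sketch of the temporary overrides from step 3, using the knobs from section 4. The `maxPagesPerFetch` key is assumed to be the YAML spelling of `MaxPagesPerFetch`; confirm before applying:

```yaml
# Temporary throttle mitigation; revert once the quota recovers (step 4).
feedser:
  sources:
    ghsa:
      pageSize: 25                   # half the default of 50
      maxPagesPerFetch: 2            # assumed key for MaxPagesPerFetch
      requestDelay: "00:00:00.500"   # slow token consumption between calls
```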

## 7. Alert integration quick reference

- Prometheus: `ghsa_ratelimit_remaining_bucket` (from the histogram); use `histogram_quantile(0.99, ...)` to trend capacity (a recording-rule sketch follows this list).
- VictoriaMetrics: `last_over_time(ghsa_ratelimit_remaining_sum[5m])` for simple last-value graphs.
- Grafana: stack remaining + used to visualise the total limit per resource.
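
A recording-rule sketch wiring up the Prometheus expressions above, under the same naming assumptions as the alert rules in section 2:

```yaml
groups:
  - name: feedser-ghsa-recording
    rules:
      # p99 of the observed remaining quota over 15m, per resource.
      - record: ghsa:ratelimit_remaining:p99_15m
        expr: histogram_quantile(0.99, sum by (le, resource) (rate(ghsa_ratelimit_remaining_bucket[15m])))
      # Mean remaining per observation over 5m; a cheap last-value proxy.
      - record: ghsa:ratelimit_remaining:mean_5m
        expr: rate(ghsa_ratelimit_remaining_sum[5m]) / rate(ghsa_ratelimit_remaining_count[5m])
```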