# Feedser CVE & KEV Connector Operations
This playbook equips operators with the steps required to roll out and monitor the CVE Services and CISA KEV connectors across environments.
## 1. CVE Services Connector (`source:cve:*`)
### 1.1 Prerequisites
- CVE Services API credentials (organisation ID, user ID, API key) with access to the JSON 5 API.
- Network egress to `https://cveawg.mitre.org` (or a mirrored endpoint) from the Feedser workers.
- Updated `feedser.yaml` (or the matching environment variables) with the following section:
```yaml
feedser:
  sources:
    cve:
      baseEndpoint: "https://cveawg.mitre.org/api/"
      apiOrg: "ORG123"
      apiUser: "user@example.org"
      apiKeyFile: "/var/run/secrets/feedser/cve-api-key"
      pageSize: 200
      maxPagesPerFetch: 5
      initialBackfill: "30.00:00:00"
      requestDelay: "00:00:00.250"
      failureBackoff: "00:10:00"
```
> ℹ️ Store the API key outside source control. When using `apiKeyFile`, mount the secret file into the container/host; alternatively supply `apiKey` via `FEEDSER_SOURCES__CVE__APIKEY`.
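One way to stage the key file is sketched below; the directory, placeholder value, and permissions are illustrative (the sample config above points `apiKeyFile` at `/var/run/secrets/feedser/cve-api-key`), and the environment-variable alternative uses the `FEEDSER_SOURCES__CVE__APIKEY` override mentioned in the note:

```shell
# Stage the CVE API key where apiKeyFile expects it.
# SECRET_DIR and the placeholder key are illustrative; adjust per deployment.
SECRET_DIR="${SECRET_DIR:-/tmp/feedser-secrets}"
CVE_API_KEY="${CVE_API_KEY:-replace-me}"

install -d -m 0750 "$SECRET_DIR"
printf '%s' "$CVE_API_KEY" > "$SECRET_DIR/cve-api-key"
chmod 0400 "$SECRET_DIR/cve-api-key"

# Alternative: skip the file and pass the key via environment variable.
export FEEDSER_SOURCES__CVE__APIKEY="$CVE_API_KEY"
```

Using `printf '%s'` (rather than `echo`) avoids writing a trailing newline into the secret file.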
### 1.2 Smoke Test (staging)
- Deploy the updated configuration and restart the Feedser service so the connector picks up the credentials.
- Trigger one end-to-end cycle:
  - Feedser CLI: `stella db jobs run source:cve:fetch --and-then source:cve:parse --and-then source:cve:map`
  - REST fallback: `POST /jobs/run { "kind": "source:cve:fetch", "chain": ["source:cve:parse", "source:cve:map"] }`
- Observe the following metrics (exported via OTEL meter `StellaOps.Feedser.Source.Cve`):
  - `cve.fetch.attempts`, `cve.fetch.success`, `cve.fetch.documents`, `cve.fetch.failures`, `cve.fetch.unchanged`
  - `cve.parse.success`, `cve.parse.failures`, `cve.parse.quarantine`
  - `cve.map.success`
- Verify Prometheus shows matching `feedser.source.http.requests_total{feedser_source="cve"}` deltas (list vs detail phases) while `feedser.source.http.failures_total{feedser_source="cve"}` stays flat.
- Confirm the info-level summary log `CVEs fetch window … pages=X detailDocuments=Y detailFailures=Z` appears once per fetch run and shows `detailFailures=0`.
- Verify the MongoDB advisory store contains fresh CVE advisories (`advisoryKey` prefix `cve/`) and that the source cursor (`source_states` collection) advanced.
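The REST fallback above can be scripted with `curl`. This is a sketch: `FEEDSER_URL` is an illustrative variable, authentication headers are omitted, and the payload is echoed as a dry run when no base URL is configured:

```shell
# Kick off the CVE fetch -> parse -> map chain via the jobs API.
PAYLOAD='{"kind":"source:cve:fetch","chain":["source:cve:parse","source:cve:map"]}'

if [ -n "${FEEDSER_URL:-}" ]; then
  curl -sS -X POST "$FEEDSER_URL/jobs/run" \
    -H 'Content-Type: application/json' \
    -d "$PAYLOAD"
else
  # Dry run: show the body that would be posted.
  echo "$PAYLOAD"
fi
```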
### 1.3 Production Monitoring
- **Dashboards** – Plot `rate(cve_fetch_success_total[5m])`, `rate(cve_fetch_failures_total[5m])`, and `rate(cve_fetch_documents_total[5m])` alongside `feedser_source_http_requests_total{feedser_source="cve"}` to confirm HTTP and connector counters stay aligned. Keep `feedser.range.primitives{scheme=~"semver|vendor"}` on the same board for range coverage. Example alerts:
  - `rate(cve_fetch_failures_total[5m]) > 0` for 10 minutes (severity=warning)
  - `rate(cve_map_success_total[15m]) == 0` while `rate(cve_fetch_success_total[15m]) > 0` (severity=critical)
  - `sum_over_time(cve_parse_quarantine_total[1h]) > 0` to catch schema anomalies
- **Logs** – Monitor warnings such as `Failed fetching CVE record {CveId}` and `Malformed CVE JSON`, and surface the summary info log `CVEs fetch window … detailFailures=0 detailUnchanged=0` on dashboards. A non-zero `detailFailures` usually indicates rate-limit or auth issues on detail requests.
- **Grafana pack** – Import `docs/ops/feedser-cve-kev-grafana-dashboard.json` and filter by panel legend (`CVE`, `KEV`) to reuse the canned layout.
- **Backfill window** – Operators can tighten or widen `initialBackfill`/`maxPagesPerFetch` after validating throughput. Update the config and restart Feedser to apply changes.
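The example alerts in the dashboards bullet can be expressed as Prometheus alerting rules. This is a sketch only: the group name and `for:` durations are illustrative, and thresholds should be tuned to your environment:

```yaml
groups:
  - name: feedser-cve            # illustrative group name
    rules:
      - alert: CveFetchFailures
        expr: rate(cve_fetch_failures_total[5m]) > 0
        for: 10m
        labels: { severity: warning }
      - alert: CveMapStalled
        expr: >
          rate(cve_map_success_total[15m]) == 0
          and rate(cve_fetch_success_total[15m]) > 0
        labels: { severity: critical }
      - alert: CveParseQuarantine
        expr: sum_over_time(cve_parse_quarantine_total[1h]) > 0
        labels: { severity: warning }
```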
## 2. CISA KEV Connector (`source:kev:*`)
### 2.1 Prerequisites
- Network egress (or mirrored content) for `https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json`.
- No credentials are required, but the HTTP allow-list must include `www.cisa.gov`.
- Confirm the following snippet in `feedser.yaml` (defaults shown; tune as needed):
```yaml
feedser:
  sources:
    kev:
      feedUri: "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
      requestTimeout: "00:01:00"
      failureBackoff: "00:05:00"
```
### 2.2 Schema validation & anomaly handling
The connector validates each catalog against `Schemas/kev-catalog.schema.json`. Failures increment `kev.parse.failures_total{reason="schema"}` and the document is quarantined (status `Failed`). Additional failure reasons include `download`, `invalidJson`, `deserialize`, `missingPayload`, and `emptyCatalog`. Entry-level anomalies are surfaced through `kev.parse.anomalies_total` with the following reasons:
| Reason | Meaning |
|---|---|
| `missingCveId` | Catalog entry omitted `cveID`; the entry is skipped. |
| `countMismatch` | Catalog `count` field disagreed with the actual entry total. |
| `nullEntry` | Upstream emitted a null entry object (rare upstream defect). |
Treat repeated schema failures or growing anomaly counts as an upstream regression and coordinate with CISA or mirror maintainers.
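The entry-level checks can be sketched in Python to show what each anomaly reason detects. The field names `vulnerabilities`, `cveID`, and `count` follow the public KEV catalog JSON; the actual connector implements this in C# inside `StellaOps.Feedser.Source.Kev`:

```python
def find_anomalies(catalog: dict) -> list[str]:
    """Return the anomaly reasons a KEV catalog would trigger."""
    anomalies = []
    entries = catalog.get("vulnerabilities") or []
    for entry in entries:
        if entry is None:
            # Upstream emitted a literal null in the entries array.
            anomalies.append("nullEntry")
        elif not entry.get("cveID"):
            # Entry lacks a CVE identifier and would be skipped.
            anomalies.append("missingCveId")
    if catalog.get("count") != len(entries):
        # Declared count disagrees with the actual entry total.
        anomalies.append("countMismatch")
    return anomalies
```

For example, a catalog declaring `count: 3` but shipping a single null entry would report both `nullEntry` and `countMismatch`.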
### 2.3 Smoke Test (staging)
- Deploy the configuration and restart Feedser.
- Trigger a pipeline run:
  - CLI: `stella db jobs run source:kev:fetch --and-then source:kev:parse --and-then source:kev:map`
  - REST: `POST /jobs/run { "kind": "source:kev:fetch", "chain": ["source:kev:parse", "source:kev:map"] }`
- Verify the metrics exposed by meter `StellaOps.Feedser.Source.Kev`:
  - `kev.fetch.attempts`, `kev.fetch.success`, `kev.fetch.unchanged`, `kev.fetch.failures`
  - `kev.parse.entries` (tag `catalogVersion`), `kev.parse.failures`, `kev.parse.anomalies` (tag `reason`)
  - `kev.map.advisories` (tag `catalogVersion`)
- Confirm `feedser.source.http.requests_total{feedser_source="kev"}` increments once per fetch and that the paired `feedser.source.http.failures_total` stays flat (zero increase).
- Inspect the info logs `Fetched KEV catalog document … pendingDocuments=…` and `Parsed KEV catalog document … entries=…`; they should appear exactly once per run, and `Mapped X/Y … skipped=0` should match the `kev.map.advisories` delta.
- Confirm MongoDB documents exist for the catalog JSON (`raw_documents` & `dtos`) and that advisories with prefix `kev/` are written.
### 2.4 Production Monitoring
- Alert when `rate(kev_fetch_success_total[8h]) == 0` during working hours (daily cadence breach) and when `increase(kev_fetch_failures_total[1h]) > 0`.
- Page the on-call if `increase(kev_parse_failures_total{reason="schema"}[6h]) > 0`; this usually signals an upstream payload change. Treat repeated `reason="download"` spikes as networking issues to the mirror.
- Track anomaly spikes through `sum_over_time(kev_parse_anomalies_total{reason="missingCveId"}[24h])`. Rising `countMismatch` trends point to catalog publishing bugs.
- Surface the fetch/mapping info logs (`Fetched KEV catalog document …` and `Mapped X/Y KEV advisories … skipped=S`) on dashboards; absence of those logs while metrics show success typically means schema validation short-circuited the run.
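The alerting bullets above translate to Prometheus rules along these lines (a sketch; the group name and severity labels are illustrative, and the "working hours" condition would need an additional time-based expression or routing rule in your alertmanager):

```yaml
groups:
  - name: feedser-kev            # illustrative group name
    rules:
      - alert: KevDailyFetchMissed
        expr: rate(kev_fetch_success_total[8h]) == 0
        labels: { severity: warning }
      - alert: KevFetchFailures
        expr: increase(kev_fetch_failures_total[1h]) > 0
        labels: { severity: warning }
      - alert: KevSchemaFailure
        # Usually an upstream payload change; page the on-call.
        expr: increase(kev_parse_failures_total{reason="schema"}[6h]) > 0
        labels: { severity: critical }
```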
### 2.5 Known good dashboard tiles
Add the following panels to the Feedser observability board:
| Metric | Recommended visualisation |
|---|---|
| `rate(kev_fetch_success_total[30m])` | Single-stat (last 24 h) with warning threshold > 0 |
| `rate(kev_parse_entries_total[1h])` by `catalogVersion` | Stacked area – highlights daily release size |
| `sum_over_time(kev_parse_anomalies_total[1d])` by `reason` | Table – anomaly breakdown (matches dashboard panel) |
| `rate(cve_map_success_total[15m])` vs `rate(kev_map_advisories_total[24h])` | Comparative timeseries for advisories emitted |
## 3. Runbook updates
- Record staging/production smoke test results (date, catalog version, advisory counts) in your team’s change log.
- Add the CVE/KEV job kinds to the standard maintenance checklist so operators can manually trigger them after planned downtime.
- Keep this document in sync with future connector changes (for example, new anomaly reasons or additional metrics).
- Version-control dashboard tweaks alongside `docs/ops/feedser-cve-kev-grafana-dashboard.json` so operations can re-import the observability pack during restores.