DIFF sync mode
DIFF is the default sync mode. On finalize, the persister treats your upload as the complete state of the scope: anything that existed in the scope last time and is missing from this upload is deleted.
This is the right mode when you control the source of truth and want missing rows to mean "this no longer exists." It is the wrong mode when you only want to add or update part of the data — for that, see PATCH for partial updates in your own scope, or OVERRIDE for persistent partial updates on managed-integration entities.
When to use DIFF
- You own the full dataset and re-upload it on a schedule.
- You want stale entities to disappear automatically when they fall off the source.
- Both entities and relationships are in scope.
- You can include the entire dataset in a single sync job (chunked across multiple uploads, but one finalize).
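The delete-on-finalize behavior can be modeled locally. This is a minimal sketch, assuming finalize compares the _keys from the previous sync against the _keys in the new upload (the persister is the authority; this only illustrates the semantics):

```python
def diff_reconcile(previous_keys, uploaded_keys):
    """Model of DIFF finalize: anything present in the scope last
    time that is absent from this upload is deleted."""
    created = uploaded_keys - previous_keys
    kept = previous_keys & uploaded_keys
    deleted = previous_keys - uploaded_keys
    return created, kept, deleted

created, kept, deleted = diff_reconcile(
    previous_keys={"host:1", "host:2", "host:3"},
    uploaded_keys={"host:2", "host:3", "host:4"},
)
# host:1 fell off the source, so DIFF deletes it; host:4 is new.
```

This is why a partial upload is dangerous in DIFF: every key you leave out lands in the deleted set.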
Example use cases
Nightly vulnerability scanner output
Your team runs an internal vulnerability scanner every night and pushes the findings into JupiterOne so analysts can pivot from finding to affected asset in J1QL.
- Scope: internal-vuln-scanner
- Mode: DIFF
- Why DIFF: When a finding is remediated and stops appearing in the report, you want it gone from the graph automatically — no janitorial PATCH-and-delete needed.
POST /persister/synchronization/jobs
{
"source": "api",
"syncMode": "DIFF",
"scope": "internal-vuln-scanner"
}
Each entity is a finding; relationships connect findings to the assets they affect.
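A helper like the following (our own sketch, not part of the API) could build that upload payload from raw scanner output; the input field names `cve`, `host`, and `severity` are assumptions about your scanner's format:

```python
def build_upload(findings):
    """Turn scanner findings into a DIFF upload payload: one Finding
    entity per finding, one Host entity per distinct host, and an
    AFFECTS relationship between them."""
    entities, relationships = [], []
    seen_hosts = set()
    for f in findings:
        finding_key = f"finding:{f['cve']}:{f['host']}"
        host_key = f"host:{f['host']}"
        entities.append({
            "_key": finding_key,
            "_type": "vuln_finding",
            "_class": "Finding",
            "severity": f["severity"],
            "cve": f["cve"],
        })
        if host_key not in seen_hosts:  # emit each host entity once
            seen_hosts.add(host_key)
            entities.append({
                "_key": host_key,
                "_type": "internal_host",
                "_class": "Host",
                "displayName": f["host"],
            })
        relationships.append({
            "_key": f"{finding_key}|affects|{host_key}",
            "_type": "finding_affects_host",
            "_class": "AFFECTS",
            "_fromEntityKey": finding_key,
            "_toEntityKey": host_key,
        })
    return {"entities": entities, "relationships": relationships}
```

Because the keys are derived deterministically from the scanner data, re-running the same scan produces the same _keys, and DIFF sees no churn.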
Mirroring an internal CMDB
You maintain a system of record outside JupiterOne — say, an internal asset CMDB — and want it reflected as graph entities so your queries can join CMDB context to integration-sourced data.
- Scope: cmdb-assets
- Mode: DIFF
- Why DIFF: Decommissioned assets disappear from the CMDB, and you want them to disappear from the graph the same day.
HRIS mirror for Person entities
You sync your HRIS so departures, role changes, and new hires flow into the graph as Person entities.
- Scope: hris-people
- Mode: DIFF
- Why DIFF: When someone leaves, the next sync drops them from the upload and the graph reconciles cleanly.
Required entity fields
Every entity in a DIFF upload needs:
- _key — unique within the scope.
- _type — your entity type, in snake_case.
- _class — JupiterOne class (string or array of strings, max 5 items).
Required relationship fields
Standard relationships:
_key, _type, _class, _fromEntityKey, _toEntityKey.
Mapped relationships (for connecting to entities you don't own in this scope) require _mapping instead of _fromEntityKey/_toEntityKey. See the API reference for the mapped-relationship shape.
DIFF is the only sync mode that writes mapped relationships to the graph. PATCH, CROSS_SCOPE, and OVERRIDE ignore mapped relationships in the upload payload.
In DIFF mode, do not use _fromEntityId or _toEntityId to point at entities outside the scope. Those fields are rejected by DIFF. Use mapped relationships, or switch to CROSS_SCOPE if you only need cross-scope edges.
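The field rules above can be checked client-side before uploading. This is a pre-flight sketch that mirrors the rules as stated here (client-side only; the persister remains the authority, and its error messages may differ):

```python
REQUIRED_FIELDS = ("_key", "_type", "_class")

def validate_diff_payload(payload):
    """Collect violations of the DIFF field rules: required entity and
    relationship fields, endpoint keys on non-mapped relationships, and
    the _fromEntityId/_toEntityId fields that DIFF rejects."""
    errors = []
    for i, e in enumerate(payload.get("entities", [])):
        for field in REQUIRED_FIELDS:
            if field not in e:
                errors.append(f"/entities/{i}/{field} is required")
    for i, r in enumerate(payload.get("relationships", [])):
        for field in REQUIRED_FIELDS:
            if field not in r:
                errors.append(f"/relationships/{i}/{field} is required")
        if "_mapping" not in r:  # standard relationships need both endpoint keys
            for field in ("_fromEntityKey", "_toEntityKey"):
                if field not in r:
                    errors.append(f"/relationships/{i}/{field} is required")
        for bad in ("_fromEntityId", "_toEntityId"):
            if bad in r:
                errors.append(f"/relationships/{i}/{bad} is not allowed in DIFF")
    return errors
```

Running this before each upload turns a failed finalize into a local, fixable error list.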
Example: full DIFF sync job
{
"source": "api",
"syncMode": "DIFF",
"scope": "internal-vuln-scanner"
}
Response:
{
"job": {
"id": "f445397d-8491-4a12-806a-04792839abe3",
"scope": "internal-vuln-scanner",
"status": "AWAITING_UPLOADS",
"numEntitiesUploaded": 0
}
}
Upload entities and relationships in one call:
{
"entities": [
{
"_key": "finding:CVE-2025-1234:host-42",
"_type": "vuln_finding",
"_class": "Finding",
"severity": "high",
"cve": "CVE-2025-1234"
},
{
"_key": "host:host-42",
"_type": "internal_host",
"_class": "Host",
"displayName": "host-42"
}
],
"relationships": [
{
"_key": "finding:CVE-2025-1234:host-42|affects|host:host-42",
"_type": "finding_affects_host",
"_class": "AFFECTS",
"_fromEntityKey": "finding:CVE-2025-1234:host-42",
"_toEntityKey": "host:host-42"
}
]
}
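The relationship _key in the payload above follows a from|verb|to convention. That convention is ours, not mandated by the API, but a small deterministic helper keeps relationship keys stable between runs:

```python
def relationship_key(from_key: str, verb: str, to_key: str) -> str:
    """Deterministic relationship key: same endpoints and verb always
    yield the same _key, so DIFF doesn't churn edges between syncs."""
    return f"{from_key}|{verb}|{to_key}"

relationship_key("finding:CVE-2025-1234:host-42", "affects", "host:host-42")
# -> "finding:CVE-2025-1234:host-42|affects|host:host-42"
```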
Finalize:
POST /persister/synchronization/jobs/{jobId}/finalize
After finalize, anything previously in internal-vuln-scanner that wasn't in this upload is deleted.
Operational notes
- Always include the full dataset. A partial DIFF upload deletes everything you left out.
- Stable _keys matter. If your _key strategy changes between runs, every entity will look new (created + deleted) and downstream queries will see churn.
- Chunk across uploads, finalize once. A single sync job can accept many upload calls before finalize. Use this to handle large datasets without holding the whole payload in memory.
- For partial updates, use PATCH instead. PATCH is designed for cases where you don't have the full dataset.
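The chunk-across-uploads note can be sketched as follows. The batch size of 100 is an arbitrary illustration, not a documented limit, and the upload endpoint path is left to the API reference:

```python
def chunk(items, size):
    """Split a large entity list into upload-sized batches. All batches
    go to the same sync job; finalize is called once at the end."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

entities = [
    {"_key": f"host:{i}", "_type": "internal_host", "_class": "Host"}
    for i in range(250)
]
batches = list(chunk(entities, 100))
# 3 upload calls (100 + 100 + 50) against the same jobId, then one
# POST .../jobs/{jobId}/finalize.
```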
Common errors
| Error | Cause | Fix |
|---|---|---|
| /entities/0/_key is required | Missing _key in DIFF upload. | Add _key (DIFF requires it). |
| Relationships are not allowed in PATCH jobs | You meant to use DIFF but started a PATCH job. | Start a DIFF job. |
| Unintended deletions after a small upload | Partial dataset uploaded to the same scope. | Always include the full dataset, or split into multiple scopes. |
See the API reference for the complete error table.