Overview
Sync jobs are how data enters JupiterOne from the API. Whether you are pushing a nightly inventory from an internal CMDB, layering risk scores onto entities owned by a managed integration, connecting assets that live in different integrations, or pinning a custom value so it survives the next integration run — each of those flows is a sync job.
A sync job groups a batch of changes under a scope, runs a comparison against the existing data in that scope when finalized, and applies the result to the graph. The behavior of that comparison depends on the sync mode you pick.
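The finalize comparison can be pictured with a small simulation. This is plain illustrative Python, not the persister's actual implementation: given the previous state of a scope and the newly uploaded batch, DIFF treats the upload as the full dataset and deletes what's missing, while PATCH never deletes.

```python
def finalize(previous, uploaded, mode):
    """Simulate the finalize comparison for one scope.

    previous and uploaded map entity _key -> properties.
    Returns the creates, updates, and deletes the persister would
    apply. Illustrative only -- not the persister's real algorithm.
    """
    creates = {k: v for k, v in uploaded.items() if k not in previous}
    updates = {k: v for k, v in uploaded.items()
               if k in previous and previous[k] != v}
    # Only DIFF treats the upload as the full dataset and deletes the rest.
    deletes = set(previous) - set(uploaded) if mode == "DIFF" else set()
    return creates, updates, deletes

previous = {"host-1": {"os": "linux"}, "host-2": {"os": "macos"}}
uploaded = {"host-1": {"os": "linux", "patched": True},
            "host-3": {"os": "linux"}}

# DIFF: host-2 disappears because it was not in the upload.
creates, updates, deletes = finalize(previous, uploaded, "DIFF")
# PATCH: same creates and updates, but nothing is deleted.
_, _, patch_deletes = finalize(previous, uploaded, "PATCH")
```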
The four sync modes
| Mode | What finalize does | Entities | Relationships | Typical use |
|---|---|---|---|---|
| DIFF | Replaces the full dataset in scope. Anything not in the upload is deleted. | yes | yes | Full inventory refresh from a custom source. |
| PATCH | Adds or updates only. Nothing is deleted. | yes | no | Enriching existing entities with extra properties. |
| CROSS_SCOPE | Adds/updates relationships whose from and to entities live in different scopes. | no | yes | Linking assets across integrations. |
| OVERRIDE | Pins property values on managed-integration entities so they persist across future integration syncs. | yes (existing only) | no | Preserving customer-applied values that an integration would otherwise overwrite. |
Choose a mode
Use the questions below in order. The first "yes" picks your mode.
- Are you setting custom values on entities that a managed integration owns, and you need those values to survive the next integration run?
  - For a batch of entities under a stable scope → OVERRIDE
  - For one entity (or a handful of entities, scripted one at a time) → use the `updateEntity` GraphQL mutation instead; it's a per-entity persistent edit, no sync job needed.
- Are you only creating relationships between entities that live in different scopes (different integrations or different sources)? → CROSS_SCOPE
- Do you want to add or update properties on existing entities without ever deleting anything, and you accept that the next managed-integration sync will reset the values? → PATCH
- Are you the source of truth for this set of data and want the upload to fully replace what was there last time? → DIFF
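The decision questions above can be folded into a small helper. The boolean parameter names are mine for illustration, not part of any API:

```python
def choose_mode(pins_values_on_managed_entities=False,
                batch_under_stable_scope=True,
                cross_scope_relationships_only=False,
                never_delete=False):
    """Map the decision questions above to a sync mode.

    Returns a sync mode name, or "updateEntity" when a per-entity
    GraphQL mutation fits better than a sync job. Parameter names
    are illustrative, not part of the JupiterOne API.
    """
    if pins_values_on_managed_entities:
        return "OVERRIDE" if batch_under_stable_scope else "updateEntity"
    if cross_scope_relationships_only:
        return "CROSS_SCOPE"
    if never_delete:
        return "PATCH"
    # You are the source of truth: the upload fully replaces the scope.
    return "DIFF"
```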
Sync jobs are batch-oriented: one scope, one finalize, many entities. The GraphQL `updateEntity` mutation is per-entity and gives you the same persistence guarantee as OVERRIDE for individual edits. Reach for it when you don't need a scope to manage a set together.
Lifecycle of a sync job
Every sync job, regardless of mode, follows the same shape:
- Start — `POST /persister/synchronization/jobs` creates the job and returns an ID. The job begins in `AWAITING_UPLOADS`.
- Upload — One or more upload calls push entities and/or relationships into the job. The persister tracks counters as data arrives but does not yet apply changes to the graph.
- Finalize — `POST /persister/synchronization/jobs/{id}/finalize` triggers the comparison. The persister reads the new state, compares it against existing data in the scope, and applies creates, updates, and (depending on the mode) deletes.
- Poll — `GET /persister/synchronization/jobs/{id}` returns status. In AWS environments, finalize is asynchronous: status moves through `FINALIZE_PENDING` to `FINISHED`.
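Put together, a run is four HTTP calls in sequence. As a sketch, the helper below only builds the request plan (method, path, body) rather than sending anything; the start-payload field names (`source`, `syncMode`), the `/entities` upload path, and the entity field names (`_key`, `_type`, `_class`) are assumptions to confirm against the API reference.

```python
def sync_job_requests(scope, entities, mode="DIFF"):
    """Build the request sequence for one sync job run.

    Returns (method, path, body) tuples in the order they are sent.
    The start, finalize, and poll paths come from the lifecycle above;
    the upload path and payload shapes are assumptions to verify
    against the API reference.
    """
    job_id = "{id}"  # the real ID comes back from the start call
    return [
        ("POST", "/persister/synchronization/jobs",
         {"source": "api", "scope": scope, "syncMode": mode}),
        ("POST", f"/persister/synchronization/jobs/{job_id}/entities",
         {"entities": entities}),
        ("POST", f"/persister/synchronization/jobs/{job_id}/finalize", None),
        ("GET", f"/persister/synchronization/jobs/{job_id}", None),
    ]

plan = sync_job_requests(
    "vuln-scanner-nightly",
    [{"_key": "host-1", "_type": "my_host", "_class": "Host"}],
)
```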
See the API reference for endpoints and payloads.
Scopes
A scope is a string label that groups everything in a sync job together for the purposes of comparison. Two sync jobs with the same scope and the same mode operate on the same logical dataset — the second finalize sees the first job's results as "the previous state."
A few rules of thumb:
- Use a stable, descriptive `scope` for each pipeline you own (`vuln-scanner-nightly`, not `upload-2026-04-28`). If your scope name changes between runs, the previous data won't be reconciled and may linger.
- Choose a `scope` granularity that matches how you want deletions to behave. A single scope across all of your custom data means dropping a row from your source removes it from the graph everywhere.
- For CROSS_SCOPE and OVERRIDE, `scope` works the same way, but the mode-specific pages cover the wrinkles.
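The "stable scope" rule amounts to this: derive the scope from the pipeline, never from the run. A tiny illustration (function and parameter names are mine):

```python
def run_scope(pipeline: str, run_date: str, stable: bool = True) -> str:
    """Scope label for one pipeline run.

    A stable scope ignores the run date; an unstable one bakes it
    in, so the next finalize no longer sees the previous run's data
    as "the previous state" and old data lingers in the graph.
    """
    return pipeline if stable else f"upload-{run_date}"

# Stable: both runs reconcile against the same logical dataset.
monday = run_scope("vuln-scanner-nightly", "2026-04-27")
tuesday = run_scope("vuln-scanner-nightly", "2026-04-28")

# Unstable: each run starts a fresh dataset; nothing is reconciled.
bad_monday = run_scope("vuln-scanner-nightly", "2026-04-27", stable=False)
bad_tuesday = run_scope("vuln-scanner-nightly", "2026-04-28", stable=False)
```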
Use case map
A handful of common patterns, with the mode each one calls for:
| Scenario | Mode |
|---|---|
| Push the output of an internal vulnerability scanner so stale findings disappear when they no longer appear in the report. | DIFF |
| Mirror an HRIS into the graph as Person entities, deleting people who left the company. | DIFF |
| Refresh a `lastErrorCount` on managed-integration entities every five minutes between hourly integration syncs (accepting that each integration sync resets the value). | PATCH |
| Update a `lastGoogleLoginAt` property hourly on Person entities you already pushed via a custom HRIS DIFF. | PATCH |
| Connect Okta User entities to the laptops they own in Jamf so you can query "show me every laptop assigned to a member of the SRE group." | CROSS_SCOPE |
| Link GitHub repositories to the AWS resources they deploy, where the integrations don't already produce that relationship. | CROSS_SCOPE |
| Tag a managed Jamf device with `complianceScope: "soc2"` so the value survives every Jamf integration run. | OVERRIDE |
| Set a `criticality` value on a managed GitHub repository entity so it stays put even after the GitHub integration re-syncs. | OVERRIDE for a batch; `updateEntity` for a single repository. |
| Pin a one-off custom property on a single managed entity (e.g. `betaForceDeviceUnificationKey` on two specific hosts) so the value survives every managed integration sync. | `updateEntity` |
Validation, limits, and errors
Validation rules and error messages are consistent across modes. The API reference lists every error you can hit and how to resolve it. Mode-specific limits (for example, the 10,000-entity cap on OVERRIDE) are called out on the mode pages.
Authentication
Every sync job request needs:
- An API key in the `Authorization: Bearer <token>` header.
- The target account in the `jupiterone-account` header.
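Assembled as a dict, with placeholder values. The two headers above come from this page; the `Content-Type` is an assumption for JSON request bodies:

```python
def sync_headers(api_token: str, account_id: str) -> dict:
    """Headers required on every sync job request.

    Authorization and jupiterone-account are from this page;
    Content-Type is assumed for JSON payloads.
    """
    return {
        "Authorization": f"Bearer {api_token}",
        "jupiterone-account": account_id,
        "Content-Type": "application/json",
    }

headers = sync_headers("<token>", "<account-id>")
```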
See Authentication for full setup.
Next steps
- Pick a mode using the table or decision questions above.
- Read the mode page for full request shape, scenarios, and constraints.
- Skim the API reference for the endpoint list.