
Kubernetes-based Collector

The JupiterOne Integration Operator is a Kubernetes-native solution for running JupiterOne integrations within your Kubernetes cluster. It manages Custom Resource Definitions (CRDs) for integration management and provides a scalable approach for organizations already using Kubernetes.

Prerequisites

  • Kubernetes 1.16+

  • Helm 3+

  • JupiterOne Account ID (found at /settings/account-management)

  • JupiterOne API Token with Collector permissions:

    • You can use either /settings/account-api-tokens or /settings/api-tokens
    • The token must have Collector Create/Read/Update permissions granted

    Screenshot: setting Kubernetes token permissions
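Before installing anything, it can help to confirm that the token authenticates. The sketch below is an assumption: the endpoint name follows the `<environment>.jupiterone.io` pattern for the default `us` environment, and the query is a generic GraphQL introspection probe rather than a JupiterOne-specific call. Substitute your own account ID and token for the placeholders.

```shell
# Hypothetical token sanity check (endpoint assumed from the us environment
# naming convention; placeholders must be replaced with real values).
J1_ACCOUNT="your-account-id"   # placeholder
J1_TOKEN="your-api-token"      # placeholder
J1_ENDPOINT="https://graphql.us.jupiterone.io"

# A 200 response suggests the token authenticates; a 401 means it does not.
curl -s --max-time 10 -o /dev/null -w "%{http_code}\n" \
  -X POST "$J1_ENDPOINT" \
  -H "Authorization: Bearer $J1_TOKEN" \
  -H "JupiterOne-Account: $J1_ACCOUNT" \
  -H "Content-Type: application/json" \
  -d '{"query":"{ __typename }"}'
```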

Installation

Installation is handled by two Helm charts:

  1. Kubernetes Operator: installs the controllers that manage the CRDs. This chart can be upgraded independently of the Integration Runner.
  2. Integration Runner: installs a Custom Resource (integrationrunner) that tells the operator to create a new collector and register it with JupiterOne.

Kubernetes Operator

First, install the operator, which manages the CRDs.

  1. Add the JupiterOne Helm Repository:

    helm repo add jupiterone https://jupiterone.github.io/helm-charts
    helm repo update
  2. Create a Namespace:

    kubectl create namespace jupiterone
  3. Install the Integration Operator:

    helm install integration-operator jupiterone/jupiterone-integration-operator --namespace jupiterone
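After step 3, a quick way to confirm the operator deployed is to check the Helm release and the resources it created. The commands below are a sketch; the pod names in your cluster will differ, and the CRD names are assumed to contain "jupiterone".

```shell
# Sketch: verify the operator install (names are illustrative).
NS=jupiterone

helm status integration-operator --namespace "$NS"   # release should be "deployed"
kubectl get pods --namespace "$NS"                   # operator pod(s) should be Running
kubectl get crd | grep -i jupiterone                 # CRDs installed by the operator
```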

Integration Runner

To install the Integration Runner, you need to provide your API token.

note

The runner will create a Collector with the same name.

The simplest method is to pass the token as a chart value: the Helm chart will automatically create a Kubernetes Secret containing your API token in the same namespace.

Parameters:

  • runnerName: The name of the Runner.
  • apiToken: Your API token. This will be created as a Secret in Kubernetes.
  • accountID: Your JupiterOne Account ID.
  • jupiterOneEnvironment (optional): The JupiterOne environment. Defaults to us. You can find it by inspecting the URL you use to access the UI, e.g. <environment>.jupiterone.io.

Installation command:

helm install <runnerName> jupiterone/jupiterone-integration-runner --namespace jupiterone --set apiToken=<apiToken> --set accountID=<accountID>
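If you prefer not to put the token on the command line (where it can end up in shell history), the same values can go in a file. The key names below mirror the parameters listed above, but treat the layout as a sketch and check the chart's own values.yaml for the authoritative names.

```shell
# Write the chart values to a file (placeholders; key names assumed to
# match the parameters documented above).
cat > runner-values.yaml <<'EOF'
apiToken: "<apiToken>"
accountID: "<accountID>"
jupiterOneEnvironment: "us"
EOF

# Then install referencing the file instead of --set flags:
# helm install <runnerName> jupiterone/jupiterone-integration-runner \
#   --namespace jupiterone -f runner-values.yaml
```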

Verification

After installation, the runner should register with JupiterOne within 30 seconds.

kubectl get integrationrunner -n jupiterone

Expected output:

NAME     STATE     DETAIL   REGISTRATION   AGE
runner   running            registered     2m38s
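If you'd rather wait on registration in a script than eyeball the output, you can poll the resource. The jsonpath below assumes a status field named `registration`, matching the REGISTRATION column shown above; verify the real field names with `kubectl get integrationrunner <name> -n jupiterone -o yaml` first.

```shell
# Sketch: poll until the runner reports as registered (field name assumed).
NS=jupiterone
RUNNER=runner

for i in 1 2 3; do
  STATUS=$(kubectl get integrationrunner "$RUNNER" -n "$NS" \
    -o jsonpath='{.status.registration}' 2>/dev/null)
  if [ "$STATUS" = "registered" ]; then
    echo "runner registered"
    break
  fi
  sleep 2
done
```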

Assigning an Integration

Assigning an integration job to a collector first requires that there are collectors registered and available. Once collectors are available, the process for defining an integration job and assigning it to a collector is straightforward.

For integrations that are collector compatible, complete the integration configuration as normal. During configuration, you'll notice there's an additional option to choose where the integration should run.

Select Collector on the integration instance, and choose the collector on which you'd like the integration to run.

Screenshot: choosing Run on Collector within the JupiterOne integration instance configuration

Updating the Operator

To update to the latest version:

helm repo update
helm upgrade integration-operator jupiterone/jupiterone-integration-operator --namespace jupiterone

Uninstalling

To remove the runner, the operator, and the namespace:

helm uninstall <runnerName> --namespace jupiterone
helm uninstall integration-operator --namespace jupiterone
kubectl delete namespace jupiterone
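Note that Helm does not remove CRDs that were installed from a chart's crds/ directory, so some may survive the uninstall. The check below is a sketch; it assumes the CRD names contain "jupiterone".

```shell
# Look for leftover CRDs after uninstalling (name match is an assumption).
LEFTOVER=$(kubectl get crd 2>/dev/null | grep -i jupiterone || true)
if [ -z "$LEFTOVER" ]; then
  echo "no jupiterone CRDs remain"
else
  echo "$LEFTOVER"
fi

# If any remain and you are sure nothing else uses them:
# kubectl delete crd <crd-name>
```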

Troubleshooting

If pods are not starting or integrations are not running:

  • Check pod logs: kubectl logs -n jupiterone <pod-name>
  • Verify CRDs are present: kubectl get crd | grep jupiterone
  • Double-check authentication credentials
  • Ensure network connectivity to JupiterOne services
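The checks above can be combined into one pass that describes and pulls logs for any pod that is not Running. This is a sketch; pod names and output depend entirely on your cluster.

```shell
# Sketch: gather diagnostics for non-Running pods in the jupiterone namespace.
NS=jupiterone

kubectl get pods -n "$NS" --no-headers 2>/dev/null | \
  awk '$3 != "Running" {print $1}' | \
  while read -r POD; do
    echo "--- describe $POD ---"
    kubectl describe pod "$POD" -n "$NS"
    echo "--- logs $POD ---"
    kubectl logs "$POD" -n "$NS" --tail=50
  done
```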

Known Limitations

  • Unable to migrate integration jobs between collectors.
  • Integration job distribution across multiple pods is still being enhanced.
  • Some integrations may not be compatible with collectors.