This section describes the containers that run on each Kubernetes node as part of the Carbon Black Container sensor, including their images, network connections, resource requirements, and scheduling behavior.

cbcontainers-runtime

The cbcontainers-runtime container is part of every pod in the cbcontainers-node-agent DaemonSet. It is a privileged container that uses eBPF to attach to the Linux kernel on each Kubernetes node and generate a stream of events describing observed network connections. These events are batched together and sent over gRPC to the cbcontainers-runtime-resolver deployment. The cbcontainers-runtime container does not connect directly to the Carbon Black Cloud backend.
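The batch-then-ship behavior described above can be sketched as follows. This is purely illustrative: the function names, event format, and batch size are assumptions, not the sensor's actual API.

```python
# Illustrative sketch of batching observed-connection events before they
# are shipped to the resolver. The real sensor batches eBPF events and
# sends them over gRPC; everything here is a stand-in for illustration.
def batch(events, size):
    """Yield successive fixed-size batches from a list of events."""
    for i in range(0, len(events), size):
        yield events[i:i + size]

# Five fake connection events split into batches of two.
conns = [f"10.0.0.{n} -> 10.0.1.5:443" for n in range(5)]
print([len(b) for b in batch(conns, 2)])  # prints [2, 2, 1]
```

Batching amortizes the per-message overhead of the gRPC channel when connection events arrive at a high rate.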

Image: cbartifactory/runtime-kubernetes-sensor
Opened ports: None
Connects to Kubernetes services: cbcontainers-runtime-resolver.cbcontainers-dataplane.svc.cluster.local:8080
Connects to backend: No
NO_PROXY requirements: cbcontainers-runtime-resolver.cbcontainers-dataplane.svc.cluster.local and the Kubernetes API server IP addresses (resolved from kubernetes.default.svc within the cluster)
Requested resources: CPU - 30m, Memory - 64Mi
Resource limits: CPU - 2, Memory - 4Gi
Replica count (min & default): Min - 1, Default - Kubernetes node count
Horizontal scaling: Because it is part of a DaemonSet, new Kubernetes nodes automatically get a replica. There is no need for manual scaling.
Tolerations:

node.kubernetes.io/disk-pressure:NoSchedule op=Exists

node.kubernetes.io/memory-pressure:NoSchedule op=Exists

node.kubernetes.io/network-unavailable:NoSchedule op=Exists

node.kubernetes.io/not-ready:NoExecute op=Exists

node.kubernetes.io/pid-pressure:NoSchedule op=Exists

node.kubernetes.io/unreachable:NoExecute op=Exists

node.kubernetes.io/unschedulable:NoSchedule op=Exists

Is privileged: Yes
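Since both the in-cluster resolver service name and the API server IP addresses must bypass any HTTP proxy, a NO_PROXY value can be assembled as sketched below. The API server address shown is a placeholder; in a real cluster you would resolve kubernetes.default.svc to obtain the actual IPs.

```python
# Sketch of assembling a NO_PROXY value for this container. The API
# server address below is a placeholder ClusterIP, shown only for
# illustration; resolve kubernetes.default.svc in-cluster for real IPs.
def build_no_proxy(hosts):
    """Join hosts into a comma-separated NO_PROXY value, deduplicated."""
    seen = []
    for h in hosts:
        if h not in seen:
            seen.append(h)
    return ",".join(seen)

value = build_no_proxy([
    "cbcontainers-runtime-resolver.cbcontainers-dataplane.svc.cluster.local",
    "10.96.0.1",  # placeholder API server ClusterIP
])
print(value)
```

Most HTTP clients honor a comma-separated NO_PROXY list of hostnames and IPs, which is why both entries appear in one value.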

cbcontainers-cluster-scanner

The cbcontainers-cluster-scanner container is part of every pod in the cbcontainers-node-agent DaemonSet. The container runtime endpoints (containerd, dockershim, CRI-O) are mounted inside the pod so it can communicate with the node's container runtime. The cluster scanner calls the container runtime over gRPC to list containers and images, read their contents, and scan the images for vulnerabilities, malware, and secrets.

For clusters using CRI-O, additional paths from the host are mounted. These are the paths where CRI-O stores image data; they are required to fully scan images because some operations are not natively supported by the CRI-O API.

Most communication from cbcontainers-cluster-scanner goes through cbcontainers-image-scanning-reporter before reaching the Carbon Black Cloud backend. The exception is generating certificates for mTLS connections, which is done by calling the Carbon Black Cloud backend directly.

Image: cbartifactory/cluster-scanner
Opened ports: None
Connects to Kubernetes services: cbcontainers-image-scanning-reporter.cbcontainers-dataplane.svc.cluster.local:443 and kubernetes.default.svc (Kubernetes API server)
Connects to backend: defense-prod05.conferdeploy.net:443
NO_PROXY requirements: cbcontainers-image-scanning-reporter.cbcontainers-dataplane.svc.cluster.local and the Kubernetes API server IP addresses (resolved from kubernetes.default.svc within the cluster)
Requested resources: CPU - 30m, Memory - 64Mi
Resource limits: CPU - 2, Memory - 4Gi
Replica count (min & default): Min - 1, Default - Kubernetes node count
Horizontal scaling: Because it is part of a DaemonSet, new Kubernetes nodes automatically get a replica. There is no need for manual scaling.
Tolerations:

node.kubernetes.io/disk-pressure:NoSchedule op=Exists

node.kubernetes.io/memory-pressure:NoSchedule op=Exists

node.kubernetes.io/not-ready:NoExecute op=Exists

node.kubernetes.io/pid-pressure:NoSchedule op=Exists

node.kubernetes.io/unreachable:NoExecute op=Exists

node.kubernetes.io/unschedulable:NoSchedule op=Exists

Is privileged: Yes

cbcontainers-cndr

The CNDR container runs the Carbon Black Cloud Linux Sensor. It uses eBPF probes to monitor container process actions, file access events, and network events.

Events are processed, attributed to workloads, passed through a rules engine to generate alerts if needed, and sent to the Carbon Black Cloud backend for presentation and analysis.
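The attribute-then-match flow above can be illustrated with a toy rules engine. This is purely illustrative: the actual CNDR rule format, event schema, and APIs are not described in this document, so every name below is an assumption.

```python
# Toy rules engine illustrating the "attribute events to workloads, then
# match against rules to raise alerts" flow. The real CNDR rule format
# and event schema are not public here; all names are stand-ins.
from dataclasses import dataclass

@dataclass
class Event:
    workload: str  # workload the event was attributed to
    kind: str      # e.g. "process", "file", "network"
    detail: str    # free-form event detail

def evaluate(event, rules):
    """Return an alert message for every rule the event matches."""
    alerts = []
    for rule in rules:
        if rule["kind"] == event.kind and rule["pattern"] in event.detail:
            alerts.append(f"{event.workload}: {rule['name']}")
    return alerts

rules = [{"kind": "process", "pattern": "/bin/sh", "name": "shell-exec"}]
ev = Event(workload="nginx-abc123", kind="process", detail="exec /bin/sh -c id")
print(evaluate(ev, rules))  # prints ['nginx-abc123: shell-exec']
```

In the real pipeline the matching alerts, along with the underlying events, are what get forwarded to the Carbon Black Cloud backend for presentation and analysis.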

Image: cbartifactory/cndr
Opened ports: None
Connects to Kubernetes services: kubernetes.default.svc (Kubernetes API server)
Connects to backend: runtime.events.containers.carbonblack.io:443 (gRPC) and defense-prod05.conferdeploy.net:443
NO_PROXY requirements: Kubernetes API server IP addresses (resolved from kubernetes.default.svc within the cluster)
Requested resources: CPU - 30m, Memory - 64Mi
Resource limits: CPU - 500m, Memory - 1Gi
Replica count (min & default): Min - 1, Default - Kubernetes node count
Horizontal scaling: Because it is part of a DaemonSet, new Kubernetes nodes automatically get a replica. There is no need for manual scaling.
Tolerations:

node.kubernetes.io/disk-pressure:NoSchedule op=Exists

node.kubernetes.io/memory-pressure:NoSchedule op=Exists

node.kubernetes.io/network-unavailable:NoSchedule op=Exists

node.kubernetes.io/not-ready:NoExecute op=Exists

node.kubernetes.io/pid-pressure:NoSchedule op=Exists

node.kubernetes.io/unreachable:NoExecute op=Exists

node.kubernetes.io/unschedulable:NoSchedule op=Exists

Is privileged: Yes