To run continuous integration tasks and custom tasks, you must configure a workspace for your Code Stream pipeline.

In the pipeline workspace, select Docker or Kubernetes as the workspace Type, and provide the respective endpoint. The Docker and Kubernetes platforms manage the entire life cycle of the container that Code Stream deploys for running the continuous integration (CI) task or custom task.

  • The Docker workspace requires the Docker host endpoint, builder image URL, image registry, working directory, cache, environment variables, CPU limit, and memory limit. You can also create a clone of the Git repository.
  • The Kubernetes workspace requires the Kubernetes API endpoint, builder image URL, image registry, namespace, NodePort, Persistent Volume Claim (PVC), working directory, environment variables, CPU limit, and memory limit. You can also create a clone of the Git repository.
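As an illustration, a Docker workspace expressed in the same YAML form as the Kubernetes cache example later in this topic might look like the following sketch. The DOCKER type value, the Docker-Host endpoint name, and the path value are assumptions for illustration, not confirmed syntax:

```yaml
workspace:
  type: DOCKER            # assumed value, by analogy with the K8S type
  endpoint: Docker-Host   # illustrative name of your Docker host endpoint
  image: fedora:latest    # builder image; must have curl or wget
  registry: Docker Registry
  path: /workspace        # working directory inside the container
  cache:
    - /path/to/m2
```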

The pipeline workspace configuration has many common parameters, and other parameters that are specific to the type of workspace, as the following table describes.

Table 1. Workspace areas, details, and availability
Type

Type of workspace.

Available with Docker or Kubernetes.
Host Endpoint

Host endpoint where the continuous integration (CI) tasks and custom tasks run.

Available with the Docker workspace when you select the Docker host endpoint.

Available with the Kubernetes workspace when you select the Kubernetes API endpoint.

Builder image URL

Name and location of the builder image. Code Stream creates a container from this image on the Docker host or the Kubernetes cluster, and the continuous integration (CI) tasks and custom tasks run inside that container.

Example: fedora:latest

The builder image must include curl or wget.

Image registry

If the builder image resides in a registry that requires credentials, you must first create an Image Registry endpoint, and then select it here so that the image can be pulled from the registry.

Available with the Docker and Kubernetes workspaces.
Working directory

The location inside the container where the steps of the continuous integration (CI) task run, and where the code is cloned when a Git webhook triggers a pipeline run.

Available with Docker or Kubernetes.
Namespace

If you do not enter a namespace, Code Stream creates a unique name in the Kubernetes cluster that you provided.

Specific to the Kubernetes workspace.
Proxy

To communicate with the workspace pod in the Kubernetes cluster, Code Stream deploys a single proxy instance in the namespace codestream-proxy for each Kubernetes cluster. Choose either the NodePort or LoadBalancer type, depending on how the deployed Kubernetes cluster is configured.

  • If the Kubernetes API server URL specified in the endpoint is exposed through one of the master nodes, which is the typical case, choose NodePort.
  • If the Kubernetes API server URL is exposed by a load balancer, as with Amazon EKS (Elastic Kubernetes Service), choose LoadBalancer.
NodePort

Code Stream uses the NodePort to communicate with the container running inside the Kubernetes cluster.

If you do not select a port, Code Stream uses an ephemeral port that Kubernetes assigns. Ensure that your firewall rules allow ingress to the ephemeral port range (30000-32767).

If you enter a port, ensure that no other service in the cluster is already using it, and that your firewall rules allow the port.

Specific to the Kubernetes workspace.
Persistent Volume Claim

Provides a way for the Kubernetes workspace to persist files across pipeline runs. When you provide a persistent volume claim name, it can store the logs, artifacts, and cache.

For more information about creating a persistent volume claim, see the Kubernetes documentation at https://kubernetes.io/docs/concepts/storage/persistent-volumes/.

Specific to the Kubernetes workspace.
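For reference, a persistent volume claim is a standard Kubernetes object. A minimal manifest such as the following sketch can be applied to the cluster, and its name supplied here; the claim name, namespace, and storage size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: codestream-workspace-pvc   # illustrative; supply this name in the workspace
  namespace: my-pipeline-ns        # illustrative; use the workspace namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                 # illustrative size for logs, artifacts, and cache
```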
Environment variables

Key-value pairs that you pass here are available to all continuous integration (CI) tasks and custom tasks in the pipeline when it runs. You can also pass references to variables.

If you do not pass environment variables here, you must explicitly pass them to each continuous integration (CI) task and custom task in the pipeline.

Available with Docker or Kubernetes.
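Assuming the workspace YAML supports an environment section (the key name is an assumption for illustration; this topic does not confirm it), workspace-level variables might be declared like this sketch:

```yaml
workspace:
  type: K8S
  endpoint: K8S-Micro
  image: fedora:latest
  environment:            # assumed key name, shown for illustration only
    MAVEN_OPTS: -Xmx512m  # example pair, available to all CI and custom tasks
    BUILD_ENV: staging
```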

CPU limits

Limits on CPU resources for the continuous integration (CI) container or custom task container. The default is 1.

Memory limits

Limits on memory for the continuous integration (CI) container or custom task container. The unit is MB.
Git clone

When you select Git clone and a Git webhook invokes the pipeline, the code is cloned into the workspace (container). If you do not enable Git clone, you must configure an additional, explicit continuous integration (CI) task in the pipeline to clone the code first, and then perform other steps such as build and test.
Cache

The Code Stream workspace allows you to cache a set of directories or files to speed up subsequent pipeline runs. Artifacts such as files or directories in the container, for example the node_modules or .m2 folders, are cached for reuse across pipeline runs. Cache accepts a list of paths. If you do not need to cache data between pipeline runs, you do not need to provide a persistent volume claim.

For example:

workspace:
  type: K8S
  endpoint: K8S-Micro
  image: fedora:latest
  registry: Docker Registry
  path: ''
  cache:
    - /path/to/m2
    - /path/to/node_modules

How the cache is implemented is specific to the type of workspace.

In the Docker workspace, the cache is achieved by using a shared path in the Docker host to persist the cached data, artifacts, and logs.

In the Kubernetes workspace, you can use the cache only when you provide a persistent volume claim. If you do not provide a persistent volume claim, the cache is not enabled.

When using a Kubernetes API endpoint in the pipeline workspace, Code Stream creates the necessary Kubernetes resources such as ConfigMap, Secret, and Pod to run the continuous integration (CI) task or custom task. Code Stream communicates with the container by using the NodePort.

To share data across pipeline runs, you must provide a persistent volume claim. Code Stream mounts the persistent volume claim to the container to store the data, and uses it for subsequent pipeline runs.