To run continuous integration tasks and custom tasks, you must configure a workspace for your Code Stream pipeline.
In the pipeline workspace, select Docker or Kubernetes as the workspace Type, and provide the respective endpoint. The Docker or Kubernetes platform manages the entire life cycle of the container that Code Stream deploys to run the continuous integration (CI) task or custom task.
- The Docker workspace requires the Docker host endpoint, builder image URL, image registry, working directory, cache, environment variables, CPU limit, and memory limit. You can also create a clone of the Git repository.
- The Kubernetes workspace requires the Kubernetes API endpoint, builder image URL, image registry, namespace, NodePort, Persistent Volume Claim (PVC), working directory, environment variables, CPU limit, and memory limit. You can also create a clone of the Git repository. A sketch of such a workspace definition appears after this list.
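For orientation, the following is a minimal sketch of how a Kubernetes workspace definition might look in pipeline YAML. The type, endpoint, image, registry, and path keys follow the Cache example later in this section; the namespace, nodePort, pvc, and limits keys are illustrative names only and might differ in your version of Code Stream.

```yaml
workspace:
  type: K8S                      # Kubernetes workspace
  endpoint: K8S-Micro            # Kubernetes API endpoint configured in Code Stream
  image: fedora:latest           # builder image; must include curl or wget
  registry: Docker Registry      # image registry endpoint, if the registry requires credentials
  path: /workspace               # working directory inside the container
  namespace: codestream-ci       # illustrative: namespace for the workspace pod
  nodePort: 30100                # illustrative: port in the 30000-32767 range
  pvc: codestream-workspace-pvc  # illustrative: persistent volume claim name
  limits:                        # illustrative: CPU and memory limits
    cpu: 1
    memory: 512
```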
The pipeline workspace configuration includes parameters that are common to both workspace types and parameters that are specific to one type, as the following table describes.
Selection | Description | Details and availability |
---|---|---|
Type | Type of workspace. | Select Docker or Kubernetes. |
Host Endpoint | Host endpoint where the continuous integration (CI) and custom tasks run. | Available with the Docker workspace when you select the Docker host endpoint. Available with the Kubernetes workspace when you select the Kubernetes API endpoint. |
Builder image URL | Name and location of the builder image. A container is created from this image on the Docker host or the Kubernetes cluster, and the continuous integration (CI) tasks and custom tasks run inside it. | Example: fedora:latest. The builder image must have curl or wget installed. |
Image registry | If the builder image is available in a registry, and if the registry requires credentials, you must create an Image Registry endpoint, then select it here so that the image can be pulled from the registry. | Available with the Docker and Kubernetes workspaces. |
Working directory | The working directory is the location inside the container where the steps of the continuous integration (CI) task run, and is the location where the code gets cloned when a Git webhook triggers a pipeline run. | Available with Docker or Kubernetes. |
Namespace | If you do not enter a namespace, Code Stream creates a uniquely named namespace in the Kubernetes cluster that you provided. | Specific to the Kubernetes workspace. |
Proxy | To communicate with the workspace pod in the Kubernetes cluster, Code Stream deploys a single proxy instance in the namespace. The option that you select depends on the nature of the deployed Kubernetes cluster. | Specific to the Kubernetes workspace. |
NodePort | Code Stream uses NodePort to communicate with the container running inside the Kubernetes cluster. If you do not select a port, Code Stream uses an ephemeral port that Kubernetes assigns. You must ensure that the configuration of firewall rules allows ingress to the ephemeral port range (30000-32767). If you enter a port, you must ensure that another service in the cluster is not already using it, and that the firewall rules allow the port. | Specific to the Kubernetes workspace. |
Persistent Volume Claim | Provides a way for the Kubernetes workspace to persist files across pipeline runs. When you provide a persistent volume claim name, it can store the logs, artifacts, and cache. For more information about creating a persistent volume claim, see the Kubernetes documentation at https://kubernetes.io/docs/concepts/storage/persistent-volumes/. | Specific to the Kubernetes workspace. |
Environment variables | Key-value pairs that you pass here are available to all continuous integration (CI) tasks and custom tasks in the pipeline when it runs. You can also pass references to variables. | Available with Docker or Kubernetes. If you do not pass environment variables here, you must pass them explicitly to each continuous integration (CI) task and custom task in the pipeline. See the sketch after this table. |
CPU limits | Limits for CPU resources for the continuous integration (CI) container or custom task container. | The default is 1. |
Memory limits | Limits for memory for the continuous integration (CI) container or custom task container. | The unit is MB. |
Git clone | When you select Git clone, and a Git webhook invokes the pipeline, the code gets cloned into the workspace (container). | If you do not enable Git clone, you must configure a separate continuous integration (CI) task in the pipeline that clones the code before other steps, such as build and test, run. |
Cache | The Code Stream workspace allows you to cache a set of directories or files to speed up subsequent pipeline runs. Artifacts such as files or directories in the container are cached for reuse across pipeline runs. For example, you can cache the node_modules or .m2 folders. Cache accepts a list of paths, as the example after this table shows. If you do not require caching of data between pipeline runs, a persistent volume claim is not necessary. | Specific to the type of workspace. In the Docker workspace, the cache uses a shared path in the Docker host to persist the cached data, artifacts, and logs. In the Kubernetes workspace, you must provide a persistent volume claim to enable the cache. Otherwise, the cache is unavailable. |

For example, the following workspace definition caches two paths:

```yaml
workspace:
  type: K8S
  endpoint: K8S-Micro
  image: fedora:latest
  registry: Docker Registry
  path: ''
  cache:
    - /path/to/m2
    - /path/to/node_modules
```
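As noted in the Environment variables row, key-value pairs can also be declared in the workspace definition so that every continuous integration (CI) task and custom task receives them. The following is a minimal sketch; the environment key and the variable reference syntax are assumptions and might differ in your version of Code Stream.

```yaml
workspace:
  type: K8S
  endpoint: K8S-Micro
  image: fedora:latest
  environment:                    # assumed key: values are passed to every CI task and custom task
    BUILD_ENV: staging
    GIT_BRANCH: ${input.branch}   # illustrative reference to a pipeline variable
```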
When using a Kubernetes API endpoint in the pipeline workspace, Code Stream creates the necessary Kubernetes resources such as ConfigMap, Secret, and Pod to run the continuous integration (CI) task or custom task. Code Stream communicates with the container by using the NodePort.
To share data across pipeline runs, you must provide a persistent volume claim. Code Stream mounts the persistent volume claim to the container to store the data and uses it for subsequent pipeline runs.
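For reference, a minimal PersistentVolumeClaim that you might create in the workspace namespace and then enter in the Persistent Volume Claim field could look like the following sketch. The claim name, access mode, and storage size are placeholders; see the Kubernetes documentation linked in the table for the options that apply to your cluster.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: codestream-workspace-pvc   # placeholder name; enter this name in the workspace configuration
spec:
  accessModes:
    - ReadWriteOnce                # placeholder; choose a mode that your storage class supports
  resources:
    requests:
      storage: 5Gi                 # placeholder size for logs, artifacts, and cache
```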