This topic provides answers to commonly asked questions about Tanzu Build Service (TBS).
Cloud Native Buildpacks (CNBs) are build tools that adhere to the CNB v3 specification and transform source code into an OCI-compliant runnable image. The v3 specification, lifecycle, and local CLI (pack) are governed by the open source Cloud Native Buildpacks project.
kpack is a collection of open source resource controllers that together function as a Kubernetes-native build service. It provides a declarative Image resource type that builds an image and schedules image rebuilds when dependencies of the image change. kpack is a platform implementation of CNBs in that it uses CNBs and the v3 lifecycle to execute image builds.
Tanzu Build Service is a commercial product owned and operated by VMware that utilizes kpack and CNBs. Build Service provides additional abstractions intended to ease the use of the above technologies in Enterprise settings. These abstractions are covered in detail throughout the documentation on this site. Additionally, customers of Build Service are entitled to support and VMware Tanzu buildpacks.
By default, Build Service tags each built image twice. The first tag is the configured image tag. The second tag is a unique tag that includes the build number and build timestamp; it is added to ensure that previous images are not deleted on registries that garbage collect untagged images.
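For example, a single build might produce two tags similar to the following (the registry, image name, and exact unique-tag format are illustrative and can vary by version):
my-registry.example.com/my-app:latest
my-registry.example.com/my-app:b1.20240101.120000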
Build Service is installed and deployed using Carvel tools. Therefore, the imgpkg copy command can create a .tar file composed of the Kubernetes config and images required to successfully install Build Service, and it ensures that all the images can be relocated to air-gapped registries. By providing the credentials for the air-gapped registry when executing the kapp deploy command, Build Service can use that secret to pull images from the registry and so work in air-gapped environments.
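As a minimal sketch, relocating the Build Service bundle through a .tar file looks roughly like this (the bundle tag and target repository are illustrative; check the install docs for the exact bundle reference for your version):
imgpkg copy -b registry.tanzu.vmware.com/build-service/bundle:VERSION --to-tar /tmp/tbs-bundle.tar
imgpkg copy --tar /tmp/tbs-bundle.tar --to-repo AIR-GAPPED-REGISTRY/REPOSITORY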
Currently, kbld package and kbld unpackage must be used to import dependencies to an air-gapped environment.
For more details on air-gapped installation, see Installation to Air-Gapped Environment.
For more details on air-gapped builds, see Offline Builds.
Yes, documentation is available at Tanzu Buildpacks Documentation.
When interacting with a registry or a Git repository that has been deployed using a self-signed certificate, Build Service must be provided with the certificate at install time. If the certificate was not provided, you must either target a registry that does not use self-signed certificates or reinstall Build Service with the certificate for this registry.
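If you need to reinstall with the certificate, a sketch of passing it through the same ytt/kbld/kapp pipeline used during installation follows; the ca_cert_data value name matches the install docs, while the certificate path is illustrative:
ytt -f /tmp/bundle/config/ \
-v kp_default_repository='<IMAGE-REPOSITORY>' \
-v kp_default_repository_username='<REGISTRY-USERNAME>' \
-v kp_default_repository_password='<REGISTRY-PASSWORD>' \
--data-value-yaml pull_from_kp_default_repo=true \
--data-value-file ca_cert_data=/path/to/self-signed-ca.crt \
| kbld -f /tmp/bundle/.imgpkg/images.yml -f- \
| kapp deploy -a tanzu-build-service -f- -y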
Create a Docker Hub secret with the kp CLI:
kp secret create my-dockerhub-creds --dockerhub DOCKERHUB-USERNAME
Where DOCKERHUB-USERNAME is your Docker Hub username. You will be prompted for your Docker Hub password.
Create a GitHub secret with the kp CLI:
Using a Git SSH key:
kp secret create my-git-ssh-cred --git git@github.com --git-ssh-key PATH-TO-GITHUB-PRIVATE-KEY
Where PATH-TO-GITHUB-PRIVATE-KEY is the absolute local path to the GitHub SSH private key.
Or using basic auth with a GitHub username and password:
kp secret create my-git-cred --git https://github.com --git-user GITHUB-USERNAME
Where GITHUB-USERNAME is your GitHub username. You will be prompted for your GitHub password.
The run image must be publicly readable or readable with the registry credentials configured in a project/namespace.
To see where the Build Service run image is located, run: kp clusterstack status STACK-NAME.
If you cannot make the run image publicly readable, you must use kp to create a registry secret within the namespace where your builds reside. This can be accomplished with kp secret create.
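For example, a registry secret can be created in the build namespace like this (the registry, username, and namespace are illustrative); kp will prompt for the registry password:
kp secret create my-registry-cred --registry my-registry.example.com --registry-user REGISTRY-USERNAME --namespace my-build-namespace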
There is a known bug in Harbor that, at times, prevents the UI from showing images. If you are unable to see a recently built image in the Harbor UI, try pulling it using the docker or crane CLI to verify that it exists.
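For example, either of the following confirms the image is present in the registry even if the UI does not show it (the registry, project, and tag are illustrative):
docker pull my-harbor.example.com/my-project/my-app:latest
crane digest my-harbor.example.com/my-project/my-app:latest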
Some builders are very large and can overwhelm Harbor's default database connection pool. You can remediate this issue by increasing the database.maxOpenConns setting in the Helm values.yaml file from 100 to 300. The exact setting can be found in the harbor-helm values.yaml file on GitHub.
You can use Google Container Registry for your Tanzu Build Service installation registry.
If you have trouble configuring the registry credentials for GCR when following the install docs, use the following to set the GCR credentials:
registry_name="_json_key"
registry_password="$(cat /path/to/gcp/service/account/key.json)"
ytt -f /tmp/bundle/config/ \
-v kp_default_repository='<IMAGE-REPOSITORY>' \
-v kp_default_repository_username="$registry_name" \
-v kp_default_repository_password="$registry_password" \
--data-value-yaml pull_from_kp_default_repo=true \
| kbld -f /tmp/bundle/.imgpkg/images.yml -f- \
| kapp deploy -a tanzu-build-service -f- -y
TBS can be configured with a proxy at installation time by specifying additional parameters:
http_proxy: The HTTP proxy to use for network traffic.
https_proxy: The HTTPS proxy to use for network traffic.
no_proxy: A comma-separated list of hostnames, IP addresses, or IP ranges in CIDR format that should not use a proxy.
Note: When a proxy server is enabled using http_proxy and/or https_proxy, traffic to the Kubernetes API server also flows through the proxy server. This is a known limitation and can be circumvented by using no_proxy to specify the Kubernetes API server.
ytt -f /tmp/bundle/config/ \
-v kp_default_repository='<IMAGE-REPOSITORY>' \
-v kp_default_repository_username='<REGISTRY-USERNAME>' \
-v kp_default_repository_password='<REGISTRY-PASSWORD>' \
--data-value-yaml pull_from_kp_default_repo=true \
-v http_proxy='<HTTP-PROXY-URL>' \
-v https_proxy='<HTTPS-PROXY-URL>' \
-v no_proxy='<KUBERNETES-API-SERVER-URL>' \
| kbld -f /tmp/bundle/.imgpkg/images.yml -f- \
| kapp deploy -a tanzu-build-service -f- -y
You can use the pack CLI with your kpack builders to test them locally before checking in your code. By using your kpack builder locally, you can guarantee that the buildpacks, stacks, and lifecycle used to build the image config are also used by the pack CLI, resulting in an identical container image whether it is built by kpack or pack.
Note: Make sure that you run docker login or crane auth login against the image repository containing your kpack builder.
pack build my-app --path ~/workspace/my-app --builder gcr.io/my-project/my-image:latest --trust-builder
From kp CLI v1.0.3 onwards, the --dry-run and --output flags are available on kp commands that create or update kpack Kubernetes resources.
The --dry-run flag lets you perform a quick validation with no side effects, as no objects are sent to the server. The --output flag lets you view the resource in YAML or JSON format.
The --dry-run-with-image-upload flag is similar to the --dry-run flag in that no kpack Kubernetes resources are updated. This flag is provided as a convenience for kp commands that output Kubernetes resources with generated container image references.
For example, consider the command below:
$ kp clusterstack create test-stack \
--dry-run \
--output yaml \
--build-image gcr.io/paketo-buildpacks/build@sha256:f550ab24b72586cb26215817b874b9e9ec2ca615ede03206833286934779ab5d \
--run-image gcr.io/paketo-buildpacks/run@sha256:21c1fb65033ae5a765a1fb44bfefdea37024ceac86ac6098202b891d27b8671f
Creating ClusterStack... (dry run)
Uploading to 'gcr.io/my-project/my-repo'... (dry run)
Skipping 'gcr.io/my-project/my-repo/build@sha256:f550ab24b72586cb26215817b874b9e9ec2ca615ede03206833286934779ab5d'
Skipping 'gcr.io/my-project/my-repo/run@sha256:21c1fb65033ae5a765a1fb44bfefdea37024ceac86ac6098202b891d27b8671f'
apiVersion: kpack.io/v1alpha1
kind: ClusterStack
metadata:
  creationTimestamp: null
  name: test-stack
spec:
  buildImage:
    image: gcr.io/my-project/my-repo/build@sha256:f550ab24b72586cb26215817b874b9e9ec2ca615ede03206833286934779ab5d
  id: io.buildpacks.stacks.jammy
  runImage:
    image: gcr.io/my-project/my-repo/run@sha256:21c1fb65033ae5a765a1fb44bfefdea37024ceac86ac6098202b891d27b8671f
status:
  buildImage: {}
  runImage: {}
The resource YAML output above contains the relocated build and run image URLs. However, the images were never uploaded.
If you now apply the resource output using kubectl apply -f as shown below, the resource will be created but will be faulty, since the referenced images do not exist.
$ kp clusterstack create test-stack \
--dry-run \
--output yaml \
--build-image gcr.io/paketo-buildpacks/build@sha256:f550ab24b72586cb26215817b874b9e9ec2ca615ede03206833286934779ab5d \
--run-image gcr.io/paketo-buildpacks/run@sha256:21c1fb65033ae5a765a1fb44bfefdea37024ceac86ac6098202b891d27b8671f \
| kubectl apply -f -
Creating ClusterStack... (dry run)
Uploading to 'gcr.io/my-project/my-repo'... (dry run)
Skipping 'gcr.io/my-project/my-repo/build@sha256:f550ab24b72586cb26215817b874b9e9ec2ca615ede03206833286934779ab5d'
Skipping 'gcr.io/my-project/my-repo/run@sha256:21c1fb65033ae5a765a1fb44bfefdea37024ceac86ac6098202b891d27b8671f'
clusterstack.kpack.io/test-stack created
Running the same command above with the --dry-run-with-image-upload flag (instead of --dry-run) ensures that the created resource refers to images that exist.
Yes! Azure DevOps Git is fully supported as of TBS 1.2.
ECR is supported but requires manually creating each repository that TBS will use. With other registries, the repositories will be created automatically.
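For example, a repository can be created ahead of time with the AWS CLI (the repository name is illustrative; a repository is needed for each location TBS writes to, including the kp_default_repository used at install time):
aws ecr create-repository --repository-name my-project/my-app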
Like many Kubernetes-native products, operating TBS involves orchestrating resources that depend on each other to function. If a resource is in a “not ready” state, it is likely that there is a problem with one of the resources it depends on.
If you are encountering a not ready Image resource, check which builder it uses, and then check the status of that builder for additional information that could help you troubleshoot the problem.
$ kp image status <image-name>
$ kp clusterbuilder status <clusterbuilder-name>
Similarly, if a builder resource is in a “not ready” state, it is possible that there is a problem with the clusterstack or clusterstore resources it is referencing.
$ kp clusterstack status <clusterstack-name> --verbose
$ kp clusterstore status <clusterstore-name> --verbose
All Build Service concepts are also Kubernetes resources. Therefore, customers can interact with them using the kubectl CLI to see all the information that can be provided by the Kubernetes API.
$ kubectl describe image <image-name>
$ kubectl describe clusterbuilder <clusterbuilder-name>
During imgpkg copy:
Ensure you are logged in locally to both registries:
If using the docker CLI, run:
docker logout registry.tanzu.vmware.com && docker login registry.tanzu.vmware.com
docker logout <tbs-registry> && docker login <tbs-registry>
If using the crane CLI, run:
rm ~/.docker/config.json
crane auth login <tbs-registry>
crane auth login registry.tanzu.vmware.com
On Linux, if you have installed docker with snap, you will need to copy /root/snap/docker/471/.docker/config.json to ~/.docker/config.json, which is where imgpkg looks for the docker credentials.
Ensure your credentials have write access to your registry. This is the same repository used during installation with the ytt/kapp command.
For the docker CLI:
docker push <tbs-registry>/<build-service-repository>
For the crane CLI:
crane pull alpine alpine.tar && crane push alpine.tar <tbs-registry>/<build-service-repository>
During kp import:
Ensure you are logged in locally to both registries:
If using the docker CLI, run:
docker logout registry.tanzu.vmware.com && docker login registry.tanzu.vmware.com
docker logout <tbs-registry> && docker login <tbs-registry>
If using the crane CLI, run:
rm ~/.docker/config.json
crane auth login <tbs-registry>
crane auth login registry.tanzu.vmware.com
Ensure the credentials used to install TBS have write access to your registry as they sometimes differ from local credentials:
Run docker login <tbs-registry> or crane auth login <tbs-registry> using the credentials used to install TBS with ytt/kapp.
Try to write to the image repository used during installation with the ytt/kapp command:
For the docker CLI:
docker push <tbs-registry>/<build-service-repository>
For the crane CLI:
crane pull alpine alpine.tar && crane push alpine.tar <tbs-registry>/<build-service-repository>
All TBS builds happen in pods. By default, TBS will not delete the last ten successful builds and the last ten failed builds, for the purpose of providing historical logging and debugging. If this behavior is not desired, you can configure the number of stored build pods by modifying the failedBuildHistoryLimit and successBuildHistoryLimit fields on the Image resource. This is not currently supported in the kp CLI, but you can apply YAML configuration using kubectl to update these fields. See the kpack documentation for the Image resource for details.
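A minimal sketch of an Image resource with these fields set, to be applied with kubectl apply -f (the names, tag, builder, and Git URL are illustrative):
apiVersion: kpack.io/v1alpha1
kind: Image
metadata:
  name: my-app
  namespace: my-build-namespace
spec:
  tag: my-registry.example.com/my-app
  builder:
    kind: ClusterBuilder
    name: default
  failedBuildHistoryLimit: 5
  successBuildHistoryLimit: 5
  source:
    git:
      url: https://github.com/my-org/my-app
      revision: main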
After successfully installing Tanzu Build Service, run the following command in a terminal: kubectl describe configMap build-service-version -n build-service
Under the data field, you will see the version of TBS you are currently using. For example:
data:
  version: 1.3.0
Note: This only works for TBS versions 1.2 and later.
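Alternatively, a compact way to print only the version string (this assumes the same build-service namespace as above):
kubectl get configmap build-service-version -n build-service -o jsonpath='{.data.version}'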
When running imgpkg copy, the command will output the following message:
Skipped layer due to it being non-distributable. If you would like to include non-distributable layers, use the --include-non-distributable flag
This is because TBS ships with Windows images to support Windows builds. Windows images contain “foreign layers”, which are references to proprietary Windows layers that cannot be distributed without proper Microsoft licensing.
By default, imgpkg does not relocate the proprietary Windows layers to your registry. TBS also does not pull any Windows layers to the cluster unless Windows builds are being run, so if you do not need Windows builds, this message can be ignored.
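If you do need Windows builds, the layers can be included during relocation; a sketch with recent imgpkg versions follows (the exact flag name can differ across imgpkg releases, and the bundle reference is illustrative):
imgpkg copy -b registry.tanzu.vmware.com/build-service/bundle:VERSION --to-tar /tmp/tbs-bundle.tar --include-non-distributable-layers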
“Image resource” describes a Kubernetes custom resource that produces OCI images by way of Build resources. This resource continues producing new builds that, when successful, output container images to the registry configured on the Image resource.
When you configure the Tanzu Network Auto Updater during installation, dependencies are pulled in from network.tanzu.vmware.com as they are released, keeping all app OCI images up to date and patched.
Following the steps from the installation docs results in all dependencies staying up to date with the latest buildpacks.
To pin to a TBS “descriptor” version, find the desired version of the descriptor from the TBS Dependencies Tanzu Network page. Run the following command to re-install TBS:
ytt -f /tmp/bundle/config/ \
-v kp_default_repository='<IMAGE-REPOSITORY>' \
-v kp_default_repository_username='<REGISTRY-USERNAME>' \
-v kp_default_repository_password='<REGISTRY-PASSWORD>' \
--data-value-yaml pull_from_kp_default_repo=true \
-v tanzunet_username='<TANZUNET-USERNAME>' \
-v tanzunet_password='<TANZUNET-PASSWORD>' \
-v descriptor_name='<DESCRIPTOR-NAME>' \
-v descriptor_version='<DESCRIPTOR-VERSION>' \
| kbld -f /tmp/bundle/.imgpkg/images.yml -f- \
| kapp deploy -a tanzu-build-service -f- -y
Where:
DESCRIPTOR-VERSION is the desired descriptor version from the TBS Dependencies Tanzu Network page.
In Kubernetes 1.23 on EKS, the CSIMigrationAWS feature enabled in this Kubernetes version means that users need to install the Amazon EBS CSI Driver in order to use the default storage class.
If you have not installed the CSI Driver, you may see the following error in build pods:
'running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition'
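As one option, the driver can be installed as an EKS managed add-on with the AWS CLI (the cluster name is illustrative; the add-on also needs an IAM role with the appropriate EBS permissions, which is omitted here):
aws eks create-addon --cluster-name my-cluster --addon-name aws-ebs-csi-driver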
Some managed Kubernetes services use dockerd as the default container runtime. Dockerd has a known limitation, in which the maximum supported layer depth is 125. When an image’s layer depth exceeds 125, the docker runtime will report an error which reads, “max depth exceeded”.
This error can be avoided by configuring your cluster to use containerd or CRI-O as its default container runtime. Please refer to your Kubernetes cluster provider’s documentation for specific instructions.
If the ca_cert_data field is set during the installation of Tanzu Build Service, that certificate will be used to interact with the image registry.
Tanzu Build Service will also automatically insert this certificate into the system truststore during the build process.
If your run image has been built with a builder that includes the Paketo CA Certificates Buildpack, you can provide a runtime binding of type ca-certificates and it will be automatically added to the system truststore.
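A minimal sketch of what such a runtime binding looks like on disk, following the service binding layout the Paketo CA Certificates Buildpack reads at launch (the paths and file names are illustrative):
/bindings/ca-certificates/type          contains the string: ca-certificates
/bindings/ca-certificates/corp-ca.pem   the PEM-encoded CA certificate to trust
The SERVICE_BINDING_ROOT environment variable (for example, /bindings) must point at the directory containing the binding when the container starts.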
Note: This requires that the language family buildpack used includes the Paketo CA Certificates Buildpack.
For more information about using the buildpack, see VMware Tanzu Buildpacks documentation.