For any source-based supply chain, that is, a supply chain that does not take a pre-built image, specifying the new dockerfile
parameter in a workload switches the build from Kpack to Kaniko. Kaniko is an open-source tool for building container images from a Dockerfile without needing a Docker daemon.
| Parameter name | Meaning | Example |
| --- | --- | --- |
| dockerfile | Relative path to the Dockerfile in the build context | ./Dockerfile |
| docker_build_context | Relative path to the directory to use as the build context | . |
| docker_build_extra_args | List of flags to pass directly to Kaniko, such as build arguments | - --build-arg=FOO=BAR |
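Because docker_build_extra_args takes a list of values, one way to set it is directly in the Workload manifest. The sketch below follows the Workload shape used elsewhere in this section; the FOO=BAR build argument is only an illustrative value:

```yaml
# Sketch of a Workload passing extra flags to Kaniko.
# FOO=BAR is an illustrative build argument, not a required value.
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: foo
  namespace: default
spec:
  params:
  - name: dockerfile
    value: ./Dockerfile
  - name: docker_build_extra_args
    value:
    - --build-arg=FOO=BAR
  source:
    git:
      ref:
        branch: dev
      url: https://github.com/foo/bar
```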
For example, assuming that you want to build a container image out of a repository named github.com/foo/bar
whose Dockerfile resides in the root of that repository, you can switch from using Kpack to building from that Dockerfile by passing the dockerfile
parameter:
$ tanzu apps workload create foo \
--git-repo https://github.com/foo/bar \
--git-branch dev \
--param dockerfile=./Dockerfile
Create workload:
1 + |---
2 + |apiVersion: carto.run/v1alpha1
3 + |kind: Workload
4 + |metadata:
5 + | name: foo
6 + | namespace: default
7 + |spec:
8 + | params:
9 + | - name: dockerfile
10 + | value: ./Dockerfile
11 + | source:
12 + | git:
13 + | ref:
14 + | branch: dev
15 + | url: https://github.com/foo/bar
Similarly, if the build context must be a different directory within the repository, you can use the docker_build_context
parameter to change it:
$ tanzu apps workload create foo \
--git-repo https://github.com/foo/bar \
--git-branch dev \
--param dockerfile=MyDockerfile \
--param docker_build_context=./src
Important: This feature has no platform operator configuration to pass through tap-values.yaml,
but if ootb-supply-chain-*.registry.ca_cert_data
or shared.ca_cert_data
is configured in tap-values.yaml,
those certificates are taken into account when pushing the container image.
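For reference, a custom CA is supplied through one of those keys in tap-values.yaml; the certificate body below is a placeholder:

```yaml
# tap-values.yaml (sketch): a CA configured here is also trusted
# when Kaniko pushes the built image to the registry.
shared:
  ca_cert_data: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```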
Although Kaniko can build container images without a Docker daemon or privileged containers, it does require the use of:

- Linux capabilities that the default restricted SecurityContextConstraints drop, such as CHOWN, DAC_OVERRIDE, FOWNER, SETGID, and SETUID
- running as the root user
To overcome such limitations imposed by the default unprivileged SecurityContextConstraints (SCC), VMware recommends:
Creating a more permissive SCC with just enough extra privileges for Kaniko to properly operate:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: ootb-templates-kaniko-restricted-v2-with-anyuid
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities: [CHOWN, FOWNER, SETUID, SETGID, DAC_OVERRIDE]
defaultAddCapabilities:
fsGroup:
  type: RunAsAny
groups: []
priority:
readOnlyRootFilesystem: false
requiredDropCapabilities:
- MKNOD
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
seccompProfiles:
- runtime/default
supplementalGroups:
  type: RunAsAny
users: []
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
Creating a ClusterRole to permit the use of such SCC to any actor binding to that cluster role:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ootb-templates-kaniko-restricted-v2-with-anyuid
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - ootb-templates-kaniko-restricted-v2-with-anyuid
  resources:
  - securitycontextconstraints
  verbs:
  - use
Binding the role to an actor (a ServiceAccount), as instructed in Set up developer namespaces to use installed packages:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: workload-kaniko-scc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ootb-templates-kaniko-restricted-v2-with-anyuid
subjects:
- kind: ServiceAccount
  name: default
With the SCC created and the ServiceAccount bound to the role that permits use of the SCC, OpenShift admits the pods created to run Kaniko and build the container images.
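Assuming the three manifests above are saved as scc.yaml, clusterrole.yaml, and rolebinding.yaml (hypothetical file names), applying and verifying them could look like:

```shell
# Apply the SCC, ClusterRole, and RoleBinding defined above.
oc apply -f scc.yaml -f clusterrole.yaml -f rolebinding.yaml

# Check which subjects may use the SCC; the bound ServiceAccount
# should appear in the output.
oc adm policy who-can use securitycontextconstraints \
  ootb-templates-kaniko-restricted-v2-with-anyuid
```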
Note: Such restrictions are due to well-known limitations in how Kaniko performs the image builds, and there is currently no solution. For more information, see kaniko#105.