This topic describes how VMware Tanzu Application Service for VMs (TAS for VMs) secures the containers that host app instances on Linux.
For an overview of other TAS for VMs security features, see TAS for VMs Security.
The sections in this topic provide the following information:
Container Mechanics provides an overview of container isolation.
Inbound and Outbound Traffic from TAS for VMs provides an overview of container networking and describes how TAS for VMs admins customize container network traffic rules for their deployment.
Container Security describes how TAS for VMs secures containers by running app instances in unprivileged containers and by hardening them.
Each instance of an app deployed to TAS for VMs runs within its own self-contained environment, a Garden container. This container isolates processes, memory, and the filesystem using operating system features and the characteristics of the virtual and physical infrastructure where TAS for VMs is deployed. For more information about Garden containers, see Garden.
TAS for VMs achieves container isolation by namespacing kernel resources that would otherwise be shared. The intended level of isolation prevents multiple containers on the same host from detecting each other. Every container includes a private root filesystem, a Process ID (PID) namespace, a network namespace, and a mount namespace.
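For example, the namespaces that apply to a process can be inspected from inside a running app container. The following Python sketch is illustrative only and is not part of TAS for VMs; it lists the namespace links that the Linux kernel exposes under /proc/self/ns, which inside a Garden container point to namespaces distinct from the host's.

    # Illustrative sketch: list the Linux namespaces of the current process.
    # Inside an unprivileged Garden container, entries such as pid, net, mnt,
    # and user refer to namespaces that are distinct from the host's.
    import os

    for ns in sorted(os.listdir("/proc/self/ns")):
        # Each entry is a symlink such as "pid:[4026531836]"; the number
        # identifies the namespace the process belongs to.
        print(ns, "->", os.readlink(f"/proc/self/ns/{ns}"))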
TAS for VMs creates container filesystems using the Garden RootFS (GrootFS) tool. It stacks the following using OverlayFS:
A read-only base filesystem: This filesystem has the minimal set of operating system packages and Garden-specific modifications common to all containers. Containers can share the same read-only base filesystem because all writes are applied to the read-write layer.
A container-specific read-write layer: This layer is unique to each container and its size is limited by XFS project quotas. The quotas prevent the read-write layer from overflowing into unallocated space.
For more information about GrootFS, see Garden RootFS (GrootFS) in Garden.
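The layering behaves like copy-on-write: reads fall through to the shared read-only base unless the container has written its own copy of a file, and all writes land in the container-specific read-write layer. The following Python toy model illustrates that lookup order only; it is not how GrootFS is implemented.

    # Toy model of overlay-style lookup: a shared read-only base layer plus a
    # per-container read-write layer. Illustrative only; not how GrootFS works
    # internally.
    BASE_LAYER = {"/etc/hosts": "127.0.0.1 localhost", "/bin/sh": "<binary>"}

    class ContainerFS:
        def __init__(self, base):
            self.base = base      # shared read-only layer, never modified
            self.rw_layer = {}    # unique to this container, quota-limited

        def write(self, path, data):
            self.rw_layer[path] = data        # writes never touch the base

        def read(self, path):
            # The read-write layer shadows the read-only base.
            return self.rw_layer.get(path, self.base[path])

    c1, c2 = ContainerFS(BASE_LAYER), ContainerFS(BASE_LAYER)
    c1.write("/etc/hosts", "127.0.0.1 localhost app1")
    print(c1.read("/etc/hosts"))  # container 1 sees its own copy
    print(c2.read("/etc/hosts"))  # container 2 still sees the shared base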
Resource control is managed using Linux control groups (cgroups). Associating each container with its own cgroup limits the amount of memory that the container can use. Linux cgroups also ensure that each container receives a fair share of CPU relative to the CPU shares of the other containers.
Note: TAS for VMs does not support a Red Hat Enterprise Linux OS stemcell. This is due to an inherent security issue with the way Red Hat handles user namespacing and container isolation.
Each container is placed in its own cgroup. Cgroups make each container use a fair share of CPU relative to the other containers. This prevents oversubscription on the host VM, where one or more containers hog the CPU and leave no computing resources for the others.
The way cgroups allocate CPU time is based on shares. CPU shares do not work as direct percentages of total CPU usage. Instead, a share is relative, within a given time window, to the shares held by the other containers. The total amount of CPU that can be divided among the cgroups is whatever is left over by other processes that run on the host VM.
Generally, cgroups offer two mechanisms for limiting CPU usage: CPU affinity and CPU bandwidth. TAS for VMs uses CPU bandwidth.
CPU affinity consists of binding a cgroup to certain CPU cores. The actual amount of CPU cycles that can be consumed by the cgroup is thus limited to what is available on the bound CPU cores.
CPU bandwidth sets the weight of a cgroup with the process scheduler. The process scheduler divides the available CPU cycles among cgroups depending on the shares held by each cgroup, relative to the shares held by the others. For example, consider two cgroups, one holding two shares and one holding four. Assuming the process scheduler gets to administer 60 CPU cycles, the first cgroup gets 20 of those cycles, as it holds one third of the overall shares. Similarly, the second cgroup gets 40 cycles, as it holds two thirds of the shares.
The percentage of the total CPU a cgroup receives is recalculated regularly as the CPU demands of the various containers fluctuate. Specifically, the percentage of CPU cycles a cgroup gets can be calculated by dividing the cpu.shares it holds by the sum of the cpu.shares of all cgroups that are currently doing CPU activity, as shown in the following calculation:

process_cpu_share_percent = cpu.shares / sum_cpu_shares * 100
The actual number of shares a cgroup holds can be read from the cpu.shares file of the cgroup configuration pseudo-filesystem, available in the container at /sys/fs/cgroup/cpu. The number of shares given to an app's cgroup depends on the amount of memory the app declares that it needs in its manifest. TAS for VMs scales the number of allocated shares linearly with the amount of memory.
The following algorithm is used to determine the number of CPU shares for an app:
process_cpu.shares = app_memory_in_mb + 32
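As a rough check of these two formulas, the following Python sketch computes the shares for two hypothetical apps and the resulting CPU percentages while both are busy. The memory sizes are illustrative, and inside a container the actual value can be read from the /sys/fs/cgroup/cpu/cpu.shares file mentioned above.

    # Sketch of the two formulas above. Memory sizes are illustrative; inside a
    # container the real value is in /sys/fs/cgroup/cpu/cpu.shares (cgroup v1).

    def cpu_shares(app_memory_in_mb):
        # process_cpu.shares = app_memory_in_mb + 32
        return app_memory_in_mb + 32

    def cpu_share_percent(own_shares, active_shares):
        # process_cpu_share_percent = cpu.shares / sum_cpu_shares * 100
        return own_shares / sum(active_shares) * 100

    app_a = cpu_shares(1024)  # 1 GB app   -> 1056 shares
    app_b = cpu_shares(256)   # 256 MB app ->  288 shares
    print(cpu_share_percent(app_a, [app_a, app_b]))  # ~78.6% while both are busy
    print(cpu_share_percent(app_b, [app_a, app_b]))  # ~21.4% while both are busy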
The next example illustrates how the distribution changes as processes become active and idle. Consider three processes, P1, P2, and P3, which are assigned cpu.shares of 5, 20, and 30, respectively.
P1 is active, while P2 and P3 require no CPU, so P1 may use the whole CPU. When P2 joins in and does some actual work, such as serving an incoming request, the CPU is divided between P1 and P2 according to their shares: P1 gets 5 / (5 + 20) = 20% and P2 gets 80%.
At some point, process P3 joins in. The distribution is then recalculated: P1 gets 5 / 55, or about 9%, P2 gets 20 / 55, or about 36%, and P3 gets 30 / 55, or about 55%.
Should P1 become idle, the shares are recalculated between P2 and P3: P2 gets 20 / 50 = 40% and P3 gets 60%.
If P3 finishes or becomes idle, another recalculation is performed and P2 can consume all of the CPU.
In summary, the cgroup shares are the minimum guaranteed CPU share that the process can get. This limitation becomes effective only when processes on the same host compete for resources.
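The recalculations in the P1, P2, and P3 example can be reproduced with a short Python sketch. The share values are the ones from the example, and only cgroups that are actively using CPU are counted.

    # Reproduces the P1/P2/P3 example: only cgroups that are actively using
    # CPU are counted when the scheduler divides cycles.
    shares = {"P1": 5, "P2": 20, "P3": 30}

    def distribution(active):
        total = sum(shares[p] for p in active)
        return {p: round(shares[p] / total * 100, 1) for p in active}

    print(distribution(["P1"]))              # {'P1': 100.0}
    print(distribution(["P1", "P2"]))        # {'P1': 20.0, 'P2': 80.0}
    print(distribution(["P1", "P2", "P3"]))  # {'P1': 9.1, 'P2': 36.4, 'P3': 54.5}
    print(distribution(["P2", "P3"]))        # {'P2': 40.0, 'P3': 60.0}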
Learn about container networking and how TAS for VMs admins customize container network traffic rules for their deployment.
A host VM has a single IP address. If you configure the deployment with the cluster on a VLAN, as VMware recommends, then all traffic goes through the following levels of network address translation, as shown in the diagram below.
Inbound requests flow from the load balancer through the router to the host Diego Cell, then into the app container. The router determines which app instance receives each request.
Outbound traffic flows from the app container to the Diego Cell, then to the gateway on the Diego Cell’s virtual network interface. Depending on your IaaS, this gateway may be a NAT to external networks.
The networking diagram shows the following:
DMZ on left side
Inbound requests go to Load Balancer
Outbound traffic comes from NAT
Cloud Foundry VLAN (on right side):
Load balancer goes to router and to the app container inside the Diego Cell
App container response goes to NAT
Admins configure rules to govern container network traffic. These rules determine how containers send traffic outside of TAS for VMs and how they receive traffic from external networks, such as the Internet. The rules can prevent system access from external networks and between internal components, and they determine whether apps can establish connections over the virtual network interface.
Admins configure these rules at two levels:
Application Security Groups (ASGs) apply network traffic rules at the container level; a minimal example rule appears after this list. For information about creating and configuring ASGs, see App Security Groups.
Container-to-container networking policies determine app-to-app communication. Within TAS for VMs, apps can communicate directly with each other, but the containers are isolated from networks outside TAS for VMs. For information about configuring container-to-container network policies, see Configuring Container-to-Container Networking.
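The following Python sketch writes out a minimal ASG rules file of the kind referenced above. The destination subnet, port, and file name are illustrative only; the rule fields (protocol, destination, ports, description) follow the ASG JSON rule format described in App Security Groups.

    # Illustrative sketch: generate a minimal ASG rules file. The subnet, port,
    # and file name are examples only.
    import json

    rules = [
        {
            "protocol": "tcp",
            "destination": "10.0.11.0/24",
            "ports": "443",
            "description": "Allow outbound HTTPS to an internal service subnet",
        }
    ]

    with open("example-asg.json", "w") as f:
        json.dump(rules, f, indent=2)

An admin could then create and bind the group with the cf CLI, for example using cf create-security-group and cf bind-security-group.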
TAS for VMs secures containers through the following measures:
Running app instances in unprivileged containers by default. For more information, see Types.
Hardening containers by limiting functionality and access rights. For more information, see Hardening.
Allowing admins to configure ASGs to block outbound connections from app containers. For information about creating and configuring ASGs, see App Security Groups.
Garden has two container types: unprivileged and privileged. Currently, TAS for VMs runs all app instances and staging tasks in unprivileged containers by default. This measure increases security by eliminating the threat of root escalation inside the container.
TAS for VMs mitigates against container breakout and denial of service attacks in the following ways:
TAS for VMs uses the full set of Linux namespaces - IPC, Network, Mount, PID, User, UTS - to provide isolation between containers running on the same host. The User namespace is not used for privileged containers. For more information about Linux namespaces, see namespaces - overview of Linux namespaces in the Ubuntu documentation.
In unprivileged containers, Ops Manager maps UID/GID 0 (root) inside the container user namespace to a different UID/GID on the host to prevent an app from inheriting UID/GID 0 on the host if it breaks out of the container. UID/GID 0 inside the container namespace maps to MAX_UID-1 outside of the container namespace.
TAS for VMs mounts /proc and /sys as read-only inside containers.
TAS for VMs disallows dmesg access for unprivileged users and all users in unprivileged containers.
TAS for VMs uses chroot when importing Docker images from Docker registries.
TAS for VMs establishes a container-specific overlay filesystem mount. TAS for VMs uses pivot_root to move the root filesystem into this overlay, in order to isolate the container from the host system's filesystem. For more information about pivot_root, see pivot_root - change the root filesystem in the Ubuntu documentation.
TAS for VMs does not call any binary or script inside the container filesystem, in order to eliminate any dependencies on scripts and binaries inside the root filesystem.
TAS for VMs avoids side-loading binaries in the container through bind mounts or other methods. Instead, it re-executes the same binary by reading it from /proc/self/exe whenever it needs to run a binary in a container.
TAS for VMs establishes a virtual ethernet pair for each container for network traffic. For more information, see Container Network Traffic. The virtual ethernet pair has the following features:
TAS for VMs applies disk quotas using container-specific XFS quotas with the specified disk-quota capacity.
TAS for VMs applies a total memory usage quota through the memory cgroup and destroys the container if the memory usage exceeds the quota.
TAS for VMs applies a fair-use limit to CPU usage for processes inside the container through the cpu.shares control group.
TAS for VMs allows admins to rate limit the maximum bandwidth consumed by single-app containers by configuring the rate and burst properties on the silk-cni job.
TAS for VMs limits access to devices using cgroups but explicitly allows the following safe device nodes:
/dev/full
/dev/fuse
/dev/null
/dev/ptmx
/dev/pts/*
/dev/random
/dev/tty
/dev/tty0
/dev/tty1
/dev/urandom
/dev/zero
/dev/tap
/dev/tun
TAS for VMs drops the following Linux capabilities for all container processes. Every dropped capability limits the actions the root user can perform.
CAP_DAC_READ_SEARCH
CAP_LINUX_IMMUTABLE
CAP_NET_BROADCAST
CAP_NET_ADMIN
CAP_IPC_LOCK
CAP_IPC_OWNER
CAP_SYS_MODULE
CAP_SYS_RAWIO
CAP_SYS_PTRACE
CAP_SYS_PACCT
CAP_SYS_BOOT
CAP_SYS_NICE
CAP_SYS_RESOURCE
CAP_SYS_TIME
CAP_SYS_TTY_CONFIG
CAP_LEASE
CAP_AUDIT_CONTROL
CAP_MAC_OVERRIDE
CAP_MAC_ADMIN
CAP_SYSLOG
CAP_WAKE_ALARM
CAP_BLOCK_SUSPEND
CAP_SYS_ADMIN for unprivileged containers
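To observe the effect of these dropped capabilities from inside an app container, the capability bitmasks of a process can be read from /proc/self/status. The following Python sketch is illustrative only; the hex masks it prints can be translated into capability names with a tool such as capsh --decode.

    # Illustrative sketch: print the capability bitmasks of the current process
    # as reported by the kernel. Inside an unprivileged app container, the
    # effective set (CapEff) excludes the capabilities listed above.
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(("CapPrm", "CapEff", "CapBnd")):
                print(line.strip())  # hex mask; decode with capsh --decode=<mask>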