This topic describes how to customize HTTP/HTTPS proxies for individual VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) provisioned clusters.
TKGI applies your HTTP/HTTPS cluster proxies to traffic from the cluster’s Kubernetes and Docker processes, such as the Kubernetes API server, Kube Controller, Kubelet, and Docker daemon.
- To create or change a cluster's proxy configuration, see the procedure below.
- To view a cluster's current proxy configuration, see the BOSH manifest procedure below.
- To configure global HTTP/HTTPS proxies for TKGI on vSphere or AWS, see the global proxy configuration topics for your IaaS. Those topics also cover how the proxies work and how they can be useful.
To create a cluster with a custom proxy configuration:
1. Define the proxy settings in a configuration file, as described in Proxy Configuration Settings, below.
2. Pass the file location to the --config-file flag of tkgi create-cluster, as shown in the sketch below. See Creating Clusters for more information.
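For example, here is a minimal sketch of creating a cluster with a proxy configuration file. The cluster name, external hostname, plan name, and file path are placeholder values, not values from your environment:

# Create a cluster whose proxy settings come from proxy-config.json.
# All names below are examples; substitute your own values.
tkgi create-cluster my-cluster \
    --external-hostname my-cluster.example.com \
    --plan small \
    --config-file /tmp/proxy-config.json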
To change a cluster's proxy configuration:

1. Define the proxy settings you want to change in a configuration file, as described in Proxy Configuration Settings, below. To remove an existing proxy setting, set its value to {} (for an object) or "" (for a string) in the configuration file, as shown in the sketch after these steps.
Note: When you use tkgi update-cluster to update an existing cluster, the attached network profile must consist of only updatable settings.
2. Run the following command to update the cluster with the configuration file:

tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE-NAME

Where:
- CLUSTER-NAME is the name of the existing Kubernetes cluster.
- CONFIG-FILE-NAME is the path and filename of the configuration file you want to apply to the cluster.

WARNING: Update the configuration file only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
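For example, the following sketch clears a cluster's HTTPS proxy object and its no_proxy string using the empty values described above. The file path and cluster name are placeholders:

# Write a configuration file that clears the https_proxy object
# and the no_proxy string ("my-cluster" and the path are examples).
cat > /tmp/proxy-update.json <<'EOF'
{
  "https_proxy": {},
  "no_proxy": ""
}
EOF

tkgi update-cluster my-cluster --config-file /tmp/proxy-update.json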
To configure HTTP/HTTPS proxy settings for a TKGI cluster, you first define them in a cluster configuration JSON file on your local filesystem.
Proxy settings that you can configure are:
Setting | Description
---|---
http_proxy | HTTP proxy URL and credentials. This overrides the global HTTP Proxy settings in the TKGI tile > Networking pane.
https_proxy | HTTPS proxy URL and credentials. This overrides the global HTTPS Proxy settings in the TKGI tile > Networking pane.
no_proxy | Comma-separated list of IP addresses, CIDR ranges, and domains that bypass the proxies for internal communication. This interacts with the tile's global No Proxy setting based on the global_no_proxy_merge setting, below.
global_no_proxy_merge | Boolean value. With the default false setting, the no_proxy list above overrides the global No Proxy list set in the tile. Setting this to true merges the no_proxy list above with the global No Proxy list.
For example, the following configuration file overrides the http_proxy and https_proxy settings in the tile, and merges the no_proxy list here with the no_proxy list set in the tile:
{
"http_proxy":{
"url":"http://example.com",
"username":"admin",
"password":"admin"
},
"https_proxy":{
"url":"http://example.com",
"username":"admin",
"password":"admin"
},
"no_proxy":"127.0.0.1,localhost,*.example.com,198.51.100.0/24",
"global_no_proxy_merge":true
}
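Because tkgi reads this file as JSON, it can help to validate the syntax before applying it. One way, assuming the jq tool is installed and the file is saved at the example path /tmp/proxy-config.json:

# Pretty-print the file; jq exits nonzero if the JSON is malformed.
jq . /tmp/proxy-config.json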
Note: Cluster configuration files can also include settings for non-proxy features, such as enabling cluster access by group Managed Service Accounts (gMSAs). You combine all such settings into a single, general-purpose configuration file to pass to the --config-file flag.
You can see a cluster’s current proxy configuration by viewing its BOSH manifest:
1. To identify the names of your cluster deployments, run:

bosh deployments

Note: Cluster deployment names start with service-instance_.
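For example, you can filter the output to show only cluster deployments; a minimal sketch based on the naming convention in the note above:

# List BOSH deployments and keep only TKGI cluster deployments,
# whose names start with service-instance_.
bosh deployments | grep service-instance_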
2. To download the manifest for any cluster you want to view, run:

bosh -d DEPLOYMENT-NAME manifest > /tmp/YOUR-DEPLOYMENT-MANIFEST.yml

Where:
- DEPLOYMENT-NAME is the name of your Kubernetes cluster deployment.
- YOUR-DEPLOYMENT-MANIFEST is the name of your Kubernetes cluster deployment manifest.

3. Search the manifest for proxy to see the cluster's proxy settings under jobs.properties.env, for example:
jobs:
- name: docker
release: kubo
properties:
bridge: cni0
default_ulimits:
- nofile=1048576
env:
http_proxy: ""
https_proxy: ""
no_proxy: .internal,.svc,.svc.cluster.local,.svc.cluster,api.pks.local,10.100.200.0/24,10.200.0.0/16,88.0.0.0/24,192.168.111.0/24,192.168.139.1,192.168.160.0/24,nsxmanager.pks.vmware.local
flannel: false
ip_masq: false
iptables: false
live_restore: true
log_level: error
log_options:
- max-size=128m
- max-file=2
storage_driver: overlay2
store_dir: /var/vcap/store
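Rather than scanning the whole manifest by eye, you can search the downloaded file directly; a minimal sketch, assuming the example manifest path used above:

# Print each line mentioning a proxy setting, with line numbers
# and two lines of trailing context.
grep -n -i -A 2 'proxy' /tmp/YOUR-DEPLOYMENT-MANIFEST.yml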