This topic explains how you can use a Kubernetes profile in Tanzu Kubernetes Grid Integrated Edition (TKGI) to override the default Identity Provider (IDP).
The TKGI UAA pane configures a default IDP for all the clusters that TKGI creates. You can use a Kubernetes profile to override this default IDP.
The Kubernetes profile applies a custom OIDC-compatible IDP to a cluster by deploying an OIDC connector as a service pod on the cluster.
The following diagram provides an overview of how this configuration works:
[Diagram: a cluster at cluster.example.com connects to the dex service at dex.example.com:32000, running on a cluster at dex-host.example.com, to authenticate the user whenever a user requests an app hosted on the cluster.]

The Kubernetes profile in this topic deploys dex as an OIDC provider, but you can use any OIDC service.
For more information and other uses of Kubernetes profiles, see Using Kubernetes Profiles.
To use UAA as your OIDC provider, the certificate in the Certificate to secure the TKGI API field on the TKGI tile must be a proper certificate chain and have a SAN field. For more information, see Configuring TKGI API in the Installing TKGI topic for your IaaS.
To configure a custom OIDC provider for TKGI clusters, complete the following:
To configure dex as an OIDC provider for an LDAP directory:
Create a cluster in TKGI for installing dex as a pod:
```
tkgi create-cluster dex -p small -e dex-host.example.com
```
Run `tkgi cluster` for the cluster and record its Kubernetes Master IP address.
For example:
```
$ tkgi cluster dex

TKGI Version:             1.9.0-build.1
Name:                     dex
K8s Version:              1.24.3
Plan Name:                small
UUID:                     dbe1d880-478f-4d0d-bb2e-0da3d9641f0d
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   dex-host.example.com
Kubernetes Master Port:   8443
Worker Nodes:             1
Kubernetes Master IP(s):  10.0.11.11
Network Profile Name:
Kubernetes Profile Name:
Tags:
```
Add the Kubernetes Master IP address to your local `/etc/hosts` file.
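For example, assuming the Kubernetes Master IP and cluster hostname from the output above:

```
10.0.11.11 dex-host.example.com
```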
Populate your `~/.kube/config` with a context for dex:

```
tkgi get-credentials dex
```
Switch to the admin context of the dex cluster:

```
kubectl config use-context dex
```
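To confirm that the switch took effect, you can check the active context:

```
kubectl config current-context
```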
To deploy a dex workload on a Kubernetes cluster, follow the steps in Deploying dex on Kubernetes in the dex GitHub repo.
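That guide mounts a dex configuration into the pod. The following is a minimal sketch of what that configuration might look like for an LDAP directory; the LDAP host, bind DN, search base, TLS mount paths, and client secret are illustrative placeholders, not values defined in this topic, while the issuer URL and `example-app` client ID match the values used later in this topic:

```
issuer: https://dex.example.com:32000
storage:
  type: kubernetes
  config:
    inCluster: true
web:
  https: 0.0.0.0:5556
  tlsCert: /etc/dex/tls/tls.crt   # path where the TLS secret is mounted (placeholder)
  tlsKey: /etc/dex/tls/tls.key
connectors:
- type: ldap
  id: ldap
  name: LDAP
  config:
    host: ldap.example.com:636          # placeholder LDAP host
    bindDN: cn=admin,dc=example,dc=com  # placeholder service account
    bindPW: ADMIN-PASSWORD
    userSearch:
      baseDN: ou=people,dc=example,dc=com
      filter: "(objectClass=person)"
      username: mail                    # users log in with email, matching oidc-username-claim
      idAttr: DN
      emailAttr: mail
      nameAttr: cn
staticClients:
- id: example-app                       # client ID used in the Kubernetes profile below
  redirectURIs:
  - http://127.0.0.1:5555/callback      # default callback of the dex example-app
  name: Example App
  secret: EXAMPLE-APP-SECRET
```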
To set up `/etc/hosts` and TLS so that clusters can access dex securely:
Add the `/etc/hosts` entry for the public IP and the hostname `dex.example.com` on your local workstation. This lets you retrieve a token to access your OIDC-profile cluster later.

```
10.0.11.11 dex.example.com
```
To generate TLS assets for the dex deployment, complete the steps in Generate TLS assets in the dex documentation.
To add the generated TLS assets to the cluster as a secret, complete the steps in Create cluster secrets in the dex documentation.
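A sketch of that step, assuming the generated assets are at `ssl/cert.pem` and `ssl/key.pem` as in the dex documentation's generation script:

```
kubectl create secret tls dex.example.com.tls --cert=ssl/cert.pem --key=ssl/key.pem
```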
To run dex as a local service within a pod and expose its endpoint through an IP address:
On a Kubernetes cluster, deploy dex using the example YAML file linked above.
Wait for the deployment to succeed.
Expose the dex deployment as a service named `dex-service`:

```
kubectl expose deployment dex --type=LoadBalancer --name=dex-service
```
For example:
```
$ kubectl expose deployment dex --type=LoadBalancer --name=dex-service
service/dex-service exposed
```
This should create a dex service with a public IP address that clusters can use as an OIDC issuer URL. Retrieve the IP address by running:
```
kubectl get services dex-service
```
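The output looks similar to the following; apart from the service name and the EXTERNAL-IP used below, all values here are illustrative and depend on your deployment:

```
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE
dex-service   LoadBalancer   10.100.200.1   35.222.29.10   5556:32000/TCP   2m
```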
Add the IP of the dex service to your `/etc/hosts`:

```
35.222.29.10 dex.example.com
```
This entry maps the service IP to the hostname `dex.example.com`, which the dex binary expects as the `issuer_url` and for TLS handshakes. Clusters reach the OIDC issuer at `https://dex.example.com:32000`.

To create a Kubernetes profile that lets a cluster's `kube-apiserver` connect to the dex service:
Create a Kubernetes profile `/tmp/profile.json` containing your custom OIDC settings under the `kube-apiserver` component.
For example:
```
$ cat /tmp/profile.json
{
  "name": "oidc-config",
  "description": "Kubernetes profile with OIDC configuration",
  "customizations": [
    {
      "component": "kube-apiserver",
      "arguments": {
        "oidc-client-id": "example-app",
        "oidc-issuer-url": "https://dex.example.com:32000",
        "oidc-username-claim": "email"
      },
      "file-arguments": {
        "oidc-ca-file": "/tmp/oidc-ca.pem"
      }
    }
  ]
}
```
Of all the supported `kube-apiserver` flags, the following are specific to OIDC:

In the `arguments` block:
- `oidc-issuer-url`: Set this to `"https://dex.example.com:32000"`.
- `oidc-client-id`
- `oidc-username-claim`: Set this to `"email"` for testing with the example app below.
- `oidc-groups-claim`

In the `file-arguments` block:
- `oidc-ca-file`: Set this to a path in the local file system that contains a CA certificate file. The certificate must be a proper certificate chain and have a SAN field.

For more information on `kube-apiserver` flags, see kube-apiserver in the Kubernetes documentation.
Create the profile:
```
tkgi create-kubernetes-profile /tmp/profile.json
```
In the example above, the file path `/tmp/oidc-ca.pem` points to a CA certificate on the local file system, and the `tkgi create-kubernetes-profile` command sends this certificate to the API server when it creates the profile.
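If your TKGI CLI supports listing profiles as described in Using Kubernetes Profiles, you can confirm that the profile was created:

```
tkgi kubernetes-profiles
```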
To create a cluster using the Kubernetes profile created above:
Run the following:
```
tkgi create-cluster cluster-with-custom-oidc -e cluster.example.com -p small --kubernetes-profile oidc-config
```
Note: Use only lowercase characters when naming your cluster if you manage your clusters with Tanzu Mission Control (TMC). Clusters with names that include an uppercase character cannot be attached to TMC.
Confirm the cluster has custom OIDC settings from the profile.
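For example, based on the `tkgi cluster` output format shown earlier, the Kubernetes Profile Name field shows the profile applied to the cluster (output abbreviated):

```
$ tkgi cluster cluster-with-custom-oidc

Name:                     cluster-with-custom-oidc
Kubernetes Profile Name:  oidc-config
```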
To test that the cluster uses the OIDC provider to control access, install an app, generate an ID token, and test the cluster.
You can use an example app from the dex repo or test with a full-fledged application, such as Gangway, instead of the example app.
To test that the cluster uses the OIDC provider:
To install an example app, complete the steps to install the `example-app` in Logging into the cluster in the dex documentation.
Run the dex example app:

```
./bin/example-app --issuer https://dex.example.com:32000 --issuer-root-ca /tmp/ca.pem
```

The example app requests a token that includes the `email` scope. To fetch the token, complete the steps to generate an ID token in Logging into the cluster in the dex documentation.
Log in using the Log in with Email option and enter the email and password of an account in your OIDC IDP.
A page appears listing the ID Token, Access Token, Refresh Token, ID Token Issuer (`iss` claim), and other information.
Wait for the token to be generated.
Edit your `.kube/config` file to add a new context for the test user:
```
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CA-CERT
    server: CLUSTER-URL
  name: TEST-CLUSTER
contexts:
- [EXISTING-CONTEXTS]
- context:
    cluster: TEST-CLUSTER
    user: TEST-USER
  name: TEST-CONTEXT
current-context: TEST-CONTEXT
kind: Config
preferences: {}
users:
- [EXISTING-USERS]
- name: TEST-USER
  user:
    token: ID-TOKEN
```
Where:
- `CA-CERT` is your CA certificate.
- `CLUSTER-URL` is the address of the test cluster, such as `https://cluster.example.com:8443`.
- `TEST-CLUSTER` is the name of the test cluster, such as `cluster-with-custom-oidc`.
- `TEST-USER` is the test account username, such as `alana`.
- `TEST-CONTEXT` is a name you create for the new context, such as `cluster-with-custom-oidc-ldap-alana`.
- `ID-TOKEN` is the ID Token retrieved by the `example-app` above.

Include the `cluster.server` and `user.token` values retrieved using the example app.
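Alternatively, a sketch that adds the same user and context with `kubectl` instead of editing the file by hand, using the same placeholder values:

```
kubectl config set-credentials TEST-USER --token=ID-TOKEN
kubectl config set-context TEST-CONTEXT --cluster=TEST-CLUSTER --user=TEST-USER
```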
Create a `ClusterRole` YAML file that grants permissions to get, watch, and list services and pods:
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader-clusterRolebinding
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods", "services"]
  verbs: ["get", "watch", "list"]
```
Run `kubectl apply` or `kubectl create` to pass the `ClusterRole` spec file to the API server:

```
kubectl apply -f ClusterRole.yml
```
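To verify the rules the role grants, you can describe it:

```
kubectl describe clusterrole pod-reader-clusterRolebinding
```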
Create a `ClusterRoleBinding` YAML file that binds the `ClusterRole` to the test user.
For example:
```
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows "[email protected]" to read pods and services cluster-wide.
kind: ClusterRoleBinding
metadata:
  name: read-pods-clusterRolebinding
subjects:
- kind: User
  name: [email protected] # Name is case sensitive and must match the OIDC username claim (email)
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole # this must be Role or ClusterRole
  name: pod-reader-clusterRolebinding # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
```
Run `kubectl apply` or `kubectl create` to pass the `ClusterRoleBinding` spec file to the API server:

```
kubectl apply -f ClusterRoleBinding.yml
```
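As the admin, you can pre-check the binding with impersonation; the email here follows the example subject above:

```
kubectl auth can-i get pods --as=alana@example.com
```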
Confirm the test user can run the following:
```
kubectl get pods
```
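For example, using the test context created above:

```
kubectl --context=cluster-with-custom-oidc-ldap-alana get pods
```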
If the test user can run `kubectl get pods`, the cluster is successfully authenticating the user by connecting to the dex OIDC provider.