This topic explains how you can use a Kubernetes profile in Tanzu Kubernetes Grid Integrated Edition (TKGI) to override the default Identity Provider (IDP).
The TKGI UAA pane configures a default IDP for all the clusters that TKGI creates. You can use a Kubernetes profile to override this default IDP.
The Kubernetes profile applies a custom OIDC-compatible IDP to a cluster by deploying an OIDC connector as a service pod on the cluster.
The following diagram provides an overview of how this configuration works: whenever a user requests an app hosted on the cluster (cluster.example.com), the cluster connects to the dex service at dex.example.com:32000, hosted on a dedicated cluster (dex-host.example.com), to authenticate the user.
The Kubernetes profile in this topic deploys dex as an OIDC provider, but you can use any OIDC service.
For more information and other uses of Kubernetes profiles, see Using Kubernetes Profiles.
To use UAA as your OIDC provider, the TKGI API Certificate to secure the TKGI API field on the TKGI tile must be a proper certificate chain and have a SAN field. For more information, see Configuring TKGI API in the Installing TKGI topic for your IaaS.
To configure a custom OIDC provider for TKGI clusters, complete the following:
To configure dex as an OIDC provider for an LDAP directory:
Create a cluster in TKGI for installing dex as a pod:
tkgi create-cluster dex -p small -e dex-host.example.com
Run tkgi cluster for the cluster and record its Kubernetes Master IP address.
For example:
$ tkgi cluster dex
TKGI Version: 1.9.0-build.1
Name: dex
K8s Version: 1.24.3
Plan Name: small
UUID: dbe1d880-478f-4d0d-bb2e-0da3d9641f0d
Last Action: CREATE
Last Action State: succeeded
Last Action Description: Instance provisioning completed
Kubernetes Master Host: dex-host.example.com
Kubernetes Master Port: 8443
Worker Nodes: 1
Kubernetes Master IP(s): 10.0.11.11
Network Profile Name:
Kubernetes Profile Name:
Tags:
Add the Kubernetes Master IP address to your local /etc/hosts file.
Populate your ~/.kube/config with context for dex:
tkgi get-credentials dex
Switch to the admin context of the dex cluster:
kubectl config use-context dex
To deploy a dex workload on a Kubernetes cluster, follow the steps in Deploying dex on Kubernetes in the dex GitHub repo.
Create a dex.yaml file using the format in Example dex YAML File below.
To set up /etc/hosts and TLS so that clusters can access dex securely:
Add the /etc/hosts entry for the public IP and the hostname dex.example.com on your local workstation. This lets you retrieve a token to access your OIDC-profile cluster later.
10.0.11.11 dex.example.com
To generate TLS assets for the dex deployment, complete the steps in Generate TLS assets in the dex documentation.
To add the generated TLS assets to the cluster as a secret, complete the steps in Create cluster secrets in the dex documentation.
To run dex as a local service within a pod and expose its endpoint via an IP address:
On a Kubernetes cluster, deploy dex using the example YAML file linked above.
Wait for the deployment to succeed.
Expose the dex deployment as a service named dex-service:
kubectl expose deployment dex --type=LoadBalancer --name=dex-service
For example:
$ kubectl expose deployment dex --type=LoadBalancer --name=dex-service
service/dex-service exposed
This should create a dex service with a public IP address that clusters can use as an OIDC issuer URL. Retrieve the IP address by running:
kubectl get services dex-service
Add the IP of the dex service to your /etc/hosts:
35.222.29.10 dex.example.com
This entry maps the service IP to dex.example.com, which the dex binary expects as the issuer_url and for TLS handshakes. The resulting OIDC issuer URL is https://dex.example.com:32000.
To create a Kubernetes profile that lets a cluster's kube-api-server connect to the dex service:
Create a Kubernetes profile /tmp/profile.json containing your custom OIDC settings under the kube-apiserver component.
For example:
$ cat /tmp/profile.json
{
"name": "oidc-config",
"description": "Kubernetes profile with OIDC configuration",
"customizations": [
{
"component": "kube-apiserver",
"arguments": {
"oidc-client-id": "example-app",
"oidc-issuer-url": "https://dex.example.com:32000",
"oidc-username-claim": "email"
},
"file-arguments": {
"oidc-ca-file": "/tmp/oidc-ca.pem"
}
}
]
}
Of all the supported kube-apiserver flags, the following are specific to OIDC:
* In the arguments block:
  * oidc-issuer-url: Set this to "https://dex.example.com:32000".
  * oidc-client-id
  * oidc-username-claim: Set this to "email" for testing with the example app below.
  * oidc-groups-claim
* In the file-arguments block:
  * oidc-ca-file: Set this to a path in the local file system that contains a CA certificate file. The certificate must be a proper certificate chain and have a SAN field.
For more information on kube-apiserver flags, see kube-apiserver in the Kubernetes documentation.
Create the profile:
tkgi create-kubernetes-profile /tmp/profile.json
In the example above, the file path /tmp/oidc-ca.pem points to a CA certificate on the local file system, and the tkgi create-kubernetes-profile command sends this certificate to the API server when it creates the profile.
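Before creating the profile, you can sanity-check the JSON locally. The checks below are an illustrative sketch using only the Python standard library; they are not part of the TKGI CLI:

```python
import json

# The example profile JSON from above; in practice, read it from /tmp/profile.json.
profile = json.loads("""
{
  "name": "oidc-config",
  "description": "Kubernetes profile with OIDC configuration",
  "customizations": [
    {
      "component": "kube-apiserver",
      "arguments": {
        "oidc-client-id": "example-app",
        "oidc-issuer-url": "https://dex.example.com:32000",
        "oidc-username-claim": "email"
      },
      "file-arguments": {
        "oidc-ca-file": "/tmp/oidc-ca.pem"
      }
    }
  ]
}
""")

# The OIDC flags must sit under a kube-apiserver customization,
# and the issuer URL must use HTTPS.
apiserver = next(c for c in profile["customizations"]
                 if c["component"] == "kube-apiserver")
assert apiserver["arguments"]["oidc-issuer-url"].startswith("https://")
assert "oidc-ca-file" in apiserver["file-arguments"]
print("profile OK:", profile["name"])
```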
To create a cluster using the Kubernetes profile created above:
Run the following:
tkgi create-cluster cluster-with-custom-oidc -e cluster.example.com -p small --kubernetes-profile oidc-config
Note: Use only lowercase characters when naming your cluster if you manage your clusters with Tanzu Mission Control (TMC). Clusters with names that include an uppercase character cannot be attached to TMC.
Confirm the cluster has custom OIDC settings from the profile.
To test that the cluster uses the OIDC provider to control access, install an app, generate an ID token, and test the cluster.
You can use an example app from the dex repo or test with a full-fledged application, such as Gangway, instead of the example app.
To test that the cluster uses the OIDC provider:
To install an example app, complete the steps to install the example-app in Logging into the cluster in the dex documentation.
Run the dex example app:
./bin/example-app --issuer https://dex.example.com:32000 --issuer-root-ca /tmp/ca.pem
The example app requests a token with the email scope. To fetch the token, complete the steps to generate an ID token in Logging into the cluster in the dex documentation.
Log in using the Log in with Email option and enter the email and password of an account in your OIDC IDP.
A page appears listing the ID Token, Access Token, Refresh Token, ID Token Issuer (iss claim), and other information.
Wait for the token to be generated.
Edit your .kube/config file to add a new context for the test user:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: CA-CERT
server: CLUSTER-URL
name: TEST-CLUSTER
contexts:
- [EXISTING-CONTEXTS]
- context:
cluster: TEST-CLUSTER
user: TEST-USER
name: TEST-CONTEXT
current-context: TEST-CONTEXT
kind: Config
preferences: {}
users:
- [EXISTING-USERS]
- name: TEST-USER
user:
token: ID-TOKEN
Where:
* CA-CERT is your CA certificate.
* CLUSTER-URL is the address of the test service, such as https://cluster.example.com:8443.
* TEST-CLUSTER is the name of the test cluster, such as cluster-with-custom-oidc.
* TEST-USER is the test account username, such as alana.
* TEST-CONTEXT is a name you create for the new context, such as cluster-with-custom-oidc-ldap-alana.
* ID-TOKEN is the ID Token retrieved by the example-app above.
Include the cluster.server and user.token values retrieved using the example app.
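Before pasting the token into your kubeconfig, you can decode it locally to confirm that its iss and email claims match your dex configuration: an ID token is a JWT whose payload segment is base64url-encoded JSON. A minimal stdlib sketch; the sample token below is fabricated for illustration, so substitute the real ID-TOKEN:

```python
import base64
import json

def jwt_claims(token):
    # A JWT is header.payload.signature; the claims live in the payload segment.
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated token for illustration only; use the ID Token from the example app.
def fake_jwt(claims):
    seg = lambda d: base64.urlsafe_b64encode(json.dumps(d).encode()).decode().rstrip("=")
    return ".".join([seg({"alg": "RS256"}), seg(claims), "signature"])

token = fake_jwt({"iss": "https://dex.example.com:32000", "email": "user@example.com"})
claims = jwt_claims(token)
assert claims["iss"] == "https://dex.example.com:32000"
print(claims["email"])
```

The iss claim must exactly match the oidc-issuer-url in your Kubernetes profile, and the claim named by oidc-username-claim (email here) becomes the Kubernetes username.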
Create a ClusterRole YAML file that grants permissions to access services and pods in the default namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-reader-clusterRolebinding
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods", "services"]
verbs: ["get", "watch", "list"]
Run kubectl apply or kubectl create to pass the ClusterRole spec file to the kube controller:
kubectl apply -f ClusterRole.yml
Create a ClusterRoleBinding YAML file that applies the ClusterRole to the test user.
For example:
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "[email protected]" to read pods in the "default" namespace.
kind: ClusterRoleBinding
metadata:
name: read-pods-clusterRolebinding
subjects:
- kind: User
name: [email protected] # Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole #this must be Role or ClusterRole
name: pod-reader-clusterRolebinding # this must match the name of the Role or ClusterRole you wish to bind to
apiGroup: rbac.authorization.k8s.io
Run kubectl apply or kubectl create to pass the ClusterRoleBinding spec file to the kube controller:
kubectl apply -f ClusterRoleBinding.yml
Confirm the test user can run the following:
kubectl get pods
If the test user can run kubectl get pods, the cluster is successfully authenticating the user against the dex OIDC provider.
To create a dex YAML configuration file for connecting to an LDAP server:
If you do not have a TLS certificate, create a certificate, key, and secret:
Include DNS.1 = dex.example.com as an alt_names value, then run the gencert.sh script.
Create a secret using the newly created certificate and key:
kubectl create secret tls SECRET-NAME --key ssl/key.pem --cert ssl/cert.pem
Where SECRET-NAME is the secret name, for example dex.example.com.tls.
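If you prefer not to use the dex repo's gencert.sh script, a self-signed certificate with the required SAN can be produced directly with OpenSSL (1.1.1 or later for -addext). This is a sketch equivalent in spirit to the script's output, minus the separate CA; file paths mirror the secret command above:

```shell
# Generate a self-signed key and certificate for dex with a SAN entry.
mkdir -p ssl
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ssl/key.pem -out ssl/cert.pem \
  -subj "/CN=dex.example.com" \
  -addext "subjectAltName=DNS:dex.example.com"

# kube-apiserver rejects OIDC certificates without a SAN field, so confirm it is present.
openssl x509 -in ssl/cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```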
Use the following example configurations to create your dex YAML file:
To create a dex deployment configuration, use the following example configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: dex
name: dex
spec:
selector:
matchLabels:
app: dex
replicas: 3
template:
metadata:
labels:
app: dex
spec:
serviceAccountName: dex # This is created below
containers:
- image: quay.io/dexidp/dex:v2.10.0
name: dex
command: ["/usr/local/bin/dex", "serve", "/etc/dex/cfg/config.yaml"]
ports:
- name: https
containerPort: 5556
env:
- name: KUBERNETES_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: config
mountPath: /etc/dex/cfg
- name: tls
mountPath: /etc/dex/tls
volumes:
- name: config
configMap:
name: dex
items:
- key: config.yaml
path: config.yaml
- name: tls
secret:
secretName: SECRET-NAME
Where SECRET-NAME is the secret name, for example dex.example.com.tls.
To create a dex ConfigMap, refer to the following example configuration:
---
kind: ConfigMap
apiVersion: v1
metadata:
name: dex
data:
config.yaml: |
issuer: ISSUER-URL
storage:
type: kubernetes
config:
inCluster: true
web:
https: IP-ADDRESS
tlsCert: TLS-CERT-PATH
tlsKey: TLS-KEY-PATH
connectors:
- type: ldap
id: ldap
name: LDAP
config:
host: HOST-IP-ADDRESS
insecureNoSSL: true
insecureSkipVerify: true
bindDN: BIND-DN
bindPW: BIND-PASS
userSearch:
baseDN: BASE-DN-USER
username: cn
idAttr: uid
emailAttr: mail
nameAttr: cn
groupSearch:
baseDN: BASE-DN-GROUP
userMatchers:
# A user is a member of a group when their DN matches
# the value of a "member" attribute on the group entity.
- userAttr: DN
groupAttr: member
nameAttr: cn
oauth2:
skipApprovalScreen: true
staticClients:
- id: APP-ID
redirectURIs:
- 'APP-URL'
name: 'APP-NAME'
secret: CERT-SECRET
enablePasswordDB: true
staticPasswords:
- email: "USER-EMAIL"
# bcrypt hash of the string "password"
hash: "HASH"
username: "USER-NAME"
userID: "LDAP-USER-ID"
Where:
* ISSUER-URL is the issuer URL, for example https://11.111.11.111:11000.
* IP-ADDRESS is the host IP address, for example 0.0.0.0:1111.
* TLS-CERT-PATH is the path to your TLS certificate, for example /etc/dex/tls/tls.crt.
* TLS-KEY-PATH is the path to your TLS key, for example /etc/dex/tls/tls.key.
* HOST-IP-ADDRESS is the host IP address, for example 33.88.11.111.
* BIND-DN is the distinguished name, for example cn=admin,dc=ldap,dc=example,dc=com.
* BIND-PASS is the password for the provided cn user.
* BASE-DN-USER is the user search base distinguished name, for example dc=ldap,dc=example,dc=com.
* BASE-DN-GROUP is the group search base distinguished name, for example dc=ldap,dc=example,dc=com.
* APP-ID is the ID of a static client app, for example example-app.
* APP-URL is the URL for the static client app, for example http://127.0.0.1:5555/callback.
* APP-NAME is the name of the static client app, for example Example App.
* CERT-SECRET is the certificate secret.
* USER-EMAIL is the user email, for example [email protected].
* HASH is the bcrypt hash of the password.
* USER-NAME is the user name of the cn user.
* LDAP-USER-ID is the LDAP ID for the cn user.
To create a dex service account, refer to the following example configuration:
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: dex
name: dex
To create a dex ClusterRole, refer to the following example configuration:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: dex
rules:
- apiGroups: ["dex.coreos.com"] # API group created by dex
resources: ["*"]
verbs: ["*"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["create"] # To manage its own resources, dex must be able to create customresourcedefinitions
To create a dex ClusterRoleBinding, refer to the following example configuration:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: dex
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: dex
subjects:
- kind: ServiceAccount
name: dex # Service account assigned to the dex pod, created above
namespace: default # The namespace dex is running in
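The UPPER-CASE placeholders in the example configurations above can be filled in programmatically before applying them. A minimal sketch using Python's string.Template with $-style markers; this is a local convenience, not part of the TKGI or dex tooling, and only a fragment of the ConfigMap is shown:

```python
from string import Template

# Fragment of the dex ConfigMap with $-style placeholders standing in for
# the UPPER-CASE markers used in the examples above.
config_tmpl = Template("""\
issuer: $issuer_url
web:
  https: $ip_address
  tlsCert: /etc/dex/tls/tls.crt
  tlsKey: /etc/dex/tls/tls.key
""")

# Illustrative values only; substitute your own environment's settings.
rendered = config_tmpl.substitute(
    issuer_url="https://dex.example.com:32000",
    ip_address="0.0.0.0:5556",
)
print(rendered)
```

Template.substitute raises KeyError if any placeholder is left unfilled, which catches a half-edited configuration before it reaches the cluster.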