The container network interface (CNI) plug-in, Open vSwitch (OVS), and the NCP Docker image must be installed on the OpenShift nodes. The installation is performed by running an Ansible playbook.
This step is not necessary if you install NCP and OpenShift using a single playbook. See Install NCP and OpenShift Using a Single Playbook.
The playbook contains instructions to configure NSX-T resources for the nodes. You can also configure the NSX-T Data Center resources manually as described in Setting Up NSX-T Resources. The parameter perform_nsx_config indicates whether to configure the resources when the playbook is run.
- Update the parameter values in roles/ncp_prep/default/main.yaml and roles/nsx_config/default/main.yaml, including the URLs from which the CNI plug-in RPM, the OVS RPM, and its corresponding kernel module RPM can be downloaded. In addition, uplink_port is the name of the uplink port VNIC on the node VM. The remaining variables pertain to the NSX-T Data Center management plane configuration.
Parameters that need to be specified:
perform_nsx_config: whether to perform the resource configuration. Set it to false if the configuration will be done manually; the nsx_config script is then not run.
nsx_manager_ip: IP address of NSX Manager
nsx_edge_cluster_name: name of the Edge Cluster to be used by the tier-0 router
nsx_transport_zone_name: name of the overlay Transport Zone
os_node_name_list: comma-separated list of node names
For example, node1,node2,node3
subnet_cidr: CIDR of the subnet from which the administrator assigns an IP address to br-int on the node
vc_host: IP address of vCenter Server
vc_user: user name of vCenter Server administrator
vc_password: password of vCenter Server administrator
vms: comma-separated list of VM names. The order must match os_node_name_list.
The following parameters have default values. You can modify them as needed.
nsx_t0_router_name: name of tier-0 Logical Router for the cluster. Default: t0
pod_ipblock_name: name of IP block for pods. Default: podIPBlock
pod_ipblock_cidr: CIDR address for this IP block. Default: 172.20.0.0/16
snat_ippool_name: name of the IP pool for SNAT. Default: externalIP
snat_ippool_cidr: CIDR address for this IP pool. Default: 172.30.0.0/16
start_range: the start IP address of the range within this CIDR for the IP pool. Default: 172.30.0.1
end_range: the end IP address of the range within this CIDR for the IP pool. Default: 172.30.255.254
os_cluster_name: name of the OpenShift cluster. Default: occl-one
nsx_node_ls_name: name of the logical switch connected to the nodes. Default: node_ls
nsx_node_lr_name: name of logical router for the switch node_ls. Default: node_lr
The nsx-config playbook supports creating only one IP pool and one IP block. If you need more than one, you must create them manually.
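As an illustration, the required variables might be set as follows in roles/nsx_config/default/main.yaml. All values below are placeholders for an example environment, not defaults shipped with the playbook:

```yaml
# Example values only -- substitute your own environment details.
perform_nsx_config: true
nsx_manager_ip: 192.168.10.5
nsx_edge_cluster_name: edge-cluster-1
nsx_transport_zone_name: overlay-tz
os_node_name_list: node1,node2,node3
subnet_cidr: 192.168.20.0/24
vc_host: 192.168.10.10
vc_user: administrator@vsphere.local
vc_password: 'VMware1!'
vms: node1-vm,node2-vm,node3-vm
```

Note that vms must list the VM names in the same order as os_node_name_list.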
- Change to the openshift-ansible-nsx directory and run the ncp_prep role.
ansible-playbook -i /PATH/TO/HOSTS/hosts ncp_prep.yaml
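The hosts file passed with -i is a standard Ansible inventory. A minimal sketch follows; the group name and host addresses here are assumptions for illustration, not values the playbook mandates:

```ini
[nodes]
node1 ansible_host=192.168.20.11
node2 ansible_host=192.168.20.12
node3 ansible_host=192.168.20.13
```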
The playbook contains instructions to perform the following actions:
Download the CNI plug-in installation file.
The filename is nsx-cni-<version>.xxxxxxx-1.x86_64.rpm, where <version> is the release version and xxxxxxx is the build number.
Install the CNI plug-in installation file.
The plug-in is installed in /opt/cni/bin. The CNI configuration file 10.net.conf is copied to /etc/cni/net.d. The RPM also installs the configuration file /etc/cni/net.d/99-loopback.conf for the loopback plug-in.
Download and install the OVS installation files.
The files are openvswitch-2.9.1.xxxxxxx-1.x86_64.rpm and openvswitch-kmod-2.9.1.xxxxxxx-1.el7.x86_64.rpm, where xxxxxxx is the build number.
Create the br-int instance if it is not already created.
# ovs-vsctl add-br br-int
Add the network interface (node-if) that is attached to the node logical switch to br-int.
Make sure that the br-int and node-if link status is up.
# ip link set br-int up
# ip link set <node-if> up
Update the network configuration file to ensure that the network interface is up after a reboot.
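On RHEL-based nodes, persisting the bridge across reboots is typically done with an ifcfg file. A sketch, assuming a statically addressed node; the IP address, netmask, and file location are placeholders that depend on your node OS and network:

```ini
# /etc/sysconfig/network-scripts/ifcfg-br-int (illustrative)
DEVICE=br-int
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.20.11
NETMASK=255.255.255.0
```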
Download the NCP tar file and load the Docker image from the tar file.
Download the ncp-rbac yaml file and change the apiVersion to v1.
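The apiVersion change can be scripted. A sketch using sed on a stand-in file; the real file name and its original apiVersion value may differ in your download:

```shell
# Create a stand-in for the downloaded ncp-rbac yaml (contents illustrative).
cat > /tmp/ncp-rbac-demo.yml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
EOF

# Rewrite the apiVersion line to v1, as the step above requires.
sed -i 's|^apiVersion: .*|apiVersion: v1|' /tmp/ncp-rbac-demo.yml
grep '^apiVersion' /tmp/ncp-rbac-demo.yml
```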
Create a logical topology and related resources in NSX-T Data Center, and create tags on them so that they can be recognized by NCP.
Update ncp.ini with NSX-T Data Center resource information.
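For reference, the resource names created above end up in ncp.ini roughly as follows. The section and option names below reflect common NCP configurations and should be verified against the sample ncp.ini shipped with your NCP version; the values shown are this procedure's defaults:

```ini
[coe]
cluster = occl-one

[nsx_v3]
nsx_api_managers = 192.168.10.5
tier0_router = t0
container_ip_blocks = podIPBlock
external_ip_pools = externalIP
```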
What to do next
Install OpenShift Container Platform. See Install OpenShift Container Platform.