The Container Network Interface (CNI) plug-in and Open vSwitch (OVS) must be installed on the OpenShift nodes. The installation is performed by running an Ansible playbook.

About this task

The playbook contains instructions to configure NSX-T resources for the nodes. You can also configure the NSX-T resources manually as described in Setting Up NSX-T Resources. The perform_nsx_config parameter indicates whether to configure the resources when the playbook is run.

Procedure

  1. Update the parameter values in roles/ncp_prep/defaults/main.yaml and roles/nsx_config/defaults/main.yaml, including the URLs from which the CNI plug-in RPM, the OVS RPM, and its corresponding kernel module RPM can be downloaded. In addition, uplink_port is the name of the uplink port VNIC on the node VM. The remaining variables pertain to the NSX management plane configuration and include the following (a sample values file appears after this list):
    • perform_nsx_config: whether to perform the resource configuration. Set it to false if you configure the resources manually; the nsx_config script is then not run.

    • nsx_config_script_path: absolute path of the nsx_config.py script.

    • nsx_cert_file_path: absolute path of the NSX client certificate file.

    • nsx_manager_ip: IP address of NSX Manager.

    • nsx_edge_cluster_name: name of the Edge Cluster to be used by the Tier-0 router.

    • nsx_transport_zone_name: name of the Overlay Transport Zone.

    • nsx_t0_router_name: name of the Tier-0 Logical Router for the cluster.

    • pod_ipblock_name: name of the IP block for pods.

    • pod_ipblock_cidr: CIDR address for this IP block.

    • snat_ipblock_name: name of the IP block for SNAT.

    • snat_ipblock_cidr: CIDR address for this IP block.

    • os_cluster_name: name of the OpenShift cluster.

    • vnic_mac_list: comma-separated list of the MAC addresses of the nodes.

    • os_node_name_list: comma-separated list of node names. The order must match that of vnic_mac_list.

    • nsx_node_ls_name: name of the logical switch connected to the nodes.

    The names must match the NSX-T resources that you created. Otherwise, resources with the specified names are created. The playbook is idempotent: it ensures that resources with the specified names exist and are tagged accordingly.
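
    For reference, the following is a minimal sketch of how the values in roles/nsx_config/defaults/main.yaml might look. Every name, address, and CIDR below is hypothetical; substitute values from your environment.

        # Sample values only; all names, addresses, and CIDRs are illustrative
        perform_nsx_config: true
        nsx_config_script_path: /root/openshift-ansible-nsx/nsx_config.py
        nsx_cert_file_path: /root/nsx_cert.pem
        nsx_manager_ip: 192.168.110.201
        nsx_edge_cluster_name: edge-cluster-1
        nsx_transport_zone_name: overlay-tz
        nsx_t0_router_name: t0-openshift
        pod_ipblock_name: pod-ip-block
        pod_ipblock_cidr: 10.4.0.0/16
        snat_ipblock_name: snat-ip-block
        snat_ipblock_cidr: 10.12.0.0/16
        os_cluster_name: openshift-cluster-1
        vnic_mac_list: 00:50:56:aa:bb:01,00:50:56:aa:bb:02
        os_node_name_list: node1,node2
        nsx_node_ls_name: node-ls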

  2. Change to the openshift-ansible-nsx directory and run the ncp_prep role.
        ansible-playbook -i /PATH/TO/HOSTS/hosts ncp_prep.yaml
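
    The hosts file is a standard openshift-ansible inventory. The following is a minimal sketch in INI format; the host names and group names are illustrative, so keep whatever inventory your OpenShift installation already uses.

        ; Illustrative inventory; host and group names are hypothetical
        [masters]
        master.example.com

        [nodes]
        master.example.com
        node1.example.com
        node2.example.com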

Results

The playbook contains instructions to perform the following actions:

  • Download the CNI plug-in installation file.

    The filename is nsx-cni-1.0.0.0.0.xxxxxxx-1.x86_64.rpm, where xxxxxxx is the build number.

  • Install the CNI plug-in installation file.

    The plug-in is installed in /opt/cni/bin. The CNI configuration file 10-net.conf is copied to /etc/cni/net.d. The RPM also installs the configuration file /etc/cni/net.d/99-loopback.conf for the loopback plug-in.
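
    To spot-check the result on a node, list the directories named above (exact file names can vary with the plug-in version):

        # ls /opt/cni/bin
        # ls /etc/cni/net.d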

  • Download and install the OVS installation files.

    The files are openvswitch-2.7.0.xxxxxxx-1.x86_64.rpm and openvswitch-kmod-2.7.0.xxxxxxx-1.el7.x86_64.rpm, where xxxxxxx is the build number.
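
    If you need to install or verify the packages manually, the standard RPM commands apply (the build-number placeholder is the same as above):

        # rpm -Uvh openvswitch-2.7.0.xxxxxxx-1.x86_64.rpm openvswitch-kmod-2.7.0.xxxxxxx-1.el7.x86_64.rpm
        # rpm -qa | grep openvswitch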

  • Make sure that OVS is running.

        # service openvswitch status

  • Create the br-int instance if it is not already created.

        # ovs-vsctl add-br br-int
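
    Note that ovs-vsctl also provides an idempotent form of this command, which does not fail when the bridge already exists:

        # ovs-vsctl --may-exist add-br br-int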

  • Add the network interface (node-if) that is attached to the node logical switch to br-int, as sketched below.
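
    A sketch of the corresponding command, substituting the actual interface name for <node-if>:

        # ovs-vsctl add-port br-int <node-if>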

  • Make sure that the br-int and node-if link status is up.

        # ip link set br-int up
        # ip link set <node-if> up

  • Update the network configuration file to ensure that the network interface is up after a reboot.
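
    On Red Hat-based nodes, this typically means setting ONBOOT=yes in the interface's configuration file, for example (the path and interface name are illustrative):

        # /etc/sysconfig/network-scripts/ifcfg-<node-if>
        DEVICE=<node-if>
        ONBOOT=yes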

What to do next

Install OpenShift Container Platform. See Install OpenShift Container Platform.