The Ansible hosts file defines the nodes in the OpenShift cluster.

Procedure

  1. Clone the NCP GitHub repository at https://github.com/vmware/nsx-integration-for-openshift. The hosts file is in the openshift-ansible-nsx directory.
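    For example:
        # Clone the repository and change to the directory that contains the hosts file
        git clone https://github.com/vmware/nsx-integration-for-openshift
        cd nsx-integration-for-openshift/openshift-ansible-nsx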
  2. In the [masters] and [nodes] sections, specify the host names and IP addresses of the OpenShift VMs. For example,
        [masters]
        admin.rhel.osmaster ansible_ssh_host=101.101.101.4
      
        [single_master]
        admin.rhel.osmaster ansible_ssh_host=101.101.101.4
     
        [nodes]
        admin.rhel.osmaster ansible_ssh_host=101.101.101.4 openshift_ip=101.101.101.4 openshift_schedulable=true openshift_hostname=admin.rhel.osmaster
        admin.rhel.osnode ansible_ssh_host=101.101.101.5 openshift_ip=101.101.101.5 openshift_hostname=admin.rhel.osnode
     
        [etcd]
     
        [OSEv3:children]
        masters
        nodes
        etcd

    Note that openshift_ip identifies the cluster-internal IP address and must be set if the interface to be used is not the default one. The single_master group is used by NCP-related roles to perform certain tasks only once, from a single master node, for example, NSX-T management plane resource configuration.

  3. Set up SSH access so that all the nodes can be accessed without a password from the node where the Ansible role is run (typically the master node):
        ssh-keygen
        ssh-copy-id -i ~/.ssh/id_rsa.pub root@admin.rhel.osnode
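    Repeat the ssh-copy-id command for every node that Ansible will manage, typically including the master itself. For example, assuming the two hosts from the sample inventory above (adjust the host names and remote user to your environment), the key can be copied in a loop:
        # Copy the public key to each host listed in the Ansible hosts file
        for host in admin.rhel.osmaster admin.rhel.osnode; do
            ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host
        done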
  4. Update the [OSEv3:vars] section. Details about all the parameters can be found in the OpenShift Container Platform advanced installation documentation (https://docs.openshift.com/container-platform/3.6/install_config/install/advanced_install.html). For example,
        # Set the default route fqdn
        openshift_master_default_subdomain=apps.yves.local
    
        os_sdn_network_plugin_name=cni
        openshift_use_openshift_sdn=false
        openshift_node_sdn_mtu=1500
    
        # If ansible_ssh_user is not root, ansible_become must be set to true
        ansible_become=true
    
        # uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
        openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
    
        openshift_master_default_subdomain
          The default subdomain used in OpenShift routes exposed through the external load balancer
    
        os_sdn_network_plugin_name
          Set to 'cni' for the NSX Integration
    
        openshift_use_openshift_sdn
          Set to false to disable the built-in OpenShift SDN solution
    
        openshift_hosted_manage_router
          Set to false to disable creation of router during installation. The router has to be manually started after NCP and nsx-node-agent are running.
    
        openshift_hosted_manage_registry
          Set to false to disable creation of registry during installation. The registry has to be manually started after NCP and nsx-node-agent are running.
     
        deployment_type
          Set to origin or openshift-enterprise for the open source or Enterprise version
          of OpenShift respectively
     
        openshift_master_htpasswd_file
          The path to the htpasswd password file. Replace <enter_full_path_here>
          with the full path to the file for the deployment to work. You must
          install htpasswd and use it to create the password file.
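    For example, to disable the built-in router and registry and use htpasswd authentication, lines such as the following can be added to [OSEv3:vars]. The deployment type and the htpasswd file path shown here are only placeholders; adjust them to your environment.
        deployment_type=openshift-enterprise
        openshift_hosted_manage_router=false
        openshift_hosted_manage_registry=false
        # Placeholder path; replace with the full path to your htpasswd file
        openshift_master_htpasswd_file=/root/htpasswd

    On RHEL, the htpasswd utility is provided by the httpd-tools package; the password file can then be created with, for example, htpasswd -c /root/htpasswd admin.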
  5. Check that you have connectivity to all hosts:
        ansible OSEv3 -i /PATH/TO/HOSTS/hosts -m ping

    The results should look like the following. If not, resolve the connectivity problem.

        openshift-node1 | SUCCESS => {
           "changed": false,
           "ping": "pong"
        }
        openshift-master | SUCCESS => {
           "changed": false,
           "ping": "pong"
        }

What to do next

Install the CNI plug-in and OVS. See Install CNI Plug-in and OVS.