To run CLI commands, log in to the NSX Container Plug-in container, open a terminal, and run the nsxcli command.

You can also get to the CLI prompt by running the following command on a node:

  kubectl exec -it <pod name> nsxcli
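
For example, assuming the NCP pod is named nsx-ncp-576df4dd85-n9sxb and runs in the nsx-system namespace (both hypothetical; check with kubectl get pods --all-namespaces), the prompt can be opened like this:

  # Hypothetical pod name and namespace; adjust to your deployment
  kubectl -n nsx-system exec -it nsx-ncp-576df4dd85-n9sxb nsxcli
  kubenode>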

Table 1. CLI Commands for the NCP Container

  Type      Command
  Status    get ncp-master status
  Status    get ncp-nsx status
  Status    get ncp-watcher <watcher-name>
  Status    get ncp-watchers
  Status    get ncp-k8s-api-server status
  Status    check projects
  Status    check project <project-name>
  Cache     get project-cache <project-name>
  Cache     get project-caches
  Cache     get namespace-cache <namespace-name>
  Cache     get namespace-caches
  Cache     get pod-cache <pod-name>
  Cache     get pod-caches
  Cache     get ingress-caches
  Cache     get ingress-cache <ingress-name>
  Cache     get ingress-controllers
  Cache     get ingress-controller <ingress-controller-name>
  Cache     get network-policy-caches
  Cache     get network-policy-cache <network-policy-name>
  Support   get ncp-log file <filename>
  Support   get ncp-log-level
  Support   set ncp-log-level <log-level>
  Support   get support-bundle file <filename>
  Support   get node-agent-log file <filename>
  Support   get node-agent-log file <filename> <node-name>

Table 2. CLI Commands for the NSX Node Agent Container

  Type      Command
  Status    get node-agent-hyperbus status
  Cache     get container-cache <container-name>
  Cache     get container-caches

Table 3. CLI Commands for the NSX Kube-Proxy Container

  Type      Command
  Status    get kube-proxy-k8s-api-server status
  Status    get kube-proxy-watcher <watcher-name>
  Status    get kube-proxy-watchers
  Status    dump ovs-flows
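
The node agent and kube-proxy commands must be issued from inside their respective containers. If a single pod hosts more than one of these containers (for example, a DaemonSet pod that runs both nsx-node-agent and nsx-kube-proxy), the -c flag of kubectl exec selects the container; the pod and container names below are hypothetical:

  # Hypothetical pod/container names; adjust to your deployment
  kubectl exec -it nsx-node-agent-bnxkz -c nsx-node-agent nsxcli
  kubectl exec -it nsx-node-agent-bnxkz -c nsx-kube-proxy nsxcli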

Status Commands for the NCP Container

  • Show the status of the NCP master

    get ncp-master status

    Example:

    kubenode> get ncp-master status
    This instance is not the NCP master
    Current NCP Master id is a4h83eh1-b8dd-4e74-c71c-cbb7cc9c4c1c
    Last master update at Wed Oct 25 22:46:40 2017
  • Show the connection status between NCP and NSX Manager

    get ncp-nsx status

    Example:

    kubenode> get ncp-nsx status
    NSX Manager status: Healthy
  • Show the watcher status for ingress, namespace, pod, and service

    get ncp-watcher <watcher-name>
    get ncp-watchers

    Example 1:

    kubenode> get ncp-watcher pod
        Average event processing time: 1174 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:47:35 PST
        Number of events processed: 1 (in past 3600-sec window)
        Total events processed by current watcher: 1
        Total events processed since watcher thread created: 1
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:47:35 PST
        Watcher thread status: Up

    Example 2:

    kubenode> get ncp-watchers
        pod:
            Average event processing time: 1145 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 1 (in past 3600-sec window)
            Total events processed by current watcher: 1
            Total events processed since watcher thread created: 1
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
     
        namespace:
            Average event processing time: 68 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 2 (in past 3600-sec window)
            Total events processed by current watcher: 2
            Total events processed since watcher thread created: 2
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
     
        ingress:
            Average event processing time: 0 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 0 (in past 3600-sec window)
            Total events processed by current watcher: 0
            Total events processed since watcher thread created: 0
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
     
        service:
            Average event processing time: 3 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 1 (in past 3600-sec window)
            Total events processed by current watcher: 1
            Total events processed since watcher thread created: 1
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
  • Show the connection status between NCP and the Kubernetes API server

    get ncp-k8s-api-server status

    Example:

    kubenode> get ncp-k8s-api-server status
    Kubernetes ApiServer status: Healthy
  • Check all projects or a specific project

    check projects
    check project <project-name>

    Example:

    kubenode> check projects
        default:
            Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
            Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
    
        ns1:
            Router 8accc9cd-9883-45f6-81b3-0d1fb2583180 is missing
    
    kubenode> check project default
        Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
        Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing

Cache Commands for the NCP Container

  • Get the internal cache for projects or namespaces

    get project-cache <project-name>
    get project-caches
    get namespace-cache <namespace-name>
    get namespace-caches

    Example:

    kubenode> get project-caches
        default:
            logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
            logical-switch:
                id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
                ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
                subnet: 10.0.0.0/24
                subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
     
        kube-system:
            logical-router: 5032b299-acad-448e-a521-19d272a08c46
            logical-switch:
                id: 85233651-602d-445d-ab10-1c84096cc22a
                ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
                subnet: 10.0.1.0/24
                subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
    
        testns:
            ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
            labels:
                ns: myns
                project: myproject
            logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
            logical-switch:
                id: 6111a99a-6e06-4faa-a131-649f10f7c815
                ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
                subnet: 50.0.2.0/24
                subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
            project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
            snat_ip: 4.4.0.3
    
    kubenode> get project-cache default
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
    
    kubenode> get namespace-caches          
        default:
            logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
            logical-switch:
                id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
                ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
                subnet: 10.0.0.0/24
                subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
    
        kube-system:
            logical-router: 5032b299-acad-448e-a521-19d272a08c46
            logical-switch:
                id: 85233651-602d-445d-ab10-1c84096cc22a
                ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
                subnet: 10.0.1.0/24
                subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
    
        testns:
            ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
            labels:
                ns: myns
                project: myproject
            logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
            logical-switch:
                id: 6111a99a-6e06-4faa-a131-649f10f7c815
                ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
                subnet: 50.0.2.0/24
                subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
            project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
            snat_ip: 4.4.0.3
    
    kubenode> get namespace-cache default          
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
  • Get the internal cache for pods

    get pod-cache <pod-name>
    get pod-caches

    Example:

    kubenode> get pod-caches
        nsx.default.nginx-rc-uq2lv:
            cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
            gateway_ip: 10.0.0.1
            host_vif: d6210773-5c07-4817-98db-451bd1f01937
            id: 1c8b5c52-3795-11e8-ab42-005056b198fb
            ingress_controller: False
            ip: 10.0.0.2/24
            labels:
                app: nginx
            mac: 02:50:56:00:08:00
            port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
            vlan: 1
    
        nsx.testns.web-pod-1:
            cif_id: ce134f21-6be5-43fe-afbf-aaca8c06b5cf
            gateway_ip: 50.0.2.1
            host_vif: d6210773-5c07-4817-98db-451bd1f01937
            id: 3180b521-270e-11e8-ab42-005056b198fb
            ingress_controller: False
            ip: 50.0.2.3/24
            labels:
                app: nginx-new
                role: db
                tier: cache
            mac: 02:50:56:00:20:02
            port_id: 81bc2b8e-d902-4cad-9fc1-aabdc32ecaf8
            vlan: 3
    
    kubenode> get pod-cache nsx.default.nginx-rc-uq2lv
        cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
        gateway_ip: 10.0.0.1
        host_vif: d6210773-5c07-4817-98db-451bd1f01937
        id: 1c8b5c52-3795-11e8-ab42-005056b198fb
        ingress_controller: False
        ip: 10.0.0.2/24
        labels:
            app: nginx
        mac: 02:50:56:00:08:00
        port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
        vlan: 1
    
  • Get the network policy caches or a specific network policy cache

    get network-policy-caches
    get network-policy-cache <network-policy-name>

    Example:

    kubenode> get network-policy-caches
        nsx.testns.allow-tcp-80:
            dest_labels: None
            dest_pods:
                50.0.2.3
            match_expressions:
                key: tier
                operator: In
                values:
                    cache
            name: allow-tcp-80
            np_dest_ip_set_ids:
                22f82d76-004f-4d12-9504-ce1cb9c8aa00
            np_except_ip_set_ids:
            np_ip_set_ids:
                14f7f825-f1a0-408f-bbd9-bb2f75d44666
            np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
            np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
            ns_name: testns
            src_egress_rules: None
            src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
            src_pods:
                50.0.2.0/24
            src_rules:
                from:
                    namespaceSelector:
                        matchExpressions:
                            key: tier
                            operator: DoesNotExist
                        matchLabels:
                            ns: myns
                ports:
                    port: 80
                    protocol: TCP
            src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1
    
    
    kubenode> get network-policy-cache nsx.testns.allow-tcp-80
        dest_labels: None
        dest_pods:
            50.0.2.3
        match_expressions:
            key: tier
            operator: In
            values:
                cache
        name: allow-tcp-80
        np_dest_ip_set_ids:
            22f82d76-004f-4d12-9504-ce1cb9c8aa00
        np_except_ip_set_ids:
        np_ip_set_ids:
            14f7f825-f1a0-408f-bbd9-bb2f75d44666
        np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
        np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
        ns_name: testns
        src_egress_rules: None
        src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
        src_pods:
            50.0.2.0/24
        src_rules:
            from:
                namespaceSelector:
                    matchExpressions:
                        key: tier
                        operator: DoesNotExist
                    matchLabels:
                        ns: myns
            ports:
                port: 80
                protocol: TCP
        src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1
    

Support Commands for the NCP Container

  • Save the NCP support bundle in the filestore

    The support bundle consists of the log files for all the containers in pods with the label tier:nsx-networking. The bundle file is in tgz format and is saved in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the file-store CLI command to copy the bundle file to a remote site.

    get support-bundle file <filename>

    Example:

    kubenode>get support-bundle file foo
    Bundle file foo created in tgz format
    kubenode>copy file foo url scp://nicira@10.0.0.1:/tmp
  • Save the NCP logs in the filestore

    The log file is saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the file-store CLI command to copy the log file to a remote site (see the combined example after this list).

    get ncp-log file <filename>

    Example:

    kubenode>get ncp-log file foo
    Log file foo created in tgz format
  • Save the node agent logs in the filestore

    Save the node agent logs from one node or from all the nodes. The logs are saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the file-store CLI command to copy the log file to a remote site.

    get node-agent-log file <filename>
    get node-agent-log file <filename> <node-name>

    Example:

    kubenode>get node-agent-log file foo
    Log file foo created in tgz format
  • Get and set the log level

    The available log levels are NOTSET, DEBUG, INFO, WARNING, ERROR, and CRITICAL.

    get ncp-log-level
    set ncp-log-level <log-level>

    Example:

    kubenode>get ncp-log-level
    NCP log level is INFO
     
    kubenode>set ncp-log-level DEBUG
    NCP log level is changed to DEBUG
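
Combined example: create a log file in the filestore and copy it to a remote site, reusing the copy file syntax shown for the support bundle above (the destination URL is illustrative):

  kubenode>get ncp-log file foo
  Log file foo created in tgz format
  kubenode>copy file foo url scp://nicira@10.0.0.1:/tmp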

Status Commands for the NSX Node Agent Container

  • Show the connection status between the node agent and HyperBus on this node

    get node-agent-hyperbus status

    Example:

    kubenode> get node-agent-hyperbus status
    HyperBus status: Healthy

Cache Commands for the NSX Node Agent Container

  • Get the internal cache for NSX node agent containers

    get container-cache <container-name>
    get container-caches

    Example 1:

    kubenode> get container-cache cif104
        ip: 192.168.0.14/32
        mac: 50:01:01:01:01:14
        gateway_ip: 169.254.1.254/16
        vlan_id: 104

    Example 2:

    kubenode> get container-caches
        cif104:
            ip: 192.168.0.14/32
            mac: 50:01:01:01:01:14
            gateway_ip: 169.254.1.254/16
            vlan_id: 104

Status Commands for the NSX Kube-Proxy Container

  • Show the connection status between kube-proxy and the Kubernetes API server

    get kube-proxy-k8s-api-server status

    Example:

    kubenode> get kube-proxy-k8s-api-server status
    Kubernetes ApiServer status: Healthy
  • Show the kube-proxy watcher status

    get kube-proxy-watcher <watcher-name>
    get kube-proxy-watchers

    Example 1:

    kubenode> get kube-proxy-watcher endpoint
        Average event processing time: 15 msec (in past 3600-sec window)
        Current watcher started time: May 01 2017 15:06:24 PDT
        Number of events processed: 90 (in past 3600-sec window)
        Total events processed by current watcher: 90
        Total events processed since watcher thread created: 90
        Total watcher recycle count: 0
        Watcher thread created time: May 01 2017 15:06:24 PDT
        Watcher thread status: Up

    Example 2:

    kubenode> get kube-proxy-watchers
        endpoint:
            Average event processing time: 15 msec (in past 3600-sec window)
            Current watcher started time: May 01 2017 15:06:24 PDT
            Number of events processed: 90 (in past 3600-sec window)
            Total events processed by current watcher: 90
            Total events processed since watcher thread created: 90
            Total watcher recycle count: 0
            Watcher thread created time: May 01 2017 15:06:24 PDT
            Watcher thread status: Up
    
         service:
            Average event processing time: 8 msec (in past 3600-sec window)
            Current watcher started time: May 01 2017 15:06:24 PDT
            Number of events processed: 2 (in past 3600-sec window)
            Total events processed by current watcher: 2
            Total events processed since watcher thread created: 2
            Total watcher recycle count: 0
            Watcher thread created time: May 01 2017 15:06:24 PDT
            Watcher thread status: Up
  • Dump the OVS flows on a node

    dump ovs-flows

    Example:

    kubenode> dump ovs-flows
        NXST_FLOW reply (xid=0x4):
        cookie=0x0, duration=8.876s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip actions=ct(table=1)
        cookie=0x0, duration=8.898s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=0 actions=NORMAL
        cookie=0x0, duration=8.759s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,tcp,nw_dst=10.96.0.1,tp_dst=443 actions=mod_tp_dst:443
        cookie=0x0, duration=8.719s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip,nw_dst=10.96.0.10 actions=drop
        cookie=0x0, duration=8.819s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=90,ip,in_port=1 actions=ct(table=2,nat)
        cookie=0x0, duration=8.799s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=80,ip actions=NORMAL
        cookie=0x0, duration=8.856s, table=2, n_packets=0, n_bytes=0, idle_age=8, actions=NORMAL