To run CLI commands, log in to the NSX Container Plug-in container, open a terminal, and run the nsxcli command.

You can also display the CLI prompt by running the following command on a node:
  kubectl exec -it <pod name> nsxcli
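
For example, a minimal session might look like the following. This is a sketch: it assumes NCP runs in a namespace named nsx-system with a pod named nsx-ncp-xxxxx; both vary by deployment.

  # Find the NCP pod name (the nsx-system namespace is an assumption)
  kubectl get pods -n nsx-system
  # Open an interactive nsxcli session inside the NCP pod
  kubectl exec -it nsx-ncp-xxxxx -n nsx-system -- nsxcli
  kubenode> get ncp-master status
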
Table 1. CLI Commands for the NCP Container
Type      Command
Status    get ncp-master status
Status    get ncp-nsx status
Status    get ncp-watcher <watcher-name>
Status    get ncp-watchers
Status    get ncp-k8s-api-server status
Status    check projects
Status    check project <project-name>
Cache     get project-cache <project-name>
Cache     get project-caches
Cache     get namespace-cache <namespace-name>
Cache     get namespace-caches
Cache     get pod-cache <pod-name>
Cache     get pod-caches
Cache     get ingress-caches
Cache     get ingress-cache <ingress-name>
Cache     get ingress-controllers
Cache     get ingress-controller <ingress-controller-name>
Cache     get network-policy-caches
Cache     get network-policy-cache <network-policy-name>
Support   get ncp-log file <filename>
Support   get ncp-log-level
Support   set ncp-log-level <log-level>
Support   get support-bundle file <filename>
Support   get node-agent-log file <filename>
Support   get node-agent-log file <filename> <node-name>
Table 2. CLI Commands for the NSX Node Agent Container
Type      Command
Status    get node-agent-hyperbus status
Cache     get container-cache <container-name>
Cache     get container-caches

Table 3. CLI Commands for the NSX Kube Proxy Container
Type      Command
Status    get ncp-k8s-api-server status
Status    get kube-proxy-watcher <watcher-name>
Status    get kube-proxy-watchers
Status    dump ovs-flows

Status Commands for the NCP Container

  • Display the status of the NCP master
    get ncp-master status

    Example:

    kubenode> get ncp-master status
    This instance is not the NCP master
    Current NCP Master id is a4h83eh1-b8dd-4e74-c71c-cbb7cc9c4c1c
    Last master update at Wed Oct 25 22:46:40 2017
  • Display the connection status between NCP and NSX Manager
    get ncp-nsx status

    Example:

    kubenode> get ncp-nsx status
    NSX Manager status: Healthy
  • Display the watcher status for ingress, namespace, pod, and service
    get ncp-watcher <watcher-name>
    get ncp-watchers

    Example 1:

    kubenode> get ncp-watcher pod
        Average event processing time: 1174 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:47:35 PST
        Number of events processed: 1 (in past 3600-sec window)
        Total events processed by current watcher: 1
        Total events processed since watcher thread created: 1
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:47:35 PST
        Watcher thread status: Up

    Example 2:

    kubenode> get ncp-watchers
        pod:
            Average event processing time: 1145 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 1 (in past 3600-sec window)
            Total events processed by current watcher: 1
            Total events processed since watcher thread created: 1
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
     
        namespace:
            Average event processing time: 68 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 2 (in past 3600-sec window)
            Total events processed by current watcher: 2
            Total events processed since watcher thread created: 2
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
     
        ingress:
            Average event processing time: 0 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 0 (in past 3600-sec window)
            Total events processed by current watcher: 0
            Total events processed since watcher thread created: 0
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
     
        service:
            Average event processing time: 3 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 1 (in past 3600-sec window)
            Total events processed by current watcher: 1
            Total events processed since watcher thread created: 1
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
  • Display the connection status between NCP and the Kubernetes API server
    get ncp-k8s-api-server status

    Example:

    kubenode> get ncp-k8s-api-server status
    Kubernetes ApiServer status: Healthy
  • Check all projects or a specific project
    check projects
    check project <project-name>

    Example:

    kubenode> check projects
        default:
            Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
            Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
    
        ns1:
            Router 8accc9cd-9883-45f6-81b3-0d1fb2583180 is missing
    
    kubenode> check project default
        Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
        Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing

Cache Commands for the NCP Container

  • Get the internal cache for projects or namespaces
    get project-cache <project-name>
    get project-caches
    get namespace-cache <namespace-name>
    get namespace-caches

    Example:

    kubenode> get project-caches
        default:
            logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
            logical-switch:
                id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
                ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
                subnet: 10.0.0.0/24
                subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
     
        kube-system:
            logical-router: 5032b299-acad-448e-a521-19d272a08c46
            logical-switch:
                id: 85233651-602d-445d-ab10-1c84096cc22a
                ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
                subnet: 10.0.1.0/24
                subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
    
        testns:
            ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
            labels:
                ns: myns
                project: myproject
            logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
            logical-switch:
                id: 6111a99a-6e06-4faa-a131-649f10f7c815
                ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
                subnet: 50.0.2.0/24
                subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
            project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
            snat_ip: 4.4.0.3
    
    kubenode> get project-cache default
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
    
    kubenode> get namespace-caches          
        default:
            logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
            logical-switch:
                id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
                ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
                subnet: 10.0.0.0/24
                subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
    
        kube-system:
            logical-router: 5032b299-acad-448e-a521-19d272a08c46
            logical-switch:
                id: 85233651-602d-445d-ab10-1c84096cc22a
                ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
                subnet: 10.0.1.0/24
                subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
    
        testns:
            ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
            labels:
                ns: myns
                project: myproject
            logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
            logical-switch:
                id: 6111a99a-6e06-4faa-a131-649f10f7c815
                ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
                subnet: 50.0.2.0/24
                subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
            project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
            snat_ip: 4.4.0.3
    
    kubenode> get namespace-cache default          
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
  • Get the internal cache for pods
    get pod-cache <pod-name>
    get pod-caches

    Example:

    kubenode> get pod-caches
        nsx.default.nginx-rc-uq2lv:
            cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
            gateway_ip: 10.0.0.1
            host_vif: d6210773-5c07-4817-98db-451bd1f01937
            id: 1c8b5c52-3795-11e8-ab42-005056b198fb
            ingress_controller: False
            ip: 10.0.0.2/24
            labels:
                app: nginx
            mac: 02:50:56:00:08:00
            port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
            vlan: 1
    
        nsx.testns.web-pod-1:
            cif_id: ce134f21-6be5-43fe-afbf-aaca8c06b5cf
            gateway_ip: 50.0.2.1
            host_vif: d6210773-5c07-4817-98db-451bd1f01937
            id: 3180b521-270e-11e8-ab42-005056b198fb
            ingress_controller: False
            ip: 50.0.2.3/24
            labels:
                app: nginx-new
                role: db
                tier: cache
            mac: 02:50:56:00:20:02
            port_id: 81bc2b8e-d902-4cad-9fc1-aabdc32ecaf8
            vlan: 3
    
    kubenode> get pod-cache nsx.default.nginx-rc-uq2lv
        cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
        gateway_ip: 10.0.0.1
        host_vif: d6210773-5c07-4817-98db-451bd1f01937
        id: 1c8b5c52-3795-11e8-ab42-005056b198fb
        ingress_controller: False
        ip: 10.0.0.2/24
        labels:
            app: nginx
        mac: 02:50:56:00:08:00
        port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
        vlan: 1
    
  • Get all network policy caches or the cache for a specific network policy (a sample NetworkPolicy manifest that matches this cache entry follows the examples below)
    get network-policy-caches
    get network-policy-cache <network-policy-name>

    Example:

    kubenode> get network-policy-caches
        nsx.testns.allow-tcp-80:
            dest_labels: None
            dest_pods:
                50.0.2.3
            match_expressions:
                key: tier
                operator: In
                values:
                    cache
            name: allow-tcp-80
            np_dest_ip_set_ids:
                22f82d76-004f-4d12-9504-ce1cb9c8aa00
            np_except_ip_set_ids:
            np_ip_set_ids:
                14f7f825-f1a0-408f-bbd9-bb2f75d44666
            np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
            np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
            ns_name: testns
            src_egress_rules: None
            src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
            src_pods:
                50.0.2.0/24
            src_rules:
                from:
                    namespaceSelector:
                        matchExpressions:
                            key: tier
                            operator: DoesNotExist
                        matchLabels:
                            ns: myns
                ports:
                    port: 80
                    protocol: TCP
            src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1
    
    
    kubenode> get network-policy-cache nsx.testns.allow-tcp-80
        dest_labels: None
        dest_pods:
            50.0.2.3
        match_expressions:
            key: tier
            operator: In
            values:
                cache
        name: allow-tcp-80
        np_dest_ip_set_ids:
            22f82d76-004f-4d12-9504-ce1cb9c8aa00
        np_except_ip_set_ids:
        np_ip_set_ids:
            14f7f825-f1a0-408f-bbd9-bb2f75d44666
        np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
        np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
        ns_name: testns
        src_egress_rules: None
        src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
        src_pods:
            50.0.2.0/24
        src_rules:
            from:
                namespaceSelector:
                    matchExpressions:
                        key: tier
                        operator: DoesNotExist
                    matchLabels:
                        ns: myns
            ports:
                port: 80
                protocol: TCP
        src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1
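
    For reference, a NetworkPolicy manifest along the following lines would produce a cache entry like the one above. This is a plausible reconstruction from the cached fields (the pod selector, src_rules, and ports), not an excerpt from the product documentation; adjust names and labels to your environment, then apply it with kubectl apply -f allow-tcp-80.yaml.

    # allow-tcp-80.yaml -- reconstructed example, not from the original document
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-tcp-80
      namespace: testns
    spec:
      # Destination pods; corresponds to the cached match_expressions
      podSelector:
        matchExpressions:
        - key: tier
          operator: In
          values:
          - cache
      ingress:
      # Corresponds to the cached src_rules (from/namespaceSelector) and ports
      - from:
        - namespaceSelector:
            matchExpressions:
            - key: tier
              operator: DoesNotExist
            matchLabels:
              ns: myns
        ports:
        - port: 80
          protocol: TCP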
    

Support Commands for the NCP Container

  • Save the NCP support bundle in the filestore

    The support bundle consists of the log files for all the containers in pods with the label tier:nsx-networking. The bundle file is saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the bundle file to a remote site.

    get support-bundle file <filename>

    Example:

    kubenode> get support-bundle file foo
    Bundle file foo created in tgz format
    kubenode> copy file foo url scp://<user>@<remote-host>:/tmp
  • Save the NCP logs in the filestore

    The log file is saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the file to a remote site.

    get ncp-log file <filename>

    Example:

    kubenode> get ncp-log file foo
    Log file foo created in tgz format
  • Save the node agent logs in the filestore

    Save the node agent logs from one node or from all the nodes. The logs are saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the files to a remote site.

    get node-agent-log file <filename>
    get node-agent-log file <filename> <node-name>

    Example:

    kubenode> get node-agent-log file foo
    Log file foo created in tgz format
  • Get and set the log level

    The available log levels are NOTSET, DEBUG, INFO, WARNING, ERROR, and CRITICAL.

    get ncp-log-level
    set ncp-log-level <log level>

    Example:

    kubenode> get ncp-log-level
    NCP log level is INFO
     
    kubenode> set ncp-log-level DEBUG
    NCP log level is changed to DEBUG

Status Commands for the NSX Node Agent Container

  • Display the connection status between the node agent on this node and HyperBus
    get node-agent-hyperbus status

    Example:

    kubenode> get node-agent-hyperbus status
    HyperBus status: Healthy

Cache Commands for the NSX Node Agent Container

  • Get the internal cache of the NSX node agent container
    get container-cache <container-name>
    get container-caches

    Example 1:

    kubenode> get container-cache cif104
        ip: 192.168.0.14/32
        mac: 50:01:01:01:01:14
        gateway_ip: 169.254.1.254/16
        vlan_id: 104

    Example 2:

    kubenode> get container-caches
        cif104:
            ip: 192.168.0.14/32
            mac: 50:01:01:01:01:14
            gateway_ip: 169.254.1.254/16
            vlan_id: 104

Status Commands for the NSX Kube Proxy Container

  • Display the connection status between Kube Proxy and the Kubernetes API server
    get ncp-k8s-api-server status

    Example:

    kubenode> get ncp-k8s-api-server status
    Kubernetes ApiServer status: Healthy
  • Display the Kube Proxy watcher status
    get kube-proxy-watcher <watcher-name>
    get kube-proxy-watchers

    Example 1:

    kubenode> get kube-proxy-watcher endpoint
        Average event processing time: 15 msec (in past 3600-sec window)
        Current watcher started time: May 01 2017 15:06:24 PDT
        Number of events processed: 90 (in past 3600-sec window)
        Total events processed by current watcher: 90
        Total events processed since watcher thread created: 90
        Total watcher recycle count: 0
        Watcher thread created time: May 01 2017 15:06:24 PDT
        Watcher thread status: Up

    Example 2:

    kubenode> get kube-proxy-watchers
        endpoint:
            Average event processing time: 15 msec (in past 3600-sec window)
            Current watcher started time: May 01 2017 15:06:24 PDT
            Number of events processed: 90 (in past 3600-sec window)
            Total events processed by current watcher: 90
            Total events processed since watcher thread created: 90
            Total watcher recycle count: 0
            Watcher thread created time: May 01 2017 15:06:24 PDT
            Watcher thread status: Up
    
        service:
            Average event processing time: 8 msec (in past 3600-sec window)
            Current watcher started time: May 01 2017 15:06:24 PDT
            Number of events processed: 2 (in past 3600-sec window)
            Total events processed by current watcher: 2
            Total events processed since watcher thread created: 2
            Total watcher recycle count: 0
            Watcher thread created time: May 01 2017 15:06:24 PDT
            Watcher thread status: Up
  • Dump the OVS flows on the node
    dump ovs-flows

    Example:

    kubenode> dump ovs-flows
        NXST_FLOW reply (xid=0x4):
        cookie=0x0, duration=8.876s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip actions=ct(table=1)
        cookie=0x0, duration=8.898s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=0 actions=NORMAL
        cookie=0x0, duration=8.759s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,tcp,nw_dst=10.96.0.1,tp_dst=443 actions=mod_tp_dst:443
        cookie=0x0, duration=8.719s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip,nw_dst=10.96.0.10 actions=drop
        cookie=0x0, duration=8.819s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=90,ip,in_port=1 actions=ct(table=2,nat)
        cookie=0x0, duration=8.799s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=80,ip actions=NORMAL
        cookie=0x0, duration=8.856s, table=2, n_packets=0, n_bytes=0, idle_age=8, actions=NORMAL
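
    The reply format is the same as that of the ovs-ofctl dump-flows command, so if you have shell access to the node you can inspect the same flow tables directly. The bridge name below is an assumption and varies by setup:

    # br-int is a hypothetical bridge name; list the actual bridges with: ovs-vsctl list-br
    ovs-ofctl dump-flows br-int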