To run CLI commands, log in to the NSX Container Plugin container, open a terminal, and run the nsxcli command.

You can also run the following command on a node to display the CLI prompt:
  kubectl exec -it <pod name> nsxcli
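
For example, to locate the NCP pod and then open the CLI prompt (a minimal sketch; the nsx-system namespace and the component=nsx-ncp label are assumptions that depend on how NCP was deployed):

  # List NCP pods (namespace and label selector are assumptions; adjust to your deployment)
  kubectl get pods -n nsx-system -l component=nsx-ncp
  # Open the NSX CLI prompt inside the NCP pod
  kubectl exec -it <ncp-pod-name> -n nsx-system -- nsxcli
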
Table 1. CLI Commands for the NCP Container
Type     Command                                              Note
Status   get ncp-master status                                For Kubernetes and TAS.
Status   get ncp-nsx status                                   For Kubernetes and TAS.
Status   get ncp-watcher <watcher-name>                       For Kubernetes and TAS.
Status   get ncp-watchers                                     For Kubernetes and TAS.
Status   get ncp-k8s-api-server status                        For Kubernetes only.
Status   check projects                                       For Kubernetes only.
Status   check project <project-name>                         For Kubernetes only.
Status   get ncp-bbs status                                   For TAS only.
Status   get ncp-capi status                                  For TAS only.
Status   get ncp-policy-server status                         For TAS only.
Cache    get project-caches                                   For Kubernetes only.
Cache    get project-cache <project-name>                     For Kubernetes only.
Cache    get namespace-caches                                 For Kubernetes only.
Cache    get namespace-cache <namespace-name>                 For Kubernetes only.
Cache    get pod-caches                                       For Kubernetes only.
Cache    get pod-cache <pod-name>                             For Kubernetes only.
Cache    get ingress-caches                                   For Kubernetes only.
Cache    get ingress-cache <ingress-name>                     For Kubernetes only.
Cache    get ingress-controllers                              For Kubernetes only.
Cache    get ingress-controller <ingress-controller-name>     For Kubernetes only.
Cache    get network-policy-caches                            For Kubernetes only.
Cache    get network-policy-cache <network-policy-name>       For Kubernetes only.
Cache    get asg-caches                                       For TAS only.
Cache    get asg-cache <asg-ID>                               For TAS only.
Cache    get org-caches                                       For TAS only.
Cache    get org-cache <org-ID>                               For TAS only.
Cache    get space-caches                                     For TAS only.
Cache    get space-cache <space-ID>                           For TAS only.
Cache    get app-caches                                       For TAS only.
Cache    get app-cache <app-ID>                               For TAS only.
Cache    get instance-caches <app-ID>                         For TAS only.
Cache    get instance-cache <app-ID> <instance-ID>            For TAS only.
Cache    get policy-caches                                    For TAS only.
Support  get ncp-log file <filename>                          For Kubernetes and TAS.
Support  get ncp-log-level [component]                        For Kubernetes and TAS.
Support  set ncp-log-level <log-level> [component]            For Kubernetes and TAS.
Support  get support-bundle file <filename>                   For Kubernetes only.
Support  get node-agent-log file <filename>                   For Kubernetes only.
Support  get node-agent-log file <filename> <node-name>       For Kubernetes only.
Table 2. CLI Commands for the NSX Node Agent Container
Type     Command
Status   get node-agent-hyperbus status
Cache    get container-cache <container-name>
Cache    get container-caches

Table 3. CLI Commands for the NSX Kube Proxy Container
Type     Command
Status   get ncp-k8s-api-server status
Status   get kube-proxy-watcher <watcher-name>
Status   get kube-proxy-watchers
Status   dump ovs-flows

Status Commands for the NCP Container

  • Show the status of the NCP master
    get ncp-master status

    Example:

    kubenode> get ncp-master status
    This instance is not the NCP master
    Current NCP Master id is a4h83eh1-b8dd-4e74-c71c-cbb7cc9c4c1c
    Last master update at Wed Oct 25 22:46:40 2017
  • Show the connection status between NCP and NSX Manager (a scripted health check that combines several of these status commands is sketched after this list)
    get ncp-nsx status

    Example:

    kubenode> get ncp-nsx status
    NSX Manager status: Healthy
  • Show the watcher status for Ingress, namespaces, pods, and services
    get ncp-watchers
    get ncp-watcher <watcher-name>

    Example:

    kubenode> get ncp-watchers
        pod:
            Average event processing time: 1145 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 1 (in past 3600-sec window)
            Total events processed by current watcher: 1
            Total events processed since watcher thread created: 1
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
     
        namespace:
            Average event processing time: 68 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 2 (in past 3600-sec window)
            Total events processed by current watcher: 2
            Total events processed since watcher thread created: 2
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
     
        ingress:
            Average event processing time: 0 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 0 (in past 3600-sec window)
            Total events processed by current watcher: 0
            Total events processed since watcher thread created: 0
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
     
        service:
            Average event processing time: 3 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 1 (in past 3600-sec window)
            Total events processed by current watcher: 1
            Total events processed since watcher thread created: 1
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
    
    
    kubenode> get ncp-watcher pod
        Average event processing time: 1174 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:47:35 PST
        Number of events processed: 1 (in past 3600-sec window)
        Total events processed by current watcher: 1
        Total events processed since watcher thread created: 1
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:47:35 PST
        Watcher thread status: Up
  • Show the connection status between NCP and the Kubernetes API server
    get ncp-k8s-api-server status

    Example:

    kubenode> get ncp-k8s-api-server status
    Kubernetes ApiServer status: Healthy
  • Check all projects or a specific project
    check projects
    check project <project-name>

    Example:

    kubenode> check projects
        default:
            Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
            Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
    
        ns1:
            Router 8accc9cd-9883-45f6-81b3-0d1fb2583180 is missing
    
    kubenode> check project default
        Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
        Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
  • Check the connection status between NCP and the TAS BBS
    get ncp-bbs status

    Example:

    node> get ncp-bbs status
    BBS Server status: Healthy
  • Check the connection status between NCP and the TAS CAPI
    get ncp-capi status

    Example:

    node> get ncp-capi status
    CAPI Server status: Healthy
  • Check the connection status between NCP and the TAS policy server
    get ncp-policy-server status

    Example:

    node> get ncp-policy-server status
    Policy Server status: Healthy
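
As referenced in the list above, several of these status commands can be combined into a quick scripted health check run from outside the pod. This is only a sketch: it assumes the NCP pod runs in the nsx-system namespace with the component=nsx-ncp label, and that your nsxcli build accepts the -c option for running a single command non-interactively; verify both against your deployment.

    #!/bin/sh
    # Hypothetical NCP health check (namespace, label selector, and the
    # nsxcli -c one-shot option are assumptions; adjust to your environment).
    NCP_POD=$(kubectl get pods -n nsx-system -l component=nsx-ncp \
        -o jsonpath='{.items[0].metadata.name}')
    for cmd in "get ncp-master status" "get ncp-nsx status" "get ncp-k8s-api-server status"; do
        echo "== $cmd =="
        kubectl exec -n nsx-system "$NCP_POD" -- nsxcli -c "$cmd"
    done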

Cache Commands for the NCP Container

  • Get the internal cache for projects or namespaces
    get project-cache <project-name>
    get project-caches
    get namespace-cache <namespace-name>
    get namespace-caches

    Example:

    kubenode> get project-caches
        default:
            logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
            logical-switch:
                id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
                ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
                subnet: 10.0.0.0/24
                subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
     
        kube-system:
            logical-router: 5032b299-acad-448e-a521-19d272a08c46
            logical-switch:
                id: 85233651-602d-445d-ab10-1c84096cc22a
                ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
                subnet: 10.0.1.0/24
                subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
    
        testns:
            ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
            labels:
                ns: myns
                project: myproject
            logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
            logical-switch:
                id: 6111a99a-6e06-4faa-a131-649f10f7c815
                ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
                subnet: 50.0.2.0/24
                subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
            project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
            snat_ip: 4.4.0.3
    
    
    kubenode> get project-cache default
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
    
    
    kubenode> get namespace-caches          
        default:
            logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
            logical-switch:
                id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
                ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
                subnet: 10.0.0.0/24
                subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
    
        kube-system:
            logical-router: 5032b299-acad-448e-a521-19d272a08c46
            logical-switch:
                id: 85233651-602d-445d-ab10-1c84096cc22a
                ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
                subnet: 10.0.1.0/24
                subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
    
        testns:
            ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
            labels:
                ns: myns
                project: myproject
            logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
            logical-switch:
                id: 6111a99a-6e06-4faa-a131-649f10f7c815
                ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
                subnet: 50.0.2.0/24
                subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
            project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
            snat_ip: 4.4.0.3
    
    
    kubenode> get namespace-cache default          
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
  • Get the internal cache for pods
    get pod-cache <pod-name>
    get pod-caches

    Example:

    kubenode> get pod-caches
        nsx.default.nginx-rc-uq2lv:
            cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
            gateway_ip: 10.0.0.1
            host_vif: d6210773-5c07-4817-98db-451bd1f01937
            id: 1c8b5c52-3795-11e8-ab42-005056b198fb
            ingress_controller: False
            ip: 10.0.0.2/24
            labels:
                app: nginx
            mac: 02:50:56:00:08:00
            port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
            vlan: 1
    
        nsx.testns.web-pod-1:
            cif_id: ce134f21-6be5-43fe-afbf-aaca8c06b5cf
            gateway_ip: 50.0.2.1
            host_vif: d6210773-5c07-4817-98db-451bd1f01937
            id: 3180b521-270e-11e8-ab42-005056b198fb
            ingress_controller: False
            ip: 50.0.2.3/24
            labels:
                app: nginx-new
                role: db
                tier: cache
            mac: 02:50:56:00:20:02
            port_id: 81bc2b8e-d902-4cad-9fc1-aabdc32ecaf8
            vlan: 3
    
    
    kubenode> get pod-cache nsx.default.nginx-rc-uq2lv
        cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
        gateway_ip: 10.0.0.1
        host_vif: d6210773-5c07-4817-98db-451bd1f01937
        id: 1c8b5c52-3795-11e8-ab42-005056b198fb
        ingress_controller: False
        ip: 10.0.0.2/24
        labels:
            app: nginx
        mac: 02:50:56:00:08:00
        port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
        vlan: 1
    
  • Get all Ingress caches or a specific Ingress cache
    get ingress-caches
    get ingress-cache <ingress-name>

    Example:

    kubenode> get ingress-caches     
        nsx.default.cafe-ingress:
            ext_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
            lb_virtual_server:
                id: 895c7f43-c56e-4b67-bb4c-09d68459d416
                lb_service_id: 659eefc6-33d1-4672-a419-344b877f528e
                name: dgo2-http
                type: http
            lb_virtual_server_ip: 5.5.0.2
            name: cafe-ingress
            rules:
                host: cafe.example.com
                http:
                    paths:
                        path: /coffee
                        backend:
                            serviceName: coffee-svc
                            servicePort: 80
                        lb_rule:
                            id: 4bc16bdd-abd9-47fb-a09e-21e58b2131c3
                            name: dgo2-default-cafe-ingress/coffee
     
    
    kubenode> get ingress-cache nsx.default.cafe-ingress
        ext_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
        lb_virtual_server:
            id: 895c7f43-c56e-4b67-bb4c-09d68459d416
            lb_service_id: 659eefc6-33d1-4672-a419-344b877f528e
            name: dgo2-http
            type: http
        lb_virtual_server_ip: 5.5.0.2
        name: cafe-ingress
        rules:
            host: cafe.example.com
            http:
                paths:
                    path: /coffee
                        backend:
                            serviceName: coffee-svc
                            servicePort: 80
                        lb_rule:
                            id: 4bc16bdd-abd9-47fb-a09e-21e58b2131c3
                            name: dgo2-default-cafe-ingress/coffee
    
    
  • Get information about all Ingress controllers or a specific Ingress controller, including controllers that are disabled
    get ingress-controllers
    get ingress-controller <ingress-controller-name>

    Example:

    kubenode> get ingress-controllers
        native-load-balancer:
            ingress_virtual_server:
                http:
                    default_backend_tags:
                    id: 895c7f43-c56e-4b67-bb4c-09d68459d416
                    pool_id: None
                https_terminated:
                    default_backend_tags:
                    id: 293282eb-f1a0-471c-9e48-ba28d9d89161
                    pool_id: None
                lb_ip_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
            loadbalancer_service:
                first_avail_index: 0
                lb_services:
                    id: 659eefc6-33d1-4672-a419-344b877f528e
                    name: dgo2-bfmxi
                    t1_link_port_ip: 100.64.128.5
                    t1_router_id: cb50deb2-4460-45f2-879a-1b94592ae886
                    virtual_servers:
                        293282eb-f1a0-471c-9e48-ba28d9d89161
                        895c7f43-c56e-4b67-bb4c-09d68459d416
            ssl:
                ssl_client_profile_id: aff205bb-4db8-5a72-8d67-218cdc54d27b
            vip: 5.5.0.2
     
        nsx.default.nginx-ingress-rc-host-ed3og
            ip: 10.192.162.201
            mode: hostnetwork
            pool_id: 5813c609-5d3a-4438-b9c3-ea3cd6de52c3
    
    
    kubenode> get ingress-controller native-load-balancer
        ingress_virtual_server:
            http:
                default_backend_tags:
                id: 895c7f43-c56e-4b67-bb4c-09d68459d416
                pool_id: None
            https_terminated:
                default_backend_tags:
                id: 293282eb-f1a0-471c-9e48-ba28d9d89161
                pool_id: None
        lb_ip_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
            loadbalancer_service:
                first_avail_index: 0
                lb_services:
                    id: 659eefc6-33d1-4672-a419-344b877f528e
                    name: dgo2-bfmxi
                    t1_link_port_ip: 100.64.128.5
                    t1_router_id: cb50deb2-4460-45f2-879a-1b94592ae886
                    virtual_servers:
                        293282eb-f1a0-471c-9e48-ba28d9d89161
                        895c7f43-c56e-4b67-bb4c-09d68459d416
            ssl:
                ssl_client_profile_id: aff205bb-4db8-5a72-8d67-218cdc54d27b
            vip: 5.5.0.2
    
  • Get all network policy caches or a specific cache
    get network-policy-caches
    get network-policy-cache <network-policy-name>

    Example:

    kubenode> get network-policy-caches
        nsx.testns.allow-tcp-80:
            dest_labels: None
            dest_pods:
                50.0.2.3
            match_expressions:
                key: tier
                operator: In
                values:
                    cache
            name: allow-tcp-80
            np_dest_ip_set_ids:
                22f82d76-004f-4d12-9504-ce1cb9c8aa00
                np_except_ip_set_ids:
            np_ip_set_ids:
                14f7f825-f1a0-408f-bbd9-bb2f75d44666
            np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
            np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
            ns_name: testns
            src_egress_rules: None
            src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
            src_pods:
                50.0.2.0/24
            src_rules:
                from:
                    namespaceSelector:
                        matchExpressions:
                            key: tier
                            operator: DoesNotExist
                        matchLabels:
                            ns: myns
                ports:
                    port: 80
                    protocol: TCP
            src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1
    
    
    kubenode> get network-policy-cache nsx.testns.allow-tcp-80
        dest_labels: None
        dest_pods:
            50.0.2.3
        match_expressions:
            key: tier
            operator: In
            values:
                cache
        name: allow-tcp-80
        np_dest_ip_set_ids:
            22f82d76-004f-4d12-9504-ce1cb9c8aa00
            np_except_ip_set_ids:
        np_ip_set_ids:
            14f7f825-f1a0-408f-bbd9-bb2f75d44666
        np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
        np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
        ns_name: testns
        src_egress_rules: None
        src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
        src_pods:
            50.0.2.0/24
        src_rules:
            from:
                namespaceSelector:
                    matchExpressions:
                        key: tier
                        operator: DoesNotExist
                    matchLabels:
                        ns: myns
            ports:
                port: 80
                protocol: TCP
        src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1
    
  • Get all ASG caches or a specific ASG cache
    get asg-caches
    get asg-cache <asg-ID>

    Example:

    node> get asg-caches
        edc04715-d04c-4e63-abbc-db601a668db6:
            fws_id: 3c66f40a-5378-46d7-a7e2-bee4ba72a4cc
            name: org-85_tcp_80_asg
            rules:
                destinations:
                    66.10.10.0/24
                ports:
                    80
                protocol: tcp
                rule_id: 4359
            running_default: False
            running_spaces:
                75bc164d-1214-46f9-80bb-456a8fbccbfd
            staging_default: False
            staging_spaces:
    
    
    node> get asg-cache edc04715-d04c-4e63-abbc-db601a668db6
        fws_id: 3c66f40a-5378-46d7-a7e2-bee4ba72a4cc
        name: org-85_tcp_80_asg
        rules:
            destinations:
                66.10.10.0/24
            ports:
                80
            protocol: tcp
            rule_id: 4359
        running_default: False
        running_spaces:
            75bc164d-1214-46f9-80bb-456a8fbccbfd
        staging_default: False
        staging_spaces:
    
  • Get all org caches or a specific org cache
    get org-caches
    get org-cache <org-ID>

    Example:

    node> get org-caches
        ebb8b4f9-a40f-4122-bf21-65c40f575aca:
            ext_pool_id: 9208a8b8-57d7-4582-9c1f-7a7cefa104f5
            isolation:
                isolation_section_id: d6e7ff95-4737-4e34-91d4-27601897353f
            logical-router: 94a414a2-551e-4444-bae6-3d79901a165f
            logical-switch:
                id: d74807e8-8f74-4575-b26b-87d4fdbafd3c
                ip_pool_id: 1b60f16f-4a30-4a3d-93cc-bfb08a5e3e02
                subnet: 50.0.48.0/24
                subnet_id: a458d3aa-bea9-4684-9957-d0ce80d11788
            name: org-50
            snat_ip: 70.0.0.49
            spaces:
                e8ab7aa0-d4e3-4458-a896-f33177557851
    
    
    node> get org-cache ebb8b4f9-a40f-4122-bf21-65c40f575aca
        ext_pool_id: 9208a8b8-57d7-4582-9c1f-7a7cefa104f5
        isolation:
            isolation_section_id: d6e7ff95-4737-4e34-91d4-27601897353f
        logical-router: 94a414a2-551e-4444-bae6-3d79901a165f
        logical-switch:
            id: d74807e8-8f74-4575-b26b-87d4fdbafd3c
            ip_pool_id: 1b60f16f-4a30-4a3d-93cc-bfb08a5e3e02
            subnet: 50.0.48.0/24
            subnet_id: a458d3aa-bea9-4684-9957-d0ce80d11788
        name: org-50
        snat_ip: 70.0.0.49
        spaces:
            e8ab7aa0-d4e3-4458-a896-f33177557851
    
  • Get all space caches or a specific space cache
    get space-caches
    get space-cache <space-ID>

    Example:

    node> get space-caches
        global_security_group:
            name: global_security_group
            running_nsgroup: 226d4292-47fb-4c2e-a118-449818d8fa98
            staging_nsgroup: 7ebbf7f5-38c9-43a3-9292-682056722836
    
        7870d134-7997-4373-b665-b6a910413c47:
            name: test-space1
            org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
            running_nsgroup: 4a3d9bcc-be36-47ae-bff8-96448fecf307
            running_security_groups:
                aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
            staging_security_groups:
                aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
    
    
    node> get space-cache 7870d134-7997-4373-b665-b6a910413c47
        name: test-space1
        org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
        running_nsgroup: 4a3d9bcc-be36-47ae-bff8-96448fecf307
        running_security_groups:
            aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
        staging_security_groups:
            aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
    
  • Get all app caches or a specific app cache
    get app-caches
    get app-cache <app-ID>

    Example:

    node> get app-caches
         aff2b12b-b425-4d9f-b8e6-b6308644efa8:
             instances:
                 b72199cc-e1ab-49bf-506d-478d:
                 app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                 cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
                 cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
                 gateway_ip: 192.168.5.1
                 host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
                 id: b72199cc-e1ab-49bf-506d-478d
                 index: 0
                 ip: 192.168.5.4/24
                 last_updated_time: 1522965828.45
                 mac: 02:50:56:00:60:02
                 port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
                 state: RUNNING
                 vlan: 3
             name: hello2
             org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
             space_id: 7870d134-7997-4373-b665-b6a910413c47
    
    
    node> get app-cache aff2b12b-b425-4d9f-b8e6-b6308644efa8
        instances:
            b72199cc-e1ab-49bf-506d-478d:
            app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
            cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
            cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
            gateway_ip: 192.168.5.1
            host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
            id: b72199cc-e1ab-49bf-506d-478d
            index: 0
            ip: 192.168.5.4/24
            last_updated_time: 1522965828.45
            mac: 02:50:56:00:60:02
            port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
            state: RUNNING
            vlan: 3
        name: hello2
        org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
        space_id: 7870d134-7997-4373-b665-b6a910413c47
    
  • Get all instance caches for an app or a specific instance cache
    get instance-caches <app-ID>
    get instance-cache <app-ID> <instance-ID>

    Example:

    node> get instance-caches aff2b12b-b425-4d9f-b8e6-b6308644efa8
        b72199cc-e1ab-49bf-506d-478d:
            app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
            cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
            cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
            gateway_ip: 192.168.5.1
            host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
            id: b72199cc-e1ab-49bf-506d-478d
            index: 0
            ip: 192.168.5.4/24
            last_updated_time: 1522965828.45
            mac: 02:50:56:00:60:02
            port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
            state: RUNNING
            vlan: 3
    
    
    node> get instance-cache aff2b12b-b425-4d9f-b8e6-b6308644efa8 b72199cc-e1ab-49bf-506d-478d
        app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
        cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
        cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
        gateway_ip: 192.168.5.1
        host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
        id: b72199cc-e1ab-49bf-506d-478d
        index: 0
        ip: 192.168.5.4/24
        last_updated_time: 1522965828.45
        mac: 02:50:56:00:60:02
        port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
        state: RUNNING
        vlan: 3
    
  • Get all policy caches
    get policy-caches

    Example:

    node> get policy-caches
        aff2b12b-b425-4d9f-b8e6-b6308644efa8:
            fws_id: 3fe27725-f139-479a-b83b-8576c9aedbef
            nsg_id: 30583a27-9b56-49c1-a534-4040f91cc333
            rules:
                8272:
                    dst_app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                    ports: 8382
                    protocol: tcp
                    src_app_id: f582ec4d-3a13-440a-afbd-97b7bfae21d1
    
        f582ec4d-3a13-440a-afbd-97b7bfae21d1:
            nsg_id: d24b9f77-e2e0-4fba-b258-893223683aa6
            rules:
                8272:
                    dst_app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                    ports: 8382
                    protocol: tcp
                    src_app_id: f582ec4d-3a13-440a-afbd-97b7bfae21d1
    

Support Commands for the NCP Container

  • Save the NCP support bundle in the filestore

    The support bundle consists of the log files for all the containers in pods with the label tier:nsx-networking. The bundle file is in tgz format and is saved in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the bundle file to a remote site. An alternative that pulls a saved file directly from the pod with kubectl cp is sketched after this list.

    get support-bundle file <filename>

    Example:

    kubenode> get support-bundle file foo
    Bundle file foo created in tgz format
    kubenode> copy file foo url scp://[email protected]:/tmp
  • Save the NCP logs in the filestore

    The log file is saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the file to a remote site.

    get ncp-log file <filename>

    Example:

    kubenode> get ncp-log file foo
    Log file foo created in tgz format
  • Save the node agent logs in the filestore

    Save the node agent logs from one node or from all the nodes. The logs are saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the file to a remote site.

    get node-agent-log file <filename>
    get node-agent-log file <filename> <node-name>

    Example:

    kubenode> get node-agent-log file foo
    Log file foo created in tgz format
  • Get and set the log level, globally or for a specific component.

    The available log levels are NOTSET, DEBUG, INFO, WARNING, ERROR, and CRITICAL.

    The available components are nsx_ujo.ncp, nsx_ujo.ncp.k8s, nsx_ujo.ncp.pcf, vmware_nsxlib.v3, nsxrpc, and nsx_ujo.ncp.nsx.

    get ncp-log-level [component]
    set ncp-log-level <log-level> [component]

    Example:

    kubenode> get ncp-log-level
    NCP log level is INFO
    
    kubenode> get ncp-log-level nsx_ujo.ncp
    nsx_ujo.ncp log level is INFO
     
    kubenode> set ncp-log-level DEBUG
    NCP log level is changed to DEBUG
    
    kubenode> set ncp-log-level DEBUG nsx_ujo.ncp
    nsx_ujo.ncp log level has been changed to DEBUG
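
As noted in the support bundle item above, files written to the CLI filestore can also be pulled straight out of the pod instead of (or in addition to) using the CLI file-store copy command. A minimal sketch, assuming the NCP pod runs in the nsx-system namespace, the container is named nsx-ncp, and the file was saved to the default /var/vmware/nsx/file-store directory:

    # Copy a saved bundle or log file from the NCP pod to the local machine
    # (pod name, namespace, and container name are assumptions)
    kubectl cp nsx-system/<ncp-pod-name>:/var/vmware/nsx/file-store/foo ./foo -c nsx-ncp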

Status Commands for the NSX Node Agent Container

  • Show the connection status between the node agent and HyperBus on this node.
    get node-agent-hyperbus status

    Example:

    kubenode> get node-agent-hyperbus status
    HyperBus status: Healthy

Cache Commands for the NSX Node Agent Container

  • Get the internal cache for the NSX node agent container.
    get container-cache <container-name>
    get container-caches

    Example:

    kubenode> get container-caches
        cif104:
            ip: 192.168.0.14/32
            mac: 50:01:01:01:01:14
            gateway_ip: 169.254.1.254/16
            vlan_id: 104
    
    
    kubenode> get container-cache cif104
        ip: 192.168.0.14/32
        mac: 50:01:01:01:01:14
        gateway_ip: 169.254.1.254/16
        vlan_id: 104
    

Status Commands for the NSX Kube Proxy Container

  • Show the connection status between the Kube Proxy and the Kubernetes API server
    get ncp-k8s-api-server status

    Example:

    kubenode> get kube-proxy-k8s-api-server status
    Kubernetes ApiServer status: Healthy
  • Show the Kube Proxy watcher status
    get kube-proxy-watcher <watcher-name>
    get kube-proxy-watchers

    Example:

    kubenode> get kube-proxy-watchers
        endpoint:
            Average event processing time: 15 msec (in past 3600-sec window)
            Current watcher started time: May 01 2017 15:06:24 PDT
            Number of events processed: 90 (in past 3600-sec window)
            Total events processed by current watcher: 90
            Total events processed since watcher thread created: 90
            Total watcher recycle count: 0
            Watcher thread created time: May 01 2017 15:06:24 PDT
            Watcher thread status: Up
    
         service:
            Average event processing time: 8 msec (in past 3600-sec window)
            Current watcher started time: May 01 2017 15:06:24 PDT
            Number of events processed: 2 (in past 3600-sec window)
            Total events processed by current watcher: 2
            Total events processed since watcher thread created: 2
            Total watcher recycle count: 0
            Watcher thread created time: May 01 2017 15:06:24 PDT
            Watcher thread status: Up
    
    
    kubenode> get kube-proxy-watcher endpoint
        Average event processing time: 15 msec (in past 3600-sec window)
        Current watcher started time: May 01 2017 15:06:24 PDT
        Number of events processed: 90 (in past 3600-sec window)
        Total events processed by current watcher: 90
        Total events processed since watcher thread created: 90
        Total watcher recycle count: 0
        Watcher thread created time: May 01 2017 15:06:24 PDT
        Watcher thread status: Up
    
  • Dump the OVS flows on the node (an example of filtering this output for a single service is sketched after this list)
    dump ovs-flows

    Example:

    kubenode> dump ovs-flows
        NXST_FLOW reply (xid=0x4):
        cookie=0x0, duration=8.876s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip actions=ct(table=1)
        cookie=0x0, duration=8.898s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=0 actions=NORMAL
        cookie=0x0, duration=8.759s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,tcp,nw_dst=10.96.0.1,tp_dst=443 actions=mod_tp_dst:443
        cookie=0x0, duration=8.719s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip,nw_dst=10.96.0.10 actions=drop
        cookie=0x0, duration=8.819s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=90,ip,in_port=1 actions=ct(table=2,nat)
        cookie=0x0, duration=8.799s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=80,ip actions=NORMAL
        cookie=0x0, duration=8.856s, table=2, n_packets=0, n_bytes=0, idle_age=8, actions=NORMAL
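
The flow dump can be long on a busy node; filtering it for a single ClusterIP (for example 10.96.0.1 from the dump above) is often enough to confirm that a service is programmed. A minimal sketch, assuming the nsx-kube-proxy container runs in the nsx-node-agent pod in the nsx-system namespace and that your nsxcli build accepts the -c option for running a single command non-interactively:

    # Dump the OVS flows from the nsx-kube-proxy container and keep only the
    # entries for one ClusterIP (pod, namespace, container name, and the -c
    # option are assumptions; adjust to your deployment)
    kubectl exec -n nsx-system <nsx-node-agent-pod> -c nsx-kube-proxy \
        -- nsxcli -c "dump ovs-flows" | grep "nw_dst=10.96.0.1"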