To run CLI commands, log in to the NSX Container Plugin container, open a terminal, and run the nsxcli command.

You can also get a CLI prompt by running the following command on a node:
  kubectl exec -it <pod name> -- nsxcli
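
Where this needs scripting (for example, periodic health checks), the kubectl invocation can be wrapped in a small helper. The sketch below is illustrative only: the pod name `nsx-ncp-0` and namespace `nsx-system` are hypothetical placeholders for your deployment, and it assumes nsxcli accepts a `-c <command>` flag to run a single command non-interactively, as the NSX appliance CLI does; if not, drop `-c` and use the interactive prompt.

```python
import subprocess

def nsxcli_argv(pod, command, namespace="nsx-system"):
    # Build the kubectl argv that runs one nsxcli command inside a pod.
    # "nsx-system" is only a common convention, not mandated by this document.
    return [
        "kubectl", "exec", "-n", namespace, pod,
        "--", "nsxcli", "-c", command,
    ]

def run_nsxcli(pod, command, namespace="nsx-system"):
    # Execute the command and return its stdout; requires kubectl in PATH
    # and a kubeconfig with access to the pod.
    result = subprocess.run(nsxcli_argv(pod, command, namespace),
                            capture_output=True, text=True, check=True)
    return result.stdout
```

For example, `run_nsxcli("nsx-ncp-0", "get version")` would return the same text as typing `get version` at the CLI prompt.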
Table 1. CLI Commands for All Containers
Command
copy file <filename> url <url>
copy url <url> [file <filename>]
del file <filename>
exit
get cli-output datetime
get command history
get core-dumps
get file <filename>
get files
get version
help
list
set cli-output datetime <datetime-arg>
set history limit <history-size>
Table 2. CLI Commands for the NCP Container
Type     Command                                                          Note
Status   get ncp-master status                                            For Kubernetes and TAS.
Status   get ncp-nsx status                                               For Kubernetes and TAS.
Status   get ncp-watcher <watcher-name>                                   For Kubernetes and TAS.
Status   get ncp-watchers                                                 For Kubernetes and TAS.
Status   get ncp-k8s-api-server status                                    For Kubernetes only.
Status   check projects                                                   For Kubernetes only.
Status   check project <project-name>                                     For Kubernetes only.
Status   get ncp-restore status                                           For Kubernetes only.
Status   get ncp-bbs status                                               For TAS only.
Status   get ncp-capi status                                              For TAS only.
Status   get ncp-policy-server status                                     For TAS only.
Cache    get project-caches                                               For Kubernetes only.
Cache    get project-cache <project-name>                                 For Kubernetes only.
Cache    get namespace-caches                                             For Kubernetes only.
Cache    get namespace-cache <namespace-name>                             For Kubernetes only.
Cache    get pod-caches                                                   For Kubernetes only.
Cache    get pod-cache <pod-name>                                         For Kubernetes only.
Cache    get ingress-caches                                               For Kubernetes only.
Cache    get ingress-cache <ingress-name>                                 For Kubernetes only.
Cache    get ingress-cache <ingress-name> <namespace-name>                For Kubernetes only.
Cache    get ingress-controllers                                          For Kubernetes only.
Cache    get ingress-controller <ingress-controller-name>                 For Kubernetes only.
Cache    get network-policy-caches                                        For Kubernetes only.
Cache    get network-policy-cache <network-policy-name>                   For Kubernetes only.
Cache    get network-policy-cache <network-policy-name> <namespace-name>  For Kubernetes only.
Cache    get service-cache <service-id>                                   For Kubernetes only.
Cache    get service-caches                                               For Kubernetes only.
Cache    get asg-caches                                                   For TAS only.
Cache    get asg-cache <asg-ID>                                           For TAS only.
Cache    get org-caches                                                   For TAS only.
Cache    get org-cache <org-ID>                                           For TAS only.
Cache    get space-caches                                                 For TAS only.
Cache    get space-cache <space-ID>                                       For TAS only.
Cache    get app-caches                                                   For TAS only.
Cache    get app-cache <app-ID>                                           For TAS only.
Cache    get instance-caches <app-ID>                                     For TAS only.
Cache    get instance-cache <app-ID> <instance-ID>                        For TAS only.
Cache    get policy-caches                                                For TAS only.
Support  get ncp-log file <filename>                                      For Kubernetes and TAS.
Support  get ncp-log-level [component]                                    For Kubernetes and TAS.
Support  set ncp-log-level <log-level> [component]                        For Kubernetes and TAS.
Support  get support-bundle file <filename>                               For Kubernetes only.
Support  get node-agent-log file <filename>                               For Kubernetes only.
Support  get node-agent-log file <filename> <node-name>                   For Kubernetes only.
Table 3. CLI Commands for the NSX Node Agent Container
Type     Command
Status   get node-agent-hyperbus status
Status   get node-agent-ovs status
Support  get node-agent-log-level
Support  set node-agent-log-level <log-level>
Cache    get container-cache <container-name>
Cache    get container-caches

Table 4. CLI Commands for the NSX Kube Proxy Container
Type     Command
Status   get kube-proxy-k8s-api-server status
Status   get kube-proxy-watcher <watcher-name>
Status   get kube-proxy-watchers
Support  get kube-proxy-log-level
Support  set kube-proxy-log-level <log-level>

Examples of some commands for all containers

  • Copy a local file to a remote destination.
    copy file <filename> url <url>

    Example:

    container> copy file support-bundle-0.tgz url scp://[email protected]/home/admin/
    [email protected]'s password:
    container>
  • Copy a remote file to a local file.
    copy url <url> [file <filename>]

    Example:

    container> copy url scp://[email protected]/home/admin/support-bundle-0.tgz
    [email protected]'s password:
    container>

Status commands for the NCP container

  • Show the status of the NCP master
    get ncp-master status

    Example:

    kubenode> get ncp-master status
    This instance is not the NCP master
    Current NCP Master id is a4h83eh1-b8dd-4e74-c71c-cbb7cc9c4c1c
    Last master update at Wed Oct 25 22:46:40 2017
  • Show the connection status between NCP and NSX Manager
    get ncp-nsx status

    Example:

    kubenode> get ncp-nsx status
    NSX Manager status: Healthy
  • Show the watcher status for ingress, namespace, pod, and service
    get ncp-watchers
    get ncp-watcher <watcher-name>

    Example:

    kubenode> get ncp-watchers
        pod:
            Average event processing time: 1145 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 1 (in past 3600-sec window)
            Total events processed by current watcher: 1
            Total events processed since watcher thread created: 1
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
     
        namespace:
            Average event processing time: 68 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 2 (in past 3600-sec window)
            Total events processed by current watcher: 2
            Total events processed since watcher thread created: 2
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
     
        ingress:
            Average event processing time: 0 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 0 (in past 3600-sec window)
            Total events processed by current watcher: 0
            Total events processed since watcher thread created: 0
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
     
        service:
            Average event processing time: 3 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 1 (in past 3600-sec window)
            Total events processed by current watcher: 1
            Total events processed since watcher thread created: 1
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
    
    
    kubenode> get ncp-watcher pod
        Average event processing time: 1174 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:47:35 PST
        Number of events processed: 1 (in past 3600-sec window)
        Total events processed by current watcher: 1
        Total events processed since watcher thread created: 1
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:47:35 PST
        Watcher thread status: Up
  • Show the connection status between NCP and the Kubernetes API server
    get ncp-k8s-api-server status

    Example:

    kubenode> get ncp-k8s-api-server status
    Kubernetes ApiServer status: Healthy
  • Check all projects or a specific one
    check projects
    check project <project-name>

    Example:

    kubenode> check projects
        default:
            Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
            Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
    
        ns1:
            Router 8accc9cd-9883-45f6-81b3-0d1fb2583180 is missing
    
    kubenode> check project default
        Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
        Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
  • Check the connection status between NCP and TAS BBS
    get ncp-bbs status

    Example:

    node> get ncp-bbs status
    BBS Server status: Healthy
  • Check the connection status between NCP and TAS CAPI
    get ncp-capi status

    Example:

    node> get ncp-capi status
    CAPI Server status: Healthy
  • Check the connection status between NCP and the TAS policy server
    get ncp-policy-server status

    Example:

    node> get ncp-policy-server status
    Policy Server status: Healthy
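
The status outputs above all share a simple `<component> status: <state>` shape. A monitoring script can therefore parse them with a few lines of Python; the sketch below is written against the sample outputs shown in this section and is not an official API.

```python
def parse_status_line(line):
    # Split "<component> status: <state>" into a (component, state) tuple;
    # return None for lines that do not match that shape.
    if " status: " not in line:
        return None
    component, state = line.split(" status: ", 1)
    return component.strip(), state.strip()

def is_healthy(output):
    # True only if the output contains at least one status line and every
    # status line reports "Healthy".
    parsed = [p for p in map(parse_status_line, output.splitlines()) if p]
    return bool(parsed) and all(state == "Healthy" for _, state in parsed)
```

For example, `is_healthy("NSX Manager status: Healthy")` is True, while any other state, or an output with no status lines at all, yields False.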

Cache commands for the NCP container

  • Get the internal cache for projects or namespaces
    get project-cache <project-name>
    get project-caches
    get namespace-cache <namespace-name>
    get namespace-caches

    Example:

    kubenode> get project-caches
        default:
            logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
            logical-switch:
                id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
                ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
                subnet: 10.0.0.0/24
                subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
     
        kube-system:
            logical-router: 5032b299-acad-448e-a521-19d272a08c46
            logical-switch:
                id: 85233651-602d-445d-ab10-1c84096cc22a
                ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
                subnet: 10.0.1.0/24
                subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
    
        testns:
            ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
            labels:
                ns: myns
                project: myproject
            logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
            logical-switch:
                id: 6111a99a-6e06-4faa-a131-649f10f7c815
                ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
                subnet: 50.0.2.0/24
                subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
            project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
            snat_ip: 4.4.0.3
    
    
    kubenode> get project-cache default
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
    
    
    kubenode> get namespace-caches          
        default:
            logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
            logical-switch:
                id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
                ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
                subnet: 10.0.0.0/24
                subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
    
        kube-system:
            logical-router: 5032b299-acad-448e-a521-19d272a08c46
            logical-switch:
                id: 85233651-602d-445d-ab10-1c84096cc22a
                ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
                subnet: 10.0.1.0/24
                subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
    
        testns:
            ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
            labels:
                ns: myns
                project: myproject
            logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
            logical-switch:
                id: 6111a99a-6e06-4faa-a131-649f10f7c815
                ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
                subnet: 50.0.2.0/24
                subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
            project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
            snat_ip: 4.4.0.3
    
    
    kubenode> get namespace-cache default          
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
  • Get the internal cache for pods
    get pod-cache <pod-name>
    get pod-caches

    Example:

    kubenode> get pod-caches
        nsx.default.nginx-rc-uq2lv:
            cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
            gateway_ip: 10.0.0.1
            host_vif: d6210773-5c07-4817-98db-451bd1f01937
            id: 1c8b5c52-3795-11e8-ab42-005056b198fb
            ingress_controller: False
            ip: 10.0.0.2/24
            labels:
                app: nginx
            mac: 02:50:56:00:08:00
            port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
            vlan: 1
    
        nsx.testns.web-pod-1:
            cif_id: ce134f21-6be5-43fe-afbf-aaca8c06b5cf
            gateway_ip: 50.0.2.1
            host_vif: d6210773-5c07-4817-98db-451bd1f01937
            id: 3180b521-270e-11e8-ab42-005056b198fb
            ingress_controller: False
            ip: 50.0.2.3/24
            labels:
                app: nginx-new
                role: db
                tier: cache
            mac: 02:50:56:00:20:02
            port_id: 81bc2b8e-d902-4cad-9fc1-aabdc32ecaf8
            vlan: 3
    
    
    kubenode> get pod-cache nsx.default.nginx-rc-uq2lv
        cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
        gateway_ip: 10.0.0.1
        host_vif: d6210773-5c07-4817-98db-451bd1f01937
        id: 1c8b5c52-3795-11e8-ab42-005056b198fb
        ingress_controller: False
        ip: 10.0.0.2/24
        labels:
            app: nginx
        mac: 02:50:56:00:08:00
        port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
        vlan: 1
    
  • Get all ingress caches or a specific one
    get ingress-caches
    get ingress-cache <ingress-name>
    get ingress-cache <ingress-name> <namespace-name>

    Example:

    kubenode> get ingress-caches     
        nsx.default.cafe-ingress:
            ext_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
            lb_virtual_server:
                id: 895c7f43-c56e-4b67-bb4c-09d68459d416
                lb_service_id: 659eefc6-33d1-4672-a419-344b877f528e
                name: dgo2-http
                type: http
            lb_virtual_server_ip: 5.5.0.2
            name: cafe-ingress
            rules:
                host: cafe.example.com
                http:
                    paths:
                        path: /coffee
                        backend:
                            serviceName: coffee-svc
                            servicePort: 80
                        lb_rule:
                            id: 4bc16bdd-abd9-47fb-a09e-21e58b2131c3
                            name: dgo2-default-cafe-ingress/coffee
     
    
    kubenode> get ingress-cache nsx.default.cafe-ingress
        ext_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
        lb_virtual_server:
            id: 895c7f43-c56e-4b67-bb4c-09d68459d416
            lb_service_id: 659eefc6-33d1-4672-a419-344b877f528e
            name: dgo2-http
            type: http
        lb_virtual_server_ip: 5.5.0.2
        name: cafe-ingress
        rules:
            host: cafe.example.com
            http:
                paths:
                    path: /coffee
                    backend:
                        serviceName: coffee-svc
                        servicePort: 80
                    lb_rule:
                        id: 4bc16bdd-abd9-47fb-a09e-21e58b2131c3
                        name: dgo2-default-cafe-ingress/coffee
    
    
    kubenode> get ingress-cache tea-ingress tea-ns
        creation_timestamp: 2019-05-02T22:00:15Z
        default_backend: None
        labels: None
        loadbalancer_ingress:
            ip: 4.4.0.1
            ip: 100.64.240.3
        name: tea-ingress
        namespace: tea-ns
        rules:
            host: drink.example.com
            http:
                paths:
                    backend:
                        lb_pool:
                            28b36074-8bcc-43ed-bf41-6e3cf4b6fc68:
                                algorithm: ROUND_ROBIN
                                members: None
                        service:
                            name: tea-svc
                            port:
                                number: 80
                    nsx_lb_rule:
                        36861dc4-7488-46a8-9820-ba846c97cb09:
                            phase: HTTP_FORWARDING
                    path: /tea
        tls:
            hosts:
                drink.example.com
            nsx_certificate:
            secretName: drink-secret
        uid: aa64e1aa-6d25-11e9-b86c-0050568c8767
    
  • Get information about all ingress controllers or a specific one, including controllers that are disabled
    get ingress-controllers
    get ingress-controller <ingress-controller-name>

    Example:

    kubenode> get ingress-controllers
        native-load-balancer:
            ingress_virtual_server:
                http:
                    default_backend_tags:
                    id: 895c7f43-c56e-4b67-bb4c-09d68459d416
                    pool_id: None
                https_terminated:
                    default_backend_tags:
                    id: 293282eb-f1a0-471c-9e48-ba28d9d89161
                    pool_id: None
                lb_ip_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
            loadbalancer_service:
                first_avail_index: 0
                lb_services:
                    id: 659eefc6-33d1-4672-a419-344b877f528e
                    name: dgo2-bfmxi
                    t1_link_port_ip: 100.64.128.5
                    t1_router_id: cb50deb2-4460-45f2-879a-1b94592ae886
                    virtual_servers:
                        293282eb-f1a0-471c-9e48-ba28d9d89161
                        895c7f43-c56e-4b67-bb4c-09d68459d416
            ssl:
                ssl_client_profile_id: aff205bb-4db8-5a72-8d67-218cdc54d27b
            vip: 5.5.0.2
     
        nsx.default.nginx-ingress-rc-host-ed3og:
            ip: 10.192.162.201
            mode: hostnetwork
            pool_id: 5813c609-5d3a-4438-b9c3-ea3cd6de52c3
    
    
    kubenode> get ingress-controller native-load-balancer
        ingress_virtual_server:
            http:
                default_backend_tags:
                id: 895c7f43-c56e-4b67-bb4c-09d68459d416
                pool_id: None
            https_terminated:
                default_backend_tags:
                id: 293282eb-f1a0-471c-9e48-ba28d9d89161
                pool_id: None
            lb_ip_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
        loadbalancer_service:
            first_avail_index: 0
            lb_services:
                id: 659eefc6-33d1-4672-a419-344b877f528e
                name: dgo2-bfmxi
                t1_link_port_ip: 100.64.128.5
                t1_router_id: cb50deb2-4460-45f2-879a-1b94592ae886
                virtual_servers:
                    293282eb-f1a0-471c-9e48-ba28d9d89161
                    895c7f43-c56e-4b67-bb4c-09d68459d416
        ssl:
            ssl_client_profile_id: aff205bb-4db8-5a72-8d67-218cdc54d27b
        vip: 5.5.0.2
    
  • Get network policy caches or a specific one
    get network-policy-caches
    get network-policy-cache <network-policy-name>
    get network-policy-cache <network-policy-name> <namespace-name>

    Example:

    kubenode> get network-policy-caches
        nsx.testns.allow-tcp-80:
            dest_labels: None
            dest_pods:
                50.0.2.3
            match_expressions:
                key: tier
                operator: In
                values:
                    cache
            name: allow-tcp-80
            np_dest_ip_set_ids:
                22f82d76-004f-4d12-9504-ce1cb9c8aa00
            np_except_ip_set_ids:
            np_ip_set_ids:
                14f7f825-f1a0-408f-bbd9-bb2f75d44666
            np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
            np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
            ns_name: testns
            src_egress_rules: None
            src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
            src_pods:
                50.0.2.0/24
            src_rules:
                from:
                    namespaceSelector:
                        matchExpressions:
                            key: tier
                            operator: DoesNotExist
                        matchLabels:
                            ns: myns
                ports:
                    port: 80
                    protocol: TCP
            src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1
    
    
    kubenode> get network-policy-cache nsx.testns.allow-tcp-80
        dest_labels: None
        dest_pods:
            50.0.2.3
        match_expressions:
            key: tier
            operator: In
            values:
                cache
        name: allow-tcp-80
        np_dest_ip_set_ids:
            22f82d76-004f-4d12-9504-ce1cb9c8aa00
        np_except_ip_set_ids:
        np_ip_set_ids:
            14f7f825-f1a0-408f-bbd9-bb2f75d44666
        np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
        np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
        ns_name: testns
        src_egress_rules: None
        src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
        src_pods:
            50.0.2.0/24
        src_rules:
            from:
                namespaceSelector:
                    matchExpressions:
                        key: tier
                        operator: DoesNotExist
                    matchLabels:
                        ns: myns
            ports:
                port: 80
                protocol: TCP
        src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1
    
    
    kubenode> get network-policy-cache test-network-policy playground
        egress_rules:
            ports:
                port: 5978
                protocol: TCP
            to:
                ipBlock:
                    cidr: 10.0.0.0/24
        id:
            playground
            test-network-policy
        ingress_rules:
            from:
                ipBlock:
                    cidr: 192.167.0.1/24
                    except:
                        192.167.0.22/30
                namespaceSelector:
                    matchLabels:
                        project: playground3
                podSelector:
                    matchLabels:
                        role: testing
        isolation_section:
            id: a2746857-59cd-48ed-90d7-fd0a26395d68
            labels:
                external_id: a815e70a-0646-11ea-940b-0050569e8e8f
            name: is-k8scluster-playground-test-network-policy
            rules:
                1049:
                    action: DROP
                    destinations:
                        is_valid: True
                        target_display_name: tgt-k8scluster-playground-test-network-policy
                        target_id: 2f01d2f1-7496-4e67-a856-4829c56923cb
                        target_type: IPSet
                    direction: IN
                    name: ir-k8scluster-playground-test-network-policy
                    sources: None
        name: test-network-policy
        namespace: playground
        pod_match_expression:
            operator: match_labels
            values:
                role: testing2
        policy_section:
            id: 0fc97658-0588-4af7-b958-1eaf6141e817
            labels:
                external_id: a815e70a-0646-11ea-940b-0050569e8e8f
            name: k8scluster-playground-test-network-policy
            rules:
                1053:
                    action: ALLOW
                    destinations:
                        is_valid: True
                        target_display_name: tgt-k8scluster-playground-test-network-policy
                        target_id: 2f01d2f1-7496-4e67-a856-4829c56923cb
                        target_type: IPSet
                    direction: IN
                    name: ir-k8scluster-playground-test-network-policy-all
                    sources:
                        is_valid: True
                        target_display_name: src-k8scluster-playground-test-network-policy-all
                        target_id: b0573576-ff87-49c2-8279-79858c6329b4
                        target_type: IPSet
        policy_types:
            ingress
            egress
        target_ip_set:
            id: 2f01d2f1-7496-4e67-a856-4829c56923cb
            ip_addresses:
                192.168.0.35
                192.168.0.37
            labels:
                match_expr_hash: 9915edba71061de777bd58ca054745debc14dcf5
                role: testing2
            name: tgt-k8scluster-playground-test-network-policy
    
  • Get all ASG caches or a specific one
    get asg-caches
    get asg-cache <asg-ID>

    Example:

    node> get asg-caches
        edc04715-d04c-4e63-abbc-db601a668db6:
            fws_id: 3c66f40a-5378-46d7-a7e2-bee4ba72a4cc
            name: org-85_tcp_80_asg
            rules:
                destinations:
                    66.10.10.0/24
                ports:
                    80
                protocol: tcp
                rule_id: 4359
            running_default: False
            running_spaces:
                75bc164d-1214-46f9-80bb-456a8fbccbfd
            staging_default: False
            staging_spaces:
    
    
    node> get asg-cache edc04715-d04c-4e63-abbc-db601a668db6
        fws_id: 3c66f40a-5378-46d7-a7e2-bee4ba72a4cc
        name: org-85_tcp_80_asg
        rules:
            destinations:
                66.10.10.0/24
            ports:
                80
            protocol: tcp
            rule_id: 4359
        running_default: False
        running_spaces:
            75bc164d-1214-46f9-80bb-456a8fbccbfd
        staging_default: False
        staging_spaces:
    
  • Get all org caches or a specific one
    get org-caches
    get org-cache <org-ID>

    Example:

    node> get org-caches
        ebb8b4f9-a40f-4122-bf21-65c40f575aca:
            ext_pool_id: 9208a8b8-57d7-4582-9c1f-7a7cefa104f5
            isolation:
                isolation_section_id: d6e7ff95-4737-4e34-91d4-27601897353f
            logical-router: 94a414a2-551e-4444-bae6-3d79901a165f
            logical-switch:
                id: d74807e8-8f74-4575-b26b-87d4fdbafd3c
                ip_pool_id: 1b60f16f-4a30-4a3d-93cc-bfb08a5e3e02
                subnet: 50.0.48.0/24
                subnet_id: a458d3aa-bea9-4684-9957-d0ce80d11788
            name: org-50
            snat_ip: 70.0.0.49
            spaces:
                e8ab7aa0-d4e3-4458-a896-f33177557851
    
    
    node> get org-cache ebb8b4f9-a40f-4122-bf21-65c40f575aca
        ext_pool_id: 9208a8b8-57d7-4582-9c1f-7a7cefa104f5
        isolation:
            isolation_section_id: d6e7ff95-4737-4e34-91d4-27601897353f
        logical-router: 94a414a2-551e-4444-bae6-3d79901a165f
        logical-switch:
            id: d74807e8-8f74-4575-b26b-87d4fdbafd3c
            ip_pool_id: 1b60f16f-4a30-4a3d-93cc-bfb08a5e3e02
            subnet: 50.0.48.0/24
            subnet_id: a458d3aa-bea9-4684-9957-d0ce80d11788
        name: org-50
        snat_ip: 70.0.0.49
        spaces:
            e8ab7aa0-d4e3-4458-a896-f33177557851
    
  • Get all space caches or a specific one
    get space-caches
    get space-cache <space-ID>

    Example:

    node> get space-caches
        global_security_group:
            name: global_security_group
            running_nsgroup: 226d4292-47fb-4c2e-a118-449818d8fa98
            staging_nsgroup: 7ebbf7f5-38c9-43a3-9292-682056722836
    
        7870d134-7997-4373-b665-b6a910413c47:
            name: test-space1
            org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
            running_nsgroup: 4a3d9bcc-be36-47ae-bff8-96448fecf307
            running_security_groups:
                aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
            staging_security_groups:
                aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
    
    
    node> get space-cache 7870d134-7997-4373-b665-b6a910413c47
        name: test-space1
        org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
        running_nsgroup: 4a3d9bcc-be36-47ae-bff8-96448fecf307
        running_security_groups:
            aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
        staging_security_groups:
            aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
    
  • Get all app caches or a specific one
    get app-caches
    get app-cache <app-ID>

    Example:

    node> get app-caches
         aff2b12b-b425-4d9f-b8e6-b6308644efa8:
             instances:
                 b72199cc-e1ab-49bf-506d-478d:
                 app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                 cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
                 cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
                 gateway_ip: 192.168.5.1
                 host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
                 id: b72199cc-e1ab-49bf-506d-478d
                 index: 0
                 ip: 192.168.5.4/24
                 last_updated_time: 1522965828.45
                 mac: 02:50:56:00:60:02
                 port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
                 state: RUNNING
                 vlan: 3
             name: hello2
             org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
             space_id: 7870d134-7997-4373-b665-b6a910413c47
    
    
    node> get app-cache aff2b12b-b425-4d9f-b8e6-b6308644efa8
        instances:
            b72199cc-e1ab-49bf-506d-478d:
            app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
            cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
            cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
            gateway_ip: 192.168.5.1
            host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
            id: b72199cc-e1ab-49bf-506d-478d
            index: 0
            ip: 192.168.5.4/24
            last_updated_time: 1522965828.45
            mac: 02:50:56:00:60:02
            port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
            state: RUNNING
            vlan: 3
        name: hello2
        org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
        space_id: 7870d134-7997-4373-b665-b6a910413c47
    
  • Get all instance caches of an app, or a specific instance cache
    get instance-caches <app-ID>
    get instance-cache <app-ID> <instance-ID>

    Example:

    node> get instance-caches aff2b12b-b425-4d9f-b8e6-b6308644efa8
        b72199cc-e1ab-49bf-506d-478d:
            app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
            cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
            cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
            gateway_ip: 192.168.5.1
            host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
            id: b72199cc-e1ab-49bf-506d-478d
            index: 0
            ip: 192.168.5.4/24
            last_updated_time: 1522965828.45
            mac: 02:50:56:00:60:02
            port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
            state: RUNNING
            vlan: 3
    
    
    node> get instance-cache aff2b12b-b425-4d9f-b8e6-b6308644efa8 b72199cc-e1ab-49bf-506d-478d
        app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
        cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
        cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
        gateway_ip: 192.168.5.1
        host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
        id: b72199cc-e1ab-49bf-506d-478d
        index: 0
        ip: 192.168.5.4/24
        last_updated_time: 1522965828.45
        mac: 02:50:56:00:60:02
        port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
        state: RUNNING
        vlan: 3
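As an aside, the last_updated_time field in these cache entries is a Unix epoch timestamp. A minimal standalone Python sketch (not an nsxcli command) converts it to a readable UTC time:

```python
from datetime import datetime, timezone

# last_updated_time as reported in the instance cache above (epoch seconds).
ts = 1522965828.45
readable = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
print(readable)  # 2018-04-05T22:03:48.450000+00:00
```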
    
  • Get all the policy caches
    get policy-caches

    Example:

    node> get policy-caches
        aff2b12b-b425-4d9f-b8e6-b6308644efa8:
            fws_id: 3fe27725-f139-479a-b83b-8576c9aedbef
            nsg_id: 30583a27-9b56-49c1-a534-4040f91cc333
            rules:
                8272:
                    dst_app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                    ports: 8382
                    protocol: tcp
                    src_app_id: f582ec4d-3a13-440a-afbd-97b7bfae21d1
    
        f582ec4d-3a13-440a-afbd-97b7bfae21d1:
            nsg_id: d24b9f77-e2e0-4fba-b258-893223683aa6
            rules:
                8272:
                    dst_app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                    ports: 8382
                    protocol: tcp
                    src_app_id: f582ec4d-3a13-440a-afbd-97b7bfae21d1
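Each entry in the policy cache is keyed by the destination app ID and lists firewall rules with src_app_id, dst_app_id, ports, and protocol. A hypothetical Python sketch (not NCP code; is_allowed is an illustrative helper) of how such a structure can be queried:

```python
# Mirrors the shape of the policy cache output above; is_allowed is a
# hypothetical helper, not part of nsxcli or NCP.
policy_cache = {
    "aff2b12b-b425-4d9f-b8e6-b6308644efa8": {
        "nsg_id": "30583a27-9b56-49c1-a534-4040f91cc333",
        "rules": {
            "8272": {
                "dst_app_id": "aff2b12b-b425-4d9f-b8e6-b6308644efa8",
                "ports": "8382",
                "protocol": "tcp",
                "src_app_id": "f582ec4d-3a13-440a-afbd-97b7bfae21d1",
            }
        },
    }
}

def is_allowed(cache, src, dst, port, proto):
    """Return True if some rule for dst permits traffic from src."""
    for rule in cache.get(dst, {}).get("rules", {}).values():
        if (rule["src_app_id"] == src and rule["protocol"] == proto
                and str(port) == rule["ports"]):
            return True
    return False

print(is_allowed(policy_cache,
                 "f582ec4d-3a13-440a-afbd-97b7bfae21d1",
                 "aff2b12b-b425-4d9f-b8e6-b6308644efa8",
                 8382, "tcp"))  # True
```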
    
  • Get a service cache in NCP.
    get service-cache <service-id>

    Example:

    kubenode> get service-cache d3d3b69d-2fc9-11e9-9046-00505691efa7
        lb_persistence_profile: None
        loadbalancer_ingress:
          ip: 11.11.64.8
          ip: 100.64.144.3
        loadbalancer_ip: None
        name: tcp-svc-lb
        namespace: cafe
        service_type: LoadBalancer
        snat_pool: None
        deletion_timestamp: None
        port_nums:
          80
        ports:
          lb_pool:
            e017377f-3e34-4acc-9a7e-583bb16fb8ba:
              algorithm: ROUND_ROBIN
              members: None
          lb_vs:
            ef6f4104-0fcb-42e5-95e5-6b9b7f5ef818:
              ip_address: 11.11.64.8
              ip_pool_id: 1d0dfac7-d707-448a-ad79-e101fafa3e23
              persistence_profile_id: d29cfef5-0caf-4d21-ae01-b2173bb3db36
              pool_id: e017377f-3e34-4acc-9a7e-583bb16fb8ba
          name: tcp
          nodePort: 31836
          port: 80
          protocol: TCP
          targetPort: 80
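For reference, a Kubernetes Service manifest that would produce a cache entry like the one above might look as follows (a sketch assembled from the fields in the output, not taken from the NCP documentation):

```yaml
# Hypothetical manifest matching the service-cache fields shown above.
apiVersion: v1
kind: Service
metadata:
  name: tcp-svc-lb
  namespace: cafe
spec:
  type: LoadBalancer
  ports:
  - name: tcp
    port: 80
    protocol: TCP
    targetPort: 80
```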
    

Support commands for the NCP container

  • Save the NCP support bundle in the filestore

    The support bundle consists of the log files for all the containers in pods with the label tier:nsx-networking. The bundle file is in tgz format and is saved in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the bundle file to a remote site.

    get support-bundle file <filename>

    Example:

    kubenode> get support-bundle file foo
    Bundle file foo created in tgz format
    kubenode> copy file foo url scp://[email protected]:/tmp
  • Save the NCP logs in the filestore

    The log file is saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the log file to a remote site.

    get ncp-log file <filename>

    Example:

    kubenode> get ncp-log file foo
    Log file foo created in tgz format
  • Save the node agent logs in the filestore

    Save the node agent logs from one node or from all the nodes. The logs are saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the log file to a remote site.

    get node-agent-log file <filename>
    get node-agent-log file <filename> <node-name>

    Example:

    kubenode> get node-agent-log file foo
    Log file foo created in tgz format
  • Get and set the log level globally or for a specific component.

    The available log levels are NOTSET, DEBUG, INFO, WARNING, ERROR, and CRITICAL.

    The available components are nsx_ujo.ncp, nsx_ujo.ncp.k8s, nsx_ujo.ncp.pcf, vmware_nsxlib.v3, nsxrpc, and nsx_ujo.ncp.nsx.

    get ncp-log-level [component]
    set ncp-log-level <log level> [component]

    Examples:

    kubenode> get ncp-log-level
    NCP log level is INFO
    
    kubenode> get ncp-log-level nsx_ujo.ncp
    nsx_ujo.ncp log level is INFO
     
    kubenode> set ncp-log-level DEBUG
    NCP log level is changed to DEBUG
    
    kubenode> set ncp-log-level DEBUG nsx_ujo.ncp
    nsx_ujo.ncp log level has been changed to DEBUG
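The component names accepted by set ncp-log-level are hierarchical Python logger names, so a level set on a parent such as nsx_ujo.ncp also governs children like nsx_ujo.ncp.k8s unless they set their own level. A small standalone illustration of that hierarchy (general Python logging behavior, not NCP code):

```python
import logging

# Setting the level on the parent logger...
logging.getLogger("nsx_ujo.ncp").setLevel(logging.DEBUG)

# ...is inherited by child loggers that have no level of their own.
child = logging.getLogger("nsx_ujo.ncp.k8s")
print(child.getEffectiveLevel() == logging.DEBUG)  # True
```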

Status commands for the NSX node agent container

  • Show the connection status between HyperBus and the agent on this node.
    get node-agent-hyperbus status

    Example:

    kubenode> get node-agent-hyperbus status
    HyperBus status: Healthy

Cache commands for the NSX node agent container

  • Get the internal cache for NSX node agent containers.
    get container-cache <container-name>
    get container-caches

    Example:

    kubenode> get container-caches
        cif104:
            ip: 192.168.0.14/32
            mac: 50:01:01:01:01:14
            gateway_ip: 169.254.1.254/16
            vlan_id: 104
    
    
    kubenode> get container-cache cif104
        ip: 192.168.0.14/32
        mac: 50:01:01:01:01:14
        gateway_ip: 169.254.1.254/16
        vlan_id: 104
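Note that the gateway_ip reported above, 169.254.1.254/16, falls in the IPv4 link-local range, which can be verified with the Python standard library (an aside, not an nsxcli command):

```python
import ipaddress

# gateway_ip as reported in the container cache above.
gw = ipaddress.ip_interface("169.254.1.254/16")
print(gw.ip.is_link_local)  # True
```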
    

Status commands for the NSX Kube Proxy container

  • Show the connection status between Kube Proxy and the Kubernetes API server
    get kube-proxy-k8s-api-server status

    Example:

    kubenode> get kube-proxy-k8s-api-server status
    Kubernetes ApiServer status: Healthy
  • Show the Kube Proxy watcher status
    get kube-proxy-watcher <watcher-name>
    get kube-proxy-watchers

    Example:

    kubenode> get kube-proxy-watchers
        endpoint:
            Average event processing time: 15 msec (in past 3600-sec window)
            Current watcher started time: May 01 2017 15:06:24 PDT
            Number of events processed: 90 (in past 3600-sec window)
            Total events processed by current watcher: 90
            Total events processed since watcher thread created: 90
            Total watcher recycle count: 0
            Watcher thread created time: May 01 2017 15:06:24 PDT
            Watcher thread status: Up
    
         service:
            Average event processing time: 8 msec (in past 3600-sec window)
            Current watcher started time: May 01 2017 15:06:24 PDT
            Number of events processed: 2 (in past 3600-sec window)
            Total events processed by current watcher: 2
            Total events processed since watcher thread created: 2
            Total watcher recycle count: 0
            Watcher thread created time: May 01 2017 15:06:24 PDT
            Watcher thread status: Up
    
    
    kubenode> get kube-proxy-watcher endpoint
        Average event processing time: 15 msec (in past 3600-sec window)
        Current watcher started time: May 01 2017 15:06:24 PDT
        Number of events processed: 90 (in past 3600-sec window)
        Total events processed by current watcher: 90
        Total events processed since watcher thread created: 90
        Total watcher recycle count: 0
        Watcher thread created time: May 01 2017 15:06:24 PDT
        Watcher thread status: Up
    
  • Dump the OVS flows on a node
    dump ovs-flows

    Example:

    kubenode> dump ovs-flows
        NXST_FLOW reply (xid=0x4):
        cookie=0x0, duration=8.876s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip actions=ct(table=1)
        cookie=0x0, duration=8.898s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=0 actions=NORMAL
        cookie=0x0, duration=8.759s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,tcp,nw_dst=10.96.0.1,tp_dst=443 actions=mod_tp_dst:443
        cookie=0x0, duration=8.719s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip,nw_dst=10.96.0.10 actions=drop
        cookie=0x0, duration=8.819s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=90,ip,in_port=1 actions=ct(table=2,nat)
        cookie=0x0, duration=8.799s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=80,ip actions=NORMAL
        cookie=0x0, duration=8.856s, table=2, n_packets=0, n_bytes=0, idle_age=8, actions=NORMAL
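Each line of the dump has an OpenFlow match part followed by actions=. A small Python sketch (parse_flow is a hypothetical helper, not part of nsxcli) that splits one such line into its fields:

```python
import re

# Best-effort parser for one line of "dump ovs-flows" output, based on the
# sample output above; parse_flow is a hypothetical helper.
def parse_flow(line):
    fields = {}
    # Split off the actions part first, then tokenize the match part.
    match_part, _, actions = line.partition(" actions=")
    fields["actions"] = actions
    for item in re.split(r"[,\s]+", match_part):
        if "=" in item:
            key, _, value = item.partition("=")
            fields[key] = value
        elif item:
            # Bare tokens such as "ip" or "tcp" are match criteria.
            fields.setdefault("match", []).append(item)
    return fields

flow = parse_flow("cookie=0x0, duration=8.876s, table=0, n_packets=0, "
                  "n_bytes=0, idle_age=8, priority=100,ip actions=ct(table=1)")
print(flow["table"], flow["priority"], flow["match"])  # 0 100 ['ip']
```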