To run CLI commands, log in to the NSX Container Plugin (NCP), NSX node agent, or NSX kube-proxy container, open a terminal, and run the nsxcli command. You can also get the CLI prompt directly with kubectl:

kubectl exec -it <pod name> -- nsxcli
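For example, you can locate the NCP pod by label and open the CLI in one step. This is a minimal sketch: the nsx-system namespace and the component=nsx-ncp label are assumptions that depend on how NCP was deployed in your environment.

    # Sketch only: adjust the namespace and label selector to your deployment.
    NCP_POD=$(kubectl get pods -n nsx-system -l component=nsx-ncp \
      -o jsonpath='{.items[0].metadata.name}')
    kubectl exec -it "$NCP_POD" -n nsx-system -- nsxcli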
The following commands are available in the NCP container.

Type | Command | Note |
---|---|---|
Status | get ncp-master status | For both Kubernetes and TAS. |
Status | get ncp-nsx status | For both Kubernetes and TAS. |
Status | get ncp-watcher <watcher-name> | For both Kubernetes and TAS. |
Status | get ncp-watchers | For both Kubernetes and TAS. |
Status | get ncp-k8s-api-server status | For Kubernetes only. |
Status | check projects | For Kubernetes only. |
Status | check project <project-name> | For Kubernetes only. |
Status | get ncp-bbs status | For TAS only. |
Status | get ncp-capi status | For TAS only. |
Status | get ncp-policy-server status | For TAS only. |
Cache | get project-caches | For Kubernetes only. |
Cache | get project-cache <project-name> | For Kubernetes only. |
Cache | get namespace-caches | For Kubernetes only. |
Cache | get namespace-cache <namespace-name> | For Kubernetes only. |
Cache | get pod-caches | For Kubernetes only. |
Cache | get pod-cache <pod-name> | For Kubernetes only. |
Cache | get ingress-caches | For Kubernetes only. |
Cache | get ingress-cache <ingress-name> | For Kubernetes only. |
Cache | get ingress-controllers | For Kubernetes only. |
Cache | get ingress-controller <ingress-controller-name> | For Kubernetes only. |
Cache | get network-policy-caches | For Kubernetes only. |
Cache | get network-policy-cache <network-policy-name> | For Kubernetes only. |
Cache | get asg-caches | For TAS only. |
Cache | get asg-cache <asg-ID> | For TAS only. |
Cache | get org-caches | For TAS only. |
Cache | get org-cache <org-ID> | For TAS only. |
Cache | get space-caches | For TAS only. |
Cache | get space-cache <space-ID> | For TAS only. |
Cache | get app-caches | For TAS only. |
Cache | get app-cache <app-ID> | For TAS only. |
Cache | get instance-caches <app-ID> | For TAS only. |
Cache | get instance-cache <app-ID> <instance-ID> | For TAS only. |
Cache | get policy-caches | For TAS only. |
Support | get ncp-log file <filename> | For both Kubernetes and TAS. |
Support | get ncp-log-level [component] | For both Kubernetes and TAS. |
Support | set ncp-log-level <log-level> [component] | For both Kubernetes and TAS. |
Support | get support-bundle file <filename> | For Kubernetes only. |
Support | get node-agent-log file <filename> | For Kubernetes only. |
Support | get node-agent-log file <filename> <node-name> | For Kubernetes only. |
The following commands are available in the NSX node agent container.

Type | Command |
---|---|
Status | get node-agent-hyperbus status |
Cache | get container-cache <container-name> |
Cache | get container-caches |
The following commands are available in the NSX kube-proxy container.

Type | Command |
---|---|
Status | get ncp-k8s-api-server status |
Status | get kube-proxy-watcher <watcher-name> |
Status | get kube-proxy-watchers |
Status | dump ovs-flows |
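For scripted checks you may not want an interactive prompt. The NSX CLI generally accepts a single command with the -c option; treat that as an assumption and verify it against your nsxcli build before relying on it.

    # Assumes the NCP_POD variable from the earlier sketch and -c support in nsxcli.
    kubectl exec "$NCP_POD" -n nsx-system -- nsxcli -c "get ncp-nsx status"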
Status Commands for the NCP Container
- Show the status of the NCP master
get ncp-master status
Example:
    kubenode> get ncp-master status
    This instance is not the NCP master
    Current NCP Master id is a4h83eh1-b8dd-4e74-c71c-cbb7cc9c4c1c
    Last master update at Wed Oct 25 22:46:40 2017
- Show the connection status between NCP and NSX Manager
get ncp-nsx status
Example:
    kubenode> get ncp-nsx status
    NSX Manager status: Healthy
- Show the watcher status for ingress, namespace, pod, and service
get ncp-watchers
get ncp-watcher <watcher-name>
Example:
    kubenode> get ncp-watchers
        pod:
            Average event processing time: 1145 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 1 (in past 3600-sec window)
            Total events processed by current watcher: 1
            Total events processed since watcher thread created: 1
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
        namespace:
            Average event processing time: 68 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 2 (in past 3600-sec window)
            Total events processed by current watcher: 2
            Total events processed since watcher thread created: 2
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
        ingress:
            Average event processing time: 0 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 0 (in past 3600-sec window)
            Total events processed by current watcher: 0
            Total events processed since watcher thread created: 0
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up
        service:
            Average event processing time: 3 msec (in past 3600-sec window)
            Current watcher started time: Mar 02 2017 10:51:37 PST
            Number of events processed: 1 (in past 3600-sec window)
            Total events processed by current watcher: 1
            Total events processed since watcher thread created: 1
            Total watcher recycle count: 0
            Watcher thread created time: Mar 02 2017 10:51:37 PST
            Watcher thread status: Up

    kubenode> get ncp-watcher pod
        Average event processing time: 1174 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:47:35 PST
        Number of events processed: 1 (in past 3600-sec window)
        Total events processed by current watcher: 1
        Total events processed since watcher thread created: 1
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:47:35 PST
        Watcher thread status: Up
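Because every watcher reports a "Watcher thread status" line, the output above lends itself to a scripted health check. A sketch, reusing the assumed NCP_POD variable and nsxcli -c support from the earlier examples:

    # Flag any watcher thread reported Down; -B 8 shows the stats above the match.
    kubectl exec "$NCP_POD" -n nsx-system -- nsxcli -c "get ncp-watchers" \
      | grep -B 8 "Watcher thread status: Down" || echo "All watcher threads are Up"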
- Show the connection status between NCP and Kubernetes API server
get ncp-k8s-api-server status
Example:
    kubenode> get ncp-k8s-api-server status
    Kubernetes ApiServer status: Healthy
- Check all projects or a specific one
check projects
check project <project-name>
Example:
    kubenode> check projects
        default:
            Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
            Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
        ns1:
            Router 8accc9cd-9883-45f6-81b3-0d1fb2583180 is missing

    kubenode> check project default
        Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
        Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
- Check connection status between NCP and TAS BBS
get ncp-bbs status
Example:
    node> get ncp-bbs status
    BBS Server status: Healthy
- Check connection status between NCP and TAS CAPI
get ncp-capi status
Example:
    node> get ncp-capi status
    CAPI Server status: Healthy
- Check connection status between NCP and TAS policy server
get ncp-policy-server status
Example:
    node> get ncp-policy-server status
    Policy Server status: Healthy
Cache Commands for the NCP Container
- Get the internal cache for projects or namespaces
get project-cache <project-name>
get project-caches
get namespace-cache <namespace-name>
get namespace-caches
Example:
    kubenode> get project-caches
        default:
            logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
            logical-switch:
                id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
                ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
                subnet: 10.0.0.0/24
                subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
        kube-system:
            logical-router: 5032b299-acad-448e-a521-19d272a08c46
            logical-switch:
                id: 85233651-602d-445d-ab10-1c84096cc22a
                ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
                subnet: 10.0.1.0/24
                subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
        testns:
            ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
            labels:
                ns: myns
                project: myproject
            logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
            logical-switch:
                id: 6111a99a-6e06-4faa-a131-649f10f7c815
                ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
                subnet: 50.0.2.0/24
                subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
            project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
            snat_ip: 4.4.0.3

    kubenode> get project-cache default
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435

    kubenode> get namespace-caches
        default:
            logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
            logical-switch:
                id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
                ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
                subnet: 10.0.0.0/24
                subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
        kube-system:
            logical-router: 5032b299-acad-448e-a521-19d272a08c46
            logical-switch:
                id: 85233651-602d-445d-ab10-1c84096cc22a
                ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
                subnet: 10.0.1.0/24
                subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
        testns:
            ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
            labels:
                ns: myns
                project: myproject
            logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
            logical-switch:
                id: 6111a99a-6e06-4faa-a131-649f10f7c815
                ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
                subnet: 50.0.2.0/24
                subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
            project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
            snat_ip: 4.4.0.3

    kubenode> get namespace-cache default
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
- Get the internal cache for pods
get pod-cache <pod-name>
get pod-caches
Example:
    kubenode> get pod-caches
        nsx.default.nginx-rc-uq2lv:
            cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
            gateway_ip: 10.0.0.1
            host_vif: d6210773-5c07-4817-98db-451bd1f01937
            id: 1c8b5c52-3795-11e8-ab42-005056b198fb
            ingress_controller: False
            ip: 10.0.0.2/24
            labels:
                app: nginx
            mac: 02:50:56:00:08:00
            port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
            vlan: 1
        nsx.testns.web-pod-1:
            cif_id: ce134f21-6be5-43fe-afbf-aaca8c06b5cf
            gateway_ip: 50.0.2.1
            host_vif: d6210773-5c07-4817-98db-451bd1f01937
            id: 3180b521-270e-11e8-ab42-005056b198fb
            ingress_controller: False
            ip: 50.0.2.3/24
            labels:
                app: nginx-new
                role: db
                tier: cache
            mac: 02:50:56:00:20:02
            port_id: 81bc2b8e-d902-4cad-9fc1-aabdc32ecaf8
            vlan: 3

    kubenode> get pod-cache nsx.default.nginx-rc-uq2lv
        cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
        gateway_ip: 10.0.0.1
        host_vif: d6210773-5c07-4817-98db-451bd1f01937
        id: 1c8b5c52-3795-11e8-ab42-005056b198fb
        ingress_controller: False
        ip: 10.0.0.2/24
        labels:
            app: nginx
        mac: 02:50:56:00:08:00
        port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
        vlan: 1
- Get all Ingress caches or a specific one
get ingress-caches
get ingress-cache <ingress-name>
Example:
    kubenode> get ingress-caches
        nsx.default.cafe-ingress:
            ext_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
            lb_virtual_server:
                id: 895c7f43-c56e-4b67-bb4c-09d68459d416
                lb_service_id: 659eefc6-33d1-4672-a419-344b877f528e
                name: dgo2-http
                type: http
            lb_virtual_server_ip: 5.5.0.2
            name: cafe-ingress
            rules:
                host: cafe.example.com
                http:
                    paths:
                        path: /coffee
                        backend:
                            serviceName: coffee-svc
                            servicePort: 80
                        lb_rule:
                            id: 4bc16bdd-abd9-47fb-a09e-21e58b2131c3
                            name: dgo2-default-cafe-ingress/coffee

    kubenode> get ingress-cache nsx.default.cafe-ingress
        ext_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
        lb_virtual_server:
            id: 895c7f43-c56e-4b67-bb4c-09d68459d416
            lb_service_id: 659eefc6-33d1-4672-a419-344b877f528e
            name: dgo2-http
            type: http
        lb_virtual_server_ip: 5.5.0.2
        name: cafe-ingress
        rules:
            host: cafe.example.com
            http:
                paths:
                    path: /coffee
                    backend:
                        serviceName: coffee-svc
                        servicePort: 80
                    lb_rule:
                        id: 4bc16bdd-abd9-47fb-a09e-21e58b2131c3
                        name: dgo2-default-cafe-ingress/coffee
- Get information on all Ingress controllers or a specific one, including controllers that are disabled
get ingress-controllers
get ingress-controller <ingress-controller-name>
Example:
    kubenode> get ingress-controllers
        native-load-balancer:
            ingress_virtual_server:
                http:
                    default_backend_tags:
                    id: 895c7f43-c56e-4b67-bb4c-09d68459d416
                    pool_id: None
                https_terminated:
                    default_backend_tags:
                    id: 293282eb-f1a0-471c-9e48-ba28d9d89161
                    pool_id: None
            lb_ip_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
            loadbalancer_service:
                first_avail_index: 0
                lb_services:
                    id: 659eefc6-33d1-4672-a419-344b877f528e
                    name: dgo2-bfmxi
                    t1_link_port_ip: 100.64.128.5
                    t1_router_id: cb50deb2-4460-45f2-879a-1b94592ae886
                    virtual_servers:
                        293282eb-f1a0-471c-9e48-ba28d9d89161
                        895c7f43-c56e-4b67-bb4c-09d68459d416
            ssl:
                ssl_client_profile_id: aff205bb-4db8-5a72-8d67-218cdc54d27b
            vip: 5.5.0.2
        nsx.default.nginx-ingress-rc-host-ed3og:
            ip: 10.192.162.201
            mode: hostnetwork
            pool_id: 5813c609-5d3a-4438-b9c3-ea3cd6de52c3

    kubenode> get ingress-controller native-load-balancer
        ingress_virtual_server:
            http:
                default_backend_tags:
                id: 895c7f43-c56e-4b67-bb4c-09d68459d416
                pool_id: None
            https_terminated:
                default_backend_tags:
                id: 293282eb-f1a0-471c-9e48-ba28d9d89161
                pool_id: None
        lb_ip_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
        loadbalancer_service:
            first_avail_index: 0
            lb_services:
                id: 659eefc6-33d1-4672-a419-344b877f528e
                name: dgo2-bfmxi
                t1_link_port_ip: 100.64.128.5
                t1_router_id: cb50deb2-4460-45f2-879a-1b94592ae886
                virtual_servers:
                    293282eb-f1a0-471c-9e48-ba28d9d89161
                    895c7f43-c56e-4b67-bb4c-09d68459d416
        ssl:
            ssl_client_profile_id: aff205bb-4db8-5a72-8d67-218cdc54d27b
        vip: 5.5.0.2
- Get network policy caches or a specific one
get network-policy-caches
get network-policy-cache <network-policy-name>
Example:
    kubenode> get network-policy-caches
        nsx.testns.allow-tcp-80:
            dest_labels: None
            dest_pods:
                50.0.2.3
            match_expressions:
                key: tier
                operator: In
                values:
                    cache
            name: allow-tcp-80
            np_dest_ip_set_ids:
                22f82d76-004f-4d12-9504-ce1cb9c8aa00
            np_except_ip_set_ids:
            np_ip_set_ids:
                14f7f825-f1a0-408f-bbd9-bb2f75d44666
            np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
            np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
            ns_name: testns
            src_egress_rules: None
            src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
            src_pods:
                50.0.2.0/24
            src_rules:
                from:
                    namespaceSelector:
                        matchExpressions:
                            key: tier
                            operator: DoesNotExist
                        matchLabels:
                            ns: myns
                ports:
                    port: 80
                    protocol: TCP
            src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1

    kubenode> get network-policy-cache nsx.testns.allow-tcp-80
        dest_labels: None
        dest_pods:
            50.0.2.3
        match_expressions:
            key: tier
            operator: In
            values:
                cache
        name: allow-tcp-80
        np_dest_ip_set_ids:
            22f82d76-004f-4d12-9504-ce1cb9c8aa00
        np_except_ip_set_ids:
        np_ip_set_ids:
            14f7f825-f1a0-408f-bbd9-bb2f75d44666
        np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
        np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
        ns_name: testns
        src_egress_rules: None
        src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
        src_pods:
            50.0.2.0/24
        src_rules:
            from:
                namespaceSelector:
                    matchExpressions:
                        key: tier
                        operator: DoesNotExist
                    matchLabels:
                        ns: myns
            ports:
                port: 80
                protocol: TCP
        src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1
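For orientation, the cache entry above maps back to an ordinary Kubernetes NetworkPolicy. The following manifest is a hedged reconstruction from the cache fields (selectors, namespace, port, and protocol), not the exact object behind this example:

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-tcp-80
      namespace: testns
    spec:
      podSelector:
        matchExpressions:
          - {key: tier, operator: In, values: [cache]}
      ingress:
        - from:
            - namespaceSelector:
                matchExpressions:
                  - {key: tier, operator: DoesNotExist}
                matchLabels:
                  ns: myns
          ports:
            - port: 80
              protocol: TCP
    EOF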
- Get all ASG caches or a specific one
get asg-caches
get asg-cache <asg-ID>
Example:
    node> get asg-caches
        edc04715-d04c-4e63-abbc-db601a668db6:
            fws_id: 3c66f40a-5378-46d7-a7e2-bee4ba72a4cc
            name: org-85_tcp_80_asg
            rules:
                destinations:
                    66.10.10.0/24
                ports:
                    80
                protocol: tcp
                rule_id: 4359
            running_default: False
            running_spaces:
                75bc164d-1214-46f9-80bb-456a8fbccbfd
            staging_default: False
            staging_spaces:

    node> get asg-cache edc04715-d04c-4e63-abbc-db601a668db6
        fws_id: 3c66f40a-5378-46d7-a7e2-bee4ba72a4cc
        name: org-85_tcp_80_asg
        rules:
            destinations:
                66.10.10.0/24
            ports:
                80
            protocol: tcp
            rule_id: 4359
        running_default: False
        running_spaces:
            75bc164d-1214-46f9-80bb-456a8fbccbfd
        staging_default: False
        staging_spaces:
- Get all org caches or a specific one
get org-caches
get org-cache <org-ID>
Example:
    node> get org-caches
        ebb8b4f9-a40f-4122-bf21-65c40f575aca:
            ext_pool_id: 9208a8b8-57d7-4582-9c1f-7a7cefa104f5
            isolation:
                isolation_section_id: d6e7ff95-4737-4e34-91d4-27601897353f
            logical-router: 94a414a2-551e-4444-bae6-3d79901a165f
            logical-switch:
                id: d74807e8-8f74-4575-b26b-87d4fdbafd3c
                ip_pool_id: 1b60f16f-4a30-4a3d-93cc-bfb08a5e3e02
                subnet: 50.0.48.0/24
                subnet_id: a458d3aa-bea9-4684-9957-d0ce80d11788
            name: org-50
            snat_ip: 70.0.0.49
            spaces:
                e8ab7aa0-d4e3-4458-a896-f33177557851

    node> get org-cache ebb8b4f9-a40f-4122-bf21-65c40f575aca
        ext_pool_id: 9208a8b8-57d7-4582-9c1f-7a7cefa104f5
        isolation:
            isolation_section_id: d6e7ff95-4737-4e34-91d4-27601897353f
        logical-router: 94a414a2-551e-4444-bae6-3d79901a165f
        logical-switch:
            id: d74807e8-8f74-4575-b26b-87d4fdbafd3c
            ip_pool_id: 1b60f16f-4a30-4a3d-93cc-bfb08a5e3e02
            subnet: 50.0.48.0/24
            subnet_id: a458d3aa-bea9-4684-9957-d0ce80d11788
        name: org-50
        snat_ip: 70.0.0.49
        spaces:
            e8ab7aa0-d4e3-4458-a896-f33177557851
- Get all space caches or a specific one
get space-caches
get space-cache <space-ID>
Example:
    node> get space-caches
        global_security_group:
            name: global_security_group
            running_nsgroup: 226d4292-47fb-4c2e-a118-449818d8fa98
            staging_nsgroup: 7ebbf7f5-38c9-43a3-9292-682056722836
        7870d134-7997-4373-b665-b6a910413c47:
            name: test-space1
            org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
            running_nsgroup: 4a3d9bcc-be36-47ae-bff8-96448fecf307
            running_security_groups:
                aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
            staging_security_groups:
                aa0c7c3f-a478-4d45-8afa-df5d5d7dc512

    node> get space-cache 7870d134-7997-4373-b665-b6a910413c47
        name: test-space1
        org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
        running_nsgroup: 4a3d9bcc-be36-47ae-bff8-96448fecf307
        running_security_groups:
            aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
        staging_security_groups:
            aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
- Get all app caches or a specific one
get app-caches
get app-cache <app-ID>
Example:
    node> get app-caches
        aff2b12b-b425-4d9f-b8e6-b6308644efa8:
            instances:
                b72199cc-e1ab-49bf-506d-478d:
                    app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                    cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
                    cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
                    gateway_ip: 192.168.5.1
                    host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
                    id: b72199cc-e1ab-49bf-506d-478d
                    index: 0
                    ip: 192.168.5.4/24
                    last_updated_time: 1522965828.45
                    mac: 02:50:56:00:60:02
                    port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
                    state: RUNNING
                    vlan: 3
            name: hello2
            org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
            space_id: 7870d134-7997-4373-b665-b6a910413c47

    node> get app-cache aff2b12b-b425-4d9f-b8e6-b6308644efa8
        instances:
            b72199cc-e1ab-49bf-506d-478d:
                app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
                cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
                gateway_ip: 192.168.5.1
                host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
                id: b72199cc-e1ab-49bf-506d-478d
                index: 0
                ip: 192.168.5.4/24
                last_updated_time: 1522965828.45
                mac: 02:50:56:00:60:02
                port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
                state: RUNNING
                vlan: 3
        name: hello2
        org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
        space_id: 7870d134-7997-4373-b665-b6a910413c47
- Get all instance caches of an app or a specific instance cache
get instance-caches <app-ID>
get instance-cache <app-ID> <instance-ID>
Example:
    node> get instance-caches aff2b12b-b425-4d9f-b8e6-b6308644efa8
        b72199cc-e1ab-49bf-506d-478d:
            app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
            cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
            cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
            gateway_ip: 192.168.5.1
            host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
            id: b72199cc-e1ab-49bf-506d-478d
            index: 0
            ip: 192.168.5.4/24
            last_updated_time: 1522965828.45
            mac: 02:50:56:00:60:02
            port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
            state: RUNNING
            vlan: 3

    node> get instance-cache aff2b12b-b425-4d9f-b8e6-b6308644efa8 b72199cc-e1ab-49bf-506d-478d
        app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
        cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
        cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
        gateway_ip: 192.168.5.1
        host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
        id: b72199cc-e1ab-49bf-506d-478d
        index: 0
        ip: 192.168.5.4/24
        last_updated_time: 1522965828.45
        mac: 02:50:56:00:60:02
        port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
        state: RUNNING
        vlan: 3
- Get all policy caches
get policy-caches
Example:
    node> get policy-caches
        aff2b12b-b425-4d9f-b8e6-b6308644efa8:
            fws_id: 3fe27725-f139-479a-b83b-8576c9aedbef
            nsg_id: 30583a27-9b56-49c1-a534-4040f91cc333
            rules:
                8272:
                    dst_app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                    ports: 8382
                    protocol: tcp
                    src_app_id: f582ec4d-3a13-440a-afbd-97b7bfae21d1
        f582ec4d-3a13-440a-afbd-97b7bfae21d1:
            nsg_id: d24b9f77-e2e0-4fba-b258-893223683aa6
            rules:
                8272:
                    dst_app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                    ports: 8382
                    protocol: tcp
                    src_app_id: f582ec4d-3a13-440a-afbd-97b7bfae21d1
Support Commands for the NCP Container
- Save the NCP support bundle in the filestore
The support bundle consists of the log files for all the containers in pods with the label tier:nsx-networking. The bundle file is in tgz format and is saved in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI copy file command to copy the bundle file to a remote site.
get support-bundle file <filename>
Example:
    kubenode> get support-bundle file foo
    Bundle file foo created in tgz format

    kubenode> copy file foo url scp://[email protected]:/tmp
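Because the bundle lands inside the container filesystem, you can also pull it to your workstation with kubectl instead of the CLI copy command. A sketch; the namespace and the NCP_POD variable are assumptions carried over from the earlier example:

    # The file-store directory inside the container is /var/vmware/nsx/file-store.
    kubectl cp nsx-system/"$NCP_POD":/var/vmware/nsx/file-store/foo ./foo.tgz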
- Save the NCP logs in the filestore
The log file is saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI copy file command to copy the file to a remote site.
get ncp-log file <filename>
Example:
    kubenode> get ncp-log file foo
    Log file foo created in tgz format
- Save the node agent logs in the filestore
Save the node agent logs from one node or from all the nodes. The logs are saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI copy file command to copy the files to a remote site.
get node-agent-log file <filename>
get node-agent-log file <filename> <node-name>
Example:
    kubenode> get node-agent-log file foo
    Log file foo created in tgz format
- Get and set the log level globally or for a specific component.
The available log levels are NOTSET, DEBUG, INFO, WARNING, ERROR, and CRITICAL.
The available components are nsx_ujo.ncp, nsx_ujo.ncp.k8s, nsx_ujo.ncp.pcf, vmware_nsxlib.v3, nsxrpc, and nsx_ujo.ncp.nsx.
get ncp-log-level [component]
set ncp-log-level <log-level> [component]
Examples:
    kubenode> get ncp-log-level
    NCP log level is INFO

    kubenode> get ncp-log-level nsx_ujo.ncp
    nsx_ujo.ncp log level is INFO

    kubenode> set ncp-log-level DEBUG
    NCP log level is changed to DEBUG

    kubenode> set ncp-log-level DEBUG nsx_ujo.ncp
    nsx_ujo.ncp log level has been changed to DEBUG
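A common debugging pattern is to raise the level, reproduce the problem, then restore the default. A sketch, assuming the NCP_POD variable and nsxcli -c support from the earlier examples:

    kubectl exec "$NCP_POD" -n nsx-system -- nsxcli -c "set ncp-log-level DEBUG"
    # ... reproduce the issue, then collect logs with: get ncp-log file <filename>
    kubectl exec "$NCP_POD" -n nsx-system -- nsxcli -c "set ncp-log-level INFO"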
Status Commands for the NSX Node Agent Container
- Show the connection status between the node agent and HyperBus on this node.
get node-agent-hyperbus status
Example:
    kubenode> get node-agent-hyperbus status
    HyperBus status: Healthy
Cache Commands for the NSX Node Agent Container
- Get the internal cache for NSX node agent containers.
get container-cache <container-name>
get container-caches
Example:
    kubenode> get container-caches
        cif104:
            ip: 192.168.0.14/32
            mac: 50:01:01:01:01:14
            gateway_ip: 169.254.1.254/16
            vlan_id: 104

    kubenode> get container-cache cif104
        ip: 192.168.0.14/32
        mac: 50:01:01:01:01:14
        gateway_ip: 169.254.1.254/16
        vlan_id: 104
Status Commands for the NSX Kube-Proxy Container
- Show the connection status between Kube Proxy and Kubernetes API Server
get ncp-k8s-api-server status
Example:
    kubenode> get ncp-k8s-api-server status
    Kubernetes ApiServer status: Healthy
- Show the Kube Proxy watcher status
get kube-proxy-watcher <watcher-name>
get kube-proxy-watchers
Example:
    kubenode> get kube-proxy-watchers
        endpoint:
            Average event processing time: 15 msec (in past 3600-sec window)
            Current watcher started time: May 01 2017 15:06:24 PDT
            Number of events processed: 90 (in past 3600-sec window)
            Total events processed by current watcher: 90
            Total events processed since watcher thread created: 90
            Total watcher recycle count: 0
            Watcher thread created time: May 01 2017 15:06:24 PDT
            Watcher thread status: Up
        service:
            Average event processing time: 8 msec (in past 3600-sec window)
            Current watcher started time: May 01 2017 15:06:24 PDT
            Number of events processed: 2 (in past 3600-sec window)
            Total events processed by current watcher: 2
            Total events processed since watcher thread created: 2
            Total watcher recycle count: 0
            Watcher thread created time: May 01 2017 15:06:24 PDT
            Watcher thread status: Up

    kubenode> get kube-proxy-watcher endpoint
        Average event processing time: 15 msec (in past 3600-sec window)
        Current watcher started time: May 01 2017 15:06:24 PDT
        Number of events processed: 90 (in past 3600-sec window)
        Total events processed by current watcher: 90
        Total events processed since watcher thread created: 90
        Total watcher recycle count: 0
        Watcher thread created time: May 01 2017 15:06:24 PDT
        Watcher thread status: Up
- Dump OVS flows on a node
dump ovs-flows
Example:
    kubenode> dump ovs-flows
    NXST_FLOW reply (xid=0x4):
        cookie=0x0, duration=8.876s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip actions=ct(table=1)
        cookie=0x0, duration=8.898s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=0 actions=NORMAL
        cookie=0x0, duration=8.759s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,tcp,nw_dst=10.96.0.1,tp_dst=443 actions=mod_tp_dst:443
        cookie=0x0, duration=8.719s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip,nw_dst=10.96.0.10 actions=drop
        cookie=0x0, duration=8.819s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=90,ip,in_port=1 actions=ct(table=2,nat)
        cookie=0x0, duration=8.799s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=80,ip actions=NORMAL
        cookie=0x0, duration=8.856s, table=2, n_packets=0, n_bytes=0, idle_age=8, actions=NORMAL
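The output has the familiar shape of ovs-ofctl dump-flows, so when you have shell access to the Kubernetes node you can inspect the same tables directly. A sketch; the integration bridge name is an assumption that varies by setup:

    # Run on the node itself; br-int is an assumed bridge name, check with: ovs-vsctl list-br
    sudo ovs-ofctl dump-flows br-int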