To run the CLI commands, access the NSX Container Plugin container, open a terminal, and run the nsxcli command.
kubectl exec -it <pod name> nsxcli
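The pod name can be looked up first. A minimal sketch, assuming the NCP pod runs in the nsx-system namespace with the label component=nsx-ncp (both the namespace and the label are deployment-specific assumptions, not values mandated by this document):

```shell
# Sketch only: the namespace and label below are assumptions; adjust them
# to match your deployment.
NS=nsx-system
NCP_POD=$(kubectl get pods -n "$NS" -l component=nsx-ncp \
  -o jsonpath='{.items[0].metadata.name}')

# Open the NSX CLI inside the NCP container.
kubectl exec -it -n "$NS" "$NCP_POD" -- nsxcli
```

Newer kubectl versions require the `--` separator before the command to run in the container.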
Type | Command | Note |
---|---|---|
Status | get ncp-master status | For both Kubernetes and TAS. |
Status | get ncp-nsx status | For both Kubernetes and TAS. |
Status | get ncp-watcher <watcher-name> | For both Kubernetes and TAS. |
Status | get ncp-watchers | For both Kubernetes and TAS. |
Status | get ncp-k8s-api-server status | For Kubernetes only. |
Status | check projects | For Kubernetes only. |
Status | check project <project-name> | For Kubernetes only. |
Status | get ncp-bbs status | For TAS only. |
Status | get ncp-capi status | For TAS only. |
Status | get ncp-policy-server status | For TAS only. |
Cache | get project-caches | For Kubernetes only. |
Cache | get project-cache <project-name> | For Kubernetes only. |
Cache | get namespace-caches | For Kubernetes only. |
Cache | get namespace-cache <namespace-name> | For Kubernetes only. |
Cache | get pod-caches | For Kubernetes only. |
Cache | get pod-cache <pod-name> | For Kubernetes only. |
Cache | get ingress-caches | For Kubernetes only. |
Cache | get ingress-cache <ingress-name> | For Kubernetes only. |
Cache | get ingress-controllers | For Kubernetes only. |
Cache | get ingress-controller <ingress-controller-name> | For Kubernetes only. |
Cache | get network-policy-caches | For Kubernetes only. |
Cache | get network-policy-cache <network-policy-name> | For Kubernetes only. |
Cache | get asg-caches | For TAS only. |
Cache | get asg-cache <asg-ID> | For TAS only. |
Cache | get org-caches | For TAS only. |
Cache | get org-cache <org-ID> | For TAS only. |
Cache | get space-caches | For TAS only. |
Cache | get space-cache <space-ID> | For TAS only. |
Cache | get app-caches | For TAS only. |
Cache | get app-cache <app-ID> | For TAS only. |
Cache | get instance-caches <app-ID> | For TAS only. |
Cache | get instance-cache <app-ID> <instance-ID> | For TAS only. |
Cache | get policy-caches | For TAS only. |
Support | get ncp-log file <filename> | For both Kubernetes and TAS. |
Support | get ncp-log-level [component] | For both Kubernetes and TAS. |
Support | set ncp-log-level <log-level> [component] | For both Kubernetes and TAS. |
Support | get support-bundle file <filename> | For Kubernetes only. |
Support | get node-agent-log file <filename> | For Kubernetes only. |
Support | get node-agent-log file <filename> <node-name> | For Kubernetes only. |
Type | Command |
---|---|
Status | get node-agent-hyperbus status |
Cache | get container-cache <container-name> |
Cache | get container-caches |
Type | Command |
---|---|
Status | get ncp-k8s-api-server status |
Status | get kube-proxy-watcher <watcher-name> |
Status | get kube-proxy-watchers |
Status | dump ovs-flows |
Status Commands for the NCP Container
- View the status of the NCP master
get ncp-master status
Example:
kubenode> get ncp-master status
This instance is not the NCP master
Current NCP Master id is a4h83eh1-b8dd-4e74-c71c-cbb7cc9c4c1c
Last master update at Wed Oct 25 22:46:40 2017
- View the status of the connection between NCP and NSX Manager
get ncp-nsx status
Example:
kubenode> get ncp-nsx status
NSX Manager status: Healthy
- View the watcher status for ingress, namespace, pod, and service
get ncp-watchers
get ncp-watcher <watcher-name>
Example:
kubenode> get ncp-watchers
    pod:
        Average event processing time: 1145 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:51:37 PST
        Number of events processed: 1 (in past 3600-sec window)
        Total events processed by current watcher: 1
        Total events processed since watcher thread created: 1
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:51:37 PST
        Watcher thread status: Up
    namespace:
        Average event processing time: 68 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:51:37 PST
        Number of events processed: 2 (in past 3600-sec window)
        Total events processed by current watcher: 2
        Total events processed since watcher thread created: 2
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:51:37 PST
        Watcher thread status: Up
    ingress:
        Average event processing time: 0 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:51:37 PST
        Number of events processed: 0 (in past 3600-sec window)
        Total events processed by current watcher: 0
        Total events processed since watcher thread created: 0
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:51:37 PST
        Watcher thread status: Up
    service:
        Average event processing time: 3 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:51:37 PST
        Number of events processed: 1 (in past 3600-sec window)
        Total events processed by current watcher: 1
        Total events processed since watcher thread created: 1
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:51:37 PST
        Watcher thread status: Up

kubenode> get ncp-watcher pod
    Average event processing time: 1174 msec (in past 3600-sec window)
    Current watcher started time: Mar 02 2017 10:47:35 PST
    Number of events processed: 1 (in past 3600-sec window)
    Total events processed by current watcher: 1
    Total events processed since watcher thread created: 1
    Total watcher recycle count: 0
    Watcher thread created time: Mar 02 2017 10:47:35 PST
    Watcher thread status: Up
- View the status of the connection between NCP and the Kubernetes API server
get ncp-k8s-api-server status
Example:
kubenode> get ncp-k8s-api-server status
Kubernetes ApiServer status: Healthy
- Check all projects or a specific one
check projects
check project <project-name>
Example:
kubenode> check projects
    default:
        Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
        Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
    ns1:
        Router 8accc9cd-9883-45f6-81b3-0d1fb2583180 is missing

kubenode> check project default
    Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
    Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
- Check the status of the connection between NCP and TAS BBS
get ncp-bbs status
Example:
node> get ncp-bbs status
BBS Server status: Healthy
- Check the status of the connection between NCP and TAS CAPI
get ncp-capi status
Example:
node> get ncp-capi status
CAPI Server status: Healthy
- Check the status of the connection between NCP and the TAS policy server
get ncp-policy-server status
Example:
node> get ncp-policy-server status
Policy Server status: Healthy
Cache Commands for the NCP Container
- Get the internal cache for projects or namespaces
get project-cache <project-name>
get project-caches
get namespace-cache <namespace-name>
get namespace-caches
Example:
kubenode> get project-caches
    default:
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
    kube-system:
        logical-router: 5032b299-acad-448e-a521-19d272a08c46
        logical-switch:
            id: 85233651-602d-445d-ab10-1c84096cc22a
            ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
            subnet: 10.0.1.0/24
            subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
    testns:
        ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
        labels:
            ns: myns
            project: myproject
        logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
        logical-switch:
            id: 6111a99a-6e06-4faa-a131-649f10f7c815
            ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
            subnet: 50.0.2.0/24
            subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
        project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
        snat_ip: 4.4.0.3

kubenode> get project-cache default
    logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
    logical-switch:
        id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
        ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
        subnet: 10.0.0.0/24
        subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435

kubenode> get namespace-caches
    default:
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
    kube-system:
        logical-router: 5032b299-acad-448e-a521-19d272a08c46
        logical-switch:
            id: 85233651-602d-445d-ab10-1c84096cc22a
            ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
            subnet: 10.0.1.0/24
            subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
    testns:
        ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
        labels:
            ns: myns
            project: myproject
        logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
        logical-switch:
            id: 6111a99a-6e06-4faa-a131-649f10f7c815
            ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
            subnet: 50.0.2.0/24
            subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
        project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
        snat_ip: 4.4.0.3

kubenode> get namespace-cache default
    logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
    logical-switch:
        id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
        ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
        subnet: 10.0.0.0/24
        subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
- Get the internal cache for pods
get pod-cache <pod-name>
get pod-caches
Example:
kubenode> get pod-caches
    nsx.default.nginx-rc-uq2lv:
        cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
        gateway_ip: 10.0.0.1
        host_vif: d6210773-5c07-4817-98db-451bd1f01937
        id: 1c8b5c52-3795-11e8-ab42-005056b198fb
        ingress_controller: False
        ip: 10.0.0.2/24
        labels:
            app: nginx
        mac: 02:50:56:00:08:00
        port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
        vlan: 1
    nsx.testns.web-pod-1:
        cif_id: ce134f21-6be5-43fe-afbf-aaca8c06b5cf
        gateway_ip: 50.0.2.1
        host_vif: d6210773-5c07-4817-98db-451bd1f01937
        id: 3180b521-270e-11e8-ab42-005056b198fb
        ingress_controller: False
        ip: 50.0.2.3/24
        labels:
            app: nginx-new
            role: db
            tier: cache
        mac: 02:50:56:00:20:02
        port_id: 81bc2b8e-d902-4cad-9fc1-aabdc32ecaf8
        vlan: 3

kubenode> get pod-cache nsx.default.nginx-rc-uq2lv
    cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
    gateway_ip: 10.0.0.1
    host_vif: d6210773-5c07-4817-98db-451bd1f01937
    id: 1c8b5c52-3795-11e8-ab42-005056b198fb
    ingress_controller: False
    ip: 10.0.0.2/24
    labels:
        app: nginx
    mac: 02:50:56:00:08:00
    port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
    vlan: 1
- Get all ingress caches or a specific one
get ingress-caches
get ingress-cache <ingress-name>
Example:
kubenode> get ingress-caches
    nsx.default.cafe-ingress:
        ext_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
        lb_virtual_server:
            id: 895c7f43-c56e-4b67-bb4c-09d68459d416
            lb_service_id: 659eefc6-33d1-4672-a419-344b877f528e
            name: dgo2-http
            type: http
        lb_virtual_server_ip: 5.5.0.2
        name: cafe-ingress
        rules:
            host: cafe.example.com
            http:
                paths:
                    path: /coffee
                    backend:
                        serviceName: coffee-svc
                        servicePort: 80
                    lb_rule:
                        id: 4bc16bdd-abd9-47fb-a09e-21e58b2131c3
                        name: dgo2-default-cafe-ingress/coffee

kubenode> get ingress-cache nsx.default.cafe-ingress
    ext_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
    lb_virtual_server:
        id: 895c7f43-c56e-4b67-bb4c-09d68459d416
        lb_service_id: 659eefc6-33d1-4672-a419-344b877f528e
        name: dgo2-http
        type: http
    lb_virtual_server_ip: 5.5.0.2
    name: cafe-ingress
    rules:
        host: cafe.example.com
        http:
            paths:
                path: /coffee
                backend:
                    serviceName: coffee-svc
                    servicePort: 80
                lb_rule:
                    id: 4bc16bdd-abd9-47fb-a09e-21e58b2131c3
                    name: dgo2-default-cafe-ingress/coffee
- Get information about all ingress controllers or a specific one, including controllers that are disabled
get ingress-controllers
get ingress-controller <ingress-controller-name>
Example:
kubenode> get ingress-controllers
    native-load-balancer:
        ingress_virtual_server:
            http:
                default_backend_tags:
                id: 895c7f43-c56e-4b67-bb4c-09d68459d416
                pool_id: None
            https_terminated:
                default_backend_tags:
                id: 293282eb-f1a0-471c-9e48-ba28d9d89161
                pool_id: None
        lb_ip_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
        loadbalancer_service:
            first_avail_index: 0
            lb_services:
                id: 659eefc6-33d1-4672-a419-344b877f528e
                name: dgo2-bfmxi
                t1_link_port_ip: 100.64.128.5
                t1_router_id: cb50deb2-4460-45f2-879a-1b94592ae886
                virtual_servers:
                    293282eb-f1a0-471c-9e48-ba28d9d89161
                    895c7f43-c56e-4b67-bb4c-09d68459d416
        ssl:
            ssl_client_profile_id: aff205bb-4db8-5a72-8d67-218cdc54d27b
        vip: 5.5.0.2
    nsx.default.nginx-ingress-rc-host-ed3og
        ip: 10.192.162.201
        mode: hostnetwork
        pool_id: 5813c609-5d3a-4438-b9c3-ea3cd6de52c3

kubenode> get ingress-controller native-load-balancer
    ingress_virtual_server:
        http:
            default_backend_tags:
            id: 895c7f43-c56e-4b67-bb4c-09d68459d416
            pool_id: None
        https_terminated:
            default_backend_tags:
            id: 293282eb-f1a0-471c-9e48-ba28d9d89161
            pool_id: None
    lb_ip_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
    loadbalancer_service:
        first_avail_index: 0
        lb_services:
            id: 659eefc6-33d1-4672-a419-344b877f528e
            name: dgo2-bfmxi
            t1_link_port_ip: 100.64.128.5
            t1_router_id: cb50deb2-4460-45f2-879a-1b94592ae886
            virtual_servers:
                293282eb-f1a0-471c-9e48-ba28d9d89161
                895c7f43-c56e-4b67-bb4c-09d68459d416
    ssl:
        ssl_client_profile_id: aff205bb-4db8-5a72-8d67-218cdc54d27b
    vip: 5.5.0.2
- Get all network policy caches or a specific one
get network-policy-caches
get network-policy-cache <network-policy-name>
Example:
kubenode> get network-policy-caches
    nsx.testns.allow-tcp-80:
        dest_labels: None
        dest_pods:
            50.0.2.3
        match_expressions:
            key: tier
            operator: In
            values:
                cache
        name: allow-tcp-80
        np_dest_ip_set_ids:
            22f82d76-004f-4d12-9504-ce1cb9c8aa00
        np_except_ip_set_ids:
        np_ip_set_ids:
            14f7f825-f1a0-408f-bbd9-bb2f75d44666
        np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
        np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
        ns_name: testns
        src_egress_rules: None
        src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
        src_pods:
            50.0.2.0/24
        src_rules:
            from:
                namespaceSelector:
                    matchExpressions:
                        key: tier
                        operator: DoesNotExist
                    matchLabels:
                        ns: myns
            ports:
                port: 80
                protocol: TCP
        src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1

kubenode> get network-policy-cache nsx.testns.allow-tcp-80
    dest_labels: None
    dest_pods:
        50.0.2.3
    match_expressions:
        key: tier
        operator: In
        values:
            cache
    name: allow-tcp-80
    np_dest_ip_set_ids:
        22f82d76-004f-4d12-9504-ce1cb9c8aa00
    np_except_ip_set_ids:
    np_ip_set_ids:
        14f7f825-f1a0-408f-bbd9-bb2f75d44666
    np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
    np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
    ns_name: testns
    src_egress_rules: None
    src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
    src_pods:
        50.0.2.0/24
    src_rules:
        from:
            namespaceSelector:
                matchExpressions:
                    key: tier
                    operator: DoesNotExist
                matchLabels:
                    ns: myns
        ports:
            port: 80
            protocol: TCP
    src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1
- Get all ASG caches or a specific one
get asg-caches
get asg-cache <asg-ID>
Example:
node> get asg-caches
    edc04715-d04c-4e63-abbc-db601a668db6:
        fws_id: 3c66f40a-5378-46d7-a7e2-bee4ba72a4cc
        name: org-85_tcp_80_asg
        rules:
            destinations:
                66.10.10.0/24
            ports:
                80
            protocol: tcp
            rule_id: 4359
        running_default: False
        running_spaces:
            75bc164d-1214-46f9-80bb-456a8fbccbfd
        staging_default: False
        staging_spaces:

node> get asg-cache edc04715-d04c-4e63-abbc-db601a668db6
    fws_id: 3c66f40a-5378-46d7-a7e2-bee4ba72a4cc
    name: org-85_tcp_80_asg
    rules:
        destinations:
            66.10.10.0/24
        ports:
            80
        protocol: tcp
        rule_id: 4359
    running_default: False
    running_spaces:
        75bc164d-1214-46f9-80bb-456a8fbccbfd
    staging_default: False
    staging_spaces:
- Get all organization caches or a specific one
get org-caches
get org-cache <org-ID>
Example:
node> get org-caches
    ebb8b4f9-a40f-4122-bf21-65c40f575aca:
        ext_pool_id: 9208a8b8-57d7-4582-9c1f-7a7cefa104f5
        isolation:
            isolation_section_id: d6e7ff95-4737-4e34-91d4-27601897353f
        logical-router: 94a414a2-551e-4444-bae6-3d79901a165f
        logical-switch:
            id: d74807e8-8f74-4575-b26b-87d4fdbafd3c
            ip_pool_id: 1b60f16f-4a30-4a3d-93cc-bfb08a5e3e02
            subnet: 50.0.48.0/24
            subnet_id: a458d3aa-bea9-4684-9957-d0ce80d11788
        name: org-50
        snat_ip: 70.0.0.49
        spaces:
            e8ab7aa0-d4e3-4458-a896-f33177557851

node> get org-cache ebb8b4f9-a40f-4122-bf21-65c40f575aca
    ext_pool_id: 9208a8b8-57d7-4582-9c1f-7a7cefa104f5
    isolation:
        isolation_section_id: d6e7ff95-4737-4e34-91d4-27601897353f
    logical-router: 94a414a2-551e-4444-bae6-3d79901a165f
    logical-switch:
        id: d74807e8-8f74-4575-b26b-87d4fdbafd3c
        ip_pool_id: 1b60f16f-4a30-4a3d-93cc-bfb08a5e3e02
        subnet: 50.0.48.0/24
        subnet_id: a458d3aa-bea9-4684-9957-d0ce80d11788
    name: org-50
    snat_ip: 70.0.0.49
    spaces:
        e8ab7aa0-d4e3-4458-a896-f33177557851
- Get all space caches or a specific one
get space-caches
get space-cache <space-ID>
Example:
node> get space-caches
    global_security_group:
        name: global_security_group
        running_nsgroup: 226d4292-47fb-4c2e-a118-449818d8fa98
        staging_nsgroup: 7ebbf7f5-38c9-43a3-9292-682056722836
    7870d134-7997-4373-b665-b6a910413c47:
        name: test-space1
        org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
        running_nsgroup: 4a3d9bcc-be36-47ae-bff8-96448fecf307
        running_security_groups:
            aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
        staging_security_groups:
            aa0c7c3f-a478-4d45-8afa-df5d5d7dc512

node> get space-cache 7870d134-7997-4373-b665-b6a910413c47
    name: test-space1
    org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
    running_nsgroup: 4a3d9bcc-be36-47ae-bff8-96448fecf307
    running_security_groups:
        aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
    staging_security_groups:
        aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
- Get all app caches or a specific one
get app-caches
get app-cache <app-ID>
Example:
node> get app-caches
    aff2b12b-b425-4d9f-b8e6-b6308644efa8:
        instances:
            b72199cc-e1ab-49bf-506d-478d:
                app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
                cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
                gateway_ip: 192.168.5.1
                host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
                id: b72199cc-e1ab-49bf-506d-478d
                index: 0
                ip: 192.168.5.4/24
                last_updated_time: 1522965828.45
                mac: 02:50:56:00:60:02
                port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
                state: RUNNING
                vlan: 3
        name: hello2
        org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
        space_id: 7870d134-7997-4373-b665-b6a910413c47

node> get app-cache aff2b12b-b425-4d9f-b8e6-b6308644efa8
    instances:
        b72199cc-e1ab-49bf-506d-478d:
            app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
            cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
            cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
            gateway_ip: 192.168.5.1
            host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
            id: b72199cc-e1ab-49bf-506d-478d
            index: 0
            ip: 192.168.5.4/24
            last_updated_time: 1522965828.45
            mac: 02:50:56:00:60:02
            port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
            state: RUNNING
            vlan: 3
    name: hello2
    org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
    space_id: 7870d134-7997-4373-b665-b6a910413c47
- Get the caches of all instances of an app or the cache of a specific instance
get instance-caches <app-ID>
get instance-cache <app-ID> <instance-ID>
Example:
node> get instance-caches aff2b12b-b425-4d9f-b8e6-b6308644efa8
    b72199cc-e1ab-49bf-506d-478d:
        app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
        cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
        cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
        gateway_ip: 192.168.5.1
        host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
        id: b72199cc-e1ab-49bf-506d-478d
        index: 0
        ip: 192.168.5.4/24
        last_updated_time: 1522965828.45
        mac: 02:50:56:00:60:02
        port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
        state: RUNNING
        vlan: 3

node> get instance-cache aff2b12b-b425-4d9f-b8e6-b6308644efa8 b72199cc-e1ab-49bf-506d-478d
    app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
    cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
    cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
    gateway_ip: 192.168.5.1
    host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
    id: b72199cc-e1ab-49bf-506d-478d
    index: 0
    ip: 192.168.5.4/24
    last_updated_time: 1522965828.45
    mac: 02:50:56:00:60:02
    port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
    state: RUNNING
    vlan: 3
- Get all policy caches
get policy-caches
Example:
node> get policy-caches
    aff2b12b-b425-4d9f-b8e6-b6308644efa8:
        fws_id: 3fe27725-f139-479a-b83b-8576c9aedbef
        nsg_id: 30583a27-9b56-49c1-a534-4040f91cc333
        rules:
            8272:
                dst_app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                ports: 8382
                protocol: tcp
                src_app_id: f582ec4d-3a13-440a-afbd-97b7bfae21d1
    f582ec4d-3a13-440a-afbd-97b7bfae21d1:
        nsg_id: d24b9f77-e2e0-4fba-b258-893223683aa6
        rules:
            8272:
                dst_app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                ports: 8382
                protocol: tcp
                src_app_id: f582ec4d-3a13-440a-afbd-97b7bfae21d1
Support Commands for the NCP Container
- Save the NCP support bundle in the filestore
The support bundle consists of the log files for all the containers in pods with the label tier:nsx-networking. The bundle file is in tgz format and is saved in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the bundle file to a remote site.
get support-bundle file <filename>
Example:
kubenode> get support-bundle file foo
Bundle file foo created in tgz format
kubenode> copy file foo url scp://[email protected]:/tmp
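Once the bundle has been copied off the appliance, it can be inspected with standard tar tools on the destination host. A minimal sketch; the file name foo.tgz matches the example above, but the directory paths and the archive contents below are placeholders created here purely for illustration, not the real bundle layout:

```shell
# Create a stand-in archive mimicking a tgz bundle (illustration only:
# a real bundle would contain the collected container log files).
mkdir -p /tmp/bundle-src /tmp/bundle-out
echo "sample log line" > /tmp/bundle-src/ncp.log
tar -czf /tmp/foo.tgz -C /tmp/bundle-src ncp.log

# List the bundle's contents, then extract it for inspection.
tar -tzf /tmp/foo.tgz
tar -xzf /tmp/foo.tgz -C /tmp/bundle-out
```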
- Save the NCP logs in the filestore
The log file is saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the bundle file to a remote site.
get ncp-log file <filename>
Example:
kubenode> get ncp-log file foo
Log file foo created in tgz format
- Save the node agent logs in the filestore
Save the node agent logs from one node or from all the nodes. The logs are saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the bundle file to a remote site.
get node-agent-log file <filename>
get node-agent-log file <filename> <node-name>
Example:
kubenode> get node-agent-log file foo
Log file foo created in tgz format
- Get and set the log level globally or for a specific component.
The available log levels are NOTSET, DEBUG, INFO, WARNING, ERROR, and CRITICAL.
The available components are nsx_ujo.ncp, nsx_ujo.ncp.k8s, nsx_ujo.ncp.pcf, vmware_nsxlib.v3, nsxrpc, and nsx_ujo.ncp.nsx.
get ncp-log-level [component]
set ncp-log-level <log-level> [component]
Examples:
kubenode> get ncp-log-level
NCP log level is INFO

kubenode> get ncp-log-level nsx_ujo.ncp
nsx_ujo.ncp log level is INFO

kubenode> set ncp-log-level DEBUG
NCP log level is changed to DEBUG

kubenode> set ncp-log-level DEBUG nsx_ujo.ncp
nsx_ujo.ncp log level has been changed to DEBUG
Status Commands for the NSX Node Agent Container
- View the status of the connection between the node agent and HyperBus on this node.
get node-agent-hyperbus status
Example:
kubenode> get node-agent-hyperbus status
HyperBus status: Healthy
Cache Commands for the NSX Node Agent Container
- Get the internal cache for NSX node agent containers.
get container-cache <container-name>
get container-caches
Example:
kubenode> get container-caches
    cif104:
        ip: 192.168.0.14/32
        mac: 50:01:01:01:01:14
        gateway_ip: 169.254.1.254/16
        vlan_id: 104

kubenode> get container-cache cif104
    ip: 192.168.0.14/32
    mac: 50:01:01:01:01:14
    gateway_ip: 169.254.1.254/16
    vlan_id: 104
Status Commands for the NSX Kube Proxy Container
- View the status of the connection between Kube proxy and the Kubernetes API server
get ncp-k8s-api-server status
Example:
kubenode> get kube-proxy-k8s-api-server status
Kubernetes ApiServer status: Healthy
- View the Kube proxy watcher status
get kube-proxy-watcher <watcher-name>
get kube-proxy-watchers
Example:
kubenode> get kube-proxy-watchers
    endpoint:
        Average event processing time: 15 msec (in past 3600-sec window)
        Current watcher started time: May 01 2017 15:06:24 PDT
        Number of events processed: 90 (in past 3600-sec window)
        Total events processed by current watcher: 90
        Total events processed since watcher thread created: 90
        Total watcher recycle count: 0
        Watcher thread created time: May 01 2017 15:06:24 PDT
        Watcher thread status: Up
    service:
        Average event processing time: 8 msec (in past 3600-sec window)
        Current watcher started time: May 01 2017 15:06:24 PDT
        Number of events processed: 2 (in past 3600-sec window)
        Total events processed by current watcher: 2
        Total events processed since watcher thread created: 2
        Total watcher recycle count: 0
        Watcher thread created time: May 01 2017 15:06:24 PDT
        Watcher thread status: Up

kubenode> get kube-proxy-watcher endpoint
    Average event processing time: 15 msec (in past 3600-sec window)
    Current watcher started time: May 01 2017 15:06:24 PDT
    Number of events processed: 90 (in past 3600-sec window)
    Total events processed by current watcher: 90
    Total events processed since watcher thread created: 90
    Total watcher recycle count: 0
    Watcher thread created time: May 01 2017 15:06:24 PDT
    Watcher thread status: Up
- Dump OVS flows on a node
dump ovs-flows
Example:
kubenode> dump ovs-flows
    NXST_FLOW reply (xid=0x4):
    cookie=0x0, duration=8.876s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip actions=ct(table=1)
    cookie=0x0, duration=8.898s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=0 actions=NORMAL
    cookie=0x0, duration=8.759s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,tcp,nw_dst=10.96.0.1,tp_dst=443 actions=mod_tp_dst:443
    cookie=0x0, duration=8.719s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip,nw_dst=10.96.0.10 actions=drop
    cookie=0x0, duration=8.819s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=90,ip,in_port=1 actions=ct(table=2,nat)
    cookie=0x0, duration=8.799s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=80,ip actions=NORMAL
    cookie=0x0, duration=8.856s, table=2, n_packets=0, n_bytes=0, idle_age=8, actions=NORMAL