To run CLI commands, log in to the NSX Container Plugin container, open a terminal, and run the nsxcli command.
kubectl exec -it <pod name> nsxcli
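For example, the NCP pod can usually be located with kubectl first; the pod name nsx-ncp-7b4f5d6c9-x2zvk and the nsx-system namespace below are only illustrative and depend on your deployment:
kubectl get pods -n nsx-system
kubectl exec -it nsx-ncp-7b4f5d6c9-x2zvk -n nsx-system nsxcli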
The following CLI commands can be run in the NCP container:
Type | Command | Note |
---|---|---|
Status | get ncp-master status | For Kubernetes and TAS. |
Status | get ncp-nsx status | For Kubernetes and TAS. |
Status | get ncp-watcher <watcher-name> | For Kubernetes and TAS. |
Status | get ncp-watchers | For Kubernetes and TAS. |
Status | get ncp-k8s-api-server status | For Kubernetes. |
Status | check projects | For Kubernetes. |
Status | check project <project-name> | For Kubernetes. |
Status | get ncp-bbs status | For TAS only. |
Status | get ncp-capi status | For TAS only. |
Status | get ncp-policy-server status | For TAS only. |
Cache | get project-caches | For Kubernetes. |
Cache | get project-cache <project-name> | For Kubernetes. |
Cache | get namespace-caches | For Kubernetes. |
Cache | get namespace-cache <namespace-name> | For Kubernetes. |
Cache | get pod-caches | For Kubernetes. |
Cache | get pod-cache <pod-name> | For Kubernetes. |
Cache | get ingress-caches | For Kubernetes. |
Cache | get ingress-cache <ingress-name> | For Kubernetes. |
Cache | get ingress-controllers | For Kubernetes. |
Cache | get ingress-controller <ingress-controller-name> | For Kubernetes. |
Cache | get network-policy-caches | For Kubernetes. |
Cache | get network-policy-cache <network-policy-name> | For Kubernetes. |
Cache | get asg-caches | For TAS only. |
Cache | get asg-cache <asg-ID> | For TAS only. |
Cache | get org-caches | For TAS only. |
Cache | get org-cache <org-ID> | For TAS only. |
Cache | get space-caches | For TAS only. |
Cache | get space-cache <space-ID> | For TAS only. |
Cache | get app-caches | For TAS only. |
Cache | get app-cache <app-ID> | For TAS only. |
Cache | get instance-caches <app-ID> | For TAS only. |
Cache | get instance-cache <app-ID> <instance-ID> | For TAS only. |
Cache | get policy-caches | For TAS only. |
Support | get ncp-log file <filename> | For Kubernetes and TAS. |
Support | get ncp-log-level [component] | For Kubernetes and TAS. |
Support | set ncp-log-level <log-level> [component] | For Kubernetes and TAS. |
Support | get support-bundle file <filename> | For Kubernetes. |
Support | get node-agent-log file <filename> | For Kubernetes. |
Support | get node-agent-log file <filename> <node-name> | For Kubernetes. |
The following CLI commands can be run in the NSX node agent container:
Type | Command |
---|---|
Status | get node-agent-hyperbus status |
Cache | get container-cache <container-name> |
Cache | get container-caches |
The following CLI commands can be run in the NSX kube-proxy container:
Type | Command |
---|---|
Status | get ncp-k8s-api-server status |
Status | get kube-proxy-watcher <watcher-name> |
Status | get kube-proxy-watchers |
Status | dump ovs-flows |
Status commands for the NCP container
- Show the status of the NCP master
get ncp-master status
Example:
kubenode> get ncp-master status
This instance is not the NCP master
Current NCP Master id is a4h83eh1-b8dd-4e74-c71c-cbb7cc9c4c1c
Last master update at Wed Oct 25 22:46:40 2017
- Show the connection status between NCP and NSX Manager
get ncp-nsx status
Example:
kubenode> get ncp-nsx status
NSX Manager status: Healthy
- Show the watcher status for Ingress, namespace, pod, and service
get ncp-watchers
get ncp-watcher <watcher-name>
Example:
kubenode> get ncp-watchers
  pod:
    Average event processing time: 1145 msec (in past 3600-sec window)
    Current watcher started time: Mar 02 2017 10:51:37 PST
    Number of events processed: 1 (in past 3600-sec window)
    Total events processed by current watcher: 1
    Total events processed since watcher thread created: 1
    Total watcher recycle count: 0
    Watcher thread created time: Mar 02 2017 10:51:37 PST
    Watcher thread status: Up
  namespace:
    Average event processing time: 68 msec (in past 3600-sec window)
    Current watcher started time: Mar 02 2017 10:51:37 PST
    Number of events processed: 2 (in past 3600-sec window)
    Total events processed by current watcher: 2
    Total events processed since watcher thread created: 2
    Total watcher recycle count: 0
    Watcher thread created time: Mar 02 2017 10:51:37 PST
    Watcher thread status: Up
  ingress:
    Average event processing time: 0 msec (in past 3600-sec window)
    Current watcher started time: Mar 02 2017 10:51:37 PST
    Number of events processed: 0 (in past 3600-sec window)
    Total events processed by current watcher: 0
    Total events processed since watcher thread created: 0
    Total watcher recycle count: 0
    Watcher thread created time: Mar 02 2017 10:51:37 PST
    Watcher thread status: Up
  service:
    Average event processing time: 3 msec (in past 3600-sec window)
    Current watcher started time: Mar 02 2017 10:51:37 PST
    Number of events processed: 1 (in past 3600-sec window)
    Total events processed by current watcher: 1
    Total events processed since watcher thread created: 1
    Total watcher recycle count: 0
    Watcher thread created time: Mar 02 2017 10:51:37 PST
    Watcher thread status: Up

kubenode> get ncp-watcher pod
  Average event processing time: 1174 msec (in past 3600-sec window)
  Current watcher started time: Mar 02 2017 10:47:35 PST
  Number of events processed: 1 (in past 3600-sec window)
  Total events processed by current watcher: 1
  Total events processed since watcher thread created: 1
  Total watcher recycle count: 0
  Watcher thread created time: Mar 02 2017 10:47:35 PST
  Watcher thread status: Up
- Show the connection status between NCP and the Kubernetes API server
get ncp-k8s-api-server status
Example:
kubenode> get ncp-k8s-api-server status
Kubernetes ApiServer status: Healthy
- Check all projects or a specific project
check projects
check project <project-name>
Example:
kubenode> check projects
  default:
    Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
    Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
  ns1:
    Router 8accc9cd-9883-45f6-81b3-0d1fb2583180 is missing

kubenode> check project default
  Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
  Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
- Check the connection status between NCP and the TAS BBS
get ncp-bbs status
Example:
node> get ncp-bbs status
BBS Server status: Healthy
- Check the connection status between NCP and the TAS CAPI
get ncp-capi status
Example:
node> get ncp-capi status
CAPI Server status: Healthy
- Check the connection status between NCP and the TAS policy server
get ncp-policy-server status
Example:
node> get ncp-policy-server status
Policy Server status: Healthy
Cache commands for the NCP container
- Get the internal cache for projects or namespaces
get project-cache <project-name>
get project-caches
get namespace-cache <namespace-name>
get namespace-caches
Example:
kubenode> get project-caches
  default:
    logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
    logical-switch:
      id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
      ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
      subnet: 10.0.0.0/24
      subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
  kube-system:
    logical-router: 5032b299-acad-448e-a521-19d272a08c46
    logical-switch:
      id: 85233651-602d-445d-ab10-1c84096cc22a
      ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
      subnet: 10.0.1.0/24
      subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
  testns:
    ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
    labels:
      ns: myns
      project: myproject
    logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
    logical-switch:
      id: 6111a99a-6e06-4faa-a131-649f10f7c815
      ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
      subnet: 50.0.2.0/24
      subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
    project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
    snat_ip: 4.4.0.3

kubenode> get project-cache default
  logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
  logical-switch:
    id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
    ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
    subnet: 10.0.0.0/24
    subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435

kubenode> get namespace-caches
  default:
    logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
    logical-switch:
      id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
      ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
      subnet: 10.0.0.0/24
      subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
  kube-system:
    logical-router: 5032b299-acad-448e-a521-19d272a08c46
    logical-switch:
      id: 85233651-602d-445d-ab10-1c84096cc22a
      ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
      subnet: 10.0.1.0/24
      subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
  testns:
    ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
    labels:
      ns: myns
      project: myproject
    logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
    logical-switch:
      id: 6111a99a-6e06-4faa-a131-649f10f7c815
      ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
      subnet: 50.0.2.0/24
      subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
    project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
    snat_ip: 4.4.0.3

kubenode> get namespace-cache default
  logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
  logical-switch:
    id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
    ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
    subnet: 10.0.0.0/24
    subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
- Get the internal cache for pods
get pod-cache <pod-name>
get pod-caches
Example:
kubenode> get pod-caches
  nsx.default.nginx-rc-uq2lv:
    cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
    gateway_ip: 10.0.0.1
    host_vif: d6210773-5c07-4817-98db-451bd1f01937
    id: 1c8b5c52-3795-11e8-ab42-005056b198fb
    ingress_controller: False
    ip: 10.0.0.2/24
    labels:
      app: nginx
    mac: 02:50:56:00:08:00
    port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
    vlan: 1
  nsx.testns.web-pod-1:
    cif_id: ce134f21-6be5-43fe-afbf-aaca8c06b5cf
    gateway_ip: 50.0.2.1
    host_vif: d6210773-5c07-4817-98db-451bd1f01937
    id: 3180b521-270e-11e8-ab42-005056b198fb
    ingress_controller: False
    ip: 50.0.2.3/24
    labels:
      app: nginx-new
      role: db
      tier: cache
    mac: 02:50:56:00:20:02
    port_id: 81bc2b8e-d902-4cad-9fc1-aabdc32ecaf8
    vlan: 3

kubenode> get pod-cache nsx.default.nginx-rc-uq2lv
  cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
  gateway_ip: 10.0.0.1
  host_vif: d6210773-5c07-4817-98db-451bd1f01937
  id: 1c8b5c52-3795-11e8-ab42-005056b198fb
  ingress_controller: False
  ip: 10.0.0.2/24
  labels:
    app: nginx
  mac: 02:50:56:00:08:00
  port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
  vlan: 1
- Get all Ingress caches or a specific Ingress cache
get ingress-caches
get ingress-cache <ingress-name>
Example:
kubenode> get ingress-caches
  nsx.default.cafe-ingress:
    ext_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
    lb_virtual_server:
      id: 895c7f43-c56e-4b67-bb4c-09d68459d416
      lb_service_id: 659eefc6-33d1-4672-a419-344b877f528e
      name: dgo2-http
      type: http
    lb_virtual_server_ip: 5.5.0.2
    name: cafe-ingress
    rules:
      host: cafe.example.com
      http:
        paths:
          path: /coffee
          backend:
            serviceName: coffee-svc
            servicePort: 80
          lb_rule:
            id: 4bc16bdd-abd9-47fb-a09e-21e58b2131c3
            name: dgo2-default-cafe-ingress/coffee

kubenode> get ingress-cache nsx.default.cafe-ingress
  ext_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
  lb_virtual_server:
    id: 895c7f43-c56e-4b67-bb4c-09d68459d416
    lb_service_id: 659eefc6-33d1-4672-a419-344b877f528e
    name: dgo2-http
    type: http
  lb_virtual_server_ip: 5.5.0.2
  name: cafe-ingress
  rules:
    host: cafe.example.com
    http:
      paths:
        path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
        lb_rule:
          id: 4bc16bdd-abd9-47fb-a09e-21e58b2131c3
          name: dgo2-default-cafe-ingress/coffee
- Get all Ingress controllers or a specific Ingress controller, including controllers that are disabled
get ingress-controllers
get ingress-controller <ingress-controller-name>
Example:
kubenode> get ingress-controllers
  native-load-balancer:
    ingress_virtual_server:
      http:
        default_backend_tags:
        id: 895c7f43-c56e-4b67-bb4c-09d68459d416
        pool_id: None
      https_terminated:
        default_backend_tags:
        id: 293282eb-f1a0-471c-9e48-ba28d9d89161
        pool_id: None
    lb_ip_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
    loadbalancer_service:
      first_avail_index: 0
      lb_services:
        id: 659eefc6-33d1-4672-a419-344b877f528e
        name: dgo2-bfmxi
        t1_link_port_ip: 100.64.128.5
        t1_router_id: cb50deb2-4460-45f2-879a-1b94592ae886
        virtual_servers:
          293282eb-f1a0-471c-9e48-ba28d9d89161
          895c7f43-c56e-4b67-bb4c-09d68459d416
    ssl:
      ssl_client_profile_id: aff205bb-4db8-5a72-8d67-218cdc54d27b
    vip: 5.5.0.2
  nsx.default.nginx-ingress-rc-host-ed3og:
    ip: 10.192.162.201
    mode: hostnetwork
    pool_id: 5813c609-5d3a-4438-b9c3-ea3cd6de52c3

kubenode> get ingress-controller native-load-balancer
  ingress_virtual_server:
    http:
      default_backend_tags:
      id: 895c7f43-c56e-4b67-bb4c-09d68459d416
      pool_id: None
    https_terminated:
      default_backend_tags:
      id: 293282eb-f1a0-471c-9e48-ba28d9d89161
      pool_id: None
  lb_ip_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
  loadbalancer_service:
    first_avail_index: 0
    lb_services:
      id: 659eefc6-33d1-4672-a419-344b877f528e
      name: dgo2-bfmxi
      t1_link_port_ip: 100.64.128.5
      t1_router_id: cb50deb2-4460-45f2-879a-1b94592ae886
      virtual_servers:
        293282eb-f1a0-471c-9e48-ba28d9d89161
        895c7f43-c56e-4b67-bb4c-09d68459d416
  ssl:
    ssl_client_profile_id: aff205bb-4db8-5a72-8d67-218cdc54d27b
  vip: 5.5.0.2
- Get all network policy caches or a specific network policy cache
get network-policy-caches
get network-policy-cache <network-policy-name>
Example:
kubenode> get network-policy-caches
  nsx.testns.allow-tcp-80:
    dest_labels: None
    dest_pods:
      50.0.2.3
    match_expressions:
      key: tier
      operator: In
      values:
        cache
    name: allow-tcp-80
    np_dest_ip_set_ids:
      22f82d76-004f-4d12-9504-ce1cb9c8aa00
    np_except_ip_set_ids:
    np_ip_set_ids:
      14f7f825-f1a0-408f-bbd9-bb2f75d44666
    np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
    np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
    ns_name: testns
    src_egress_rules: None
    src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
    src_pods:
      50.0.2.0/24
    src_rules:
      from:
        namespaceSelector:
          matchExpressions:
            key: tier
            operator: DoesNotExist
          matchLabels:
            ns: myns
      ports:
        port: 80
        protocol: TCP
    src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1

kubenode> get network-policy-cache nsx.testns.allow-tcp-80
  dest_labels: None
  dest_pods:
    50.0.2.3
  match_expressions:
    key: tier
    operator: In
    values:
      cache
  name: allow-tcp-80
  np_dest_ip_set_ids:
    22f82d76-004f-4d12-9504-ce1cb9c8aa00
  np_except_ip_set_ids:
  np_ip_set_ids:
    14f7f825-f1a0-408f-bbd9-bb2f75d44666
  np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
  np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
  ns_name: testns
  src_egress_rules: None
  src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
  src_pods:
    50.0.2.0/24
  src_rules:
    from:
      namespaceSelector:
        matchExpressions:
          key: tier
          operator: DoesNotExist
        matchLabels:
          ns: myns
    ports:
      port: 80
      protocol: TCP
  src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1
- Get all ASG caches or a specific ASG cache
get asg-caches
get asg-cache <asg-ID>
Example:
node> get asg-caches
  edc04715-d04c-4e63-abbc-db601a668db6:
    fws_id: 3c66f40a-5378-46d7-a7e2-bee4ba72a4cc
    name: org-85_tcp_80_asg
    rules:
      destinations:
        66.10.10.0/24
      ports:
        80
      protocol: tcp
      rule_id: 4359
    running_default: False
    running_spaces:
      75bc164d-1214-46f9-80bb-456a8fbccbfd
    staging_default: False
    staging_spaces:

node> get asg-cache edc04715-d04c-4e63-abbc-db601a668db6
  fws_id: 3c66f40a-5378-46d7-a7e2-bee4ba72a4cc
  name: org-85_tcp_80_asg
  rules:
    destinations:
      66.10.10.0/24
    ports:
      80
    protocol: tcp
    rule_id: 4359
  running_default: False
  running_spaces:
    75bc164d-1214-46f9-80bb-456a8fbccbfd
  staging_default: False
  staging_spaces:
- Get all org caches or a specific org cache
get org-caches
get org-cache <org-ID>
Example:
node> get org-caches
  ebb8b4f9-a40f-4122-bf21-65c40f575aca:
    ext_pool_id: 9208a8b8-57d7-4582-9c1f-7a7cefa104f5
    isolation:
      isolation_section_id: d6e7ff95-4737-4e34-91d4-27601897353f
    logical-router: 94a414a2-551e-4444-bae6-3d79901a165f
    logical-switch:
      id: d74807e8-8f74-4575-b26b-87d4fdbafd3c
      ip_pool_id: 1b60f16f-4a30-4a3d-93cc-bfb08a5e3e02
      subnet: 50.0.48.0/24
      subnet_id: a458d3aa-bea9-4684-9957-d0ce80d11788
    name: org-50
    snat_ip: 70.0.0.49
    spaces:
      e8ab7aa0-d4e3-4458-a896-f33177557851

node> get org-cache ebb8b4f9-a40f-4122-bf21-65c40f575aca
  ext_pool_id: 9208a8b8-57d7-4582-9c1f-7a7cefa104f5
  isolation:
    isolation_section_id: d6e7ff95-4737-4e34-91d4-27601897353f
  logical-router: 94a414a2-551e-4444-bae6-3d79901a165f
  logical-switch:
    id: d74807e8-8f74-4575-b26b-87d4fdbafd3c
    ip_pool_id: 1b60f16f-4a30-4a3d-93cc-bfb08a5e3e02
    subnet: 50.0.48.0/24
    subnet_id: a458d3aa-bea9-4684-9957-d0ce80d11788
  name: org-50
  snat_ip: 70.0.0.49
  spaces:
    e8ab7aa0-d4e3-4458-a896-f33177557851
- Get all space caches or a specific space cache
get space-caches
get space-cache <space-ID>
Example:
node> get space-caches
  global_security_group:
    name: global_security_group
    running_nsgroup: 226d4292-47fb-4c2e-a118-449818d8fa98
    staging_nsgroup: 7ebbf7f5-38c9-43a3-9292-682056722836
  7870d134-7997-4373-b665-b6a910413c47:
    name: test-space1
    org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
    running_nsgroup: 4a3d9bcc-be36-47ae-bff8-96448fecf307
    running_security_groups:
      aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
    staging_security_groups:
      aa0c7c3f-a478-4d45-8afa-df5d5d7dc512

node> get space-cache 7870d134-7997-4373-b665-b6a910413c47
  name: test-space1
  org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
  running_nsgroup: 4a3d9bcc-be36-47ae-bff8-96448fecf307
  running_security_groups:
    aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
  staging_security_groups:
    aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
- Get all app caches or a specific app cache
get app-caches
get app-cache <app-ID>
Example:
node> get app-caches
  aff2b12b-b425-4d9f-b8e6-b6308644efa8:
    instances:
      b72199cc-e1ab-49bf-506d-478d:
        app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
        cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
        cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
        gateway_ip: 192.168.5.1
        host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
        id: b72199cc-e1ab-49bf-506d-478d
        index: 0
        ip: 192.168.5.4/24
        last_updated_time: 1522965828.45
        mac: 02:50:56:00:60:02
        port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
        state: RUNNING
        vlan: 3
    name: hello2
    org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
    space_id: 7870d134-7997-4373-b665-b6a910413c47

node> get app-cache aff2b12b-b425-4d9f-b8e6-b6308644efa8
  instances:
    b72199cc-e1ab-49bf-506d-478d:
      app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
      cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
      cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
      gateway_ip: 192.168.5.1
      host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
      id: b72199cc-e1ab-49bf-506d-478d
      index: 0
      ip: 192.168.5.4/24
      last_updated_time: 1522965828.45
      mac: 02:50:56:00:60:02
      port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
      state: RUNNING
      vlan: 3
  name: hello2
  org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
  space_id: 7870d134-7997-4373-b665-b6a910413c47
- Get all instance caches of an app or a specific instance cache
get instance-caches <app-ID>
get instance-cache <app-ID> <instance-ID>
Example:
node> get instance-caches aff2b12b-b425-4d9f-b8e6-b6308644efa8
  b72199cc-e1ab-49bf-506d-478d:
    app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
    cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
    cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
    gateway_ip: 192.168.5.1
    host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
    id: b72199cc-e1ab-49bf-506d-478d
    index: 0
    ip: 192.168.5.4/24
    last_updated_time: 1522965828.45
    mac: 02:50:56:00:60:02
    port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
    state: RUNNING
    vlan: 3

node> get instance-cache aff2b12b-b425-4d9f-b8e6-b6308644efa8 b72199cc-e1ab-49bf-506d-478d
  app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
  cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
  cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
  gateway_ip: 192.168.5.1
  host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
  id: b72199cc-e1ab-49bf-506d-478d
  index: 0
  ip: 192.168.5.4/24
  last_updated_time: 1522965828.45
  mac: 02:50:56:00:60:02
  port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
  state: RUNNING
  vlan: 3
- Get all policy caches
get policy-caches
Example:
node> get policy-caches
  aff2b12b-b425-4d9f-b8e6-b6308644efa8:
    fws_id: 3fe27725-f139-479a-b83b-8576c9aedbef
    nsg_id: 30583a27-9b56-49c1-a534-4040f91cc333
    rules:
      8272:
        dst_app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
        ports: 8382
        protocol: tcp
        src_app_id: f582ec4d-3a13-440a-afbd-97b7bfae21d1
  f582ec4d-3a13-440a-afbd-97b7bfae21d1:
    nsg_id: d24b9f77-e2e0-4fba-b258-893223683aa6
    rules:
      8272:
        dst_app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
        ports: 8382
        protocol: tcp
        src_app_id: f582ec4d-3a13-440a-afbd-97b7bfae21d1
Support commands for the NCP container
- Save the NCP support bundle in the file store
The support bundle contains the log files of all containers in pods with the label tier:nsx-networking. The bundle file is in tgz format and is placed in the CLI default file-store directory /var/vmware/nsx/file-store. You can use the file-store CLI command to copy the bundle file to a remote site.
get support-bundle file <filename>
Example:
kubenode> get support-bundle file foo
Bundle file foo created in tgz format
kubenode> copy file foo url scp://[email protected]:/tmp
- Save the NCP logs in the file store
The log file is saved in tgz format in the CLI default file-store directory /var/vmware/nsx/file-store. You can use the file-store CLI command to copy the archive to a remote site.
get ncp-log file <filename>
Example:
kubenode> get ncp-log file foo
Log file foo created in tgz format
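As with the support bundle, the resulting archive can then be copied to a remote site with the copy file command shown above; the user and host in the destination URL below are placeholders, not values from the original documentation:
kubenode> copy file foo url scp://<user>@<remote-host>:/tmp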
- Save the node agent logs in the file store
Save the node agent logs from one node or from all nodes. The logs are saved in tgz format in the CLI default file-store directory /var/vmware/nsx/file-store. You can use the file-store CLI command to copy the archive to a remote site.
get node-agent-log file <filename>
get node-agent-log file <filename> <node-name>
Example:
kubenode> get node-agent-log file foo
Log file foo created in tgz format
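To collect the logs of a single node instead, append the node name; the node name node-1 below is hypothetical, and the output is assumed to follow the same pattern as above:
kubenode> get node-agent-log file foo node-1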
- Get and set the log level globally or for a specific component
The available log levels are NOTSET, DEBUG, INFO, WARNING, ERROR, and CRITICAL.
The available components are nsx_ujo.ncp, nsx_ujo.ncp.k8s, nsx_ujo.ncp.pcf, vmware_nsxlib.v3, nsxrpc, and nsx_ujo.ncp.nsx.
get ncp-log-level [component]
set ncp-log-level <log-level> [component]
Examples:
kubenode> get ncp-log-level
NCP log level is INFO

kubenode> get ncp-log-level nsx_ujo.ncp
nsx_ujo.ncp log level is INFO

kubenode> set ncp-log-level DEBUG
NCP log level is changed to DEBUG

kubenode> set ncp-log-level DEBUG nsx_ujo.ncp
nsx_ujo.ncp log level has been changed to DEBUG
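After debugging, the log level can be lowered again with the same set command; the output line below is assumed to follow the pattern shown above:
kubenode> set ncp-log-level INFO
NCP log level is changed to INFO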
Status commands for the NSX node agent container
- Show the connection status between the node agent and HyperBus on this node
get node-agent-hyperbus status
Example:
kubenode> get node-agent-hyperbus status
HyperBus status: Healthy
Cache commands for the NSX node agent container
- Get the NSX node agent's internal container cache
get container-cache <container-name>
get container-caches
Example:
kubenode> get container-caches
  cif104:
    ip: 192.168.0.14/32
    mac: 50:01:01:01:01:14
    gateway_ip: 169.254.1.254/16
    vlan_id: 104

kubenode> get container-cache cif104
  ip: 192.168.0.14/32
  mac: 50:01:01:01:01:14
  gateway_ip: 169.254.1.254/16
  vlan_id: 104
Status commands for the NSX kube-proxy container
- Show the connection status between kube-proxy and the Kubernetes API server
get ncp-k8s-api-server status
Example:
kubenode> get kube-proxy-k8s-api-server status
Kubernetes ApiServer status: Healthy
- Show the kube-proxy watcher status
get kube-proxy-watcher <watcher-name>
get kube-proxy-watchers
Example:
kubenode> get kube-proxy-watchers
  endpoint:
    Average event processing time: 15 msec (in past 3600-sec window)
    Current watcher started time: May 01 2017 15:06:24 PDT
    Number of events processed: 90 (in past 3600-sec window)
    Total events processed by current watcher: 90
    Total events processed since watcher thread created: 90
    Total watcher recycle count: 0
    Watcher thread created time: May 01 2017 15:06:24 PDT
    Watcher thread status: Up
  service:
    Average event processing time: 8 msec (in past 3600-sec window)
    Current watcher started time: May 01 2017 15:06:24 PDT
    Number of events processed: 2 (in past 3600-sec window)
    Total events processed by current watcher: 2
    Total events processed since watcher thread created: 2
    Total watcher recycle count: 0
    Watcher thread created time: May 01 2017 15:06:24 PDT
    Watcher thread status: Up

kubenode> get kube-proxy-watcher endpoint
  Average event processing time: 15 msec (in past 3600-sec window)
  Current watcher started time: May 01 2017 15:06:24 PDT
  Number of events processed: 90 (in past 3600-sec window)
  Total events processed by current watcher: 90
  Total events processed since watcher thread created: 90
  Total watcher recycle count: 0
  Watcher thread created time: May 01 2017 15:06:24 PDT
  Watcher thread status: Up
- Dump OVS flows on a node
dump ovs-flows
Example:
kubenode> dump ovs-flows
NXST_FLOW reply (xid=0x4):
  cookie=0x0, duration=8.876s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip actions=ct(table=1)
  cookie=0x0, duration=8.898s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=0 actions=NORMAL
  cookie=0x0, duration=8.759s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,tcp,nw_dst=10.96.0.1,tp_dst=443 actions=mod_tp_dst:443
  cookie=0x0, duration=8.719s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip,nw_dst=10.96.0.10 actions=drop
  cookie=0x0, duration=8.819s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=90,ip,in_port=1 actions=ct(table=2,nat)
  cookie=0x0, duration=8.799s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=80,ip actions=NORMAL
  cookie=0x0, duration=8.856s, table=2, n_packets=0, n_bytes=0, idle_age=8, actions=NORMAL