To run CLI commands, log in to the NSX Container Plugin container, open a terminal, and run the nsxcli command.
kubectl exec -it <pod name> nsxcli
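If you do not know the pod name in advance, you can look it up by label and open nsxcli in one step. This is a sketch only: the nsx-system namespace and the component=nsx-ncp label are assumptions, so match them to your deployment.

```shell
# Locate the NCP pod by label and open nsxcli in it.
# Namespace and label below are assumptions; adjust to your deployment.
NS=nsx-system
NCP_POD=$(kubectl get pods -n "$NS" -l component=nsx-ncp \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$NCP_POD" -n "$NS" -- nsxcli
```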
Command |
---|
copy file <filename> url <url> |
copy url <url> [file <filename>] |
del file <filename> |
exit |
get cli-output datetime |
get command history |
get core-dumps |
get file <filename> |
get files |
get version |
help |
list |
set cli-output datetime <datetime-arg> |
set history limit <history-size> |
Type | Command | Note |
---|---|---|
Status | get ncp-master status | For Kubernetes and TAS. |
Status | get ncp-nsx status | For Kubernetes and TAS. |
Status | get ncp-watcher <watcher-name> | For Kubernetes and TAS. |
Status | get ncp-watchers | For Kubernetes and TAS. |
Status | get ncp-k8s-api-server status | For Kubernetes only. |
Status | check projects | For Kubernetes only. |
Status | check project <project-name> | For Kubernetes only. |
Status | get ncp-restore status | For Kubernetes only. |
Status | get ncp-bbs status | For TAS only. |
Status | get ncp-capi status | For TAS only. |
Status | get ncp-policy-server status | For TAS only. |
Cache | get project-caches | For Kubernetes only. |
Cache | get project-cache <project-name> | For Kubernetes only. |
Cache | get namespace-caches | For Kubernetes only. |
Cache | get namespace-cache <namespace-name> | For Kubernetes only. |
Cache | get pod-caches | For Kubernetes only. |
Cache | get pod-cache <pod-name> | For Kubernetes only. |
Cache | get ingress-caches | For Kubernetes only. |
Cache | get ingress-cache <ingress-name> | For Kubernetes only. |
Cache | get ingress-cache <ingress-name> <namespace-name> | For Kubernetes only. |
Cache | get ingress-controllers | For Kubernetes only. |
Cache | get ingress-controller <ingress-controller-name> | For Kubernetes only. |
Cache | get network-policy-caches | For Kubernetes only. |
Cache | get network-policy-cache <network-policy-name> | For Kubernetes only. |
Cache | get network-policy-cache <network-policy-name> <namespace-name> | For Kubernetes only. |
Cache | get service-cache <service-id> | For Kubernetes only. |
Cache | get service-caches | For Kubernetes only. |
Cache | get asg-caches | For TAS only. |
Cache | get asg-cache <asg-ID> | For TAS only. |
Cache | get org-caches | For TAS only. |
Cache | get org-cache <org-ID> | For TAS only. |
Cache | get space-caches | For TAS only. |
Cache | get space-cache <space-ID> | For TAS only. |
Cache | get app-caches | For TAS only. |
Cache | get app-cache <app-ID> | For TAS only. |
Cache | get instance-caches <app-ID> | For TAS only. |
Cache | get instance-cache <app-ID> <instance-ID> | For TAS only. |
Cache | get policy-caches | For TAS only. |
Support | get ncp-log file <filename> | For Kubernetes and TAS. |
Support | get ncp-log-level [component] | For Kubernetes and TAS. |
Support | set ncp-log-level <log-level> [component] | For Kubernetes and TAS. |
Support | get support-bundle file <filename> | For Kubernetes only. |
Support | get node-agent-log file <filename> | For Kubernetes only. |
Support | get node-agent-log file <filename> <node-name> | For Kubernetes only. |
Type | Command |
---|---|
Status | get node-agent-hyperbus status |
Status | get node-agent-ovs status |
Support | get node-agent-log-level |
Support | set node-agent-log-level <log-level> |
Cache | get container-cache <container-name> |
Cache | get container-caches |
Type | Command |
---|---|
Status | get kube-proxy-k8s-api-server status |
Status | get kube-proxy-watcher <watcher-name> |
Status | get kube-proxy-watchers |
Support | get kube-proxy-log-level |
Support | set kube-proxy-log-level <log-level> |
Command examples for all containers
- Copy a local file to a remote destination.
copy file <filename> url <url>
Example:
container> copy file support-bundle-0.tgz url scp://[email protected]/home/admin/
[email protected]'s password:
container>
- Copy a remote file to a local file.
copy url <url> [file <filename>]
Example:
container> copy url scp://[email protected]/home/admin/support-bundle-0.tgz
[email protected]'s password:
container>
Status commands for the NCP container
- Display the status of the NCP master node
get ncp-master status
Example:
kubenode> get ncp-master status
This instance is not the NCP master
Current NCP Master id is a4h83eh1-b8dd-4e74-c71c-cbb7cc9c4c1c
Last master update at Wed Oct 25 22:46:40 2017
- Display the status of the connection between NCP and NSX Manager
get ncp-nsx status
Example:
kubenode> get ncp-nsx status
NSX Manager status: Healthy
- Display the watcher status for pod, namespace, ingress, and service
get ncp-watchers
get ncp-watcher <watcher-name>
Example:
kubenode> get ncp-watchers
    pod:
        Average event processing time: 1145 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:51:37 PST
        Number of events processed: 1 (in past 3600-sec window)
        Total events processed by current watcher: 1
        Total events processed since watcher thread created: 1
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:51:37 PST
        Watcher thread status: Up
    namespace:
        Average event processing time: 68 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:51:37 PST
        Number of events processed: 2 (in past 3600-sec window)
        Total events processed by current watcher: 2
        Total events processed since watcher thread created: 2
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:51:37 PST
        Watcher thread status: Up
    ingress:
        Average event processing time: 0 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:51:37 PST
        Number of events processed: 0 (in past 3600-sec window)
        Total events processed by current watcher: 0
        Total events processed since watcher thread created: 0
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:51:37 PST
        Watcher thread status: Up
    service:
        Average event processing time: 3 msec (in past 3600-sec window)
        Current watcher started time: Mar 02 2017 10:51:37 PST
        Number of events processed: 1 (in past 3600-sec window)
        Total events processed by current watcher: 1
        Total events processed since watcher thread created: 1
        Total watcher recycle count: 0
        Watcher thread created time: Mar 02 2017 10:51:37 PST
        Watcher thread status: Up

kubenode> get ncp-watcher pod
    Average event processing time: 1174 msec (in past 3600-sec window)
    Current watcher started time: Mar 02 2017 10:47:35 PST
    Number of events processed: 1 (in past 3600-sec window)
    Total events processed by current watcher: 1
    Total events processed since watcher thread created: 1
    Total watcher recycle count: 0
    Watcher thread created time: Mar 02 2017 10:47:35 PST
    Watcher thread status: Up
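The watcher output lends itself to scripted health checks. Below is a minimal sketch, not part of nsxcli, that parses `get ncp-watchers`-style text and flags any watcher whose thread is not Up; the sample input is abbreviated.

```python
def parse_watchers(text):
    """Parse `get ncp-watchers`-style output into {watcher: {field: value}}."""
    watchers, current = {}, None
    for line in text.splitlines():
        if not line.strip():
            continue
        if not line.startswith(" "):          # unindented line names a watcher
            current = line.strip().rstrip(":")
            watchers[current] = {}
        elif current and ":" in line:         # indented "Field: value" line
            key, _, value = line.strip().partition(":")
            watchers[current][key.strip()] = value.strip()
    return watchers

# Abbreviated sample of `get ncp-watchers` output
sample = """\
pod:
    Watcher thread status: Up
    Total watcher recycle count: 0
namespace:
    Watcher thread status: Up
"""
parsed = parse_watchers(sample)
down = [w for w, f in parsed.items() if f.get("Watcher thread status") != "Up"]
print(down)  # → []
```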
- Display the status of the connection between NCP and the Kubernetes API server
get ncp-k8s-api-server status
Example:
kubenode> get ncp-k8s-api-server status
Kubernetes ApiServer status: Healthy
- Check all projects or a specific project
check projects
check project <project-name>
Example:
kubenode> check projects
    default:
        Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
        Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
    ns1:
        Router 8accc9cd-9883-45f6-81b3-0d1fb2583180 is missing

kubenode> check project default
    Tier-1 link port for router 1b90a61f-0f2c-4768-9eb6-ea8954b4f327 is missing
    Switch 40a6829d-c3aa-4e17-ae8a-7f7910fdf2c6 is missing
- Check the status of the connection between NCP and TAS BBS
get ncp-bbs status
Example:
node> get ncp-bbs status
BBS Server status: Healthy
- Check the status of the connection between NCP and TAS CAPI
get ncp-capi status
Example:
node> get ncp-capi status
CAPI Server status: Healthy
- Check the status of the connection between NCP and the TAS policy server
get ncp-policy-server status
Example:
node> get ncp-policy-server status
Policy Server status: Healthy
Cache commands for the NCP container
- Get the internal cache for projects or namespaces
get project-cache <project-name>
get project-caches
get namespace-cache <namespace-name>
get namespace-caches
Example:
kubenode> get project-caches
    default:
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
    kube-system:
        logical-router: 5032b299-acad-448e-a521-19d272a08c46
        logical-switch:
            id: 85233651-602d-445d-ab10-1c84096cc22a
            ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
            subnet: 10.0.1.0/24
            subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
    testns:
        ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
        labels:
            ns: myns
            project: myproject
        logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
        logical-switch:
            id: 6111a99a-6e06-4faa-a131-649f10f7c815
            ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
            subnet: 50.0.2.0/24
            subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
        project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
        snat_ip: 4.4.0.3

kubenode> get project-cache default
    logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
    logical-switch:
        id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
        ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
        subnet: 10.0.0.0/24
        subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435

kubenode> get namespace-caches
    default:
        logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
        logical-switch:
            id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
            ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
            subnet: 10.0.0.0/24
            subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
    kube-system:
        logical-router: 5032b299-acad-448e-a521-19d272a08c46
        logical-switch:
            id: 85233651-602d-445d-ab10-1c84096cc22a
            ip_pool_id: ab1c5b09-7004-4206-ac56-85d9d94bffa2
            subnet: 10.0.1.0/24
            subnet_id: 73e450af-b4b8-4a61-a6e3-c7ddd15ce751
    testns:
        ext_pool_id: 346a0f36-7b5a-4ecc-ad32-338dcb92316f
        labels:
            ns: myns
            project: myproject
        logical-router: 4dc8f8a9-69b4-4ff7-8fb7-d2625dc77efa
        logical-switch:
            id: 6111a99a-6e06-4faa-a131-649f10f7c815
            ip_pool_id: 51ca058d-c3dc-41fd-8f2d-e69006ab1b3d
            subnet: 50.0.2.0/24
            subnet_id: 34f79811-bd29-4048-a67d-67ceac97eb98
        project_nsgroup: 9606afee-6348-4780-9dbe-91abfd23e475
        snat_ip: 4.4.0.3

kubenode> get namespace-cache default
    logical-router: 8accc9cd-9883-45f6-81b3-0d1fb2583180
    logical-switch:
        id: 9d7da647-27b6-47cf-9cdb-6e4f4d5a356d
        ip_pool_id: 519ff57f-061f-4009-8d92-3e6526e7c17e
        subnet: 10.0.0.0/24
        subnet_id: f75fd64c-c7b0-4b42-9681-fc656ae5e435
- Get the internal cache for pods
get pod-cache <pod-name>
get pod-caches
Example:
kubenode> get pod-caches
    nsx.default.nginx-rc-uq2lv:
        cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
        gateway_ip: 10.0.0.1
        host_vif: d6210773-5c07-4817-98db-451bd1f01937
        id: 1c8b5c52-3795-11e8-ab42-005056b198fb
        ingress_controller: False
        ip: 10.0.0.2/24
        labels:
            app: nginx
        mac: 02:50:56:00:08:00
        port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
        vlan: 1
    nsx.testns.web-pod-1:
        cif_id: ce134f21-6be5-43fe-afbf-aaca8c06b5cf
        gateway_ip: 50.0.2.1
        host_vif: d6210773-5c07-4817-98db-451bd1f01937
        id: 3180b521-270e-11e8-ab42-005056b198fb
        ingress_controller: False
        ip: 50.0.2.3/24
        labels:
            app: nginx-new
            role: db
            tier: cache
        mac: 02:50:56:00:20:02
        port_id: 81bc2b8e-d902-4cad-9fc1-aabdc32ecaf8
        vlan: 3

kubenode> get pod-cache nsx.default.nginx-rc-uq2lv
    cif_id: 2af9f734-37b1-4072-ba88-abbf935bf3d4
    gateway_ip: 10.0.0.1
    host_vif: d6210773-5c07-4817-98db-451bd1f01937
    id: 1c8b5c52-3795-11e8-ab42-005056b198fb
    ingress_controller: False
    ip: 10.0.0.2/24
    labels:
        app: nginx
    mac: 02:50:56:00:08:00
    port_id: d52c833a-f531-4bdf-bfa2-e8a084a8d41b
    vlan: 1
- Get all ingress caches or a specific cache
get ingress-caches
get ingress-cache <ingress-name>
get ingress-cache <ingress-name> <namespace-name>
Example:
kubenode> get ingress-caches
    nsx.default.cafe-ingress:
        ext_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
        lb_virtual_server:
            id: 895c7f43-c56e-4b67-bb4c-09d68459d416
            lb_service_id: 659eefc6-33d1-4672-a419-344b877f528e
            name: dgo2-http
            type: http
        lb_virtual_server_ip: 5.5.0.2
        name: cafe-ingress
        rules:
            host: cafe.example.com
            http:
                paths:
                    path: /coffee
                    backend:
                        serviceName: coffee-svc
                        servicePort: 80
                    lb_rule:
                        id: 4bc16bdd-abd9-47fb-a09e-21e58b2131c3
                        name: dgo2-default-cafe-ingress/coffee

kubenode> get ingress-cache nsx.default.cafe-ingress
    ext_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
    lb_virtual_server:
        id: 895c7f43-c56e-4b67-bb4c-09d68459d416
        lb_service_id: 659eefc6-33d1-4672-a419-344b877f528e
        name: dgo2-http
        type: http
    lb_virtual_server_ip: 5.5.0.2
    name: cafe-ingress
    rules:
        host: cafe.example.com
        http:
            paths:
                path: /coffee
                backend:
                    serviceName: coffee-svc
                    servicePort: 80
                lb_rule:
                    id: 4bc16bdd-abd9-47fb-a09e-21e58b2131c3
                    name: dgo2-default-cafe-ingress/coffee

kubenode> get ingress-cache tea-ingress tea-ns
    creation_timestamp: 2019-05-02T22:00:15Z
    default_backend: None
    labels: None
    loadbalancer_ingress:
        ip: 4.4.0.1
        ip: 100.64.240.3
    name: tea-ingress
    namespace: tea-ns
    rules:
        host: drink.example.com
        http:
            paths:
                backend:
                    lb_pool:
                        28b36074-8bcc-43ed-bf41-6e3cf4b6fc68:
                            algorithm: ROUND_ROBIN
                            members: None
                    service name: tea-svc
                    port:
                        number: 80
                nsx_lb_rule:
                    36861dc4-7488-46a8-9820-ba846c97cb09:
                        phase: HTTP_FORWARDING
                path: /tea
    tls:
        hosts: drink.example.com
        nsx_certificate:
        secretName: drink-secret
    uid: aa64e1aa-6d25-11e9-b86c-0050568c8767
- Get information about all ingress controllers or a specific controller, including controllers that are disabled
get ingress-controllers
get ingress-controller <ingress-controller-name>
Example:
kubenode> get ingress-controllers
    native-load-balancer:
        ingress_virtual_server:
            http:
                default_backend_tags:
                id: 895c7f43-c56e-4b67-bb4c-09d68459d416
                pool_id: None
            https_terminated:
                default_backend_tags:
                id: 293282eb-f1a0-471c-9e48-ba28d9d89161
                pool_id: None
        lb_ip_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
        loadbalancer_service:
            first_avail_index: 0
            lb_services:
                id: 659eefc6-33d1-4672-a419-344b877f528e
                name: dgo2-bfmxi
                t1_link_port_ip: 100.64.128.5
                t1_router_id: cb50deb2-4460-45f2-879a-1b94592ae886
                virtual_servers:
                    293282eb-f1a0-471c-9e48-ba28d9d89161
                    895c7f43-c56e-4b67-bb4c-09d68459d416
        ssl:
            ssl_client_profile_id: aff205bb-4db8-5a72-8d67-218cdc54d27b
        vip: 5.5.0.2
    nsx.default.nginx-ingress-rc-host-ed3og
        ip: 10.192.162.201
        mode: hostnetwork
        pool_id: 5813c609-5d3a-4438-b9c3-ea3cd6de52c3

kubenode> get ingress-controller native-load-balancer
    ingress_virtual_server:
        http:
            default_backend_tags:
            id: 895c7f43-c56e-4b67-bb4c-09d68459d416
            pool_id: None
        https_terminated:
            default_backend_tags:
            id: 293282eb-f1a0-471c-9e48-ba28d9d89161
            pool_id: None
    lb_ip_pool_id: cc02db70-539a-4934-a938-5b851b3e485b
    loadbalancer_service:
        first_avail_index: 0
        lb_services:
            id: 659eefc6-33d1-4672-a419-344b877f528e
            name: dgo2-bfmxi
            t1_link_port_ip: 100.64.128.5
            t1_router_id: cb50deb2-4460-45f2-879a-1b94592ae886
            virtual_servers:
                293282eb-f1a0-471c-9e48-ba28d9d89161
                895c7f43-c56e-4b67-bb4c-09d68459d416
    ssl:
        ssl_client_profile_id: aff205bb-4db8-5a72-8d67-218cdc54d27b
    vip: 5.5.0.2
- Get the network policy caches or a specific cache
get network-policy-caches
get network-policy-cache <network-policy-name>
get network-policy-cache <network-policy-name> <namespace-name>
Example:
kubenode> get network-policy-caches
    nsx.testns.allow-tcp-80:
        dest_labels: None
        dest_pods:
            50.0.2.3
        match_expressions:
            key: tier
            operator: In
            values:
                cache
        name: allow-tcp-80
        np_dest_ip_set_ids:
            22f82d76-004f-4d12-9504-ce1cb9c8aa00
        np_except_ip_set_ids:
        np_ip_set_ids:
            14f7f825-f1a0-408f-bbd9-bb2f75d44666
        np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
        np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
        ns_name: testns
        src_egress_rules: None
        src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
        src_pods:
            50.0.2.0/24
        src_rules:
            from:
                namespaceSelector:
                    matchExpressions:
                        key: tier
                        operator: DoesNotExist
                    matchLabels:
                        ns: myns
            ports:
                port: 80
                protocol: TCP
        src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1

kubenode> get network-policy-cache nsx.testns.allow-tcp-80
    dest_labels: None
    dest_pods:
        50.0.2.3
    match_expressions:
        key: tier
        operator: In
        values:
            cache
    name: allow-tcp-80
    np_dest_ip_set_ids:
        22f82d76-004f-4d12-9504-ce1cb9c8aa00
    np_except_ip_set_ids:
    np_ip_set_ids:
        14f7f825-f1a0-408f-bbd9-bb2f75d44666
    np_isol_section_id: c8d93597-9066-42e3-991c-c550c46b2270
    np_section_id: 04693136-7925-44f2-8616-d809d02cd2a9
    ns_name: testns
    src_egress_rules: None
    src_egress_rules_hash: 97d170e1550eee4afc0af065b78cda302a97674c
    src_pods:
        50.0.2.0/24
    src_rules:
        from:
            namespaceSelector:
                matchExpressions:
                    key: tier
                    operator: DoesNotExist
                matchLabels:
                    ns: myns
        ports:
            port: 80
            protocol: TCP
    src_rules_hash: e4ea7b8d91c1e722670a59f971f8fcc1a5ac51f1

kubenode> get network-policy-cache test-network-policy playground
    egress_rules:
        ports:
            port: 5978
            protocol: TCP
        to:
            ipBlock:
                cidr: 10.0.0.0/24
    id: playground test-network-policy
    ingress_rules:
        from:
            ipBlock:
                cidr: 192.167.0.1/24
                except: 192.167.0.22/30
            namespaceSelector:
                matchLabels:
                    project: playground3
            podSelector:
                matchLabels:
                    role: testing
    isolation_section:
        id: a2746857-59cd-48ed-90d7-fd0a26395d68
        labels:
            external_id: a815e70a-0646-11ea-940b-0050569e8e8f
            name: is-k8scluster-playground-test-network-policy
        rules:
            1049:
                action: DROP
                destinations:
                    is_valid: True
                    target_display_name: tgt-k8scluster-playground-test-network-policy
                    target_id: 2f01d2f1-7496-4e67-a856-4829c56923cb
                    target_type: IPSet
                direction: IN
                name: ir-k8scluster-playground-test-network-policy
                sources: None
    name: test-network-policy
    namespace: playground
    pod_match_expression:
        operator: match_labels
        values:
            role: testing2
    policy_section:
        id: 0fc97658-0588-4af7-b958-1eaf6141e817
        labels:
            external_id: a815e70a-0646-11ea-940b-0050569e8e8f
            name: k8scluster-playground-test-network-policy
        rules:
            1053:
                action: ALLOW
                destinations:
                    is_valid: True
                    target_display_name: tgt-k8scluster-playground-test-network-policy
                    target_id: 2f01d2f1-7496-4e67-a856-4829c56923cb
                    target_type: IPSet
                direction: IN
                name: ir-k8scluster-playground-test-network-policy-all
                sources:
                    is_valid: True
                    target_display_name: src-k8scluster-playground-test-network-policy-all
                    target_id: b0573576-ff87-49c2-8279-79858c6329b4
                    target_type: IPSet
    policy_types:
        ingress
        egress
    target_ip_set:
        id: 2f01d2f1-7496-4e67-a856-4829c56923cb
        ip_addresses:
            192.168.0.35
            192.168.0.37
        labels:
            match_expr_hash: 9915edba71061de777bd58ca054745debc14dcf5
            role: testing2
        name: tgt-k8scluster-playground-test-network-policy
- Get all ASG caches or a specific cache
get asg-caches
get asg-cache <asg-ID>
Example:
node> get asg-caches
    edc04715-d04c-4e63-abbc-db601a668db6:
        fws_id: 3c66f40a-5378-46d7-a7e2-bee4ba72a4cc
        name: org-85_tcp_80_asg
        rules:
            destinations:
                66.10.10.0/24
            ports:
                80
            protocol: tcp
            rule_id: 4359
        running_default: False
        running_spaces:
            75bc164d-1214-46f9-80bb-456a8fbccbfd
        staging_default: False
        staging_spaces:

node> get asg-cache edc04715-d04c-4e63-abbc-db601a668db6
    fws_id: 3c66f40a-5378-46d7-a7e2-bee4ba72a4cc
    name: org-85_tcp_80_asg
    rules:
        destinations:
            66.10.10.0/24
        ports:
            80
        protocol: tcp
        rule_id: 4359
    running_default: False
    running_spaces:
        75bc164d-1214-46f9-80bb-456a8fbccbfd
    staging_default: False
    staging_spaces:
- Get all organization caches or a specific cache
get org-caches
get org-cache <org-ID>
Example:
node> get org-caches
    ebb8b4f9-a40f-4122-bf21-65c40f575aca:
        ext_pool_id: 9208a8b8-57d7-4582-9c1f-7a7cefa104f5
        isolation:
            isolation_section_id: d6e7ff95-4737-4e34-91d4-27601897353f
        logical-router: 94a414a2-551e-4444-bae6-3d79901a165f
        logical-switch:
            id: d74807e8-8f74-4575-b26b-87d4fdbafd3c
            ip_pool_id: 1b60f16f-4a30-4a3d-93cc-bfb08a5e3e02
            subnet: 50.0.48.0/24
            subnet_id: a458d3aa-bea9-4684-9957-d0ce80d11788
        name: org-50
        snat_ip: 70.0.0.49
        spaces:
            e8ab7aa0-d4e3-4458-a896-f33177557851

node> get org-cache ebb8b4f9-a40f-4122-bf21-65c40f575aca
    ext_pool_id: 9208a8b8-57d7-4582-9c1f-7a7cefa104f5
    isolation:
        isolation_section_id: d6e7ff95-4737-4e34-91d4-27601897353f
    logical-router: 94a414a2-551e-4444-bae6-3d79901a165f
    logical-switch:
        id: d74807e8-8f74-4575-b26b-87d4fdbafd3c
        ip_pool_id: 1b60f16f-4a30-4a3d-93cc-bfb08a5e3e02
        subnet: 50.0.48.0/24
        subnet_id: a458d3aa-bea9-4684-9957-d0ce80d11788
    name: org-50
    snat_ip: 70.0.0.49
    spaces:
        e8ab7aa0-d4e3-4458-a896-f33177557851
- Get all space caches or a specific cache
get space-caches
get space-cache <space-ID>
Example:
node> get space-caches
    global_security_group:
        name: global_security_group
        running_nsgroup: 226d4292-47fb-4c2e-a118-449818d8fa98
        staging_nsgroup: 7ebbf7f5-38c9-43a3-9292-682056722836
    7870d134-7997-4373-b665-b6a910413c47:
        name: test-space1
        org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
        running_nsgroup: 4a3d9bcc-be36-47ae-bff8-96448fecf307
        running_security_groups:
            aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
        staging_security_groups:
            aa0c7c3f-a478-4d45-8afa-df5d5d7dc512

node> get space-cache 7870d134-7997-4373-b665-b6a910413c47
    name: test-space1
    org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
    running_nsgroup: 4a3d9bcc-be36-47ae-bff8-96448fecf307
    running_security_groups:
        aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
    staging_security_groups:
        aa0c7c3f-a478-4d45-8afa-df5d5d7dc512
- Get all application caches or a specific cache
get app-caches
get app-cache <app-ID>
Example:
node> get app-caches
    aff2b12b-b425-4d9f-b8e6-b6308644efa8:
        instances:
            b72199cc-e1ab-49bf-506d-478d:
                app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
                cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
                gateway_ip: 192.168.5.1
                host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
                id: b72199cc-e1ab-49bf-506d-478d
                index: 0
                ip: 192.168.5.4/24
                last_updated_time: 1522965828.45
                mac: 02:50:56:00:60:02
                port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
                state: RUNNING
                vlan: 3
        name: hello2
        org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
        space_id: 7870d134-7997-4373-b665-b6a910413c47

node> get app-cache aff2b12b-b425-4d9f-b8e6-b6308644efa8
    instances:
        b72199cc-e1ab-49bf-506d-478d:
            app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
            cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
            cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
            gateway_ip: 192.168.5.1
            host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
            id: b72199cc-e1ab-49bf-506d-478d
            index: 0
            ip: 192.168.5.4/24
            last_updated_time: 1522965828.45
            mac: 02:50:56:00:60:02
            port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
            state: RUNNING
            vlan: 3
    name: hello2
    org_id: a8423bc0-4b2b-49fb-bbff-a4badf21eb09
    space_id: 7870d134-7997-4373-b665-b6a910413c47
- Get all instance caches of an application or a specific instance cache
get instance-caches <app-ID>
get instance-cache <app-ID> <instance-ID>
Example:
node> get instance-caches aff2b12b-b425-4d9f-b8e6-b6308644efa8
    b72199cc-e1ab-49bf-506d-478d:
        app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
        cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
        cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
        gateway_ip: 192.168.5.1
        host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
        id: b72199cc-e1ab-49bf-506d-478d
        index: 0
        ip: 192.168.5.4/24
        last_updated_time: 1522965828.45
        mac: 02:50:56:00:60:02
        port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
        state: RUNNING
        vlan: 3

node> get instance-cache aff2b12b-b425-4d9f-b8e6-b6308644efa8 b72199cc-e1ab-49bf-506d-478d
    app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
    cell_id: 0dda88bc-640b-44e7-8cea-20e83e873544
    cif_id: 158a1d7e-6ccc-4027-a773-55bb2618f51b
    gateway_ip: 192.168.5.1
    host_vif: 53475dfd-03e4-4bc6-b8ba-3d803725cbab
    id: b72199cc-e1ab-49bf-506d-478d
    index: 0
    ip: 192.168.5.4/24
    last_updated_time: 1522965828.45
    mac: 02:50:56:00:60:02
    port_id: a7c6f6bb-c472-4239-a030-bce615d5063e
    state: RUNNING
    vlan: 3
- Get all policy caches
get policy-caches
Example:
node> get policy-caches
    aff2b12b-b425-4d9f-b8e6-b6308644efa8:
        fws_id: 3fe27725-f139-479a-b83b-8576c9aedbef
        nsg_id: 30583a27-9b56-49c1-a534-4040f91cc333
        rules:
            8272:
                dst_app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                ports: 8382
                protocol: tcp
                src_app_id: f582ec4d-3a13-440a-afbd-97b7bfae21d1
    f582ec4d-3a13-440a-afbd-97b7bfae21d1:
        nsg_id: d24b9f77-e2e0-4fba-b258-893223683aa6
        rules:
            8272:
                dst_app_id: aff2b12b-b425-4d9f-b8e6-b6308644efa8
                ports: 8382
                protocol: tcp
                src_app_id: f582ec4d-3a13-440a-afbd-97b7bfae21d1
- Get a service cache in NCP.
get service-cache <service-id>
Example:
kubenode> get service-cache d3d3b69d-2fc9-11e9-9046-00505691efa7
    lb_persistence_profile: None
    loadbalancer_ingress:
        ip: 11.11.64.8
        ip: 100.64.144.3
    loadbalancer_ip: None
    name: tcp-svc-lb
    namespace: cafe
    service_type: LoadBalancer
    snat_pool: None
    deletion_timestamp: None
    port_nums:
        80
    ports:
        lb_pool:
            e017377f-3e34-4acc-9a7e-583bb16fb8ba:
                algorithm: ROUND_ROBIN
                members: None
        lb_vs:
            ef6f4104-0fcb-42e5-95e5-6b9b7f5ef818:
                ip_address: 11.11.64.8
                ip_pool_id: 1d0dfac7-d707-448a-ad79-e101fafa3e23
                persistence_profile_id: d29cfef5-0caf-4d21-ae01-b2173bb3db36
                pool_id: e017377f-3e34-4acc-9a7e-583bb16fb8ba
        name: tcp
        nodePort: 31836
        port: 80
        protocol: TCP
        targetPort: 80
Support commands for the NCP container
- Save the NCP support bundle in the filestore
The support bundle consists of the log files for all the containers in pods with the label tier:nsx-networking. The bundle file is in tgz format and saved in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the bundle file to a remote site.
get support-bundle file <filename>
Example:
kubenode> get support-bundle file foo
Bundle file foo created in tgz format
kubenode> copy file foo url scp://[email protected]:/tmp
- Save the NCP logs in the filestore
The log file is saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the file to a remote site.
get ncp-log file <filename>
Example:
kubenode> get ncp-log file foo
Log file foo created in tgz format
- Save the node agent logs in the filestore
Save the node agent logs from a single node or from all the nodes. The logs are saved in tgz format in the CLI default filestore directory /var/vmware/nsx/file-store. You can use the CLI file-store command to copy the file to a remote site.
get node-agent-log file <filename>
get node-agent-log file <filename> <node-name>
Example:
kubenode> get node-agent-log file foo
Log file foo created in tgz format
- Get and set the log level globally or for a specific component.
The available log levels are NOTSET, DEBUG, INFO, WARNING, ERROR, and CRITICAL.
The available components are nsx_ujo.ncp, nsx_ujo.ncp.k8s, nsx_ujo.ncp.pcf, vmware_nsxlib.v3, nsxrpc, and nsx_ujo.ncp.nsx.
get ncp-log-level [component]
set ncp-log-level <log level> [component]
Examples:
kubenode> get ncp-log-level
NCP log level is INFO

kubenode> get ncp-log-level nsx_ujo.ncp
nsx_ujo.ncp log level is INFO

kubenode> set ncp-log-level DEBUG
NCP log level is changed to DEBUG

kubenode> set ncp-log-level DEBUG nsx_ujo.ncp
nsx_ujo.ncp log level has been changed to DEBUG
Status commands for the NSX node agent container
- Display the status of the connection between the node agent and HyperBus on this node.
get node-agent-hyperbus status
Example:
kubenode> get node-agent-hyperbus status
HyperBus status: Healthy
Cache commands for the NSX node agent container
- Get the NSX node agent's internal container cache.
get container-cache <container-name>
get container-caches
Example:
kubenode> get container-caches
    cif104:
        ip: 192.168.0.14/32
        mac: 50:01:01:01:01:14
        gateway_ip: 169.254.1.254/16
        vlan_id: 104

kubenode> get container-cache cif104
    ip: 192.168.0.14/32
    mac: 50:01:01:01:01:14
    gateway_ip: 169.254.1.254/16
    vlan_id: 104
Status commands for the NSX Kube Proxy container
- Display the status of the connection between Kube Proxy and the Kubernetes API server
get kube-proxy-k8s-api-server status
Example:
kubenode> get kube-proxy-k8s-api-server status
Kubernetes ApiServer status: Healthy
- Display the Kube Proxy watcher status
get kube-proxy-watcher <watcher-name>
get kube-proxy-watchers
Example:
kubenode> get kube-proxy-watchers
    endpoint:
        Average event processing time: 15 msec (in past 3600-sec window)
        Current watcher started time: May 01 2017 15:06:24 PDT
        Number of events processed: 90 (in past 3600-sec window)
        Total events processed by current watcher: 90
        Total events processed since watcher thread created: 90
        Total watcher recycle count: 0
        Watcher thread created time: May 01 2017 15:06:24 PDT
        Watcher thread status: Up
    service:
        Average event processing time: 8 msec (in past 3600-sec window)
        Current watcher started time: May 01 2017 15:06:24 PDT
        Number of events processed: 2 (in past 3600-sec window)
        Total events processed by current watcher: 2
        Total events processed since watcher thread created: 2
        Total watcher recycle count: 0
        Watcher thread created time: May 01 2017 15:06:24 PDT
        Watcher thread status: Up

kubenode> get kube-proxy-watcher endpoint
    Average event processing time: 15 msec (in past 3600-sec window)
    Current watcher started time: May 01 2017 15:06:24 PDT
    Number of events processed: 90 (in past 3600-sec window)
    Total events processed by current watcher: 90
    Total events processed since watcher thread created: 90
    Total watcher recycle count: 0
    Watcher thread created time: May 01 2017 15:06:24 PDT
    Watcher thread status: Up
- Dump the OVS flows on a node
dump ovs-flows
Example:
kubenode> dump ovs-flows
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=8.876s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip actions=ct(table=1)
 cookie=0x0, duration=8.898s, table=0, n_packets=0, n_bytes=0, idle_age=8, priority=0 actions=NORMAL
 cookie=0x0, duration=8.759s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,tcp,nw_dst=10.96.0.1,tp_dst=443 actions=mod_tp_dst:443
 cookie=0x0, duration=8.719s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=100,ip,nw_dst=10.96.0.10 actions=drop
 cookie=0x0, duration=8.819s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=90,ip,in_port=1 actions=ct(table=2,nat)
 cookie=0x0, duration=8.799s, table=1, n_packets=0, n_bytes=0, idle_age=8, priority=80,ip actions=NORMAL
 cookie=0x0, duration=8.856s, table=2, n_packets=0, n_bytes=0, idle_age=8, actions=NORMAL
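When triaging a flow dump, it helps to separate each flow's match fields from its actions. The following is a minimal sketch (not part of nsxcli) that splits one ovs-ofctl-style flow line into a field dict and an actions string, assuming the comma-space-separated format shown above.

```python
def parse_flow(line):
    """Split one OVS flow line into a match-field dict and an actions string."""
    match_part, _, actions = line.partition(" actions=")
    fields = {}
    for token in match_part.split(", "):
        key, sep, value = token.partition("=")
        fields[key] = value if sep else None  # bare tokens have no value
    return fields, actions

# One flow line from the dump above
flow = ("cookie=0x0, duration=8.876s, table=0, n_packets=0, n_bytes=0, "
        "idle_age=8, priority=100,ip actions=ct(table=1)")
fields, actions = parse_flow(flow)
print(fields["table"], actions)  # → 0 ct(table=1)
```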