Dedicated HSM interfaces on an Avi Load Balancer Controller use the following YAML parameters:
avi.hsm-ip.Controller
avi.hsm-static-routes.Controller
avi.hsm-vnic-id.Controller
YAML parameters
For configuration on a new Avi Load Balancer Controller, these parameters can be provided in the day-zero YAML file.
| YAML Parameter | Description | Format | Example |
|---|---|---|---|
| avi.hsm-ip.Controller | IP address of the dedicated HSM vNIC on the Controller (this is not the IP address of the HSM device). | IP-address/subnet-mask | avi.hsm-ip.Controller: 10.160.103.230/24 |
| avi.hsm-static-routes.Controller | Comma-separated static routes to reach the HSM devices from the respective Avi Load Balancer Controllers. Host (/32) routes can also be provided. Note: If there is only a single static route, provide it on its own and ensure the square brackets are still present and matched. If the HSM devices are in the same subnet as the dedicated interfaces, provide the subnet's default gateway as the gateway. | [ hsm-network1/mask1 via gateway1, hsm-network2/mask2 via gateway2 ] or [ hsm-network1/mask1 via gateway1 ] | avi.hsm-static-routes.Controller: [10.128.1.0/24 via 10.160.103.1, 10.130.1.0/24 via 10.160.103.1] |
| avi.hsm-vnic-id.Controller | ID of the dedicated HSM vNIC; this is typically 1 on CSP. vNIC0 is the management interface, which is the only interface on Avi Load Balancer Controllers by default. | numeric-vnic-id | avi.hsm-vnic-id.Controller: '1' |
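If only a single static route is required (see the Note for avi.hsm-static-routes.Controller above), the value is still written as a bracketed list containing one entry. The following fragment is illustrative and reuses the addresses from the examples above:
avi.hsm-static-routes.Controller: [10.130.1.0/24 via 10.160.103.1]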
Instructions
A sample Avi Load Balancer Controller service YAML file for the Day Zero configuration on the CSP is as follows:
bash# cat avi_meta_data_ctlr-dedicated-hsm.yml
avi.default-gw.Controller: 10.128.2.1
avi.mgmt-ip.Controller: 10.128.2.30
avi.mgmt-mask.Controller: 255.255.255.0
avi.hsm-ip.Controller: 10.160.103.230/24
avi.hsm-static-routes.Controller: [10.128.1.0/24 via 10.160.103.1, 10.130.1.0/24 via 10.160.103.1]
avi.hsm-vnic-id.Controller: '1'
Once the Avi Load Balancer Controller is created with this Day Zero configuration and an additional virtual NIC interface is added to the Avi Load Balancer Controller service instance on CSP, verify that the dedicated vNIC configuration is applied successfully and that the HSM devices are reachable through the dedicated interface. In this example, eth1 is configured as the dedicated HSM interface with the IP address 10.160.103.230/24.
bash# ssh admin@<CONTROLLER-MGMT-IP>
bash# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 02:4a:80:02:11:04
          inet addr:10.160.103.230  Bcast:10.160.103.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:342620 errors:0 dropped:2855 overruns:0 frame:0
          TX packets:78 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:29201376 (29.2 MB)  TX bytes:11230 (11.2 KB)
bash# ip route
default via 10.128.2.1 dev eth0
10.128.1.0/24 via 10.160.103.1 dev eth1
10.128.2.0/24 dev eth0 proto kernel scope link src 10.128.2.18
10.130.1.0/24 via 10.160.103.1 dev eth1
10.160.103.0/24 dev eth1 proto kernel scope link src 10.160.103.218
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
bash# ping -I eth1 <HSM-IP>
ping -I eth1 10.130.1.10
PING 10.130.1.10 (10.130.1.10) from 10.160.103.230 eth1: 56(84) bytes of data.
64 bytes from 10.130.1.10: icmp_seq=1 ttl=62 time=0.229 ms
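As an optional additional check, the ip route get command shows which interface and gateway the kernel selects for a specific destination. The address 10.130.1.10 below is a placeholder for one of the HSM devices; the exact output varies by kernel version, but it should reference dev eth1 and the configured gateway:
bash# ip route get 10.130.1.10
10.130.1.10 via 10.160.103.1 dev eth1 src 10.160.103.230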