HowTo Configure OpenStack L2 Gateway with Mellanox Spectrum Switch (VTEP)

This post shows how to configure an OpenStack L2 Gateway setup using a Mellanox Spectrum switch.


Overview

L2 Gateway is a service plugin added to the OpenStack Networking (Neutron) services. An L2 gateway is an entity or resource that bridges two L2 domains (or networks) into one seamless L2 broadcast domain.

In the following use case, the L2 gateway bridges a VXLAN network and a VLAN network, as shown in the figure below.

The figure shows the IP addresses assigned to each component. The L2 Gateway agent configures the hardware VTEP database located on the switch according to the OpenStack network topology and the allocated VMs.

All the configuration and commands used to set up the network are detailed below.

Note: The IP assigned to the bare-metal machine is in the same subnet as the IPs assigned to the OpenStack VMs (10.0.0.x in the example). In addition, the IP assigned to the OpenStack compute node port is in the same subnet as the switch port it is connected to (2.2.2.x in the example).

The control plane (configuration) is handled through the OVSDB server running on the switch, which OpenStack Neutron (via the L2 Gateway agent) addresses over the OVSDB JSON-RPC API rather than through CLI configuration. Neutron creates bare-metal (BM) server ports on the switch and attaches them to the VM network (over VXLAN); each VXLAN tunnel is mapped to a VLAN+port pair on the switch.
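The resulting bindings can be inspected with the standard Open vSwitch vtep-ctl utility. This is an illustrative sketch only, not part of the original setup: it assumes vtep-ctl is installed on a host that can reach the switch's OVSDB port, and it uses the physical switch and port names (vtep0, eth3) from the example later in this post.

# vtep-ctl --db=tcp:<switch_ip>:6640 list-ps --> list the physical switches in the VTEP database
# vtep-ctl --db=tcp:<switch_ip>:6640 list-ports vtep0 --> list the physical ports of switch vtep0
# vtep-ctl --db=tcp:<switch_ip>:6640 list-bindings vtep0 eth3 --> show the VLAN-to-logical-switch bindings on port eth3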

MAC Address table

In one direction, from the BM server to the VMs, the switch holds the MAC addresses of the VMs and answers ARP requests on their behalf. You can think of this as a static configuration of ARP entries for each VM.

In the other direction, from the VMs to the BM server, the ARP request is broadcast and the VMs learn the BM server's MAC address dynamically.
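This split is visible in the switch's hardware VTEP database: the VM entries pushed by Neutron appear in the Ucast_Macs_Remote table, while addresses learned from the bare-metal side appear in Ucast_Macs_Local. The two tables can be dumped individually from any host with network access to the switch (see also the full dump in the validation section below):

# ovsdb-client dump --pretty tcp:<switch_ip>:6640 hardware_vtep Ucast_Macs_Remote
# ovsdb-client dump --pretty tcp:<switch_ip>:6640 hardware_vtep Ucast_Macs_Local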

[Figure: L2 Gateway topology showing the assigned IP addresses for each component]

Configuration

For this setup, a Mellanox Spectrum switch running image version 3.6.3502 or later was used, with the OpenStack Ocata release.

L2 Gateway installation

The L2 Gateway agent can be installed using pip or, as done here, with DevStack. The project is located here: GitHub - openstack/networking-l2gw: APIs and implementations to support L2 Gateways in Neutron.

In order to clone it, add the following line to the local.conf file.

enable_plugin networking-l2gw https://github.com/openstack/networking-l2gw

Additional information about L2 Gateway background and installation can be found here: https://wiki.openstack.org/wiki/Neutron/L2-GW#Use_Cases
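For a non-DevStack deployment, a pip-based installation is roughly as follows. This is a hedged sketch only; the exact service plugin entry point and configuration file names may differ between releases, so verify them against the networking-l2gw documentation for the release in use.

# pip install networking-l2gw

Then register the service plugin and point the agent at the switch's OVSDB server, for example:

service_plugins = <existing plugins>,networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin (in neutron.conf)
ovsdb_hosts = ovsdb1:<switch_ip>:6640 (in the [ovsdb] section of l2gateway_agent.ini)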

Switch Configuration

switch > enable
switch # configure terminal
switch (config) # interface loopback 1 ip address 1.1.1.1/32 --> create a loopback interface to terminate the VXLAN (L3 tunneled) packets
switch (config) # ip routing vrf default --> enable IP routing
switch (config) # protocol nve --> enable the NVE protocol
switch (config) # interface nve 1 --> create an NVE interface
switch (config interface nve 1) # vxlan source interface loopback 1 --> use loopback 1 as the VXLAN source interface
switch (config) # interface ethernet 1/3 nve mode only force --> set the BM port to be associated with NVE
switch (config) # interface ethernet 1/4 no switchport force --> set the VXLAN-facing port to be a router port
switch (config) # interface ethernet 1/4 ip address 2.2.2.1 255.255.255.0 --> assign an IP to the VXLAN-facing port (same subnet as the VXLAN tunnel endpoint)
switch (config) # ovs ovsdb server --> start the OVSDB server

switch (config) # ovs ovsdb server listen tcp port 6640 --> set the OVSDB server listen port
switch (config) # write memory --> save the configuration
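As a quick sanity check (a hedged example, assuming a host with the Open vSwitch client tools installed and network access to the switch management IP), verify that the OVSDB server is reachable before continuing:

# ovsdb-client list-dbs tcp:<switch_ip>:6640 --> the output should include the hardware_vtep database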

Bare-metal interface configuration

Create a VLAN interface and assign it an IP address in the same subnet as the OpenStack VMs:

# ip link add link ens3f0 name ens3f0.8 type vlan id 8

# ip addr add 10.0.0.24/26 dev ens3f0.8
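To confirm the VLAN sub-interface was created with the expected tag and address (interface name and addresses follow this example):

# ip -d link show ens3f0.8 --> the output should include "vlan protocol 802.1Q id 8"
# ip addr show ens3f0.8 --> the output should include "inet 10.0.0.24/26"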

OpenStack configuration

Configure the OpenStack compute interface connected to the switch as follows (a combined verification example follows the list):
1. Assign an IP in the same subnet as the switch port it is connected to (configured in the previous section), and route the switch's loopback IP through the interface connected to the switch.

# ip addr add 2.2.2.2/24 dev enp3s0f0
# route add -net 1.1.1.1 netmask 255.255.255.255 gw 2.2.2.1

2. The following commands update the switch's hardware VTEP database:

These commands create an L2 gateway named MLNX-GW-ETH3 that references the bare-metal server port (eth3) on the physical switch device vtep0. In addition, they map VLAN 8 to the private network of the VMs (i.e. to its VXLAN VNI).

# neutron l2-gateway-create --device name="vtep0",interface_names="eth3" MLNX-GW-ETH3
# neutron l2-gateway-connection-create --default-segmentation-id 8 MLNX-GW-ETH3 private

3. Add a security group rule allowing ICMP packets to reach the VM:

# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
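The steps above can be verified with the following commands (a hedged example; the l2-gateway commands are the networking-l2gw extensions to the neutron CLI, and the expected values match this setup):

# ip route get 1.1.1.1 --> should show 1.1.1.1 via 2.2.2.1 dev enp3s0f0
# neutron l2-gateway-list --> should list MLNX-GW-ETH3 with device vtep0 and interface eth3
# neutron l2-gateway-connection-list --> should show the connection between MLNX-GW-ETH3 and the private network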

local.conf:

[[local|localrc]]
RECLONE=false
LOGFILE=/opt/stack/logs/stack.sh.log

# Switch to Neutron
disable_service n-net
enable_service n-cond
enable_service q-svc
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron

# enable compute node services
enable_service n-cpu
enable_service q-agt

# Disable cinder/heat/tempest for faster testing
disable_service c-sch c-api c-vol h-eng h-api h-api-cfn h-api-cw tempest

# Secrets
ADMIN_PASSWORD=password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50

## Use ML2 + OVS
PHYSICAL_NETWORK=default

# Set ML2 plugin + OVS agent
Q_PLUGIN=ml2
Q_AGENT=openvswitch

# Set ML2 mechanism drivers to OVS
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,l2population

# Set type drivers
Q_ML2_PLUGIN_TYPE_DRIVERS=flat,vlan,vxlan

# Use Neutron security groups
Q_USE_SECGROUP=True

# Set possible tenant network types
Q_ML2_TENANT_NETWORK_TYPE=flat,vlan,vxlan
HOST_IP=10.209.32.109

# Simple GRE tunnel configuration -- overrides extra opts
ENABLE_TENANT_TUNNELS=True

# L2GW
enable_plugin networking-l2gw https://github.com/openstack/networking-l2gw
enable_service l2gw-plugin l2gw-agent
OVSDB_HOSTS=ovsdb1:10.209.80.33:6640

mlnx_dev=02:00
mlnx_port=enp3s0f0
PUBLIC_INTERFACE=${mlnx_port}
PHYSICAL_INTERFACE=${mlnx_port}
OVS_PHYSICAL_BRIDGE=br-$mlnx_port

[[post-config|/$Q_PLUGIN_CONF_FILE]]
[vxlan]
l2_population = True
enable_vxlan = True
[agent]
tunnel_types=vxlan
l2_population=True
vxlan_udp_port=4789
[ovs]
tunnel_bridge=br-tun
local_ip = 2.2.2.2
[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, PciPassthroughFilter
pci_passthrough_whitelist ={"'"address"'":"'"*:'"${mlnx_dev}"'.*"'","'"physical_network"'":"'"default"'"}

How to validate the L2 Gateway installation

To dump the hardware VTEP database, run the following command from any machine with network access to the switch:

# ovsdb-client dump --pretty tcp:<switch_ip>:6640 hardware_vtep

The output should look similar to the following (depending on the assigned IPs and number of ports):


ACL table

_uuid acl_entries acl_fault_status acl_name

----- ----------- ---------------- --------

ACL_entry table

_uuid acle_fault_status action dest_ip dest_mac dest_mask dest_port_max dest_port_min direction ethertype icmp_code icmp_type protocol sequence source_ip source_mac source_mask source_port_max source_port_min tcp_flags tcp_flags_mask

----- ----------------- ------ ------- -------- --------- ------------- ------------- --------- --------- --------- --------- -------- -------- --------- ---------- ----------- --------------- --------------- --------- --------------

Arp_Sources_Local table

_uuid locator src_mac

----- ------- -------

Arp_Sources_Remote table

_uuid locator src_mac

----- ------- -------

Global table

_uuid managers switches

------------------------------------ -------- --------------------------------------

47927954-5124-4a18-9434-049f7f41a5b7 [] [da9f2821-09f9-4c2f-beb9-4174de6fdd3b]

Logical_Binding_Stats table

_uuid bytes_from_local bytes_to_local packets_from_local packets_to_local

------------------------------------ ---------------- -------------- ------------------ ----------------

ef1dc78d-ac5c-4804-b3e9-649f2d435e9c 2822346 5709962 20764 88835

Logical_Router table

LR_fault_status _uuid acl_binding description name static_routes switch_binding

--------------- ----- ----------- ----------- ---- ------------- --------------

Logical_Switch table

_uuid description name tunnel_key

------------------------------------ ----------- -------------------------------------- ----------

128cf0c6-dfc7-47ce-8d4d-cc29b672ba34 private "6be6cf3b-56c1-4f9e-b9e1-3a2f98e7bf06" 33

Manager table

_uuid inactivity_probe is_connected max_backoff other_config status target

----- ---------------- ------------ ----------- ------------ ------ ------

Mcast_Macs_Local table

MAC _uuid ipaddr locator_set logical_switch

----------- ------------------------------------ ------ ------------------------------------ ------------------------------------

unknown-dst a39cdd30-3817-4a6f-abb9-7627bc0360e8 "" bc94782a-17cd-441e-a7f9-3bca39b49496 128cf0c6-dfc7-47ce-8d4d-cc29b672ba34

Mcast_Macs_Remote table

MAC _uuid ipaddr locator_set logical_switch

----------- ------------------------------------ ------ ------------------------------------ ------------------------------------

unknown-dst b098a853-0da3-44ec-8cea-c57b86bce530 "" 1e060e3f-c7b7-4885-94bb-ba5b063ce20d 128cf0c6-dfc7-47ce-8d4d-cc29b672ba34

Physical_Locator table

_uuid dst_ip encapsulation_type

------------------------------------ --------- ------------------

e5b3826b-0005-4cfa-8bac-e9f693cc3715 "1.1.1.1" "vxlan_over_ipv4"

82bcef2f-1a8e-4191-b4b2-3255fdd6849c "2.2.2.2" "vxlan_over_ipv4"

Physical_Locator_Set table

_uuid locators

------------------------------------ --------------------------------------

1e060e3f-c7b7-4885-94bb-ba5b063ce20d [82bcef2f-1a8e-4191-b4b2-3255fdd6849c]

bc94782a-17cd-441e-a7f9-3bca39b49496 [e5b3826b-0005-4cfa-8bac-e9f693cc3715]

Physical_Port table

_uuid acl_bindings description name port_fault_status vlan_bindings vlan_stats

------------------------------------ ------------ ----------- ------ ----------------- ---------------------------------------- ----------------------------------------

070e59bc-ed70-4965-9519-dbec59b31033 {} "" "eth3" [] {1=128cf0c6-dfc7-47ce-8d4d-cc29b672ba34} {1=ef1dc78d-ac5c-4804-b3e9-649f2d435e9c}

Physical_Switch table

_uuid description management_ips name ports switch_fault_status tunnel_ips tunnels

------------------------------------ ------------------- -------------- ------- -------------------------------------- ------------------- ----------- --------------------------------------

da9f2821-09f9-4c2f-beb9-4174de6fdd3b "OVS VTEP Emulator" [] "vtep0" [070e59bc-ed70-4965-9519-dbec59b31033] [] ["1.1.1.1"] [58c2e509-f2ea-41e5-b12d-7a7c1d22a8eb]

Tunnel table

_uuid bfd_config_local bfd_config_remote bfd_params bfd_status local remote

------------------------------------ ----------------------------------------------------------- ----------------- ---------- ----------------- ------------------------------------ ------------------------------------

58c2e509-f2ea-41e5-b12d-7a7c1d22a8eb {bfd_dst_ip="169.254.1.0", bfd_dst_mac="00:23:20:00:00:01"} {} {} {enabled="false"} e5b3826b-0005-4cfa-8bac-e9f693cc3715 82bcef2f-1a8e-4191-b4b2-3255fdd6849c

Ucast_Macs_Local table

MAC _uuid ipaddr locator logical_switch

------------------- ------------------------------------ ------ ------------------------------------ ------------------------------------

"7c:fe:90:29:23:36" b5155023-cf6f-45ff-a2e0-3ebba9d5db8b "" e5b3826b-0005-4cfa-8bac-e9f693cc3715 128cf0c6-dfc7-47ce-8d4d-cc29b672ba34

Ucast_Macs_Remote table

MAC _uuid ipaddr locator logical_switch

------------------- ------------------------------------ ------------------- ------------------------------------ ------------------------------------

"fa:16:3e:26:96:a7" 91618bd9-3662-45d0-978c-97ed6659b8e6 "fdaa:e376:52b7::1" 82bcef2f-1a8e-4191-b4b2-3255fdd6849c 128cf0c6-dfc7-47ce-8d4d-cc29b672ba34

"fa:16:3e:77:89:90" 5878f85d-9ddc-428d-84f0-87cc34d0e2cb "10.0.0.9" 82bcef2f-1a8e-4191-b4b2-3255fdd6849c 128cf0c6-dfc7-47ce-8d4d-cc29b672ba34

"fa:16:3e:57:32:99" 5878f85d-9ddc-428d-84f0-87cc34d0e2cb "10.0.0.8" 82bcef2f-1a8e-4191-b4b2-3255fdd6849c 128cf0c6-dfc7-47ce-8d4d-cc29b672ba34

"fa:16:3e:79:d2:11" 01e25a76-b403-430e-bdf1-b33c8bdf2bee "10.0.0.1" 82bcef2f-1a8e-4191-b4b2-3255fdd6849c 128cf0c6-dfc7-47ce-8d4d-cc29b672ba34

"fa:16:3e:94:66:0d" 891cf889-ac79-4dac-99b4-e6ec6611a988 "10.0.0.2" 82bcef2f-1a8e-4191-b4b2-3255fdd6849c 128cf0c6-dfc7-47ce-8d4d-cc29b672ba34

Get the switch running-config

switch > enable
switch # configure terminal
switch (config) # show running-config

The expected configuration should look like this:


##

## L3 configuration

##

ip routing vrf default

interface ethernet 1/4 no switchport force

interface loopback 1

interface ethernet 1/4 ip address 2.2.2.1 255.255.255.0

interface loopback 1 ip address 1.1.1.1 255.255.255.255

##

## NVE configurations

##

protocol nve

interface nve 1

interface nve 1 vxlan source interface loopback 1

interface ethernet 1/3 nve mode only force

ovs ovsdb server

ovs ovsdb server listen tcp

Finally, check that ping works from the bare-metal server to the VMs and vice versa.
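For example, using the addresses from this setup (10.0.0.24 is the bare-metal VLAN interface, and 10.0.0.8 and 10.0.0.9 are VM addresses from the dump above):

# ping -c 3 10.0.0.9 --> from the bare-metal server towards a VM
# ping -c 3 10.0.0.24 --> from a VM towards the bare-metal server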

Troubleshooting

1. VLAN IDs 0 and 1 cannot be assigned to the bare-metal ports; this limitation comes from the ARP responder implementation.

Original source: https://www.cnblogs.com/dream397/p/13225295.html