Future Networks: SDN

Future-network projects are in full swing today: FIRE in Europe; FIND, FIA, and GENI in the US; and domestic efforts led by researchers such as Xie Gaogang and Wu Jianping. Of these, FIRE and GENI are currently developing well, and the other two have been folded into GENI. The main goal of FIRE and GENI is to deploy next-generation network infrastructure and give researchers experimental testbeds, but the future-network architecture ideas each project has incubated along the way are well worth studying. What I am currently surveying is OpenFlow and SDN (Software Defined Networking), the joint work of Nick McKeown (Stanford University) and Scott Shenker (UC Berkeley). OpenFlow-based networks have already been deployed in GENI, and Stanford's campus also runs an OpenFlow-based network.

Compared with today's networks, SDN decouples the network control and data planes (see Figure 1):
• The OpenFlow-based SDN architecture abstracts the underlying infrastructure from the applications that use it, allowing the network to be programmed and managed at scale.
• An SDN approach fosters network virtualization, enabling IT staff to manage servers, applications, storage, and the network with a common set of tools.
• Whether in a carrier environment, a commercial data center, or a campus, SDN improves the manageability, scalability, and flexibility of the network.

SDN Network Architecture

In Figure 1, the communication interface sits between the infrastructure layer and the control layer, and so far the typical standard protocol for it is OpenFlow. OpenFlow is the first standard communication interface defined between the control and data layers of the SDN architecture. The switches in the network must support the OpenFlow protocol; an OpenFlow switch can be software-based (running in a virtual machine) or a hardware switch built by an equipment vendor. The basic structure of an OpenFlow switch is shown in Figure 2.
Part 1: OpenFlow

1.1 Goal: to make it easy to experiment with new network protocols on real networks. Three problems have to be solved:

1) Convincing network administrators to allow experiments with new protocols on the networks they manage

2) Allowing researchers to experiment with new protocols on a production network without disturbing its normal operation

3) Deciding what functionality a switch needs in order to run new protocols

One answer: OpenFlow provides an open protocol for programming the flow tables of switches and routers. It lets administrators partition traffic into production flows and experimental flows: production flows continue to receive regular processing, while researchers control the experimental flows by choosing the paths their packets take and the processing they receive.
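To make that split concrete, here is a minimal sketch in Python of how an administrator-defined rule might steer traffic: production flows keep the switch's normal processing, while experimental flows are punted to the researcher's controller. The VLAN-based criterion, the match dictionaries, and the names used are assumptions for this sketch, not part of the OpenFlow specification.

```python
# Illustrative split of traffic into production and experimental flows.
# RESEARCH_VLANS and the match dictionaries are assumptions for this sketch,
# not part of the OpenFlow specification.

RESEARCH_VLANS = {100, 101}   # VLANs the administrator has handed over to researchers

def classify(match: dict) -> str:
    """Experimental flows are steered to the researcher's controller;
    everything else keeps the switch's regular ("production") processing."""
    return "experimental" if match.get("vlan_id") in RESEARCH_VLANS else "production"

if __name__ == "__main__":
    flows = [
        {"vlan_id": 100, "eth_dst": "00:00:00:00:00:01"},  # research slice
        {"vlan_id": 1,   "eth_dst": "00:00:00:00:00:02"},  # ordinary office traffic
    ]
    for match in flows:
        action = "CONTROLLER" if classify(match) == "experimental" else "NORMAL"
        print(match, "->", action)
```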

1.2 An OpenFlow switch consists of three parts (a minimal data-structure sketch of a flow-table entry follows this list):

1) A flow table: defines the different flows and how each of them should be handled

2) A secure channel: used for secure communication between the switch and a remote controller

3) The OpenFlow protocol: the standard interface through which the switch and the controller communicate
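As a rough picture of the first component, the sketch below models a single flow-table entry as a small data structure: a match on header fields, a priority, a list of actions, and per-entry counters. The field names are simplified assumptions, not the exact OpenFlow table layout.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FlowEntry:
    """One row of the flow table: which packets it covers and what to do with them."""
    priority: int                 # when several entries match, the highest priority wins
    match: Dict[str, object]      # e.g. {"in_port": 1, "eth_dst": "..."}; omitted fields are wildcards
    actions: List[str]            # e.g. ["output:2"], ["CONTROLLER"], or [] to drop
    packet_count: int = 0         # per-entry counters maintained by the switch
    byte_count: int = 0

    def covers(self, pkt: Dict[str, object]) -> bool:
        return all(pkt.get(k) == v for k, v in self.match.items())
```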

How it works: the new protocol runs on a controller (for example, the user's PC), which uses the OpenFlow interface to add or remove entries in the flow table, telling the switch or router how to handle packets (which port to forward to, drop, and so on). The new protocol never has to be loaded onto the switch itself. The OpenFlow processing pipeline is shown in Figure 3. A packet enters the switch through an ingress port. If matching flows are found in the flow table, the packet goes through the pipeline according to priority and receives the corresponding processing; the resulting operations accumulate in an action set, and after all table lookups the action set is executed, sending the packet to the controller or to the next switch (the actions carry the information about which switch or port to forward to). If no matching flow is found, the switch sends the packet's header fields (the content to be matched) to the controller over the secure channel; the controller runs an application or protocol computation to produce a matching flow and distributes the routing and flow information to every switch along the path, after which similar packets are handled the same way as the already-matched ones. Because the OpenFlow protocol is still evolving and controllers may handle things differently, the description above is not exact, but the general direction is right. A complete OpenFlow network needs the components shown in Figure 4.
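The sketch below walks through that lookup loop: a table hit executes the highest-priority matching entry's actions, and a table miss punts the headers to a (stubbed) controller, which returns new entries that are installed so later packets of the same flow match locally. The dict-based entries and the StubController are assumptions for illustration, not real OpenFlow messages.

```python
def entry_matches(entry, pkt):
    """An entry matches if every field it names equals the packet's value;
    fields the entry omits act as wildcards."""
    return all(pkt.get(k) == v for k, v in entry["match"].items())

def handle_packet(flow_table, pkt, controller, switch_id):
    """Table hit: run the highest-priority matching entry's actions.
    Table miss: punt the headers to the controller over the secure channel,
    install the entries it returns, and report that the packet went there."""
    hits = [e for e in flow_table if entry_matches(e, pkt)]
    if hits:
        best = max(hits, key=lambda e: e["priority"])
        return best["actions"]                       # e.g. ["output:2"]
    new_entries = controller.packet_in(switch_id, pkt)
    flow_table.extend(new_entries)                   # later packets match locally
    return ["CONTROLLER"]

class StubController:
    """Stand-in for the remote controller: it answers each packet-in with an
    exact-match entry forwarding similar packets out port 2."""
    def packet_in(self, switch_id, pkt):
        return [{"priority": 10, "match": dict(pkt), "actions": ["output:2"]}]

if __name__ == "__main__":
    table, ctrl = [], StubController()
    pkt = {"in_port": 1, "eth_dst": "00:00:00:00:00:02"}
    print(handle_packet(table, pkt, ctrl, "s1"))  # miss -> ['CONTROLLER']
    print(handle_packet(table, pkt, ctrl, "s1"))  # hit  -> ['output:2']
```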
Part 2: NOX: Towards an Operating System for Networks
2.1 After seeing how an OpenFlow switch works, the big question is how the controller works and whether it can handle that many tasks: (i) constructing the network view (observation), and (ii) deciding whether to forward traffic and, if so, along which route (control).
First, the observation task. NOX's network view includes the switch-level topology; the locations of users, hosts, and other network elements; the services being offered (e.g., HTTP or NFS); and the bindings between names and addresses. NOX uses applications that observe DNS, DHCP, LLDP, and flow initiations to construct this network view.
The control task: NOX's control granularity is flow-based: once an operation is applied to one packet, subsequent packets with the same header receive the same treatment. This flow-level granularity makes it possible to build networks that are both scalable and flexibly controllable. NOX ships with access-control and routing applications: if a flow is permitted, the routing application computes an appropriate L2 route, installs flow entries in every switch along the path, and then returns the packet to the originating switch. NOX adopts the OpenFlow switch abstraction: management programs control the network by sending instructions to the switches that are independent of the switch hardware and operate at flow-level granularity. A simple controller running on an ordinary PC can handle 100,000 flow initiations per second.
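As a rough sketch of what such a flow-based application does, the code below combines a toy network view (switch topology plus host locations) with a flow-in handler that applies access control, computes an L2 route, and installs the same match with a per-switch output port along the path. The class and method names are assumptions for illustration, not the real NOX API.

```python
from collections import deque

class NetworkView:
    """Switch-level topology plus host locations, as NOX's observation
    applications would build it (simplified)."""
    def __init__(self, links, host_at):
        self.links = links      # {switch: [(neighbor_switch, out_port), ...]}
        self.host_at = host_at  # {host MAC: switch it is attached to}

    def l2_route(self, src, dst):
        """BFS over the switch graph; returns [(switch, out_port), ...] hops
        from src toward dst (the last switch's host-facing port is omitted)."""
        prev, queue = {src: None}, deque([src])
        while queue:
            sw = queue.popleft()
            if sw == dst:
                break
            for nbr, port in self.links.get(sw, []):
                if nbr not in prev:
                    prev[nbr] = (sw, port)
                    queue.append(nbr)
        if dst not in prev:
            return []
        path, sw = [], dst
        while prev[sw] is not None:
            up, port = prev[sw]
            path.append((up, port))
            sw = up
        return list(reversed(path))

def on_flow_in(flow, ingress_switch, view, allowed, install):
    """Flow-level control: drop disallowed flows; otherwise install the same
    match, with a per-switch output port, on every switch along the route."""
    if not allowed(flow):
        return False                     # access control: install nothing, flow is dropped
    dst_switch = view.host_at[flow["eth_dst"]]
    for sw, out_port in view.l2_route(ingress_switch, dst_switch):
        install(sw, flow, f"output:{out_port}")
    return True

if __name__ == "__main__":
    view = NetworkView(
        links={"s1": [("s2", 3)], "s2": [("s1", 1), ("s3", 4)], "s3": [("s2", 1)]},
        host_at={"00:00:00:00:00:02": "s3"},
    )
    installed = []
    on_flow_in({"eth_dst": "00:00:00:00:00:02"}, "s1", view,
               allowed=lambda f: True,
               install=lambda sw, match, action: installed.append((sw, action)))
    print(installed)  # [('s1', 'output:3'), ('s2', 'output:4')]
```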
2.2 The Controller Placement Problem
Architectures like SDN move the control logic out of the packet-processing devices into separate controllers. Such architectures raise a series of unresolved questions compared with today's distributed systems: reliability, scalability, performance, and so on.
Question: given an SDN network topology, how many controllers are needed, and where should they be placed?
Result: as expected, the answers depend on the topology. More surprisingly, a single controller location is often sufficient to meet existing reaction-time requirements (though certainly not fault-tolerance requirements).
For WANs, the best placement depends on propagation latency: the latency to a remote controller bounds which control reactions can be executed with reasonable speed and stability, so the authors compare placements using node-to-controller latency.
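The single-controller version of the question can be stated very simply: over all candidate nodes, pick the one that minimizes average (or worst-case) node-to-controller propagation latency. The sketch below brute-forces this on a made-up five-node topology with illustrative link latencies in milliseconds; it is not the paper's evaluation, which uses real WAN topologies.

```python
from itertools import product

INF = float("inf")

def all_pairs_latency(nodes, links):
    """Floyd-Warshall over undirected links weighted by propagation latency."""
    d = {(u, v): (0.0 if u == v else INF) for u, v in product(nodes, repeat=2)}
    for u, v, w in links:
        d[u, v] = d[v, u] = min(d[u, v], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if d[i, k] + d[k, j] < d[i, j]:
                    d[i, j] = d[i, k] + d[k, j]
    return d

def best_single_placement(nodes, links):
    """Brute-force the controller location minimizing average latency,
    and the one minimizing worst-case latency."""
    d = all_pairs_latency(nodes, links)
    def avg(c):
        return sum(d[n, c] for n in nodes) / len(nodes)
    def worst(c):
        return max(d[n, c] for n in nodes)
    return min(nodes, key=avg), min(nodes, key=worst)

if __name__ == "__main__":
    # Toy topology: node names and link latencies (ms) are made up for illustration.
    nodes = ["A", "B", "C", "D", "E"]
    links = [("A", "B", 5), ("B", "C", 10), ("C", "D", 5), ("D", "E", 20), ("B", "E", 15)]
    avg_best, worst_best = best_single_placement(nodes, links)
    print("best average-latency placement:", avg_best)
    print("best worst-case placement:", worst_best)
```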
Part 3: FlowVisor: A Network Virtualization Layer
• Computer virtualization's success can be linked to a clean abstraction of the underlying hardware.
• The equipment currently deployed in our networks was not designed for virtualization and has no common hardware abstraction layer. (Individual technologies can slice particular hardware resources and layers, such as MPLS, WDM, and VLANs.)
• The resulting virtual network runs on existing or new low-cost hardware, runs at line rate, and is backward compatible with our current legacy network.
• FlowVisor is a specialized OpenFlow controller that acts as a transparent proxy between OpenFlow-enabled network devices and multiple guest OpenFlow controllers.
• FlowVisor defines a slice as a set of flows running on a topology of switches; a slice is specified by a set of (possibly non-contiguous) regions of flowspace.
• There are five primary slicing dimensions: bandwidth, topology, traffic (flowspace), device CPU, and forwarding tables.
• The authors implement FlowVisor in approximately 7,000 lines of C, and the code is publicly available for download.

• Bandwidth isolation: FlowVisor isolates bandwidth by marking the VLAN priority bits in packets. The VLAN tag has a three-bit priority field; FlowVisor rewrites every slice's forwarding-table additions to include a "set VLAN priority" action, mapping traffic onto eight priority queues. This is not inherent to FlowVisor's design, but rather a short-term workaround to interoperate with commodity hardware.

• Topology isolation: since FlowVisor acts as a proxy between switch and controller, it only proxies messages within the guest's virtual topology. Special care is taken in handling LLDP messages.
• Switch CPU isolation: track what is consuming the switch CPU and rate-limit it. The CPU-isolation mechanisms are not inherent to FlowVisor's design, but rather a workaround for the existing hardware abstraction. A better long-term solution would be to expose the switch's existing process-scheduling and rate-limiting features through the hardware abstraction.
• Flowspace isolation: FlowVisor rewrites messages to transparently ensure that a slice only has control over its own flows; each slice must be restricted to affecting only the flows in its own flowspace (see the sketch after this list).
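The sketch below illustrates the flowspace-isolation idea in Python: before relaying a guest controller's flow-mod, check whether its match stays inside the slice's flowspace, and if not, rewrite it so that the slice-defining fields are pinned. The match representation and function names are assumptions for illustration; FlowVisor itself is implemented in C.

```python
# Illustrative FlowVisor-style flowspace containment and rewriting.
# Matches and flowspaces are plain dicts of pinned header fields;
# fields that are absent act as wildcards.

def within_flowspace(match, flowspace):
    """A guest match stays inside the slice only if it pins every field the
    flowspace pins, with the same value; wildcarding such a field would let
    the entry affect flows outside the slice."""
    return all(match.get(k) == v for k, v in flowspace.items())

def rewrite_flow_mod(match, flowspace):
    """Transparent rewriting: force the slice-defining fields onto the guest's
    match so the installed entry can only touch the slice's own flows."""
    constrained = dict(match)
    constrained.update(flowspace)
    return constrained

if __name__ == "__main__":
    slice_flowspace = {"vlan_id": 100}                     # this slice owns VLAN 100
    guest_match = {"eth_dst": "00:00:00:00:00:02"}         # guest left the VLAN wildcarded
    print(within_flowspace(guest_match, slice_flowspace))  # False: would leak outside the slice
    print(rewrite_flow_mod(guest_match, slice_flowspace))  # pinned to vlan_id=100
```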
The authors evaluate FlowVisor's performance on a simple network topology. The results show that FlowVisor's proxying adds some latency, but the switches still forward at line rate. Bandwidth and CPU isolation, however, are only shown to be feasible on simple topologies, and my intuition is that they would not survive the test of a real network. It is like the complex methods of today's data mining and machine learning: once applied to big data they fall flat, and we end up relying on simple linear methods.

Likewise, for centralized network management, some of the basic (common) control should be pushed down into the hardware, and network virtualization is hard to scale. That is why the proponents of SDN have also proposed an improved (compromise) architecture that combines SDN with MPLS, different both from today's distributed network control and from small-scale VPNs, joining the two organically. I have a vague feeling that SDN is more of a short-term tool for exploring future networks: it helps us study new architectures and new protocol approaches, but SDN is not the final network model. Personally, I believe the ultimate network will still be distributed, ad hoc, and built from small intelligent agents; for now, though, we need a centralized, programmable network to help us discover and explore the essence of networking.
Original post: https://www.cnblogs.com/18fanna/p/2985749.html