Building SDN-ready high bandwidth IXP
M.Sc.E.E. Goran Slavić
[email protected]
SOX
- Serbian Open eXchange
- Established in 2010
- Connecting most of the ISPs and telecom operators in Serbia.
- Multiple POPs in Belgrade, Vienna (Interxion) and Sofia (SDC and Telepoint).
- Carrier neutral.
- Has outgrown the status of a “mere IXP” for Serbia's ISPs over the last two years.
SOX
sFlow traffic analysis
SOX and SDN
- Logical direction of future growth: an SDN-ready network.
- An “SDN-ready network” is a network that can use traditional telecom protocols on its switches and can be transferred to SDN without hardware intervention.
- Bare-metal switches + an adequate NOS that can support both functionalities.
Problems
- Lack of “best practices” for an SDN solution.
- Lack of people with SDN experience.
- Most SDN work is currently done under lab conditions, not under the traffic conditions of the high-capacity networks of IXPs.
- The problem of “proven and tested” features that nevertheless perform poorly under high traffic volumes.
- The time needed for migration to a full-SDN environment.
- The cost of migration to a full-SDN environment.
Idea
1. Unlike classical networks, IXPs don't deal with L3 routing (packet forwarding is done on MAC addresses; see the controller sketch after this list).
2. Still, there are clear benefits to implementing SDN in an IXP environment.
- Link protection: the ability to reroute traffic at the L2 level in case of a link failure.
- Link load balancing: the ability to load-balance traffic over links that would otherwise be blocked (STP) or poorly load-balanced (MSTP).
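The MAC-based forwarding from point 1 can be illustrated with a small OpenFlow controller. The sketch below assumes a Ryu controller and OpenFlow 1.3 switches (neither is specified in this talk): it learns source MACs and installs flows that match only on the destination MAC, which is the L2 behaviour an IXP fabric needs. Link protection and load balancing would additionally rely on OpenFlow fast-failover and select groups, which this sketch leaves out.

```python
# Minimal MAC-learning controller sketch (assumed stack: Ryu + OpenFlow 1.3;
# this is an illustration, not the SOX implementation).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import ethernet, packet
from ryu.ofproto import ofproto_v1_3


class IxpL2Switch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(IxpL2Switch, self).__init__(*args, **kwargs)
        self.mac_to_port = {}  # dpid -> {mac: port}

    def _add_flow(self, dp, priority, match, actions):
        parser, ofp = dp.ofproto_parser, dp.ofproto
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                      match=match, instructions=inst))

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def features_handler(self, ev):
        # Table-miss entry: send unknown traffic to the controller.
        dp = ev.msg.datapath
        parser, ofp = dp.ofproto_parser, dp.ofproto
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        self._add_flow(dp, 0, parser.OFPMatch(), actions)

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        parser, ofp = dp.ofproto_parser, dp.ofproto
        in_port = msg.match['in_port']

        eth = packet.Packet(msg.data).get_protocol(ethernet.ethernet)
        table = self.mac_to_port.setdefault(dp.id, {})
        table[eth.src] = in_port  # learn which port the source MAC lives on

        out_port = table.get(eth.dst, ofp.OFPP_FLOOD)
        actions = [parser.OFPActionOutput(out_port)]
        if out_port != ofp.OFPP_FLOOD:
            # Subsequent frames to this MAC are switched in hardware,
            # without involving the controller.
            self._add_flow(dp, 10, parser.OFPMatch(eth_dst=eth.dst), actions)

        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=in_port, actions=actions,
                                        data=data))
```

Run with ryu-manager against the switch under test; once a destination MAC has been learned, frames to it are forwarded by the installed flow entries in hardware and no longer reach the controller.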
Problems
- High-capacity switches are needed = a large amount of money for something that “does not work” right away.
- Inability to simply “turn off” the classical switch and “turn on” the SDN environment.
- The IXP can't be put “on hold” for the duration of the migration.
- The IXP needs to expand its services during the migration (new users/clients).
IXP is not ISP
SDN in an IXP is VERY different from SDN in an ISP!
- Most SDN work follows / is optimized for the classical ISP topology and operation (customers / Internet, data center / world).
- ISP: customers are the destination of the traffic vs. the Internet as the source of the traffic.
- IXP: every customer can be both a source and a destination of a high traffic volume.
- Large number of MAC/IP addresses in customer
networks.
SOX migration to “SDN ready”
1. Acquisition of the hardware/software
combination.
- A bare-metal switch of adequate capacity, form factor and number of ports (48x10Gbps).
- A NOS that can support both traditional L2/L3 and SDN/OpenFlow operation.
- A combination that has both the performance and the features needed for operating and monitoring a high-bandwidth environment (SNMP, sFlow …).
SOX migration to “SDN ready”
2. Testing
- Testing of the features and performance of the
implemented switch.
- Building proper documentation for the implemented solution.
- Testing in the production environment with a gradual increase in the types and volume of traffic.
- Testing with the full volume of IXP traffic.
SOX migration to “SDN ready”
3. Migration
- Migration from the existing switches to the bare-metal / SDN-ready hardware in the IXP backbone.
- Verifying that the implemented infrastructure is fully compatible with the hardware/software implementations of the IXP peer networks / clients.
(4). SDN solution
- Beginning of the implementation of the SOX
SDN solution.
Current progress
1. SOX has acquired several Edge Core 5712-54R
switches.
2. SOX has acquired an equal number of licenses for the Pica8 PicOS NOS.
3. We have tested traditional protocols and monitoring functionalities such as SSH, sFlow, SNMP … (both successfully and unsuccessfully; a basic SNMP check is sketched below).
4. We have tested L2/L3 functionalities such as LACP, Q-in-Q, BGP, static routing … (successfully).
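As an illustration of the kind of basic monitoring check mentioned in point 3, the sketch below polls sysDescr over SNMPv2c using the classic synchronous pysnmp hlapi. The management address and community string are placeholders, not SOX values, and this is not the actual test procedure used in the deployment.

```python
# Minimal SNMPv2c reachability check with pysnmp (hlapi).
# The address 192.0.2.10 and community "public" are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),          # SNMPv2c
           UdpTransportTarget(('192.0.2.10', 161)),
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if error_indication:
    print('SNMP check failed:', error_indication)
elif error_status:
    print('SNMP error: %s at index %s' % (error_status.prettyPrint(), error_index))
else:
    for name, value in var_binds:
        print('%s = %s' % (name.prettyPrint(), value.prettyPrint()))
```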
Current progress
5. We have tested interoperability with equipment from other manufacturers (Cisco, Extreme, DCN …) without running into any problems or difficulties.
6. We had excellent support from Pica8.
7. We had serious problems with BGP functionality. The reason was our own mistake (lesson: don't blame the NOS for your bad configuration).
8. We used MSTP as the means of switching traffic between its regular path and the path that goes over the switch being tested.
MSTP testing
MSTP 1
MSTP 2
Transferring VLANs from one MSTP instance to another redirects the traffic on the link between the THQ POP on one side and the Palata+ETF POPs on the other so that it goes over the switch being tested.
Problems
1. The switch was completely stable until we exposed it to 30Gbps+ of traffic.
2. When traffic reached 30Gbps, the sFlow data started to degrade in quality.
3. When traffic reached 40Gbps, sFlow sampling started to put an unbearable load on the switch CPU (a rough estimate of the sampling load is sketched after this list).
4. After traffic exceeded 40Gbps there was a serious degradation in the switch's performance.
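A back-of-the-envelope calculation shows why the sampling load grows with traffic. The average frame size and the 1:2048 sampling rate below are assumptions for illustration only, not measured SOX values.

```python
# Rough estimate of sFlow sampling load versus traffic volume.
# Average frame size (800 bytes) and sampling rate (1:2048) are assumptions,
# not figures from the SOX deployment.
def sflow_samples_per_second(traffic_gbps, avg_frame_bytes=800, sampling_rate=2048):
    frames_per_second = traffic_gbps * 1e9 / 8 / avg_frame_bytes
    return frames_per_second / sampling_rate

for gbps in (10, 30, 40):
    print("%2d Gbps -> ~%5.0f sampled frames/s punted to the switch CPU"
          % (gbps, sflow_samples_per_second(gbps)))
```

Every sampled frame has to be copied to the relatively weak control-plane CPU and packed into sFlow datagrams, so going from 10Gbps to 40Gbps roughly quadruples that per-second load unless the sampling rate is made coarser.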
Problems
5. The BGP problem (“self-inflicted”). We had badly configured the switch's BGP process. The result was the BGP routing table filling up with routes that had an invalid next-hop parameter. When the NOS realized this, it cleared those routes from the RIB/BGP table and started importing them again from another source. During the import it would again get the invalid next-hop, and the process would repeat itself.
Conclusions
1. Most of the new NOS solutions are still in their infancy.
2. All of the functionality is there. What is needed is optimization of the NOS code and performance tuning in order to bring the solution up to the standard of the major vendors.
3. There needs to be a broad, clear and well-defined strategy for implementing SDN in an IXP.
4. Migration to an SDN-ready backbone needs to be done with extra care, gradually and in “small steps”.
Thank you
M.Sc.E.E. Goran Slavić
[email protected]