Introduction to
Software-Defined Networking
• Overview
– What is SDN and Why SDN?
• SDN Control Plane Abstractions
• OpenFlow Switch and OpenFlow APIs
-- Based partially on the presentations of Prof. Scott Shenker of UC Berkeley,
"The Future of Networking, and the Past of Protocols"

Please watch the YouTube video of Shenker's talk.
Readings: please do the required readings, and the optional readings if interested.
CSci5221: Introduction to SDN
Two Key Definitions
• Data Plane: processing and delivery of packets
– Based on state in routers and endpoints
– E.g., IP, TCP, Ethernet, etc.
– Fast timescales (per-packet)
• Control Plane: establishing the state in routers
– Determines how and where packets are forwarded
– Routing, traffic engineering, firewall state, …
– Slow time-scales (per control event)
• These different planes require different abstractions
Some literature also separates the notion of management plane
(router configuration, policy specs, …) from control plane
What is a Software Defined
Network (SDN)?
A network in which the control plane is physically
separate from the forwarding plane
and
A single control plane controls several forwarding
devices
--- wikipedia
So what is all the big buzz about SDN?
Key to Internet Success: Layers
Applications
…built on…
Reliable (or unreliable) transport
…built on…
Best-effort global packet delivery
…built on…
Best-effort local packet delivery
…built on…
Physical transfer of bits
Why Is Layering So Important?
• Decomposed delivery into fundamental components
• Independent but compatible innovation at each layer
• A practical success of unprecedented proportions…
Internet “Hourglass”
Layered Architecture
• Internet “hourglass” layered architecture is primarily
an abstraction for network data plane!
– hide various details of physical/data
link layer technologies, e.g., Ethernet, WiFi
– while accommodating & integrating them via a "virtualized" IP layer on top (an overlay)
– two simple services, TCP/UDP, on top of IP
– enable a huge diversity of applications
• “dumb” networks, “smart” end systems
– based on “end-to-end” principles
• Tremendous success: a lot of innovations!
Network Control/Management Plane
A very different story!
•There are no general principles or abstractions guiding the
design of network control/management plane
– in contrast, e.g., to distributed systems or database systems
•Control plane is basically composed of various distributed
“control” protocols
– For each new feature/functionality needed, a new protocol is designed → new software run by "flimsy" CPUs in routers
•Worse, network boxes are “closed” equipment
– (mostly proprietary) software bundled with hardware
– vendor-specific interfaces;
– slow protocol standardization – often years!
Few people can innovate: new ideas/features can't be quickly incorporated into networks! (vendors vs. service providers!)
“The Power of Abstraction”
“Modularity
based on abstraction
is the way things get done”
Barbara Liskov
Abstractions → Interfaces → Modularity
What abstractions do we have in networking?
Networking vs. Other System Fields
• Other fields in “systems”: OS, Database, Distributed
Systems, etc.
– Many abstractions: e.g., virtual memory, processes and
threads, file systems, concurrency, consistency models,
ACID, atomic transactions, schemas and relational
databases, query languages, etc.
– Are easily managed
– Continue to evolve
• Networking:
– Big bag of protocols
– Notoriously difficult to manage
– Evolves very slowly
Limitations of Current Networks
• Old ways to configure a network
[Figure: today, each box bundles apps and its own operating system on top of specialized packet forwarding hardware, and every box is configured separately]
Managing Networks …
• Networks used to be simple: Ethernet, IP, TCP….
• New control requirements led to great complexity
– Isolation → VLANs, ACLs
– Traffic engineering → MPLS, ECMP, Weights
– Packet processing middleboxes → Firewalls, NATs
– Payload analysis → Deep packet inspection (DPI)
– …..
• Mechanisms designed and deployed independently
– Complicated “control plane” design, primitive functionality
– Stark contrast to the elegantly modular “data plane”
A Typical Enterprise Network
Switches
http://www.excitingip.net/27/a-basic-enterprise-lan-network-architecture-block-diagram-and-components/
Limitations of Current Networks
[Figure: each box bundles features with its operating system on specialized packet forwarding hardware]
• Millions of lines of source code, billions of gates
• Many complex functions baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, …
• Cannot dynamically change according to network conditions
Limitations of Current Networks
• Enterprise networks are difficult to manage
• "New control requirements have arisen":
– Greater scale
– Migration of VMs
• How to easily configure huge networks?
Infrastructure Still Works!
• Only because of “our” ability to master complexity
• This ability to master complexity is both a
blessing…
– …and a curse!
Limitations of Current Networks
• No control plane abstraction for the whole network!
• It’s like old times – when there was no OS…
Wilkes with the EDSAC, 1949
Idea: An OS for Networks
[Figure: control programs run on a Network Operating System, which controls many pieces of simple packet forwarding hardware]
Idea: An OS for Networks
• “NOX: Towards an Operating System for Networks”
Software-Defined Networking (SDN)
[Figure: control programs operate on a global network view provided by the Network Operating System, which controls the forwarding elements via a forwarding interface rather than per-box protocols]
The Future of Networking, and the Past of Protocols, Scott Shenker, with Martin Casado, Teemu Koponen, Nick McKeown
A Better Example: Programming
• Machine languages: no abstractions
– Mastering complexity was crucial
• Higher-level languages: OS and other abstractions
– File system, virtual memory, abstract data types, ...
• Modern languages: even more abstractions
– Object orientation, garbage collection,…
Abstractions are key to extracting simplicity
Software Defined Networking
• No longer designing distributed control protocols
• Much easier to write, verify, maintain, …
– an interface for programming
• Network OS (NOS) serves as the fundamental control block
– with a global view of the network
“The Power of Abstraction”
“Modularity
based on abstraction
is the way things get done”
Barbara Liskov
Abstractions → Interfaces → Modularity
What abstractions should we have in the network control plane?
Abstractions
• Problem Decomposition
1. Decompose the problem into basic components (tasks)
2. Define an abstraction for each component
3. The implementation of each abstraction can focus on one task
4. If the tasks are still too hard to implement, return to step 1
Layers are Great Abstractions
• Layers only deal with the data plane
• We have no powerful control plane abstractions!
• How do we find those control plane abstractions?
• Two steps: define problem, and then decompose it.
The Network Control Problem
How does the classical (IP-based) distributed network control plane operate today?
• Compute the configuration of each physical device
– e.g., routing/forwarding tables, ACLs, …
• Operate without communication guarantees
• Operate within a given network-level protocol
Only people who love complexity would find this a reasonable request
Traditional Control Mechanisms
Distributed algorithm running between neighbors
Network of switches and/or routers
Programming Analogy
• What if programmers had to:
– Specify where each bit was stored
– Explicitly deal with all internal communication errors
– Within a programming language with limited expressivity
• Programmers would redefine problem:
– Define a higher level abstraction for memory
– Build on reliable communication abstractions
– Use a more general language
• Abstractions divide problem into tractable pieces
– and make programmer’s task easier
From Requirements to Abstractions
1. Operate without communication guarantees
→ Need an abstraction for distributed state
2. Compute the configuration of each physical device
→ Need an abstraction that simplifies configuration
3. Operate within a given network-level protocol
→ Need an abstraction for a general forwarding model
Once these abstractions are in place, the control mechanism has a much easier job!
Control Plane Abstractions
1. Need an abstraction for general (data plane)
packet forwarding
2. Need an abstraction for distributed state
3. Need an abstraction that simplifies configuration
of each single device
Once these abstractions are in place, control
mechanism has a much easier job!
Traditional Control Mechanisms
Distributed algorithm running between neighbors
Network of switches and/or routers
Software Defined Networking
[Figure: control program (e.g. routing, access control) → global network view → Network OS → network of switches and/or routers]
Restructured Network
[Figure: features move out of the individual boxes (each with its own operating system on specialized packet forwarding hardware) and become apps running on a common Network OS]
Software-Defined Network
1. Open interface to packet forwarding
2. At least one Network OS (probably many; open- and closed-source)
3. Well-defined open API
[Figure: features run on the Network OS, which controls many packet forwarding elements through the open interface]
1. Forwarding Abstraction
• Switches have two “brains”
– Management CPU (smart but slow)
– Forwarding ASIC (fast but dumb)
• Need a forwarding abstraction for both
– CPU abstraction can be almost anything
• ASIC abstraction is much more subtle:
– OpenFlow is such a data plane forwarding abstraction
• OpenFlow protocol (or API):
– Control switch by inserting <header;action> entries
– Essentially gives NOS remote access to forwarding table
– Instantiated, e.g., in Open vSwitch
What is OpenFlow Protocol/API?
• SDN is not OpenFlow
– OpenFlow is just one of many possible data plane forwarding abstractions (others: e.g., POF, P4, …)
• OpenFlow is an open API that provides a standard interface for programming the data plane switches
• OpenFlow standardization
– Version 1.0: most widely used version
– Version 1.1: released in February 2011
– OpenFlow transferred to the ONF in March 2011
– Version 1.5.0: released December 19, 2014
OpenFlow Switch & OpenFlow API
[Figure: apps run on a controller (server software), which speaks the OpenFlow protocol/API to an Ethernet switch; the switch's control path programs its hardware data path]
OpenFlow Switching
[Figure: the switch's software layer runs an OpenFlow client and the OpenFlow table; the hardware layer forwards packets between ports 1-4, with a controller (server) and hosts such as 1.2.3.4 and 5.6.7.8 attached. Example flow entry:]
MAC src | MAC dst | IP Src | IP Dst  | TCP sport | TCP dport | Action
   *    |    *    |   *    | 5.6.7.8 |     *     |     *     | port 1
1. Separate Control from Datapath
SDN controller: making control plane decisions, e.g., routing
2. Install/Cache Flow Decisions in the Datapath (Flow Table)
"If header = x, send to port 4"
"If header = y, overwrite header with z, send to ports 5,6"
"If header = ?, send to me"
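To make this split concrete, here is a minimal, purely illustrative Python sketch (class and string names are invented, not any real OpenFlow API): the datapath caches <header, action> decisions installed on demand by a logically separate controller, and only table misses ever reach the control plane.

# Illustrative sketch (not a real OpenFlow implementation): a switch caches
# <header, action> decisions installed by a logically separate controller.

class Controller:
    def decide(self, header):
        # Control-plane decision (e.g., routing); trivially hard-coded here.
        return "send to port 4" if header == "x" else "drop"

class Switch:
    def __init__(self, controller):
        self.flow_table = {}          # header -> action, installed by controller
        self.controller = controller  # control plane, runs elsewhere

    def receive(self, header):
        action = self.flow_table.get(header)
        if action is None:
            # "If header = ?, send to me": a table miss goes to the controller,
            # which decides once and caches the decision in the datapath.
            action = self.controller.decide(header)
            self.flow_table[header] = action
        return action

sw = Switch(Controller())
print(sw.receive("x"))   # miss -> controller decides -> cached
print(sw.receive("x"))   # hit  -> handled entirely in the datapath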
Plumbing Primitives: <Match, Action>
[Figure: a packet consists of a header and data; the match pattern (e.g., Match: 1000x01xx0101001x) is applied to the header bits]
• Match arbitrary bits in headers:
– Match on any header, or new header
– Allows any flow granularity
• Action:
– Forward to port(s), drop, send to controller
– Overwrite header with mask, push or pop
– Forward at specific bit-rate
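A small, purely illustrative sketch of the bit-level match idea: a pattern such as 1000x01xx0101001x matches a header exactly where it has 0/1 bits and ignores positions marked x. The function names and rules below are made up for illustration only.

# Illustrative only: ternary matching of header bits against patterns that
# contain 'x' (don't-care) positions, e.g. 1000x01xx0101001x.

def matches(pattern, header_bits):
    # True if every non-'x' bit of the pattern equals the corresponding header bit.
    return len(pattern) == len(header_bits) and all(
        p == 'x' or p == h for p, h in zip(pattern, header_bits))

# Prioritized <match, action> rules at any flow granularity.
rules = [
    ("1000x01xx0101001x", "forward to port 1"),
    ("x" * 17,            "send to controller"),   # catch-all, lowest priority
]

def lookup(header_bits):
    for pattern, action in rules:                  # first match wins
        if matches(pattern, header_bits):
            return action
    return "drop"

print(lookup("10001011001010011"))                 # -> forward to port 1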
OpenFlow Table Entry
A flow table entry has three parts:
• Rule (match fields + mask): Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP Src, IP Dst, IP Prot, TCP sport, TCP dport
• Action:
1. Forward packet to port(s)
2. Encapsulate and forward to controller
3. Drop packet
4. Send to normal processing pipeline
5. …
• Stats: packet + byte counters
OpenFlow Examples
Switching: match MAC dst = 00:1f:.. (all other fields wildcarded) → forward out port 6
Switch Port | MAC src | MAC dst  | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport | Action
     *      |    *    | 00:1f:.. |    *     |    *    |   *    |   *    |    *    |     *     |     *     | port 6

Routing: match IP Dst = 5.6.7.8 (all other fields wildcarded) → forward out port 6
Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst  | IP Prot | TCP sport | TCP dport | Action
     *      |    *    |    *    |    *     |    *    |   *    | 5.6.7.8 |    *    |     *     |     *     | port 6

Firewall: match TCP dport = 22 (all other fields wildcarded) → drop
Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport | Action
     *      |    *    |    *    |    *     |    *    |   *    |   *    |    *    |     *     |    22     | drop
OpenFlow Usage
• Alice's code runs on a controller PC, makes the decisions, and uses the OpenFlow protocol to install rules in Alice's OpenFlow switches
• Examples of what Alice's code could implement:
– Simple learning switch (see the sketch after this list)
– Per-flow switching
– Network access control/firewall
– Static "VLANs"
– Her own new routing protocol: unicast, multicast, multipath
– Home network manager
– Packet processor (in controller)
– IPvAlice
OpenFlow/SDN tutorial, Srini Seetharaman, Deutsche Telekom, Silicon Valley Innovation Center
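For example, the "simple learning switch" can be written in a few dozen lines against a controller framework. The sketch below is written in the style of Ryu's well-known simple_switch_13 sample application (Ryu is one of the Python controllers mentioned later); it assumes a Ryu installation and OpenFlow 1.3 switches, and API details should be treated as approximate rather than authoritative.

# Rough sketch of a learning switch in the style of Ryu's simple_switch_13
# sample application (assumes Ryu is installed and switches speak OpenFlow 1.3).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet, ethernet

class LearningSwitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(LearningSwitch, self).__init__(*args, **kwargs)
        self.mac_to_port = {}  # dpid -> {mac -> port}

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # Table-miss entry: send unmatched packets to the controller.
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        actions = [parser.OFPActionOutput(dp.ofproto.OFPP_CONTROLLER,
                                          dp.ofproto.OFPCML_NO_BUFFER)]
        self.add_flow(dp, 0, parser.OFPMatch(), actions)

    def add_flow(self, dp, priority, match, actions):
        # Install a <match, action> entry in the switch's flow table.
        parser = dp.ofproto_parser
        inst = [parser.OFPInstructionActions(dp.ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                      match=match, instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        parser = dp.ofproto_parser
        in_port = msg.match['in_port']

        eth = packet.Packet(msg.data).get_protocol(ethernet.ethernet)
        table = self.mac_to_port.setdefault(dp.id, {})
        table[eth.src] = in_port  # learn: source MAC seen on this port

        out_port = table.get(eth.dst, dp.ofproto.OFPP_FLOOD)
        actions = [parser.OFPActionOutput(out_port)]
        if out_port != dp.ofproto.OFPP_FLOOD:
            # Cache the decision in the datapath so later packets skip the controller.
            self.add_flow(dp, 1, parser.OFPMatch(in_port=in_port,
                                                 eth_dst=eth.dst), actions)

        data = msg.data if msg.buffer_id == dp.ofproto.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=in_port, actions=actions,
                                        data=data))

With Ryu installed, such an app would typically be launched with ryu-manager and pointed at an OpenFlow 1.3 switch such as Open vSwitch or a Mininet topology.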
Optional Material:
OpenFlow Protocol/API:
Some Specifics
• OpenFlow Switch Components
• OpenFlow Ports
• OpenFlow Tables
– Match-Action Semantics
– OpenFlow Pipeline Processing
– OpenFlow Group Tables
• OpenFlow Actions
OpenFlow Switch: Components
Ports
• Network interfaces for passing packets between OpenFlow processing and the rest of the network
• OpenFlow switches are connected to each other through OpenFlow ports
• Types:
– Physical Ports
• Switch defined ports correspond to a hardware interface
(e.g., map one-to-one to the Ethernet interfaces)
– Logical Ports
• Switch defined ports that do not correspond to a hardware
switch interface (e.g. Tunnel-ID)
– Reserved Ports
• Defined by ONF 1.4.0, specify generic forwarding actions
such as sending to the controller, flooding and forwarding
using non-openflow methods, such as normal switch
processing
Ports - Reserved Port Types (Required)
• ALL
– Represents all ports the switch can use for forwarding a specific packet
– Can be used only as output interface
• CONTROLLER
– Represents the control channel with the OpenFlow
controller
– Can be used as an ingress port or as an output port
• TABLE
– Represents the start of the OpenFlow pipeline
– Submits the packet to the first flow table
• IN_PORT
– Represents the packet ingress port
– Can be used only as an output port
• ANY
– Special value used in some OpenFlow commands when no port
is specified
– Can neither be used as an ingress port nor as an output port
Ports - Reserved Port Types (Optional)
• LOCAL
– Represents the switch’s local networking stack and its
management stack
– Can be used as an ingress port or as an output port
• NORMAL
– Represents the traditional non-OpenFlow pipeline of the
switch
– Can be used only as an output port and processes the packet
using the normal pipeline
• FLOOD
– Represents flooding using the normal pipeline
– Can be used only as an output port
– Send the packet out on all ports except the incoming port
and the ports that are in blocked state
OpenFlow Table
• Every OpenFlow switch contains one or more flow tables
• Each flow table contains multiple flow entries
• The OpenFlow pipeline processing defines how packets interact with these flow tables
• An OpenFlow switch is required to have at least one flow table
• The flow tables of an OpenFlow switch are sequentially numbered, starting at 0
• Pipeline Processing
– Starts at the first flow table
– Other flow tables may be used depending on the outcome of the match in the first table
– Goes only in the forward direction, never backward
– If the packet is not redirected to another flow table, pipeline processing stops and the packet is processed with its associated action set
Pipeline Processing
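A toy model of this pipeline, with invented Python names rather than the OpenFlow wire format: each table holds prioritized entries whose instructions may write actions and/or jump forward to a higher-numbered table; when no Goto-Table is given, the accumulated action set is executed.

# Toy model of OpenFlow pipeline processing (all names are illustrative only).

def process(pipeline, pkt):
    """pipeline: list of flow tables; each table is a list of
    (priority, match_fn, actions, goto_table_or_None) entries."""
    action_set = []
    table_id = 0                                  # always start at table 0
    while table_id is not None:
        entries = sorted(pipeline[table_id], key=lambda e: -e[0])
        for priority, match, actions, goto in entries:
            if match(pkt):
                action_set.extend(actions)        # write-actions
                # Goto-Table may only point forward, never backward.
                table_id = goto if goto is not None and goto > table_id else None
                break
        else:
            return ["send to controller"]         # table miss: no entry matched
    return action_set                             # pipeline stopped: execute action set

pipeline = [
    # table 0: route on destination, then continue to table 1 (ACL)
    [(10, lambda p: p["ip_dst"] == "5.6.7.8", ["set out_port=1"], 1)],
    # table 1: drop SSH, otherwise output
    [(20, lambda p: p["tcp_dport"] == 22, ["drop"], None),
     (0,  lambda p: True, ["output"], None)],
]
print(process(pipeline, {"ip_dst": "5.6.7.8", "tcp_dport": 80}))
# -> ['set out_port=1', 'output']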
Flow Entry
• Match fields: consist of the ingress port, packet headers, and optionally metadata specified by a previous table
• Priority: matching precedence of the flow entry
• Counters: updated when packets are matched
• Instructions: to modify the action set or pipeline processing
• Timeouts: maximum amount of time or idle time before the flow expires
• Cookie: used by the controller to filter flow statistics, flow modification and flow deletion; not used when processing packets
Matching
Table-Miss & Flow Removal
• Table-miss
– Flow entry added by the controller
– Priority is 0 (lowest)
– Actions
• Send packets to the controller
• Drop packets
• Direct packets to a subsequent table
• Flow Removal
– Requested by the controller
– Time expiry (hard timeout, idle timeout)
– Optional switch eviction mechanism
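A small sketch of the timeout semantics (illustrative names only, not a switch API): an entry is removed once its hard timeout elapses since installation, or once its idle timeout elapses since the last matching packet.

# Illustrative sketch of hard/idle timeout expiry for a flow entry.
import time

class FlowEntry:
    def __init__(self, match, actions, idle_timeout=0, hard_timeout=0):
        self.match, self.actions = match, actions
        self.idle_timeout, self.hard_timeout = idle_timeout, hard_timeout
        self.installed = self.last_hit = time.time()

    def hit(self):
        self.last_hit = time.time()          # a packet matched this entry

    def expired(self, now=None):
        now = now if now is not None else time.time()
        return ((self.hard_timeout and now - self.installed >= self.hard_timeout) or
                (self.idle_timeout and now - self.last_hit >= self.idle_timeout))

# The switch periodically evicts expired entries (and may notify the controller).
table = [FlowEntry({"ip_dst": "5.6.7.8"}, ["output:1"], idle_timeout=10, hard_timeout=60)]
table = [e for e in table if not e.expired()]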
Group Table
• An additional method of forwarding, via a group of action buckets (e.g., select, all)
• Main components:
– Group ID, Group Type, Counters, Action buckets (each action bucket contains a set of actions to be executed)
Group Table - Group Type
• All
– Execute all buckets in a group
– Used mainly for multicast and broadcast – fwd a pkt on all
the ports
• Select (optional)
– Execute one bucket in a Group
– Implemented for load sharing and redundancy
• Indirect
– Execute the one defined bucket in this group
– Supports only a single bucket (e.g., multiple routes pointing to the same next hop)
• Fast failover (optional)
– Execute the first live bucket
– E.g., there is a primary path and a secondary path: send traffic on the primary path and, if it fails, use the secondary one
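The sketch below models the four group types in plain Python (hypothetical names, not a real switch API): each group holds action buckets, and the group type decides which bucket(s) a packet is handed to.

# Toy model of OpenFlow group types (illustrative only).
import hashlib

def apply_group(group_type, buckets, pkt):
    """buckets: list of dicts like {'actions': [...], 'live': bool}."""
    if group_type == "all":        # multicast/broadcast: every bucket
        return [b["actions"] for b in buckets]
    if group_type == "select":     # load sharing: pick one bucket, e.g. by flow hash
        i = int(hashlib.md5(pkt["flow_id"].encode()).hexdigest(), 16) % len(buckets)
        return [buckets[i]["actions"]]
    if group_type == "indirect":   # single bucket shared by many flow entries
        return [buckets[0]["actions"]]
    if group_type == "ff":         # fast failover: first live bucket
        for b in buckets:
            if b["live"]:
                return [b["actions"]]
        return []                  # no live bucket: packet is dropped
    raise ValueError(group_type)

buckets = [{"actions": ["output:1"], "live": False},   # primary path (down)
           {"actions": ["output:2"], "live": True}]    # secondary path
print(apply_group("ff", buckets, {"flow_id": "a->b"}))  # -> [['output:2']]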
Meter Table
• Consists of meter entries, defining per-flow meters
• Per-flow meters enable OpenFlow to implement QoS operations (e.g., rate-limiting); they can be combined with per-port queues for complex QoS operations
• A meter measures the rate of the packets assigned to it and enables controlling the rate of those packets
• Meters are attached directly to flow entries
Meter Table
• Components of a meter entry:
– Meter ID, Meter Bands, Counters
• Meter bands: an unordered list of bands, where each meter band specifies the rate of the band and the way to process packets
• Components of a meter band:
– Band Type, Rate, Counters, type-specific arguments
– Band Type: defines how to process a packet (drop / DSCP remark)
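As an illustration (not the actual OpenFlow message format), the sketch below applies a meter with two bands to a measured rate: traffic exceeding a band's rate is dropped or DSCP-remarked, otherwise it passes unchanged. All names and fields are invented for the example.

# Illustrative sketch of per-flow metering with meter bands.

def apply_meter(bands, measured_rate_kbps, pkt):
    """bands: list of {'type': 'drop' | 'dscp_remark', 'rate': kbps, ...};
    the band with the highest rate that the measured rate exceeds is applied."""
    exceeded = [b for b in bands if measured_rate_kbps > b["rate"]]
    if not exceeded:
        return pkt                                    # within rate: pass unchanged
    band = max(exceeded, key=lambda b: b["rate"])
    if band["type"] == "drop":                        # simple rate limiting
        return None
    if band["type"] == "dscp_remark":                 # increase drop precedence
        pkt = dict(pkt, dscp=pkt["dscp"] + band["prec_level"])
    return pkt

meter = [{"type": "dscp_remark", "rate": 1000, "prec_level": 2},
         {"type": "drop", "rate": 5000}]
print(apply_meter(meter, measured_rate_kbps=2000, pkt={"dscp": 0}))  # remarked
print(apply_meter(meter, measured_rate_kbps=8000, pkt={"dscp": 0}))  # dropped (None)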
Instructions
• Instructions are executed when a packet matches an entry
• Instruction results: change the packet, the action set, or pipeline processing
• Supported instruction types:
– Meter id: direct the packet to the specified meter; it may be dropped because of metering
– Apply-Actions: apply the specified actions immediately; this is how packets are modified between two flow tables
– Clear-Actions: clear all the actions in the action set immediately
– Write-Actions: add new actions into the existing action set; if the same action already exists, overwrite it
– Write-Metadata: write the masked metadata value
– Goto-Table: indicate the next table in the processing pipeline
Action Set
• An action set is associated with each packet
• Flow entries modify the action set using the Write-Actions / Clear-Actions instructions
• Actions in the action set are executed when pipeline processing stops
• The action set contains a maximum of one action of each type
– If multiple actions of the same type need to be applied, use "Apply-Actions"
• Actions in the set are executed in the order below:
– Copy TTL inwards: apply copy-TTL-inward actions to the packet
– Pop: apply all tag pop actions to the packet
– Push MPLS: apply the MPLS tag push action to the packet
– Push PBB: apply the PBB tag push action to the packet
– Push VLAN: apply the VLAN tag push action to the packet
– Copy TTL outwards
– Decrement TTL
– Set: apply set-field actions to the packet
– QoS
– Group: apply group actions
– Output: forward the packet on the port specified by the output action
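A toy rendering of the rule above, with invented names: the action set keeps at most one action per type, and when the pipeline stops the actions run in the fixed order listed, not in insertion order.

# Illustrative sketch: one action per type, executed in a fixed order.
ACTION_ORDER = ["copy_ttl_in", "pop", "push_mpls", "push_pbb", "push_vlan",
                "copy_ttl_out", "dec_ttl", "set_field", "qos", "group", "output"]

action_set = {}

def write_actions(actions):
    # Write-Actions: add to the set, overwriting an existing action of the same type.
    for a_type, arg in actions:
        action_set[a_type] = arg

def execute_action_set():
    # Executed once pipeline processing stops, in the mandated order.
    return [(t, action_set[t]) for t in ACTION_ORDER if t in action_set]

write_actions([("output", "port 3"), ("set_field", "vlan_id=10")])
write_actions([("output", "port 6")])        # overwrites the earlier output action
print(execute_action_set())
# -> [('set_field', 'vlan_id=10'), ('output', 'port 6')]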
Action List
• “Apply-action” , “packet-out” messages include action
list
• Execute an action immediately
• Actions are executed sequentially in the order they
have been specified
• If action list contains an output action, a copy of the
packet is forwarded in its current state to the desired
port
• Action-set shouldn’t be changed because of action-list
Actions
• What to do with a packet when the match criteria match the packet
• Some of the action types:
– Output: forward a packet to the specified OpenFlow port (physical / logical / reserved)
– Set Queue: set the queue id of the port; determines which queue should be used for scheduling and forwarding the packet
– Drop: packets whose action set has no output action should be dropped
– Group: process the packet through the specified group
– Push-Tag / Pop-Tag: insert or remove a VLAN or MPLS tag
– Set-Field: rewrite a field in the packet header
– Change TTL: decrement TTL
2. Distributed State Abstraction
• Shield control mechanisms from state distribution
– While allowing access to this state
• Natural abstraction: global network view
– Annotated network graph provided through an API
• Implemented with “Network Operating System”
• The control mechanism is now a program using this API
– No longer a distributed protocol, now just a graph algorithm
– E.g., use Dijkstra rather than Bellman-Ford
Major Change in Paradigm
• No longer designing distributed control protocols
– Design one distributed system (NOS)
– Use for all control functions
• Now just defining a centralized control function
Configuration = Function(view)
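For instance, shortest-path routing becomes an ordinary graph computation over the global view: run Dijkstra on the annotated graph and emit per-switch forwarding entries, i.e., Configuration = Function(view). The sketch below is illustrative and uses no controller-specific API; the switch and host names are made up.

# Illustrative: configuration = function(global network view).
import heapq

def shortest_path_next_hops(view, dest):
    """view: undirected annotated graph {node: {neighbor: link_cost}}.
    Returns {node: next_hop_toward_dest}, computed with Dijkstra from dest."""
    dist = {dest: 0}
    next_hop = {}
    pq = [(0, dest, dest)]           # (distance, node, node's next hop toward dest)
    while pq:
        d, node, hop = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue                  # stale queue entry
        next_hop[node] = hop
        for nbr, cost in view[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr, node))
    return next_hop

# Global network view: switches s1..s3; suppose host h attaches to s3.
view = {"s1": {"s2": 1, "s3": 5}, "s2": {"s1": 1, "s3": 1}, "s3": {"s1": 5, "s2": 1}}
config = {sw: {"match": "dst=h", "action": "output toward " + nh}
          for sw, nh in shortest_path_next_hops(view, "s3").items() if sw != "s3"}
print(config)   # e.g. s1 forwards toward s2, s2 forwards toward s3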
3. Specification Abstraction
• Control program should express desired behavior
• It should not be responsible for implementing that
behavior on physical network infrastructure
• Natural abstraction: simplified model of network
– Simple model with only enough detail to specify goals
• Requires a new shared control layer:
– Map abstract configuration to physical configuration
• This is “network virtualization”
Simple Example: Access Control
[Figure: the abstract network model captures the "what" (the desired access-control policy); the global network view captures the "how" (its realization on the physical network)]
Software Defined Network: Take 2
[Figure: control program → abstract network model → network virtualization → global network view → Network OS]
What Does This Picture Mean?
• Write a simple program to configure a simple
model
– Configuration merely a way to specify what you want
• Examples
– ACLs: who can talk to whom
– Isolation: who can hear my broadcasts
– Routing: only specify routing to the degree you care
• Some flows over satellite, others over landline
– TE: specify in terms of quality of service, not routes
• Virtualization layer “compiles” these requirements
– Produces suitable configuration of actual network devices
• NOS then transmits these settings to physical
boxes
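A toy illustration of this "compilation" step: the operator states who can talk to whom on a one-big-switch model, and the virtualization layer turns that into drop rules at the physical edge switches found in the global view. All names here are made up for illustration.

# Illustrative "compiler" from an abstract ACL to physical edge-switch rules.

# Abstract model: one big switch; the policy says which host pairs may communicate.
allowed = {("h1", "h2"), ("h2", "h1")}        # h3 is isolated

# Global network view: where each host attaches physically (switch, port).
attachment = {"h1": ("s1", 1), "h2": ("s2", 3), "h3": ("s2", 4)}

def compile_acl(allowed, attachment):
    rules = []
    hosts = list(attachment)
    for src in hosts:
        sw, port = attachment[src]
        for dst in hosts:
            if src != dst and (src, dst) not in allowed:
                # Block disallowed traffic at the source's ingress edge port.
                rules.append((sw, {"in_port": port, "dst_host": dst}, "drop"))
    return rules

for sw, match, action in compile_acl(allowed, attachment):
    print(sw, match, action)   # the NOS would push these to the physical switches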
Software Defined Network: Take 2
[Figure: the control program specifies behavior on the abstract network model; network virtualization compiles it onto the topology in the global network view; the Network OS transmits the resulting settings to the switches]
Two Example Uses
• Scale-out router:
– Abstract view is a single router
– Physical network is a collection of interconnected switches
– Allows routers to "scale out, not up"
– Use standard routing protocols on top
• Multi-tenant networks:
– Each tenant has control over their "private" network
– Network virtualization layer compiles all of these individual control requests into a single physical configuration
• Hard to do without SDN, easy (in principle) with SDN
SDN Controller: Network OS
• Questions:
– How to obtain global information?
– What are the configurations?
– How to implement?
– How is the scalability?
– How does it really work?
• There are a number of SDN controllers out there
– Single Machine: NOX, POX, Ryu, Floodlight, …
– Multi-Machine (Distributed): OpenDaylight, ONOS, ...
SDN Controller Design
How to design a Network Operating System?
• What features or "abstractions" should be provided by this "Network Operating System"?
– In particular, what should be the "global network view" & "programmatic interfaces" provided to control apps?
– And what "low-level" details should be handled by the Network OS? What is the granularity of control allowed to "apps"?
• Analogies (& possible differences?):
– computer OS and (high-level) programming models
• computer architecture: instruction sets, CPU, memory, disks, I/O devices, ...
• (high-level) programming language constructs: statements, data types, functions, …
• OS: (virtual) memory, processes, I/O and drivers, system calls, …
– (distributed) file systems (or databases or data stores)
• files, directories & permissions, transactions, relations & schemas; vs. disks, ….
SDN Controller Design Questions
Some Key Questions & Issues:
• How to obtain global (network-wide) information? How to perform distributed state management?
– time scales of state change dynamics? consistency issues?
• What are the configurations? Abstractions & APIs?
• How to implement such a Network OS?
– And will it really work? E.g., response time & other performance issues?
• How to program control apps?
– E.g., an SDN programming language?
• Will it scale?
– Not only in terms of network size, but also # of flows, control apps, etc.?
• What about reliability & security issues?
• … (e.g., inter-operability, evolvability)
Are there some fundamental design principles we can adopt & apply?
NOX Case Study
• 1st open-source network OS, implemented in C++ by Stanford
• Components:
– NOX controller on a PC server
– network view (database)
– control app processes
• Network View:
– switch-level topology
– locations of users, hosts, middleboxes & other network elements
– services offered (e.g., web, NFS)
– bindings between names & addresses
– but NOT the current state of network traffic
• Control granularity
– flow-level (as opposed to packet-level, or network-prefix level)
– control exerted on flow initiation: e.g., 1st packet of a flow (following packets treated the same)
Time Scales & Control Granularity
• Time scales (in conventional networks); "events" & control granularity
– Packet arrivals: millions of arrivals per sec (on a 10G link)
– Flow initiations: one or more orders of magnitude less than packet arrivals (the notion of "flows" is more "persistent" than NetFlow)
– Changes in the "network views": order of 10s of events per second for a network of thousands of hosts
• Scaling? network view vs. per-flow vs. per-packet states?
Programmatic Interface
• Event-based:
– Events: flows arrive, users come/go, links go up/down, etc.
• Some events are directly generated by OpenFlow switches, e.g., switch join/leave, packet received, switch stats received
• Others by other services/applications: e.g., user authenticated
– NOX applications use a set of event handlers that are registered for execution when a particular event happens
– Event handlers are executed in the order of their priority
• specified during handler registration (but how to determine priority?)
• Network View and Namespaces
– NOX includes "base" applications to construct the network view and maintain a high-level namespace used by other applications
• e.g., various "name-address" bindings
– Applications can be written in a "topology-independent" manner, then "compiled" against the network view to produce low-level "look-up" functions to be enforced per-packet
• Also includes "high-level" services ("system libraries")
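A sketch of this event-driven programming model, with invented names rather than the actual NOX API: applications register handlers for named events, and handlers fire in priority order when an event is raised.

# Illustrative sketch of a NOX-style event dispatcher (names are invented).
handlers = {}   # event name -> list of (priority, handler)

def register(event, handler, priority=0):
    handlers.setdefault(event, []).append((priority, handler))
    handlers[event].sort(key=lambda h: -h[0])    # higher priority runs first

def raise_event(event, **kwargs):
    for _, handler in handlers.get(event, []):
        if handler(**kwargs) == "stop":          # a handler may consume the event
            break

# Two apps handling flow initiations: authentication first, then routing.
register("flow_in", lambda **e: print("authenticate", e["src"]), priority=10)
register("flow_in", lambda **e: print("route", e["src"], "->", e["dst"]), priority=1)
raise_event("flow_in", src="h1", dst="h2")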
Example I: User-Based VLAN Tagging
Example II: Simple Scan Detection
ONOS: Architecture Tiers
• Northbound abstraction: network graph, application intents
– Apps
– Northbound: Application Intent Framework (policy enforcement, conflict resolution)
• Core: distributed, protocol independent
– Distributed Core (scalability, availability, performance, persistence)
• Southbound abstraction: generalized OpenFlow, pluggable & extensible
– Southbound (discover, observe, program, configure)
– OpenFlow, NetConf, ...
ONOS: Distributed Architecture
for Fault Tolerance
[Figure: Apps → NB Core API → Distributed Core (state management, notifications, high-availability & scale-out) → SB Core API → Adapters → Protocols]
SDN Summary
• General forwarding model (data plane abstraction)
– Currently based on the OpenFlow (flow-level) forwarding model
• prioritized rules [header: counters, actions]: match → actions
– assume forwarding elements provide (standardized) APIs
• install and manipulate forwarding tables, perform match and actions, & collect stats, etc.
• Logically centralized control plane (a "network OS")
– serve as a "network operating system"
• provide distributed state management, map control logic to data plane actions, etc.
– provide a "global network view" to (high-level) "control apps"
• enable "higher-level" abstractions to hide "lower-level" details
• Control apps operate on higher-level abstractions
– control apps focus on "control logic" using network OS APIs
– Hopefully, much easier to write, verify and maintain!
Does SDN Work?
• Is it scalable? Yes
• Is it less responsive? No
• Does it create a single point of failure? No
• Is it inherently less secure? No
• Is it incrementally deployable? Yes
SDN: Clean Separation of Concerns
• Control program: specify behavior on the abstract model
– Driven by operator requirements
• Network virtualization: map the abstract model to the global view
– Driven by the specification abstraction
• NOS: map the global view to physical switches
– API: driven by the distributed state abstraction
– Switch/fabric interface: driven by the forwarding abstraction
SDN Architecture Overview
(ONF v1.0)
A Short History of SDN
 ~2004: Research on new management paradigms
• RCP, 4D [Princeton, CMU,….]
• SANE, Ethane [Stanford/Berkeley]
 2008: Software-Defined Networking (SDN)
 NOX Network Operating System [Nicira]
 OpenFlow switch interface [Stanford/Nicira]
 2011: Open Networking Foundation (~69 members)
• Board: Google, Yahoo, Verizon, DT, Msoft, F’book, NTT
• Members: Cisco, Juniper, HP, Dell, Broadcom, IBM,…..
 2012: Latest Open Networking Summit
• Almost 1000 attendees, Google: SDN used for their
WAN
• Commercialized, in production use (few places)
Useful References
• OpenFlow Wiki
• OpenFlow Specification 1.3.0
• OpenFlow Specification 1.5.0
• Old versions of OpenFlow
• Mininet
• Open vSwitch