Transcript - GENI Wiki
Ilya Baldin
RENCI, UNC-CH
Victor Orlikowski
Duke University
First things first
1. IMPORT THE OVA INTO VIRTUALBOX
2. LOGIN as gec20user/gec20tutorial
3. START ORCA ACTORS
sudo /etc/init.d/orca_am+broker-12080 clean-restart
sudo /etc/init.d/orca_sm-14080 clean-restart
sudo /etc/init.d/orca_controller-11080 start
4. WAIT AND LET IT CHURN – THIS IS ALL OF EXOGENI IN ONE VIRTUAL MACHINE!
IT WILL TAKE SEVERAL MINUTES!
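Once the actors have been started, a quick way to confirm they are up is to check that something is listening on the actor ports (the port numbers below are taken from the actor names above; this is a convenience check, not an official tutorial step):
sudo netstat -tlnp | grep -E ':(11080|12080|14080)'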
13 GPO-funded racks built by IBM plus several “opt-in” racks
◦ Partnership between RENCI, Duke and IBM
◦ IBM x3650 M4 servers (X-series 2U)
1x146GB 10K SAS hard drive + 1x500GB secondary drive
48GB RAM, 1333MHz
Dual-socket 8-core CPU
Dual 1Gbps adapter (management network)
10G dual-port Chelsio adapter (dataplane)
◦ BNT 8264 10G/40G OpenFlow switch
◦ DS3512 6TB or server w/ drives totaling 6.5TB sliverable storage
iSCSI interface for head node image storage as well as experimenter slivering
Cisco (WVN, NCSU, GWU) and Dell (UvA) configurations also exist
Each rack is a small networked cloud
http://wiki.exogeni.net
◦ OpenStack-based with NEuca extensions
◦ xCAT for baremetal node provisioning
5 upcoming racks at TAMU, UMass Amherst, WSU, UAF and PSC not shown
[Rack hardware diagram: a management node and ten worker nodes (IBM x3650 M4); a management switch (IBM G8052R) carrying 1Gbps remote management and iSCSI storage traffic, with a 1Gbps uplink to the campus Layer 3 network; an OpenFlow-enabled L2 switch (IBM G8264R) with a 10/40/100Gbps dataplane to the dynamic circuit backbone and a dataplane to the campus OpenFlow network; sliverable storage (IBM DS3512); a VPN appliance (Juniper SSG5). Backbone connectivity is either static VLAN segments provisioned to the backbone or direct L2 peering with the backbone.]
CentOS 6.X base install
Resource Provisioning
◦ xCAT for bare metal provisioning
◦ OpenStack + NEuca for VMs
◦ FlowVisor
Floodlight used internally by ORCA
GENI Software
◦ ORCA for VM, bare metal and OpenFlow
◦ FOAM for OpenFlow experiments
Monitoring via Nagios (Check_MK)
◦ ExoGENI ops staff can monitor all racks
◦ Site owners can monitor their own rack
Syslogs collected centrally
Worker and head nodes can be reinstalled remotely via IPMI + KickStart
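The remote reinstall mentioned above is driven over IPMI; a minimal sketch with ipmitool (the BMC address and credentials are placeholders, and the exact ExoGENI procedure may differ):
# point the node at PXE for the next boot, then power-cycle it
ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> chassis bootdev pxe
ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> chassis power cycle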
OpenStack
◦ Currently Folsom based on early release of RHOS
◦ Patched to support ORCA
Additional nova subcommands
Quantum plugin to manage Layer 2 networking between VMs
◦ Manages creation of VMs with multiple L2 interfaces attached to the high-bandwidth L2 dataplane switch
◦ One “management” interface created by nova attaches to the management switch for low-bandwidth experimenter access
Quantum plugin
◦ Creates and manages 802.1q interfaces on worker nodes to attach VMs
to VLANs
◦ Creates and manages OVS instances to bridge interfaces to VLANs
◦ DOES NOT MANAGE MANAGEMENT IP ADDRESS SPACE!
◦ DOES NOT MANAGE THE ATTACHED SWITCH!
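Roughly, what the plugin does on a worker node is equivalent to the following manual steps (the NIC name, bridge name and VLAN tag are made up for illustration; the plugin drives this programmatically, not via a shell):
# create an 802.1q sub-interface for VLAN 1001 on the dataplane NIC
ip link add link eth1 name eth1.1001 type vlan id 1001
ip link set eth1.1001 up
# create an OVS bridge and attach the VLAN sub-interface;
# the VM's dataplane vNIC is plugged into the same bridge
ovs-vsctl add-br br1001
ovs-vsctl add-port br1001 eth1.1001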
xCAT
Manages booting of bare-metal nodes for users and installation of OpenStack workers for sysadmins
Stock software
Upgrading the rack means
◦ Upgrading the head node (painful)
◦ Using xCAT to update worker nodes (easy!)
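For example, re-imaging a worker with xCAT amounts to something like this (the node and osimage names are placeholders, not the actual names used on the racks):
# select the OS image for the node, then netboot it
nodeset worker-1 osimage=centos6-x86_64-install-compute
rpower worker-1 boot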
FlowVisor
◦ Used by ORCA to “slice” the OpenFlow part of the switch for experiments via API
Typically along <port, vlan tag> dimensions
◦ For emulating VLAN behavior, ORCA starts Floodlight instances attached to slices
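As a rough sketch, slicing along <port, vlan tag> dimensions looks like this with FlowVisor's fvctl tool (the slice name, datapath ID, controller URL and VLAN tag are placeholders, fvctl syntax varies across FlowVisor versions, and ORCA talks to the FlowVisor API directly rather than using the CLI):
# create a slice whose traffic is handled by a Floodlight instance
fvctl add-slice slice-1001 tcp:10.0.0.1:6633 admin@example.net
# hand that slice all traffic tagged with VLAN 1001 on one datapath
fvctl add-flowspace fs-1001 00:00:00:00:00:00:00:01 100 dl_vlan=1001 slice-1001=7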
Floodlight
◦ Stock v. 0.9 packaged as a jar
◦ Started with parameters that minimize JVM footprint
◦ Uses ‘forwarding’ module to emulate learning switch behavior on a VLAN
FOAM
◦ Translator from GENI AM API + RSpec to FlowVisor
◦ Does more, but not in ExoGENI
Control framework
◦ Orchestrates resources in response to user requests
◦ Provides operator visibility and control
Presents multiple APIs
◦ GENI API
Used by GENI experimenter tools (Flack, omni)
◦ ORCA API
Used by Flukes experimenter tool
◦ Management API
Used by Pequod administrator tool
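For instance, with the GENI omni tool an experimenter talks to an ExoGENI SM roughly like this (the aggregate URL is a placeholder; the slice name and RSpec file are whatever the experimenter chooses):
# list advertised resources at an ExoGENI aggregate
omni.py -a https://some-sm.exogeni.net:11443/orca/xmlrpc listresources
# create a sliver in an existing slice from an RSpec request
omni.py -a https://some-sm.exogeni.net:11443/orca/xmlrpc createsliver myslice request.rspec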
[Slice example diagram: an OpenFlow-enabled L2 topology with cloud hosts under slice network control; the embedding is computed; a virtual network exchange and a virtual colo connect the campus net to the circuit fabric. A slice owner may deploy an IP network (e.g., OSPF) into the slice.]
[ORCA deployment diagram: user-facing ORCA agents, brokering services, and sites (compute, storage, network/edge, network/transit). 1. Sites delegate resources to brokers. 2. A user provisions a dynamic slice of networked virtual infrastructure from multiple providers, built to order for a guest application, and stitches the slice into an end-to-end execution environment. Topology requests are specified in NDL.]
An actor encapsulates a piece of logic
◦ Aggregate Manager (AM) – owner of the resources
◦ Broker – partitions and redistributes resources
◦ Service Manager – interacts with the user
A Controller is a separable piece of logic encapsulating topology
embedding and presenting remote APIs to users
A container stores some number of actors, connects
them to
◦ The outside world using remote API endpoints
◦ A database for storing their state
Any number of actors can share a container
A controller is *always* by itself
Tickets, leases and reservations are used somewhat
interchangeably
◦ Tickets and leases are kinds of reservation
A ticket indicates the right to instantiate a resource
A lease indicates ownership of instantiated resources
AM gives tickets to brokers to indicate delegation of
resources
Brokers subdivide the given tickets into other smaller
tickets and give them to SMs upon request
SMs redeem tickets with AMs and receive leases which
indicate which resources have been instantiated for them
Slices consist of reservations
Slices are identified by GUID
◦ They do have user-given names as an attribute
Reservations are identified by GUIDs
◦ They have additional properties that describe
Constraints
Details of resources
Each reservation belongs to a slice
Slice and reservation GUIDs are the same across all actors
◦ Ticket issued by broker to a slice
◦ Ticket seen on SM in a slice, becomes a lease with the same GUID
◦ Lease issued by AM for a ticket to a slice
[Ticket/lease message flow diagram (USER, CONTROLLER, SM, BROKER, AMs): 1. Delegation of tickets (AM to broker); 2. Request slice; 3. Request resources; 4. Request resources; 5. Provide tickets; 6. Redeem tickets; 7. Redeem tickets; 8. Return leases.]
ORCA actor configuration
◦ ORCA looks for configuration files relative to the $ORCA_HOME environment variable
◦ /etc/orca/am+broker-12080
◦ /etc/orca/sm-14080
ORCA controller configuration
◦ Similar, except everything is in reference to
$CONTROLLER_HOME
◦ /etc/orca/controller-11080
Actor configuration
config/orca.properties – describes the container
config/config.xml – describes the actors in the
container
config/runtime/ - contains keys of actors
config/ndl/ - contains NDL topology descriptions of
actor topologies (AMs only)
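On the tutorial VM this layout can be inspected directly, assuming $ORCA_HOME for the AM+broker container points at /etc/orca/am+broker-12080:
export ORCA_HOME=/etc/orca/am+broker-12080
ls $ORCA_HOME/config
# expect orca.properties, config.xml, runtime/ and ndl/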
Controller configuration
config/controller.properties – similar to orca.properties; describes the container for the controller
geni-trusted.jks – Java truststore with trust roots for
XMLRPC interface to users
xmlrpc.jks – Java keystore with the keypair for this
controller to use for SSL
xmlrpc.user.whitelist, xmlrpc.user.blacklist – lists of user URN regex patterns that should be allowed/denied
◦ Blacklist parsed first
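For illustration, a whitelist entry is just a regular expression over GENI user URNs; this example pattern, which would admit any user whose URN was issued by ch.geni.net, is an assumed format rather than a shipped default:
urn:publicid:IDN\+ch\.geni\.net\+user\+.*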
[Global ExoGENI actors diagram: global actors (ExoSM, ExoBroker) and network AMs (DD AM, BEN AM, OSCARS AM, SL AM, LEARN AM) hosted on rci-hn.exogeni.net, reaching DOE ESnet, I2/AL2S, StarLight, LEARN, Departure Drive and BEN; per-rack actors (SM, AM, broker) on each rack head node, e.g., the RCI XO rack and the BBN XO rack (bbn-hn.exogeni.net, static VLANs 2601-2610), attached via OpenFlow switches to campus OpenFlow networks and other XO racks.]
AMs and brokers have ‘special’ inventory slices
◦ AM inventory slice describes the resources it owns
◦ Broker inventory slice describes resources it was given
AMs also have slices named after the broker they
delegated resources to
◦ Describe resources delegated to that broker
Inspect existing slices on different actors using Pequod
Create an inter-rack slice in Flukes
Inspect slice in Flukes
Inspect slice in
◦ SM
◦ Broker
◦ AMs
Close reservations/slice on various actors
http://www.exogeni.net
http://wiki.exogeni.net