DYNES - Internet2


October 3rd 2011 – Fall Member Meeting
Eric Boyd, Internet2
Jason Zurawski, Internet2
DYnamic NEtwork System (DYNES)
NSF #0958998
Agenda
• DYNES Overview and Motivation
• DYNES Hardware and Software
• Current Status
• Next Steps
• Recent Discussion
– Equipment Choice
– LHCONE/NDDI/OS3E
• Demonstrations
– GLIF 2011
– USATLAS Facilities Meeting
– SC11
• Conclusion
DYNES Motivation
• Data movement to support science:
– Increasing in size (100s of TBs in the LHC World,
approaching PB sizes)
– Becoming more frequent (multiple times per day)
– Reaching more consumers (VO sizes stand to
increase, more VOs)
– Time sensitivity (data may grow “stale” if not
processed immediately)
• Traditional networking:
– R&E or Commodity “IP” connectivity is subject to
congestion by other users
– Supporting large sporadic flows is challenging for
the engineers, and frustrating for the scientists
DYNES Motivation
• Solution
– Dedicated bandwidth (over the entire end to end path)
to move scientific data
– Invoke this “on demand” instead of relying on
permanent capacity (cost, complexity)
– Exists in harmony with traditional IP networking
– Connect to facilities that scientists need to access
– Integration with data movement applications
• Invoke the connectivity when they need it, based on network conditions
• Proposed Deployment:
– Software and hardware support spanning domain
boundaries
• Campus
• Regional
• Backbone
– Integration with existing technologies and
deployments
DYNES Generic Topology – Access to Resources
DYNES Summary
• What is it?
– A nationwide cyber-instrument spanning ~40 US
universities and ~14 Internet2 connectors
• Extends Internet2’s ION service into regional networks
and campuses, based on OSCARS implementation of IDC
protocol (developed in partnership with ESnet)
• High-performance file store at sites
• Who is it?
– A collaborative team including Internet2, Caltech,
University of Michigan, and Vanderbilt University
– Community of regional networks and campuses
– LHC, astrophysics community, OSG, WLCG, other
virtual organizations
DYNES Community Support
• What are the goals?
– Support large, long-distance scientific data flows
• LHC
• LIGO
• Virtual Observatory
– Build a distributed virtual instrument
• Internet2 received a total of 60 Letters of Collaboration
representing potential DYNES sites and their
collaborators
– 44 Universities (some duplicates)
– 14 Regional Networks
– 1 Virtual Organization
– 1 Federal Lab
• Total Funding of $1.74 Million
– Original Request of $2 Million
DYNES Participants
• Application process required to establish participants
– Submit applications to gauge institutional/network
interest
– Encourage discussion with PIs to advance
understanding of the scientific use cases
• Deployment announcements made in February 2011:
– 25 End Sites
– 8 Regional Networks
– Collaboration with like-minded efforts (DoE ESCPS)
• Plans to consider provisional applications (send email
to [email protected] if interested)
DYNES Projected Topology (October 2011)
Agenda
• DYNES Overview and Motivation
• DYNES Software and Hardware
• Current Status
• Next Steps
• Recent Discussion
– Equipment Choice
– LHCONE/NDDI/OS3E
• Demonstrations
– GLIF 2011
– USATLAS Facilities Meeting
– SC11
• Conclusion
DYNES Hardware
• Inter-domain Controller (IDC) Server and Software
– IDC creates virtual LANs (VLANs) dynamically between
the FDT server, local campus, and wide area network
– IDC software is based on the OSCARS and DRAGON
software which is packaged together as the DCN
Software Suite (DCNSS)
– DCNSS version correlates to stable tested versions of
OSCARS. The current version of DCNSS is v0.5.4.
– Initial DYNES deployments will include both DCNSSv0.6
and DCNSSv0.5.4 virtual machines
• Currently XEN based
• Looking into KVM for future releases
• A Dell R410 1U Server has been chosen, running
CentOS 5.x
DYNES Standard Equipment
• Fast Data Transfer (FDT) server
– The Fast Data Transfer (FDT) server connects to the disk array via the SAS controller and runs the FDT software
– The FDT server also hosts the DYNES Agent (DA) software
– The standard FDT server will be a Dell R510 server with a dual-port Intel X520 DA NIC. The server will use a PCIe Gen 2.0 x8 card along with 12 disks for storage.
• DYNES Ethernet switch options:
– Dell PC6248 (48 1GE ports, 4 10GE-capable ports (SFP+, CX4, or optical))
– Dell PC8024F (24 10GE SFP+ ports, 4 “combo” ports
supporting CX4 or optical)
DYNES Software
• Dynamic Circuit Control
– OSCARS
– ION Service
• Monitoring
– perfSONAR Circuit Monitoring
• Data Movement
– FDT
– ESCPS
DYNES Software – ION/OSCARS
• OSCARS v0.5.4
– Released March 14
– Features
• VLAN translation to allow integration into existing network
deployments
• Robust handling of circuit creation and failures
• Numerous Bugfixes
• Additional Documentation/Installation Guidance
• Security enhancements
• OSCARS v0.6
– Anticipated Fall 2011
– Features:
• Major re-write of the underlying codebase by ESnet
• Modular, web-services based design
• Integration with perfSONAR monitoring framework
• DYNES will deploy OSCARS v0.5 and transition to OSCARS v0.6
DYNES Software – Monitoring Dynamic Circuits
• perfSONAR Monitoring
– Framework designed to monitor end to end
performance
– Early focus – Layer 3 measurements
– New projects
• Describing/mapping network topology at all layers
• Monitoring Layer 2 circuits (dynamic and static)
• Several collaborations working on this problem
– OGF Working Groups (NML, NMC, NSI)
– GLIF Working Groups
– Joint effort in the DICE (DANTE, Internet2, CANARIE/Caltech, ESnet) collaboration
DYNES Software – Monitoring Dynamic Circuits
• If a failure occurs, what can a user do?
Monitoring Dynamic Circuits
• Goal: enable users to get measurements on their circuits while allowing each domain to provide as much or as little information to the user as it wants
• Develop a solution in collaboration with other groups and organizations
including DANTE, ESnet, the Network Markup Language Working Group
and the Network Measurement Control Working Group
– Broad agreement ensures that users can monitor their circuits, no
matter what domains they traverse
• Multi-faceted approach
– Enable domains to export monitoring data about circuits
– Enable users to discover the domains that make up their circuit, and the
monitoring data those domains contain about the circuit
• Leverage the standard perfSONAR infrastructure when available
Circuit Monitoring Agent
• This agent is the “glue” that connects a domain’s provisioning software (OSCARS) and monitoring infrastructure with the perfSONAR services, so that users can find information about circuit statistics
• When new circuits are brought up, the agent looks at the
intra-domain path for the circuit, and builds a description
of that path.
– This description is then registered into a perfSONAR Topology
Service
• Needs to know how the domain monitors its devices to
ensure an appropriate description of the circuit
– If configured, the agent can use a user-defined script to start
circuit monitoring
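As a rough sketch of that flow (not the actual agent code), the steps can be pictured in Python as below; the path lookup, topology-service URL, and monitoring hook are hypothetical placeholders for the real OSCARS and perfSONAR interfaces.

import os
import subprocess
from dataclasses import dataclass
from typing import List

TOPOLOGY_SERVICE_URL = "https://topology.example.net/register"   # placeholder URL
MONITORING_HOOK = "/usr/local/bin/start_circuit_monitoring"       # optional, site-defined script

@dataclass
class Hop:
    device: str      # router/switch name
    interface: str   # physical or VLAN interface carrying the circuit
    vlan: int

def fetch_intradomain_path(circuit_id: str) -> List[Hop]:
    # Placeholder: a real agent would ask the domain's provisioning software
    # (OSCARS) for the intra-domain hops; here we return a canned example path.
    return [Hop("switch-a", "xe-0/0/1.3000", 3000),
            Hop("switch-b", "xe-1/0/5.3000", 3000)]

def register_topology(circuit_id: str, path: List[Hop]) -> None:
    # Placeholder: a real agent would push this description into a perfSONAR
    # Topology Service so users can later discover what to monitor.
    description = {"circuit": circuit_id, "hops": [vars(h) for h in path]}
    print(f"would register with {TOPOLOGY_SERVICE_URL}: {description}")

def on_circuit_up(circuit_id: str) -> None:
    # Called when the provisioning software reports that a new circuit is up.
    path = fetch_intradomain_path(circuit_id)
    register_topology(circuit_id, path)
    # If configured, run a user-defined script to start per-circuit monitoring.
    if os.path.exists(MONITORING_HOOK):
        subprocess.run([MONITORING_HOOK, circuit_id], check=False)

if __name__ == "__main__":
    on_circuit_up("circuit-demo-1")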
Router/Switch Monitoring Component
• Everyone has their own method of monitoring their
hardware
• Define the needed functionality instead of requiring a specific
solution
– Offer a specific solution to users who want to use it
• Requirements:
– Software that can measure the operational status and utilization
of the elements making up the circuit
– These measurements are made available using standard
perfSONAR protocols
• As long as the monitoring meets the above requirements, it can
be made to work in the Circuit Monitoring infrastructure
Router/Switch Monitoring Component
• Specific Solution: ESxSNMP
– Developed by Jon Dugan at ESnet
– Uses SNMP to monitor operational status and utilization statistics
for all equipment elements, including physical interfaces, VLAN
interfaces and LSPs
– These interface statistics are then made available using the
perfSONAR-PS SNMP MA
– This software will be packaged for easy installation
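To make the requirement above concrete, here is a minimal sketch (Python) of the kind of data such monitoring produces: per-interface utilization derived from successive SNMP octet-counter readings. poll_octets() stands in for whatever SNMP poller a domain actually runs (e.g. ESxSNMP); this is not the ESxSNMP or perfSONAR SNMP MA API.

import time
from typing import Tuple

def poll_octets(device: str, interface: str) -> int:
    # Placeholder for an SNMP read of IF-MIB::ifHCInOctets; a real deployment
    # would get this from the site's SNMP collector.
    return int(time.time() * 1_000_000) % 10**12

def utilization_bps(prev: Tuple[float, int], curr: Tuple[float, int]) -> float:
    # Convert two (timestamp, octet-counter) samples into bits per second.
    (t0, c0), (t1, c1) = prev, curr
    if t1 <= t0 or c1 < c0:   # skip the sample on counter wrap or clock problems
        return 0.0
    return (c1 - c0) * 8 / (t1 - t0)

def collect(device: str, interface: str, samples: int = 3, interval: float = 1.0) -> None:
    prev = (time.time(), poll_octets(device, interface))
    for _ in range(samples):
        time.sleep(interval)
        curr = (time.time(), poll_octets(device, interface))
        # A real agent would write this into a measurement archive served by the
        # perfSONAR-PS SNMP MA; here we just print it.
        print(f"{device} {interface}: {utilization_bps(prev, curr) / 1e9:.3f} Gbit/s")
        prev = curr

if __name__ == "__main__":
    collect("switch-a", "xe-0/0/1")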
DYNES Software – FDT
• The DYNES Agent (DA) will provide the functionality to request the circuit instantiation, initiate and manage the data transfer, and terminate the dynamically provisioned resources. Specifically, the DA will do the following:
– Accept a user request in the form of a DYNES Transfer URL indicating the data location and ID
– Locate the remote-side DYNES EndPoint Name embedded in the Transfer URL
– Submit a dynamic circuit request to its home InterDomain Controller (IDC), using its local DYNES EndPoint Name as the source and the DYNES EndPoint Name from the Transfer URL as the destination
– Wait for confirmation that the dynamic circuit has been established
– Start and manage the data transfer using the appropriate DYNES Project IP addresses
– Initiate release of the dynamic circuit upon completion
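A minimal Python sketch of this sequence may make the flow concrete. Everything here is illustrative: the transfer-URL layout, the endpoint naming, and the request_circuit()/circuit_is_up()/release_circuit() helpers are placeholders, not the actual DA, FDT, or OSCARS/IDC interfaces.

import time
from typing import Tuple
from urllib.parse import urlparse

LOCAL_ENDPOINT = "dynes://site-a.example.edu"   # hypothetical local EndPoint Name

def parse_transfer_url(url: str) -> Tuple[str, str]:
    # Split a hypothetical transfer URL into (remote endpoint, data id),
    # e.g. dynes://site-b.example.edu/dataset42
    parsed = urlparse(url)
    return f"{parsed.scheme}://{parsed.netloc}", parsed.path.lstrip("/")

def request_circuit(src: str, dst: str) -> str:
    # Placeholder: submit a dynamic circuit request to the home IDC.
    print(f"requesting circuit {src} -> {dst}")
    return "circuit-123"

def circuit_is_up(circuit_id: str) -> bool:
    # Placeholder: poll the IDC for circuit status.
    return True

def run_fdt_transfer(data_id: str, circuit_id: str) -> None:
    # Placeholder: start FDT over the circuit's DYNES project IP addresses.
    print(f"transferring {data_id} over {circuit_id}")

def release_circuit(circuit_id: str) -> None:
    print(f"releasing {circuit_id}")

def handle_request(transfer_url: str) -> None:
    remote_endpoint, data_id = parse_transfer_url(transfer_url)
    circuit_id = request_circuit(LOCAL_ENDPOINT, remote_endpoint)
    while not circuit_is_up(circuit_id):   # wait for setup confirmation
        time.sleep(5)
    try:
        run_fdt_transfer(data_id, circuit_id)
    finally:
        release_circuit(circuit_id)        # always tear the circuit down

if __name__ == "__main__":
    handle_request("dynes://site-b.example.edu/dataset42")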
DYNES Data Flow Overview
Agenda
• DYNES Overview and Motivation
• DYNES Hardware and Software
• Current Status
• Next Steps
• Recent Discussion
– Equipment Choice
– LHCONE/NDDI/OS3E
• Demonstrations
– GLIF 2011
– USATLAS Facilities Meeting
– SC11
• Conclusion
DYNES Phase 1 Project Schedule
• Phase 1: Site Selection and Planning (Sep-Dec
2010)
– Applications Due: December 15, 2010
– Application Reviews: December 15 2010-January 31
2011
– Participant Selection Announcement: February 1, 2011
• 33 Were Accepted in 2 categories
– 8 Regional Networks
– 25 Site Networks
DYNES Phase 2 Project Schedule
• Phase 2: Initial Development and Deployment (Jan 1-Jun 30, 2011)
– Initial Site Deployment Complete - February 28, 2011
• Caltech, Vanderbilt, University of Michigan, MAX,
USLHCnet
– Initial Site Systems Testing and Evaluation complete:
April 29, 2011
– Longer term testing (Through July)
• Evaluating move to CentOS 6
• New functionality in core software:
– OSCARS 6
– perfSONAR 3.2.1
– FDT Updates
DYNES Phase 3 Project Schedule
• Phase 3: Scale Up to Full-scale System Development (14 months) (July
1, 2011-August 31, 2012)
– Phase 3-Group A Deployment (9 Sites): March 1-Fall 2011
– Phase 3-Group B Deployment (13 Sites): July 31-Late Fall 2011
– Phase 3-Group C Deployment (11 Sites): July 18, 2011-Winter 2012
– Full-scale System Development, Testing, and Evaluation: Winter 2012-August 31, 2012
• Phase 4: Full-Scale Integration At-Scale; Transition to Routine O&M
(12 months) (September 1, 2012-August 31, 2013)
– DYNES will be operated, tested, integrated and optimized at scale,
transitioning to routine operations and maintenance as soon as this
phase is completed
Phase 3 – Group A Schedule Details
• Phase 3-Group A Deployment (10 Sites) (March 1-Late Fall
2011)
– Teleconferences and Planning with individual participants: March
28-May 2, 2011
• Completed initial telecons with all Group A members
• Subsequent interaction during installation
– Finalize Phase 3-Group A Equipment Order List: June 2011
– Place Equipment Order: July 2011
– Receive DYNES Equipment: Week of July 11th, 2011
– Configure and Test Individual Participant Configurations: Late July 2011
– Ship Phase 3-Group A Equipment to sites: Late July 2011
– Deploy and Test at Phase 3-Group A Sites: Through July 31, 2011
– Site Level configurations: Through Fall 2011 (delays due to local
factors for the most part)
Phase 3 Group A Members
• AMPATH
• Mid-Atlantic Crossroads (MAX)
– The Johns Hopkins University (JHU)
• Mid‐Atlantic Gigapop in Philadelphia for Internet2 (MAGPI)*
– Rutgers (via NJEdge)
– University of Delaware
• Southern Crossroads (SOX)
– Vanderbilt University
• CENIC*
– California Institute of Technology (Caltech)
• MREN*
– University of Michigan (via MERIT and CIC OmniPoP)
• Note: USLHCNet will also be connected to the DYNES instrument via a peering relationship with DYNES
* temp configuration of static VLANs until future group
Phase 3 – Group B Schedule Details
• Phase 3-Group B Deployment (15 Sites) (July 18, 2011-Late Fall 2011)
– Teleconferences and Planning with individual participants: 3rd and
4th Week of July 2011
• Completed initial telecons with all Group B members
• Subsequent interaction during installation
– Finalize Phase 3-Group B Equipment Order List: Sept 2011
– Place Equipment Order: Late Sept 2011
– Receive DYNES Equipment: Late Sept – Early Oct 2011
– Configure and Test Individual Participant Configurations: Oct 2011
– Ship Phase 3-Group B Equipment to sites: Expected Late Oct 2011
– Deploy and Test at Phase 3-Group B Sites: Expected Nov 2011
– Site Level configurations: Expected through Dec 2011
Phase 3 Group B Members
• Mid‐Atlantic Gigapop in Philadelphia for Internet2 (MAGPI)
– University of Pennsylvania
• Metropolitan Research and Education Network (MREN)
– Indiana University (via I-Light and CIC OmniPoP)
– University of Wisconsin Madison (via BOREAS and CIC OmniPoP)
– University of Illinois at Urbana‐Champaign (via CIC OmniPoP)
– The University of Chicago (via CIC OmniPoP)
• Lonestar Education And Research Network (LEARN)
– Southern Methodist University (SMU)
– Texas Tech University
– University of Houston
– Rice University
– The University of Texas at Dallas
– The University of Texas at Arlington
• Florida International University (Connected through FLR)
Phase 3 Group C Members
• Front Range GigaPop (FRGP)
– University of Colorado Boulder
• Northern Crossroads (NoX)
– Boston University
– Harvard University
– Tufts University
• CENIC**
– University of California, San Diego
– University of California, Santa Cruz
• CIC OmniPoP***
– The University of Iowa (via BOREAS)
• Great Plains Network (GPN)***
– The University of Oklahoma (via OneNet)
– The University of Nebraska‐Lincoln
** deploying own dynamic infrastructure
*** static configuration based
Agenda
• DYNES Overview and Motivation
• DYNES Hardware and Software
• Current Status
• Next Steps
• Recent Discussion
– Equipment Choice
– LHCONE/NDDI/OS3E
• Demonstrations
– GLIF 2011
– USATLAS Facilities Meeting
– SC11
• Conclusion
Next Steps
• Fall 2011
– Group C Deployment
– Group A, B, and PI site Testing
• Winter 2011
– Group A, B, C, and PI site Testing
– Software upgrades as needed
– Additional sites come online as funding allows
• 2012 - 2013
– Robustness and scalability testing
– Hardware evaluation – determine if refresh is
possible/necessary
– Outreach to other scientific communities
– Encouraging integration of basic ideas into other software
packages (e.g. coordination with other in-progress efforts)
Agenda
• DYNES Overview and Motivation
• DYNES Hardware and Software
• Current Status
• Next Steps
• Recent Discussion
– Equipment Choice
– LHCONE/NDDI/OS3E
• Demonstrations
– GLIF 2011
– USATLAS Facilities Meeting
– SC11
• Conclusion
Equipment Choice
• Standard Equipment Overview
• Our Choices
• Recent Comments/Discussions
Standard Equipment Overview
• IDC Server
– Inter-domain/Domain controller. Speaks with
OSCARS instances in other domains to arrange
circuit management
– Contains passive measurement tools (e.g. Circuit Monitoring)
• FDT Server
– Primary data movement server
– Available active measurement tools (OWAMP,
BWCTL)
• Switch
– Connects FDT and other resources, controlled by
IDC server.
Our Choices
• http://www.internet2.edu/ion/hardware.html
• IDC
– Dell R410 1U Server
– Dual 2.4 GHz Xeon (64 Bit), 16G RAM, 500G HD
– http://i.dell.com/sites/content/shared-content/data-sheets/en/Documents/R410-Spec-Sheet.pdf
• FDT
– Dell R510 2U Server
– Dual 2.4 GHz Xeon (64 Bit), 24G RAM, 300G Main,
12TB through RAID
– http://i.dell.com/sites/content/shared-content/data-sheets/en/Documents/R510-Spec-Sheet.pdf
• Switch
– Dell PC8024F or Dell PC6248
– 10G vs 1G Sites; copper ports and SFP+; Optics on a
site by site basis
– http://www.dell.com/downloads/global/products/pwcnt/en/PC_6200Series_proof1.pdf
– http://www.dell.com/downloads/global/products/pwcnt/en/switch-powerconnect-8024f-spec.pdf
Our Choices
• Why?
– The LHC community (e.g. 3 out of 4 PIs) has a good relationship with Dell
– Competitive pricing for all components and add-ons
– Long term support
– Streamlined ordering/customer support process
(personal representatives)
• PI sites ordered/installed April/May 2011
– Tested May/June
• Group A ordered July 2011
– Installation in July, Testing July - Sept
Recent Discussion
• Why Dell and not X?
– Prior Relationship in the LHC community
– Use of servers/switches in production
environments (1G and 10G)
– Pricing vs other vendors
• Switches are expensive
• Server choice was heavily influenced by experience
in LHC – need a server that can consistently
perform at 10G (NIC, CPU, and Disks)
Recent Discussion
• Performance Testing?
– See results on the web: http://www.internet2.edu/ion/docs/20110525Dell_R510_Benchmarks.pdf
– Tests were ‘LAN’ for the purposes of evaluating
disk performance
– ‘WAN’ tests between PI sites started April, will
continue for remainder of year. Things we are
testing:
• TCP vs UDP performance over ION
• Ability of Switches to cope with multiple concurrent
1G/10G flows
• Software robustness (OSCARS/perfSONAR for
infrastructure, FDT for data movement)
• Server robustness (e.g. Disk’s ability to sustain 10G
network performance)
Recent Discussion
• Performance Testing? (cont)
– Discussion on Performance WG list + Dynes Lists: “Big
Buffers for data movement”.
• Use Case: DYNES switch != Border router. Meant to live
in existing environment.
– Will handle multiple 1G flows (from storage/cluster
machines)
– Will handle multiple 10G flows from FDT server and other
devices
• ION network being upgraded to support more
bandwidth requests (late 2011), currently pinched in
some areas
• Yes, big buffers are important, no one will dispute this
(see evidence in past JTs and Member Meeting talks)
– Tradeoffs often must be made due to budget
considerations
Recent Discussion
• Performance Testing? (cont)
– Reality Check
• Big Iron switches have higher cost. Object of DYNES was
to connect as many sites (regionals and campuses) as
possible.
• Budget is designed to do this, with guidelines for cost
and capabilities of switches.
• Experience has shown current hardware can function
well in production environment based on our use case
Recent Discussion
• Performance Testing? (cont)
– Options available:
• Sites can choose to not take equipment if they have
doubts about functionality in their environment
• Sites can also choose to research other equipment,
DYNES is willing to work on ways to support individual
requests and work on funding options
– DYNES can’t simply write a check
– Cost must be similar to what we are paying for other sites
– Special equipment will require additional commitment
from end sites to support – DYNES PIs have expertise with
Dell, can’t speak for other vendors
Agenda
• DYNES Overview and Motivation
• DYNES Hardware and Software
• Current Status
• Next Steps
• Recent Discussion
– Equipment Choice
– LHCONE/NDDI/OS3E
• Demonstrations
– GLIF 2011
– USATLAS Facilities Meeting
– SC11
• Conclusion
LHCONE and NDDI/OS3E
• LHCONE
– International effort to enhance networking at LHC
facilities
– LHCOPN connects CERN (T0) and T1 facilities
worldwide
– LHCONE will focus on T2 and T3 connectivity
– Utilizes R&E networking to accomplish this goal
• NDDI/OS3E
– In addition to Internet2’s “traditional” R&E
services, develop a next generation service
delivery platform for research and science to:
• Deliver production layer 2 services that enable new
research paradigms at larger scale and with more
capacity
• Enable a global scale sliceable network to support
network research
• Start at 2x10 Gbps, Possibly 1x40 Gbps
LHCONE High-level Architecture
LHCONE – Early Diagram (June 2011)
“Joe’s Solution” – Result of June 2011 Meeting
• Two “issues” identified at the DC meeting as needing particular attention:
– Multiple paths across the Atlantic
– Resiliency
• Agreed to have the architecture group work out a solution
– Layer 2 ‘islands’ joined by Layer 3 connections
LHCONE – Layer 1 View
• LHCONE keeps open access methods
• On top of that, 2 VLANs overlaid in a tree topology
LHCONE Pilot (Late Sept 2011)
Mian Usman, DANTE, LHCONE technical proposal v2.0
LHCONE Pilot
• Multipoint:
– Domains interconnected through Layer 2 switches
– Two VLANs (nominal IDs: 3000, 2000)
• VLAN 2000 configured on GEANT/ACE transatlantic segment
• VLAN 3000 configured on US LHCNet transatlantic segment
– Allows use of both TA segments, provides TA resiliency
– 2 route servers per VLAN
• Each connecting site peers with all 4 route servers
– Enables up to 25G on the Trans-Atlantic routes for LHC traffic
• Point to Point:
– Suggestion: Build on efforts of DYNES and DICE-Dynamic service
– DICE-Dynamic service being rolled out by ESnet, GÉANT, Internet2,
and USLHCnet
• Remaining issues being worked out
• Planned commencement of service: October, 2011
• Built on OSCARS (ESnet, Internet2, USLHCnet, RNP) and
AutoBAHN (GÉANT), using IDC protocol
Network Development and Deployment Initiative (NDDI)
• Partnership that includes Internet2, Indiana University, & the Clean Slate Program at Stanford as contributing partners. Many global collaborators interested in interconnection and extension.
• Builds on NSF's support for GENI and Internet2's BTOP-funded backbone upgrade.
• Seeks to create a software-defined, advanced-services-capable network substrate to support network and domain research [note, this is a work in progress].
Components of the NDDI Substrate
• 30+ high-speed Ethernet switches deployed across the upgraded Internet2 network and interconnected via 10G waves
• A common control plane being developed by IU, Stanford, and Internet2
• Production-level operational support
• Ability to support service layers & research slices
• Switch hardware: 64 x 10G SFP+, 4 x 40G QSFP+, 1.28 Tbps non-blocking, 1 RU
Support for Network Research
• NDDI substrate control plane key to supporting network research
– At-scale, high performance, researcher-defined network forwarding behavior
– Virtual control plane provides the researcher with the network “LEGOs” to build a custom topology employing a researcher-defined forwarding plane
• NDDI substrate will have the capacity and reach to enable large testbeds
NDDI & OS3E
NDDI / OS3E Service Description
• This service is being developed in response to the
request from the community as expressed in the
report from the NTAC and subsequent approval by
the AOAC.
• Fundamentally it is a best effort service with long
term reservations.
– It is at Layer 2
– Different price points for hairpin service and inter-node service
– It has a completely open access policy
– Underlying wave infrastructure will be augmented
as needed using the same general approach as
used in the IP network.
Deployment
NDDI / OS3E Implementation Status
• Deployment
– NEC G8264 switch selected for initial deployment
– Chicago node installed
– 4 nodes by Internet2 FMM
– 5th node (Seattle) by SC
• Software
– NOX OpenFlow controller selected for initial implementation
– Software functional to demo Layer 2 VLAN service (OS3E) over
OpenFlow substrate (NDDI) by FMM
– Software functional to peer with ION (and other IDCs) by SC11
– Software to peer with SRS OpenFlow demos at SC11
– Open source software package to be made available in 2012
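As a conceptual illustration of what the Layer 2 VLAN service computes (not the NOX/OS3E software itself), a controller for a point-to-point circuit essentially installs a pair of (ingress port, VLAN) match rules at each switch on the path. The sketch below, in Python with a hypothetical three-switch path and plain data structures, shows the idea.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Hop:
    switch: str
    port_a: int
    port_b: int

def vlan_circuit_rules(path: List[Hop], vlan: int) -> List[Dict]:
    # Return bidirectional forwarding rules for one VLAN circuit: at every
    # switch, traffic arriving on one circuit port with the given VLAN tag is
    # sent out the other circuit port, and vice versa.
    rules = []
    for hop in path:
        for in_port, out_port in [(hop.port_a, hop.port_b), (hop.port_b, hop.port_a)]:
            rules.append({
                "switch": hop.switch,
                "match": {"in_port": in_port, "vlan_id": vlan},
                "actions": [{"output": out_port}],
            })
    return rules

if __name__ == "__main__":
    # Hypothetical three-switch path between two endpoints.
    path = [Hop("chicago", 1, 2), Hop("cleveland", 5, 6), Hop("newyork", 3, 4)]
    for rule in vlan_circuit_rules(path, vlan=3000):
        print(rule)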
Agenda
• DYNES Overview and Motivation
• DYNES Hardware and Software
• Current Status
• Next Steps
• Recent Discussion
– Equipment Choice
– LHCONE/NDDI/OS3E
• Demonstrations
– GLIF 2011
– USATLAS Facilities Meeting
– SC11
• Conclusion
Demonstrations
• DYNES Infrastructure is maturing as we complete
deployment groups
• Opportunities to show usefulness of deployment:
– How it can ‘stand alone’ for Science and Campus use
cases
– How it can integrate with other funded efforts (e.g.
IRNC)
– How it can peer with other international networks and
exchange points
• Examples:
– GLIF 2011
– USATLAS Facilities Meeting
– SC11
GLIF 2011
• September 2011
• Demonstration of end-to-end Dynamic Circuit
capabilities
– International collaborations spanning 3 continents
(South America, North America, and Europe)
– Use of several software packages
• OSCARS for inter-domain control of Dynamic Circuits
• perfSONAR-PS for end-to-end Monitoring
• FDT to facilitate data transfer over IP or circuit networks
– Science components – collaboration in the LHC VO
(ATLAS and CMS)
– DYNES, IRIS, and DyGIR NSF grants touted
GLIF 2011 - Topology
GLIF 2011 - Participants
USATLAS Facilities Meeting
• October 2011
• Similar topology to GLIF demonstration, emphasis
placed on use case for ATLAS (LHC Experiment)
• Important Questions:
– What benefit does this offer to a large Tier2 (e.g. UMich)?
– What benefit does this offer to a smaller Tier3 (e.g. SMU)?
– What benefit does the DYNES solution in the US give to national and international (e.g. SPRACE/HEPGrid in Brazil) collaborators?
– Will dynamic networking solutions become a more
popular method for transfer activities if the capacity is
available?
SC11
• November 2011
• Components:
– DYNES Deployments at Group A and Group B sites
– SC11 Showfloor (Internet2, Caltech, and Vanderbilt Booths –
all are 10G connected and feature identical hardware)
– International locations (CERN, SPRACE, HEPGrid, AutoBAHN
enabled Tier1s and Tier2s in Europe)
• Purpose:
– Show dynamic capabilities on enhanced Internet2 Network
– Demonstrate International peerings to Europe, South
America
– Show integration of underlying network technology into the
existing ‘process’ of LHC science
– Integrate with emerging solutions such as NDDI, OS3E,
LHCONE
Agenda
• DYNES Overview and Motivation
• DYNES Hardware and Software
• Current Status
• Next Steps
• Recent Discussion
– Equipment Choice
– LHCONE/NDDI/OS3E
• Demonstrations
– GLIF 2011
– USATLAS Facilities Meeting
– SC11
• Conclusion
DYNES Additional Activities
• We anticipate that we will be able to add some more sites. Additional applications are being collected from those that are interested.
• Send email to [email protected] if interested.
DYNES Documents
• http://www.internet2.edu/dynes
• DYNES: A Nationwide Dynamic Network System –
Overview of the DYNES objectives and architecture
• DYNES: Regional Network and End-Site Participation
Requirements
• DYNES: Criteria for Site Selection
• DYNES: Application Package
• DYNES: End-to-End Data Flow Architecture
• DYNES: Frequently Asked Questions (FAQ)
• DYNES Regional Network Application
• DYNES End-site Application
• DYNES Deployment Plan
DYNES References
• DYNES
– http://www.internet2.edu/dynes
• OSCARS
– http://www.es.net/oscars
• DRAGON
– http://dragon.east.isi.edu
• DCN Software Suite (DCNSS)
– http://wiki.internet2.edu/confluence/display/DCNSS/
• FDT
– http://monalisa.cern.ch/FDT/
• perfSONAR-PS
– http://psps.perfsonar.net
DYnamic NEtwork System (DYNES)
NSF #0958998
October 3rd 2011 – Fall Member Meeting
Eric Boyd - Internet2
Jason Zurawski - Internet2
For more information, visit http://www.internet2.edu/dynes