Digital 395 Middle Mile Project
Digital 395
Vendor Overview for Electronics Requirements
August 2-5, 2011
For Discussion Purposes Only
Topics
Technology
Services
Anchors Summary
Network Location
Network Technology
Network Topology
432-fiber backbone (local and long-haul applications)
Laterals: 24- to 72-fiber cable
Frequent points of interconnection (approximately 4,000’ intervals)
Optical fiber: SMF-28
Physical point-to-point / logical ring design (see the sketch below)
Primary and protect routes in different buffer tubes
Electronics (1xN card-level protection)
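As a rough illustration of the point-to-point / logical ring design, the sketch below models the primary route as one direction around the ring and the protect route as the other, standing in for the two buffer tubes. The node names come from this deck, but the ring ordering and the functions are hypothetical examples for discussion, not vendor requirements.

```python
# Illustrative sketch only: models the "physical point-to-point /
# logical ring" design. The core-node names are from the deck, but
# treating them as a ring in this order is an assumption.

RING = ["Reno", "Carson City", "Lee Vining", "Mammoth",
        "Bishop", "Lone Pine", "Ridgecrest", "Barstow"]

def primary_path(src: str, dst: str) -> list[str]:
    """Walk the ring one way from src to dst (primary buffer tube)."""
    i = RING.index(src)
    path = [src]
    while path[-1] != dst:
        i = (i + 1) % len(RING)
        path.append(RING[i])
    return path

def protect_path(src: str, dst: str) -> list[str]:
    """Walk the ring the opposite way (protect route, separate buffer tube)."""
    i = RING.index(src)
    path = [src]
    while path[-1] != dst:
        i = (i - 1) % len(RING)
        path.append(RING[i])
    return path

if __name__ == "__main__":
    print(primary_path("Carson City", "Bishop"))  # via Lee Vining, Mammoth
    print(protect_path("Carson City", "Bishop"))  # via Reno, Barstow, ...
```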
Network Technology
Nodes
15 stand-alone buildings (11.6’ x 26’)
8 Core Nodes (Reno, Carson City, Lee Vining, Mammoth, Bishop, Lone Pine, Ridgecrest, Barstow)
3 Regeneration Sites (Coleville, Kramer Junction, Olancha)
4 Tributary Nodes (Bridgeport, Benton, Big Pine, and Independence)
Reno and Barstow nodes are the interconnection points into the PSTN and for IP peering.
Note: Reno and Barstow nodes may be 11.5’ x 40’
Initial Customer Sites per Node

Node          # of CAI   Node           # of CAI
Reno                 0   Big Pine             12
Carson City          0   Independence         26
Antelope             8   Lone Pine            25
Bridgeport          22   Olancha               5
Lee Vining          15   Ridgecrest           54
Mammoth             32   Kramer               11
Bishop              62   Barstow               0
Benton               9
TOTAL              281
NOTE: For purposes of this project, initial customer sites are called “anchors”
Anchor Profiles
Anchor Count by Type
59 Schools (K-12)
13 Libraries
18 Medical or Healthcare Providers
38 Public Safety Entities
3 Community Colleges
2 Other Institutions of Higher Education
26 Other Community Support Organizations
122 Other Government Facilities
Points of Interconnection
136 cell sites that need backhaul
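As a quick sanity check (a throwaway sketch, not part of the deck), the per-node counts and the by-type profile can be summed independently; both come to 281, which is why the TOTAL row in the node table reads 281. The cell sites needing backhaul are a separate tally, not anchors.

```python
# Throwaway check: the per-node anchor counts and the by-type anchor
# profile should describe the same population of initial customer sites.

per_node = {
    "Reno": 0, "Carson City": 0, "Antelope": 8, "Bridgeport": 22,
    "Lee Vining": 15, "Mammoth": 32, "Bishop": 62, "Benton": 9,
    "Big Pine": 12, "Independence": 26, "Lone Pine": 25, "Olancha": 5,
    "Ridgecrest": 54, "Kramer": 11, "Barstow": 0,
}

by_type = {
    "Schools (K-12)": 59, "Libraries": 13, "Medical/Healthcare": 18,
    "Public Safety": 38, "Community Colleges": 3, "Other Higher Ed": 2,
    "Other Community Support": 26, "Other Government": 122,
}

# Both tallies agree: 281 anchors (cell-site backhaul counted separately).
assert sum(per_node.values()) == sum(by_type.values()) == 281
print(sum(per_node.values()))  # 281
```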
Network Technology
Backbone Fiber Terminals
DWDM
Backbone data rate: 100 Gb native Ethernet
Layer 2 switching
Multiple Internet peering: 1 Gig ports in Reno and Barstow
BGP routing
IPv6
Multi-Service Edge Devices – Customer Premises
Gigabit Ethernet customer hand-off
Multiple embedded interfaces plus expansion slot(s)
Network Services
High-speed Internet (HSI)
10 Mbps up to 10 GigE
Internet2 connectivity
Ethernet
Ethernet Virtual LAN Service (EVLAN)
Fully meshed or point-to-multipoint, 10 Mbps up to 10 GigE
Ethernet Private Line Service (EPL)
Point to point, 10 Mbps up to 10 GigE
Wavelengths (WL)
1 GigE, 2.5 Gbps, 10 Gbps, 10 GigE, and 100 Gbps
Dark Fiber
4, 8, or 12 strands, end to end or IOF
SONET (Point to Point)
DS-1, DS-3, OC-3, OC-12, OC-48, and OC-192
Considerations: Power
What is your bank power scheme? These questions are asked because the power system design for the node is not complete, and cannot be until the electronic equipment is selected.
How many feeders are required per bank? Some vendors use 4, others only 2. The answer could have a significant impact on our power design.
What is the termination type of the power feeder on the bank? Lug? Connector? Details are needed.
Our practice is to fuse the feeders at the distribution fuse board; secondary fuses (at the equipment) will not be used. Does the equipment use any additional fuses, such as those found on some circuit packs? If so, please provide details.
Are any circuit packs or blades on your system configured as n+1? Hot swappable? (These need to be accounted for in the power load calculations; see the sketch below.)
Assuming normal operating voltage is -48 VDC, what is your low limit? High limit?
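To make the power-load point concrete, here is a minimal sketch of the calculation these answers feed. Every figure in it (pack counts, wattages, the 40 V low limit, the 125% fuse headroom) is an assumption for illustration, not vendor or Digital 395 data.

```python
# Hypothetical power-load sketch for one equipment bank. The point is
# that n+1 / hot-swappable packs must be counted in the load, and that
# feeder current is worst-case at the low-voltage limit.

NOMINAL_V = 48.0    # -48 VDC plant (sign dropped for the arithmetic)
LOW_LIMIT_V = 40.0  # assumed vendor low limit -- one of the questions above

working_packs = 8           # assumed circuit packs carrying traffic
protect_packs = 1           # n+1 protection pack still draws power
watts_per_pack = 45.0       # assumed per-pack load
common_equipment_w = 120.0  # assumed shelf/fan/controller load

total_w = (working_packs + protect_packs) * watts_per_pack + common_equipment_w

# Worst-case feeder current occurs at the low-voltage limit.
max_amps = total_w / LOW_LIMIT_V

# Assumed practice: fuse at the distribution fuse board with ~125%
# headroom over worst-case draw (no secondary fuses at the bank).
fuse_amps = max_amps * 1.25

feeders = 2  # vendors vary: some banks take 2 feeders, others 4
print(f"Load: {total_w:.0f} W, worst case {max_amps:.1f} A, "
      f"fuse at >= {fuse_amps:.1f} A across {feeders} feeders")
```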
Considerations: Heat
What is your NEBS heat load for a fully equipped 7-foot frame? The reason for this question is that significant bay space must often be left unoccupied where heat dissipation is very high (see the sketch below). I am asking only about the heat issues, not mechanicals.
What are your high and low operating temperatures?
If your equipment is hardened (above 50°C), how hard? 65°C?
Are there any sub-systems used that are not hardened? I ask this because I have seen systems that were hardened to 65°C but used certain circuit packs that were “stuck at 50”.
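To show why the frame heat-load question matters, the sketch below works one hypothetical case: when a fully equipped frame dissipates more than the cooling budget of a single bay position, adjacent positions must be left empty. Both figures are invented for the illustration.

```python
# Illustrative only: why high heat dissipation can force empty bay
# positions. Both numbers are assumptions, not NEBS or vendor data.
import math

frame_heat_w = 2800.0            # assumed fully equipped 7-foot frame
cooling_w_per_position = 1500.0  # assumed cooling budget per bay position

# Positions each hot frame must consume: its own slot plus empty ones.
positions_needed = math.ceil(frame_heat_w / cooling_w_per_position)
empty_per_frame = positions_needed - 1

print(f"Each frame consumes {positions_needed} bay positions "
      f"({empty_per_frame} left empty for heat).")
```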
Considerations: Racking and Support
Can you demonstrate your jumper management scheme for a frame configured to maximum capacity? The reason for the question is that the building jumper management will use ADC FiberGuide to keep the pile manageable. Is the inner-frame routing compatible with ADC's system?
Will we be providing any services using CAT-5 or CAT-6 cabling? If so, some CAT-6 interbay cabling will be required. Will we need to be concerned with routing and racking supports for CAT-5 or CAT-6 cabling?
What is your footprint? We need to fit within the frame lineup and not intrude on power space.
Are your banks configured for 19-inch racking? If so, are your 23-inch adapters compatible with ADC's fiber management system?