Internet-2, NGI and TEN-155
Lessons for the (European)
Academic and Research Community
David Williams
CERN - IT Division
Supercomputer’98 Mannheim
[email protected]
Slides: http://nicewww.cern.ch/~davidw/public/Mannheim/Mannheim.ppt
What about me?

Other side of the Röstigraben
CERN
Never a networking specialist
No longer a manager
ICFA-NTF
EU
So this talk is all my own views, but not all my own work
Too much to tell you about
Outline of this Talk

Does the Internet work?
Applications
Technical changes
America
– Internet-2
– NGI
Europe
– TEN-34 and successors
– 5FP
What are the lessons?
Networks for supercomputer users
The (general-purpose) Internet in Europe

(How well) Does the (academic) Internet work?

Not as well as it should
April 1998 Packet Loss (extract)

US university      from Fermi (%)   from CERN (%)
brown.edu                   16.15           17.29
umass.edu                   14.92           15.77
pitt.edu                    10.71           15.52
harvard.edu                  8.06            9.48
uoregon.edu                  4.18            0.33
washington.edu               2.97            0.30
princeton.edu                1.50            5.14
cmu.edu                      1.49           14.61
hawaii.edu                   1.18            0.70
duke.edu                     0.30            6.60
umd.edu                      0.16            5.07
indiana.edu                  0.14           15.25
mit.edu                      0.02            0.27
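The figures above are the percentage of probe packets lost between the listed sites. As a rough illustration of how such a number can be obtained, the sketch below (in Python, assuming a Linux-style ping whose summary line reports a "% packet loss" figure; the host names are simply taken from the table) sends a batch of ICMP echo probes and parses the reported loss rate. It is only a sketch of the idea, not the monitoring setup behind these measurements.

```python
import re
import subprocess

def packet_loss_percent(host: str, count: int = 100) -> float:
    """Send `count` ICMP echo probes and return the reported loss rate in percent.

    Relies on the system `ping` utility; Linux-style flags and summary output assumed.
    """
    result = subprocess.run(
        ["ping", "-c", str(count), "-q", host],
        capture_output=True, text=True,
    )
    # Typical summary line: "100 packets transmitted, 92 received, 8% packet loss, ..."
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", result.stdout)
    if match is None:
        raise RuntimeError(f"could not parse ping output for {host}")
    return float(match.group(1))

if __name__ == "__main__":
    for host in ["brown.edu", "mit.edu"]:   # hosts taken from the table above
        print(host, packet_loss_percent(host, count=20))
```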
NSF Awards NLANR Group at UCSD
$2.08 Million for Measurement and
Analysis of Internet Infrastructure (1/2)


UNIVERSITY OF CALIFORNIA, SAN DIEGO - May 27, 1998
The National Science Foundation has awarded
$2.08 million over 30 months to the University of
California, San Diego to monitor and analyze the
continent-wide research network that is a key
component of the next generation of Internet
technologies.
NLANR Award (2/2)


The award establishes the Measurement and
Operations Analysis Team (MOAT) as a formal
group within the National Laboratory for Applied
Networking Research to analyze network traffic
patterns and traffic behavior, evaluate service
models, and conduct research to enhance the
NSF's very high performance Backbone Network
Service
See http://www.npaci.edu/online/ and
http://moat.nlanr.net/ for more info
CERN to SLAC Monitoring over 1 week in December 1997
Above (left) is Round trip time (about 180 ms)
Below (right) is packet loss rate (very low)
CERN to Uni Tokyo - same week
RTT 350-500ms
Big daily peaks
Packet loss worse
Some 20-30% samples
One bad peak
Things do improve - CERN to Tokyo May 98
RTT at 300-400ms
Daily peak effect much less
Packet loss quite reasonable
Daily packet loss structure on a congested route
8 quiet hours at night
From ~01.00 to 09.00 CET
50% peaks
Packet loss rate - Fermilab to Brown University - April 98
Performance inside the US can be bad too!
Same period - CERN to Brown University - packet loss
With RTT - same period - CERN to Brown
RTT = 250 to 400+ msec
Compare 180 msec to SLAC
HEP networking seen from Canada
[Network map showing NACSIS, CAnet, ESnet, DFN and TEN-34]
Lessons so far?

Not everything is wonderful in the USA
A few universities badly connected, even some very rich and famous ones
Everything depends on the detailed routing
InterNet = Interconnects + Nets
We know how to build the nets well
We have not yet learned how to build the interconnects as well
Three factors for the future: Technology, Economics, Organisation
Applications
Application generations





First = what we are used to
Second = what is about to enter general production
Third = interesting, but needs lots of bandwidth
PLUS
Interactive = human intimately in loop
First generation apps

e-mail
– few 10s of packets; tolerant of high packet loss; noninteractive

Web access
– quite similar; not really very interactive

manually initiated file transfer
– more packets, but not very interactive

telnet, X-window
– interactive; people start to get very disturbed with
delays ~>5 sec; sensitive to packet loss rates
First generation apps (summary)



So far, so good
But not very adventurous
Fortunately only the interactive telnet and X
traffic is very sensitive to packet loss
Second generation apps (1)

streaming video and audio for individuals
– lots of data
– quite strong RT constraints
– intolerant of packet loss (esp. audio)

groupware for collaboration at a distance
– fundamentally important for collaborative science
– shared software development (here or next?)
– brainstorming (shared whiteboards, good access to
Web, incl. graphics, video and audio, ...)
– weekly meetings (10 people, 5 locations, …)
– must be easy to use, reliable, good performance
Second generation apps (2)

automated data access and transfer
– contrast with manually initiated file xfer
– automated file transfer systems for ‘production’
– basic remote super-computer services
– general client-server systems
shared file systems (e.g. AFS)
– form a special subset
– starting to become part of the “normally expected” infrastructure in HEP
– need reasonable bandwidth and reliability
– so none to Japan, quite a bit across Atlantic, but less than inside Europe and inside USA
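To make the contrast between manually initiated and automated ‘production’ transfers concrete, here is a minimal sketch of an unattended fetch loop using Python's standard ftplib; the host name and file list are hypothetical, and a real production system would add retries, checksums and logging.

```python
from ftplib import FTP
from pathlib import Path

# Hypothetical endpoint and file list, for illustration only.
HOST = "ftp.example.org"
FILES = ["run001.dat", "run002.dat"]

def fetch_all(host: str, files: list[str], dest: Path) -> None:
    """Fetch a list of files unattended, as a 'production' transfer job might."""
    dest.mkdir(parents=True, exist_ok=True)
    with FTP(host) as ftp:          # connects on construction
        ftp.login()                 # anonymous login
        for name in files:
            with open(dest / name, "wb") as out:
                ftp.retrbinary(f"RETR {name}", out.write)

if __name__ == "__main__":
    fetch_all(HOST, FILES, Path("incoming"))
```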
Second generation apps (summary)


When they can be widely deployed they will
bring fundamental improvements to
collaborative science on national, European, and
inter-national levels
But they do require significant improvements in
the reliability of the links being used (more
bandwidth, lower packet loss)
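One way to see why packet loss matters as much as raw bandwidth is the rough bound on TCP throughput from Mathis et al. (1997), throughput ≈ MSS / (RTT · √p). The short calculation below applies it to figures of the order quoted earlier in this talk; the numbers are illustrative, not measurements.

```python
from math import sqrt

def tcp_throughput_mbps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Rough TCP throughput bound (Mathis et al. 1997): MSS / (RTT * sqrt(p))."""
    return mss_bytes * 8 / (rtt_s * sqrt(loss_rate)) / 1e6

# Figures of the order quoted earlier (illustrative only):
# a CERN-SLAC style path: ~180 ms RTT, very low loss (say 0.1%)
# a congested CERN-Tokyo style path: ~400 ms RTT, ~20% loss
print(tcp_throughput_mbps(1460, 0.180, 0.001))  # about 2 Mbps
print(tcp_throughput_mbps(1460, 0.400, 0.20))   # well under 0.1 Mbps
```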
Third generation apps

Collaboratories and Advanced groupware
– trying to break the distance barrier for individuals
who work together

Remote control rooms
– telescopes or physics experiments or …..
– trying to break the distance barrier for teams looking
after complex technical equipment

Remote VR or other very advanced graphics
– trying to break the distance barrier for people working
with computers
Third generation apps (summary)




No shortage of interesting ideas
Probably? a shortage of bandwidth
Or of money to pay for it
Or of means to make everything work on an
inter-continental basis
Technical changes
Bandwidth (1/2)





Presently drive laser signals down fibre optic cables
(Optically) amplify them every ~100 km
Lay cables under sea, on power lines, in conduits
along roads and railways
Fibre itself not expensive -- O(1 DEM/m)
Undersea amplifiers have to consume very little
power (which comes down the cable: 10 kV DC across
the Atlantic) and operate unattended for 25 years.
So only 4 pairs per cable, cf 24, 48, and more
overland. This is the reason why undersea
bandwidth is inherently more expensive today.
Bandwidth (2/2)



Signal processing today is electronic. 2.5 Gbps is
available as standard; 10 Gbps is a little expensive, but
coming into use.
Big movement is to use multiple wavelengths. 4x in
regular use by carriers, 32x seems almost here,
some people guess that 1024x is feasible
Can you do everything optically?? Who knows?
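For orientation, the per-fibre capacity implied by these figures is just the wavelength count multiplied by the line rate per wavelength; the small calculation below uses only the rates and multiplex factors quoted on this slide.

```python
# Per-fibre capacity implied by the figures on this slide (illustrative).
line_rates_gbps = [2.5, 10.0]          # electronic line rates per wavelength
wavelength_counts = [4, 32, 1024]      # WDM factors quoted above

for rate in line_rates_gbps:
    for n in wavelength_counts:
        total = n * rate
        print(f"{n:>5} wavelengths x {rate:>4} Gbps = {total:>8.1f} Gbps per fibre")
# e.g. 32 x 10 Gbps = 320 Gbps, and 1024 x 10 Gbps = 10.24 Tbps per fibre
```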
Switches and routers





These are basically specialised computers, and
benefit from the overall improvement in
computer technology
Critical functions being incorporated in
specialised electronics
High bandwidth backplanes/switching fabrics
There seem to be no barriers to progress
And a lot of very smart people and companies
hoping to make money from Tbps routers
Service levels


Perhaps the next “religious war”
Telecoms people talk about “quality of service” and
know that ATM provides it, while Internet fans now
talk about “differentiated service” (instead of
“integrated services”), and are thinking about who
can tell what makes a service different, and how
you can profit from the knowledge
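Whatever the outcome of that argument, differentiated service ultimately relies on packets being marked so that routers can treat flows differently. The sketch below (a minimal illustration using Python's standard socket module, assuming a Unix-like system that exposes IP_TOS; the DSCP value and address are purely examples) shows how an application could mark its own traffic.

```python
import socket

# DSCP 46 ("Expedited Forwarding"); the TOS byte carries the DSCP in its
# upper six bits, so shift left by two.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the IP layer to mark every packet sent on this socket.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 9000))  # example address
sock.close()
```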
Internet-2
Some history




Agencies and universities
NSFnet and 1994-95
No NRN
UCAID
Agencies and universities



It is important (for Europeans) to understand
that in the USA the financing of the various
agencies of the federal government (such as
NSF, DoE, DOD, NIH,..) is entirely separate from
the funding of the universities, which is either on
a private or state basis.
Although you can find some parallels in Europe
(Germany in particular) the separation in the US
seems to be very strong.
This all means that until now there has been no
national A&R network in the US
NSFnet and 1994-95


The NSFnet played a key role in the
development of the Internet during the period
from ~1986 until 1994/95. It was effectively the
overall backbone, interconnecting the various
agency networks, and its capacity was upgraded
from 56 kbps to 1.5 Mbps to 45 Mbps during
that time.
It was decided to “privatise” this function; the
transition took place during 1994/95, and NSFnet had
been decommissioned by 31 March 1995.
No NRN




The NSFnet backbone was “replaced” by
whichever Internet Service Provider (ISP) the
university took its business to.
Most (and essentially all of the research
universities) chose the InternetMCI service of
MCI, who had run the NSFnet.
95 and 96 were years of very fast growth of the
general purpose Internet in the US (home and
business use was booming).
Lots of congestion and frustration, leading in
October 1996 to the first Internet-2 meetings
Internet-2




This statement may not be PC (in the US), but
Internet-2 is a good approximation to the NRN that
the US has never had
Though Internet-2 per se has no (direct) relations
with or connection to the national labs
Top priority is on production services, and not on
research into networking
Though they do have plans to encourage advanced
apps and to investigate and deploy QoS features
UCAID





The University Corporation for Advanced
Internet Development is, to all intents and
purposes, the supervisory board of a national
university network
Responsible for Internet-2
122 (research) universities are members
14 corporate partners (AT&T, 3Com, ANS, Bay,
Cabletron, Cisco, Fore, IBM, Lucent, MCI,
Newbridge, Nortel, Qwest, StarBurst)
Chief executive is Doug van Houweling
GigaPoPs (1/2)




One theme of Internet-2, which I personally find
very interesting, is to construct GigaPoPs (which
officially stands for gigabit point-of-presence)
All sites wanting to connect in a given region (city,
state, …) connect to a commonly located, operated
and funded Point of Presence
Makes it far simpler, and far more competitive, for
different backbone providers to connect up
GigaPoPs
Also simplifies inter-connection with “other
networks” (such as ESnet) with decent performance
GigaPoPs (2/2)


Instead of universities and research institutes
needing to get to each carrier’s Point of Presence,
GigaPoPs allow the universities to specify where the
carriers must come to
It would be a good idea (according to me) for the
(A&R community in) European countries, regions
and cities to invest in such EuroPoPs or UserPoPs
vBNS backbone




Until recently the de facto backbone for I2 was
the NSF’s vBNS, provided by MCI
The very-high-performance Backbone Network
Service was initially created as a fast
interconnect between NSF’s Supercomputer
Centres
It basically provides 622 Mbps links
During 1997/98 it has been transformed into the
Connections Program and some 100?
universities are now connected to it
More competition



Recently (15 April 98) UCAID announced that
Abilene will form an alternative backbone for
Internet-2
Abilene is based on the use of the Qwest fibre
optic network, with equipment from Cisco and
Nortel (Northern Telecom)
I have seen essentially no discussion of the
financial terms, but it is clear that some of this is
supported financially by the three companies
concerned
More on Qwest



Qwest Teams with Cisco to Build the Next Generation of
High-Speed Voice/Data Networks
Dr. Shafei explained that Qwest's network starts with 48
fibers (with the capability to add ten times as many fibers
through additional in-place conduits), bidirectional, line
switching OC-192 ring SONET nationwide network. Each
fiber can carry 8 wave division multiplexing (WDM)
windows, where each WDM window has a bandwidth of 10
gigabits per second, providing Qwest with the potential of a
multi terabit-per-second capability.
From a 30 Sept 1997 Press Release
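A quick back-of-the-envelope check of the "multi terabit-per-second" claim, using only the figures quoted in the press release:

```python
# Aggregate capacity implied by the press-release figures.
fibres = 48
wdm_windows_per_fibre = 8
gbps_per_window = 10

total_gbps = fibres * wdm_windows_per_fibre * gbps_per_window
print(f"{total_gbps} Gbps = {total_gbps / 1000} Tbps")  # 3840 Gbps = 3.84 Tbps
```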
Next Generation Internet
(NGI)
Overview (1/2)

NGI initiative = multi-agency federal R&D
programme for:
– developing advanced network technologies
– developing revolutionary apps needing advanced nets
– demonstrating via testbeds 100x-1000x faster end-to-end
than today’s Internet





i.e. not the universities (directly)
Started 1 October 1997 (FY’98)
Normally said to be 3-year program (I have seen 5)
DARPA, NASA, NIH, NIST, NSF all involved
DoE from FY’99??
Overview (2/2)




This is the real place where the “leading-edge” R&D
is being performed
But… insofar as NSF and hence vBNS are part of
NGI, the project plays an important role in getting
Internet-2 off the ground
Impressive (to me) how far the politicians (Gore et
al, but not only him) have understood the economic
importance of Internet evolution
One of my worries in Europe….
A first NGI vision

The Next Generation Internet will:
– Accelerate mission-critical and time-sensitive research for
Federal technology programs
– Expedite the introduction of advanced networking
services and applications
– Ensure and strengthen the technological and scientific
leadership of the United States
– Foster stronger technology research partnerships among
government, academia and industry
(From talk by Toole on 13 June 97, at www.ngi.gov/talks)
A second NGI vision

Imagine an Internet a thousand times faster than
today
– An Internet so ubiquitous that it interconnects all
Americans regardless of location, age, income or health
– An Internet so safe and reliable that Americans confidently
use it for most of their important communications
– An Internet so intelligent that it can be used effortlessly to
help us preserve our environment, improve our
productivity, and get first rate medical care
(From talk by Howell on 9 April 98, same location)
First NGI testbed



At least 100 sites (universities, federal labs,
other research partners) connected with end-to-end
speeds 100x faster than today’s Internet
Today’s end-to-end speed (such as available
between two workstations) is assessed as 10
Mbps at most
Led by NSF, NASA and DoE (from FY 99)
Second NGI testbed




About 10 sites with end-to-end speeds 1000x
faster than today’s Internet
Development of ultra-high speed switching and
transmission technologies, and end-to-end
speeds of 1+ Gbps
Speaks of laying groundwork for Tbps, with net
management, control and QoS guarantees
Led by DARPA, with participation by NASA, NSF,
DoE (from FY 99) etc.
Also



Experimental research for advanced network
technologies
Developments of “revolutionary” applications
Everything will, as far as possible, be tested on
the testbeds
SuperNet (1/2)



SuperNet will lay the groundwork for Tbps
networks
Coordinated by DARPA IT Office
Wide-Area Broadband Core
– It is DARPA’s intention that one or more metropolitan
networks with links capable of at least 40 Gbps
transmission rates be deployed.
– In addition some or all of the metro nets will be
connected to form a national ultra-high-capacity net
– The elements in these nets will be largely all-optical
with no electronic conversion
(from www.darpa.mil/ito/Solicitations)
SuperNet (2/2)

Broadband Local Trunking
– Searching for cost-effective ways for delivering really
high-performance services to users, everywhere
– Near-transparent (?) and service-independent
connectivity from customer premises to all-optical
backbone
– 20-40 Gbps fibre-based access? or
– Gbps RF access (including satellite access)
Timetable (selected items)

Starting in 1999
– First testbed with >100 sites connected to 622 Mbps
infrastructure over 155 Mbps connections

Starting in 2000
– Second testbed connecting ~10 sites with 2.5 Gbps
connections

Starting in 2001
– Integrate QoS over a variety of technologies and
carriers

Starting in 2002
– Tbps packet switching demo. Advanced apps tested
over second testbed
Budgets FY’98 and FY’99 (proposal)

(All in MUSD)
Agency   FY’98   FY’99
DARPA       42      40
NSF         23      25
DoE          0      25
NASA        10      10
NIST         5       5
NIH          5       5
Total       85     110
The relationship between
Internet-2 and NGI
Not easy to understand … (1/2)




Some aspects of telling politicians what they want
to hear
Internet-2 grew out of university frustration with
poor Internet performance. Above all it is about a
better production network for universities.
The federal agencies have had their own networks,
with little or nothing to be frustrated about
NGI is basically advanced R&D triggered and largely
funded by the US government, with the aim of
keeping US industry in a dominant position in
Internet technologies
Not easy to understand … (2/2)





Internet-2 and NGI do not intrinsically have much to
do with each other
But, as I already pointed out, Internet-2 starts out
by depending on vBNS for its backbone
What change does Abilene make??
The first NGI testbed looks awfully like Internet-2
with good connections from the various federal nets
via vBNS??
If things go well, there might be a big opportunity
for the US universities and federal agencies (and
industry) to make progress fast together……
TEN-34 and TEN-155
What is TEN-34?







Trans-European Network at 34 Mbps
A distributed switch interconnecting Europe’s
national A&R networks (NRNs)
A true InterNet (remember I said that building
these InterConnects is the hardest job)
A truly major technical and political achievement
Started (after almost two years preparatory
work) in ~April 97
Funded 40% by EU and 60% by NRNs
4FP project which formally ends 31 July 1998
Some comments



You can see that in fact not many of the lines in
TEN-34 are operated at 34 Mbps
And that there are in fact two basic networks - an
IP network from Unisource and an ATM network
from various PTTs. There was no credible single
supplier in mid-96.
Administrative structure is very complex
More comments



Slightly more capacity installed between Europe
and US than between European countries
About 100 Mbps added (to ~380 Mbps) to/from
the US in the last 2 months
The US capacity is very heavily used, so total
volume of A&R traffic transmitted from Europe
to US is much higher than inside Europe
Moving to TEN-155



Formal decisions will be made during the coming
weeks. The following information is believed to be
accurate, but should be treated as provisional
Arrangements have been made to extend the
TEN-34 project, so that it can run down until 31
December 98, at the same time as TEN-155 runs up
TEN-155 may/might be the name of the production
network which forms part of the Quantum Project,
one of the last projects to be funded from the 4FP.
Quantum also involves R&D into QoS issues
TEN-155 (more)



Should be “fully” operational by 1 Jan 99
Likely to last about 1 year, when an FP5 project takes
over (see below)
Many of the major countries will have 155 Mbps
access to the TEN-155 backbone
The Fifth Framework Programme
Fifth Framework Programme




What is it?
When will it happen?
What is the structure?
What about A&R networking
Caveat 1: FP5 has not yet been finally defined
or approved. Everything we say about it has to
be taken conditionally.
Caveat 2: My information may be out-of-date
What are Framework Programmes?
My explanation




Every 5 years (in principle) the EU makes a plan
concerning all of the research and technical
development which it intends to carry out
These plans are called Framework Programmes
The basic idea is that the EU only does things
which cannot be done better nationally
The planning exercise is quite complex
What is it?
(from EU documentation)



Framework programmes are instruments which reflect the
scientific and technological priorities of their particular period,
as well as the prevailing economic and political circumstances.
Taking the form of a European Community legislative decision,
framework programmes set out, for their period of application,
the global objectives of Community RTD activities, the specific
priorities and research themes to be addressed, the rules and
procedures for implementation, the general conditions for
participation, the indicative budget and the allocation of
financial resources to the various research themes.
The research themes identified in the framework programme
decision are then implemented by a number of "specific
programmes" (e.g. Biomedicine and Health, Telematics
Applications, Innovation, etc.)
When will it happen?



FP4 covers the period 1994-1998. It was originally
allocated funding of 12.3 Gecu (for the five years), which
has since been increased to 13.2 Gecu.
FP5 should cover the period 1999-2002. Present
proposal is that the funding should be 16.3 Gecu (for the
four years). Of this 3.925 Gecu, or 1 Gecu annually,
would be allocated to “Creating a user-friendly
information society”. This is about 0.12‰ of Europe’s
GDP.
Pessimists expect the approval process for FP5 to last
into 1999, whereas optimists hope that Calls for
Proposals for specific programmes will already be issued
by end-1998.
Structure of FP5 - Thematic

There are four Thematic Programmes, one of which
is “Creating a user-friendly Information Society”.
This should have four “key actions”:
– Systems and services for the citizen
– New methods of work and electronic commerce
– Multimedia content and tools
– Essential technologies and infrastructures
These activities are all about R&D (and nothing to
do with production services)
Structure of FP5 - Horizontal


There are also three Horizontal Programmes, one
of which is “Improving Human Potential”.
This has a “general objective, to be realised in
concert with related actions elsewhere in the FP”
– Support for research infrastructures

Among other things, this is the label under which
support for “TEN-155” and successors, including
better connections to/from outside Europe, might
be provided.
The Information Society Technologies
(IST) Programme



The “convergence” of computing, telecommunications
and consumer electronics is well recognised in Brussels.
The EU has woken up to the importance of the Internet.
It is starting to understand both the speed of change
and the economic impact, and wants to put in place a
programme for Internet development in Europe which
has some cohesion and identity.
In FP4, ESPRIT, ACTS and Telematics were separate
programmes. In FP5, they will be defined as part of the
same programme (the IST Programme) and coordinated
through a single EU management structure.
Networking in FP5


Funding levels available will depend on
discussions about whether A&R networking
should be funded only by the IST programme, or
also by the other FP5 programmes
Intention is to keep working on the
interconnection of the NRNs, as long as there is
a need, and as long as that need cannot be met
sensibly by commercial services
What are the lessons?
Need for coherent A&R networking





A&R community forms a natural grouping
It needs reliable high-bandwidth services, both
nationally and internationally
At the moment the needs and the traffic are
differentiated enough to keep them separate
from business, commercial, school or home use
[Good interconnections assumed]
Basically we need NRNs
Need for coherent European networking




Complexity - countries are different - different
structures for A&R - different ideas on industrial
policy
But all of the countries need to interconnect their
NRNs well (besides getting good connectivity to the
US)
Organisationally we need projects like TEN-34, TEN-155 and successors
They will change over time - involving more open
competition for the supply of infrastructure and
services
Internet and the European economy




It is going to change society in many ways
It is going to be economically very important - both
for the telecoms suppliers, but also for “normal”
business
I believe that it will change the whole economics of
voice telephony and fax
As a group we understand quite a bit about this and
have a duty to explain it to our governments and
fellow citizens
Understanding Internet better




We need better facts
About traffic, performance, reliability
Who is using our networks - what for - whether
they are happy (getting their job done)
We (users, NRNs, ISPs) must be more open
about these facts
QoS and traffic flow separation



Differentiated services offer a big opportunity to
understand which community generates which
sort of traffic
Might be useful for all sorts of reasons
Including sending critical flows over separate
lines (physically separate - maybe separately
funded)
Networks for supercomputer users
Basics






Supercomputers can generate information faster
than “normal” computers
Their users do not (normally) want or need to
live close to the supercomputer
They need shared file systems to prepare
programs and data
Plus collaborative software development
environments
But … more info == more bandwidth
How much (semi) interaction??
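To put "more info == more bandwidth" in perspective, the small calculation below estimates how long it takes to move a hypothetical 10 GB result file over links of the kinds of speeds mentioned in this talk; all figures are illustrative.

```python
# Time to move a dataset over links of various speeds (illustrative figures).
dataset_gb = 10  # hypothetical simulation output

for label, mbps in [("2 Mbps trans-Atlantic share", 2),
                    ("155 Mbps TEN-155 access", 155),
                    ("622 Mbps vBNS link", 622)]:
    seconds = dataset_gb * 8 * 1000 / mbps   # GB -> gigabits -> megabits, then divide
    print(f"{label:>28}: {seconds / 3600:6.2f} hours")
```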
We all need ...

Like all serious network users, supercomputer users need:
– More bandwidth
– Smooth reliable operation
– No boundaries (between countries, between service providers)

When they obtain those they will be able to do
better science and engineering
The (general-purpose) Internet in
Europe
Everything so far




Was about A&R networking
But the Internet is also about people, companies,
economics, and society as a whole
Europe has a tendency to treat the academic and
commercial worlds as quite distinct
Carried to extremes this can be dangerous
Reasons for European pessimism (1/2)




Others (in this context, especially Americans) are at least as
smart as we are.
And they have 2-3 years more experience than us in
understanding the impact of the Internet on the economy in
particular, and society in general. That is a long time!
The worlds of European business, commerce and banking
completely fail (in my opinion) to understand the speed with
which the Internet is transforming our basic economic
assumptions.
There are not many signs that the ex-PTTs will be able to
meet the double challenge of privatisation, and the Internet
revolution, hitting them at the same time. There is a strong
risk that they will stay national in their thinking for far too
long.
Reasons for European pessimism (2/2)




We never (really) created a strong European
computing industry
We lack the venture capital “philosophy”???
In Europe we did not succeed in building the alliance
between universities, research laboratories, the IT
industry, telecoms carriers, and politicians which, in
the USA, moved the Internet out from the labs and
into a mass market, fuelling a complete and
virtuous economic cycle from research through to
production deployment and back to research
Or, where we built the alliance in Europe, it was not
based around IP
Reasons for European optimism (1/2)






We are at least as smart as anyone else
We can boast of many really excellent world class
companies
Both in general, and in particular in (traditional)
telecoms equipment supply
We have a vibrant mobile telephony industry
We (largely) liberalised the European telecoms
market in January 1998. Even Greece liberalises at
end 2000.
At this moment in time our national telecoms
carriers are cash-rich
Reasons for European optimism (2/2)



Many (not all) of our national A&R networks are
excellent
So is TEN-34 (if still too expensive)
(You can argue that) we have been good,
historically, at understanding the need for
infrastructures, at planning for their
implementation, and at keeping them in good
repair. [Roads, rail, airports, TGV et al., motorways,
city centre transport, schools, health care, mobile
telephony,…]