Transcript of slides - TNC15

Connected Communities
Need Solid Foundations
TERENA 2015
John Day
June 2015
“There is something fascinating about science.
One gets such wholesale returns on conjecture
out of such a trifling investment of fact.”
- Mark Twain
“Life on the Mississippi”
In a network of devices why would
we route between processes?
- Toni Stoey, RRG 2009
In the Beginning, There was
The Beads on A String Model
[Figure: phone - CO - CO - phone]
• The Nature of their early technology led the Phone Companies to adopt what could be called a “Beads-on-a-String” architecture.
– Deterministic, Hierarchical, master/slave.
• Perfectly reasonable for what they had.
• The model not only organized the work,
– But was also used to define markets: who got to sell what.
– Interfaces between boxes were market boundaries
– This was what was taught in most data comm courses prior to the 1980s.
• And for some, in a fundamental sense, never left.
© John Day, 2014
Rights Reserved
Packet Switching
• In the early 1960s, Paul Baran at The Rand Corporation writes a series of reports investigating the networking requirements for the DoD.
• Donald Davies at NPL in the UK had the same idea.
• He finds that the requirements for data are very different from those for voice.
  – Data is bursty. Voice is continuous.
  – Data connections are short. Voice connections have long durations.
• Data would be sent in individual packets, rather than as a continuous stream, on a path through the network.
• Packet switching is born, and by the late 1960s the Advanced Research Projects Agency decides building one would reduce the cost of research, and so we have the ARPANET.
© John Day, 2014
Rights Reserved
But was Packet Switching
a Major Breakthrough?
• Strange as it may seem, it depends on how old you were.
• If your formative years had occurred prior to the mid-60s (pre-boomer),
your model of communication was defined by telephony.
– Then this is revolutionary.
• If you are younger (boomer), your model is determined by computers.
– How to do communications? Data is in buffers
• Pick up a buffer and send it.
– What could be more obvious!
• That it was independently invented (and probably more than twice) supports that.
• But there was a more radical idea coming!
© John Day, 2014
Rights Reserved
The Cyclades Architecture
(1972)
[Figure: Host or End System stack - Application, Transport (TS: End-to-End Reliability), Network, Data Link, Physical - with a Router relaying over the Cigale Subnet]
• CYCLADES brings the layering from operating systems, but it’s different.
• Data Link – corrects media errors, not propagated to a wider scope.
• Network – relays using a connectionless datagram network, Cigale
• Transport recovers from loss due to congestion creating a reliable flow.
• Yielding a simpler, cheaper, more effective and robust data network.
• Since the Hosts won’t trust the network anyway, the network does not have to be perfect (and can’t be); it makes a “Best Effort” and needs only to be sufficiently reliable to make end-to-end recovery cost effective.
• This represents a new model, in fact, a new paradigm completely at odds with the beads-on-a-string model.
© John Day, 2014
Rights Reserved
A Note about Layers
• The advent of packet switching required much more complex software
than heretofore, and so the concept of layers was brought in from
operating systems.
– From Dijkstra’s THE Operating System, 1968
• In operating systems, layers were seemingly a convenience, one design
choice, merely used for modularity.
• Most networking courses teach that layers are for controlling complexity,
for modularity in a stack.
– This is true, but not the primary reason for them. The primary reason is:
• In networks, Layers are a necessity.
• And more general.
The (really) Important Thing about Layers
(From first lecture of my intro to networks course)
• Layers are a locus of distributed shared state of different scopes.
• At the very least, differences of scope require different layers.
• It is this property that makes the earlier telephony or datacomm “beads-on-a-string” model untenable.
  – Or any other proposal that does not accommodate scope.
• This was why CYCLADES used layers.
[Figure: layers of increasing scope spanning Host or End System and Router]
The New Model Had 4 Characteristics
• It was a peer network of communicating equals not a hierarchical
network connecting a mainframe master with terminal slaves.
• The approach required coordinating distributed shared state at different scopes, which were treated as black boxes. This led to the concept of layers being adopted from operating systems, and
• There was a shift from largely deterministic to non-deterministic
approaches, not just with datagrams in networks, but also with
interrupt driven, as opposed to polled, operating systems, and physical
media like Ethernet, and last but far from least,
• This was a computing model, a distributed computing model, not a
Telecom or Data comm model. The network was the infrastructure of a
computing facility.
• These sound innocuous enough. They weren’t.
• Not by a long shot!
© John Day, 2014
Rights Reserved
In Networking
IBM Found Itself at a Dead-End
You can always make a peer architecture hierarchical
But you can’t go the other way.
[Figure: a hierarchical network with a Mainframe at the top]
But IBM and the PTTs had carefully stayed out of each other’s turf.
Had IBM made SNA a peer network and subset it for the 70s hierarchical
market, the Internet would have been nothing but an interesting research project.
© John Day, 2014
Rights Reserved
The Beads-on-a-String Model
• Meanwhile the Phone Companies continue with what they are familiar with.
– Emulating the phone system in computers
– Who cares about this academic connectionless stuff? We have real networks to build.
– How do you charge for usage in a best-effort service?
• Asymmetrical/Connections/Deterministic
– And a tendency toward hierarchy
• This Model Cannot Represent Scope.
• Purpose of the architecture is to define who owns what boxes (protect a monopoly).
– If you hear, X is in the network and X isn’t involved with moving bits or managing moving
bits, then it is beads-on-a-string.
The Network as seen by
the new Networking Model
[Figure: the new model - Host, Interface, Router, Router, Host]
[Figure: the PTT view - Packet-mode DTE, DCE, DCE, PAD, Start-stop DTE Terminal, with the Interface at the DCE]
The Network as seen by the PTTs
© John Day, 2014
Rights Reserved
While the New Model Made Perfect Sense to Computing,
It Was a Threat to Phone Companies.
• Transport Seals Off the Lower Layers from Applications.
— Making the Network a Commodity, with very little possibility for value-add.
• TPC counters that Transport Layers are unnecessary, their networks
are reliable.
[Figure: the Transport Layer sealing off The Network below it]
And they have their head in the sand, “Data will never exceed voice traffic”
© John Day, 2014
Rights Reserved
1972 Was an Interesting Year
• Tinker AFB joined the ‘Net exposing the multihoming problem.
[Figure: a Host connected to two IMPs (8 and 6)]
• The ARPANET had its coming out at ICCC ‘72.
• As fallout from ICCC 72, the research networks decided it would be
good to form an International Network Working Group.
– ARPANET, NPL, CYCLADES, and other researchers
– Chartered as IFIP WG6.1 very soon after
• Major project was an Internetwork Transport Protocol.
– Also a virtual terminal protocol
– And work on formal description techniques
There Were Two Proposals
• INWG 39 based on the early TCP and
• INWG 61 based on CYCLADES TS.
• And a healthy debate; see Alex McKenzie, “INWG and the Conception of the Internet: An Eyewitness Account,” IEEE Annals of the History of Computing, 2011.
• Two sticking points:
  – How fragmentation should work
  – Whether the data flow was an undifferentiated stream or maintained the integrity of the units sent (letters).
• These were not major differences compared to the forces bearing down on them.
After a Hot Debate
• A Synthesis was proposed: INWG 96.
• There was a vote in 1976, which approved INWG 96.
• As Alex says, NPL and CYCLADES immediately said they would convert to INWG 96; EIN said it would deploy only INWG 96.
• “But we were all shocked and amazed when Bob Kahn announced that DARPA researchers were too close to completing implementation of the updated INWG 39 protocol to incur the expense of switching to another design. As events proved, Kahn was wrong (or had other motives); the final TCP/IP specification was written in 1980 after at least four revisions.”
  – Neither was right. The real breakthrough came two years later.
• But the differences weren’t the most interesting thing about this effort.
The Similarity Among all 3
Is Much More Interesting
• This is before IP was separated from TCP. All 3 of the Proposed
Transport Protocols carried addresses.
• This means that the Architecture that INWG was working to was:
[Figure: TCP, IP, SNDC, SNAC, LLC, and MAC mapped onto an Internetwork Transport Layer, a Network Layer, and a Data Link Layer]
• Three Layers of Different Scope each with Addresses.
• If this does not hit you like a ton of bricks, you haven’t been paying
attention.
• This is NOT the architecture we have.
INWG’s Internet Model
[Figure: two Hosts, each with Application, Internet Transport, Network, and Data Link layers, connected by Internet Gateways across Network 1, Network 2, and Network 3]
• An Internet Layer addressed Hosts and Internet Gateways.
• Several Network Layers of different scope, possibly different
technology, addressing hosts on that network and that network’s routers
and its gateways.
– Inter-domain routing at the Internet Layer; Intra-Domain routing at the
Network Layer.
• Data Link Layer has the smallest scope, with addresses for the devices (hosts or routers) on the segment it connects.
• The Internet LOST A LAYER!!
So What Layer Did They Lose?
• It is not obvious.
• At first glance, one might say the Network Layer.
– The Protocol is called IP after all!
– Removing the ARPANET, “removed” the Network Layer,
– Everything just dropped down.
• But the IP Address names the Interface, something in the layer below,
just like ARPANET addresses did!
– At best, IP names a network entity of some sort, at worst, a data link entity
– Actions speak louder than words
• We must conclude that, . . .
They Lost the Internet Layer!!!
The Internet is a beads-on-a-string Network like the PSTN
Wait A Minute!
Names the Interface?
• Remember Tinker AFB? The answer was obvious. Just like OSs!
• Directory provides the mapping between Application-Names and the node addresses of all Applications reachable without an application relay.
• Routes are sequences of node addresses used to compute the next hop.
• Node to point-of-attachment mapping for all nearest neighbors to choose the path to the next hop. (Because there can be multiple paths to the next hop.)
• This last mapping and the Directory are the same:
  – Mapping of a name in the layer above to a name in the layer below of all nearest neighbors.
[Figure: Application Name, Node (Logical Address), Point of Attachment (Physical Address), with the Directory, Route, and Path mappings between them]
Not in the Internet
• The Internet only has a Point of Attachment Address, an interface.
  – Which is named twice!
  – No wonder there are addressing problems
• There are no node addresses or application names.
  – Domain names are macros for IP addresses
  – Sockets are jump points in low memory
  – URLs name a path to an application
• This makes router table size 3-4 times larger than necessary and similarly the number of addresses needed.
[Figure: the names the Internet has - an Application Socket (local), an IP Address, and a MAC Address - set against Application Name, Node Address, and Point of Attachment Address]
As if your computer worked only with absolute memory addresses.
(kinda like DOS, isn’t it?)
The Big Mistake:
Splitting IP from TCP
• The Rules say if there are two layers of the same scope, the functions
must be completely independent.
• Is Separating Error and Flow Control from Relaying and Multiplexing independent? No!
• Problem: IP fragmentation doesn’t work.
– IP has to receive all of the fragments of the same packet to reassemble.
– Retransmissions by TCP are distinct and not recognized by IP.
• Must be held for MPL (5 secs!)
• There can be considerable buffer space occupied.
• There is a fix:
MTU Discovery.
– The equivalent of “Doc, it hurts when I do this!” “Then don’t do it.”
– Not a “big” problem, but big enough to be suspicious.
But it is the Nature of the Problem
That is Interesting
• The problem arises because there is a dependency between IP and
TCP. The rule is broken.
– It tries to make it a beads on a string solution.
• A Careful Analysis of this Class of Protocols shows that the Functions
naturally cleave (orthogonally) along lines of Control and Data.
[Figure: the functions split between Data Transfer and Data Transfer Control: Delimiting, Seq, Frag/Reassembly, Relaying/Muxing, SDU Protection]
• TCP was split in the Wrong Direction!
• It is one layer, not two.
– IP was a bad idea.
• Are There Other Implications?
Retransmission and
Flow Control
Delta-t Results (1978)
• Watson proves that the necessary and sufficient conditions for distributed synchronization require only that 3 timers are bounded:
• Maximum Packet Lifetime
• Maximum number of Retries
• Maximum time before Ack
• Richard Watson develops delta-t to demonstrate the result, which has some unique implications:
– Assumes all connections exist all the time.
– TCBs are simply caches of state for connections with recent activity.
• Watson shows that TCP has all three timers and more.
– delta-t is more robust under harsh conditions and more secure than TCP.
– SYNs, FINs are unnecessary.
• Also defines the bounds of networking or InterProcess Communication:
– It is IPC if and only if Maximum Packet Lifetime can be bounded.
– If MPL can’t be bounded, it is remote storage.
© John Day, 2013
All Rights Reserved
22
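The bound itself is easy to illustrate. The C sketch below shows the delta-t idea that connection state is just a cache entry that can be safely discarded once no packet, retransmission, or ack for it can still exist; the constants, structure, and names are illustrative assumptions, not the delta-t specification.

/* Sketch of Watson's bound: a "TCB" is only a cache of recent activity,
 * discardable once every packet, retry and ack for it must be dead.
 * Constants are illustrative, not taken from the delta-t spec. */
#include <stdio.h>
#include <time.h>

#define MPL_SECS    5   /* Maximum Packet Lifetime (bounded)          */
#define MAX_RETRIES 3   /* Maximum number of retransmission attempts  */
#define A_SECS      1   /* Maximum time the receiver may delay an ack */
#define R_SECS      2   /* Assumed retransmission interval per try    */

/* State vector: a cache of recent activity; no SYN/FIN needed. */
struct state_vector {
    unsigned next_seq;        /* next sequence number to send      */
    time_t   last_activity;   /* last send or receive on this flow */
};

/* After this quiet interval, nothing for the flow can still be in flight. */
static time_t quiet_bound(void)
{
    return MPL_SECS + MAX_RETRIES * R_SECS + A_SECS;
}

static int state_discardable(const struct state_vector *sv, time_t now)
{
    return (now - sv->last_activity) > quiet_bound();
}

int main(void)
{
    struct state_vector sv = { .next_seq = 42,
                               .last_activity = time(NULL) - 20 };
    printf("quiet bound = %ld s, discardable = %s\n",
           (long)quiet_bound(),
           state_discardable(&sv, time(NULL)) ? "yes" : "no");
    return 0;
}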
A Chance to Get Things on Track
• We knew in 1972, that we needed Application Names and some kind
of Directory.
• Downloading the Host file from the NIC was clearly temporary.
• When the time came to automate it, it would be a good time to
introduce Application Names!
• Nope, Just Automate the Host File. Big step backwards with DNS.
• Now we have domain names
– Macros for IP addresses
• And URLs
– Macros for jump points in low memory
– The path to the Application is named, but Nothing names the Application.
© John Day, 2014
Rights Reserved
Then in ‘86: Congestion Collapse
• Caught Flat-footed. Why? Everyone knew about this?
– Had been investigated for 15 years at that point
• With a Network Architecture they put it in Transport.
– Worst place.
• Most important property of any congestion control scheme is minimizing
time to notify. Internet maximizes it and its variance.
• Thwarts any attempt at doing Quality of Service.
• And implicit detection makes it predatory.
– Virtually impossible to fix
• Whereas,
© John Day, 2014
Rights Reserved
Congestion Control in an Internet is
Clearly a Network Problem
[Figure: the internet model again - Hosts with Application, Internet Transport, Network, and Data Link layers, connected by Internet Gateways across Network 1, Network 2, and Network 3]
• With an Internet Architecture, it clearly goes in the Network Layer
– Which was what everyone else thought.
• Time to Notify can be bounded and with less variance.
• Explicit Congestion Detection confines its effects to a specific
network and to a specific layer.
© John Day, 2014
Rights Reserved
Would be Nice to Manage the Network
• All Management is Overhead! We need to minimize it.
  – Then we need Efficiency, Commonality, and Minimized Uncertainty.
• With a choice between an object-oriented protocol (HEMS) and a “simple” approach (SNMP), the IETF goes with “simple” to maximize inefficiency.
  – Must be simple; it has the Largest Implementation of the 3: SNMP, HEMS, CMIP.
  – Everything about it contributes to inefficiency:
• UDP maximizes traffic and makes it hard to snapshot tables
• No means to operate on multiple objects (scope and filter). Can be many
orders of magnitude more requests
• No attempt at commonality across MIBs.
• Polls?! Assumes network is mostly failing!
• Use BER, with no ability to use PER. Requests are 50% - 80% larger
• Router vendors played them for suckers and they fell for it.
– Not secure, can’t use for configuration.
– (Isn’t ASN.1 an encryption algorithm?)
– Much better to send passwords in the clear.
– It is all about account control
© John Day, 2014
Rights Reserved
IPv6 Insists that It Name the Interface?
Why on Earth?
• Known about this problem since 1972
– No Multihoming, kludged mobility
– And router tables 3 – 4 times larger than necessary.
• Talk about cutting off your nose to spite your face! Good grief!
• When they can’t ignore it any longer, and given post-IPng trauma they
look for a workaround.
• “Deep thought” yields Loc/Id Split!
• But not ‘deep’ enough:
– Saltzer [1977] defines “resolve,” as in “resolving a name,” as “to locate an object in a particular context, given its name.” All names locate in computing.
– The locator is the interface on the path, not the end of the path.
– The “identifier” locates communication with the application.
• It names everything but the node, where all paths terminate.
• There is no workaround. IP is fundamentally flawed.
© John Day, 2014
Rights Reserved
Never Get a Busy Signal on the Internet!
2010 They Discovered Buffer Bloat!
[Figure: peer flow control between the hosts, but no interface flow control at the host]
• Golly Gee Whiz! What a Surprise!!
• With Plenty of Memory in NICs, Getting huge amounts of buffer
space backing up behind flow control.
• Well, Duh! What did you think was going to happen?
– If you push back, it has to go somewhere!
– Now you can have local congestion collapse!
• If there is peer flow control in the protocol, it is pretty obvious one needs interface flow control as well.
© John Day, 2014
Rights Reserved
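A minimal sketch of the point being made: when peer flow control pushes back, the interface must also exert flow control instead of buffering without bound. The queue bound and names below are made up for illustration.

/* Sketch: a bounded interface queue that refuses the writer (applies
 * backpressure) instead of absorbing unlimited data ("buffer bloat").
 * IFACE_QUEUE_MAX and the names are illustrative assumptions. */
#include <stdio.h>

#define IFACE_QUEUE_MAX 8        /* bound on PDUs buffered at the NIC */

struct iface_queue { unsigned depth; };

/* Returns 0 when the queue is full: the layer above must wait. */
static int iface_enqueue(struct iface_queue *q)
{
    if (q->depth >= IFACE_QUEUE_MAX)
        return 0;
    q->depth++;
    return 1;
}

int main(void)
{
    struct iface_queue q = { 0 };
    unsigned accepted = 0, offered = 20;

    for (unsigned i = 0; i < offered; i++)
        accepted += iface_enqueue(&q);   /* peer flow control stalled */
    printf("offered %u, accepted %u, blocked %u\n",
           offered, accepted, offered - accepted);
    return 0;
}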
But What About Security?
• Security?
• Don’t you read the papers?!
– It is terrible! And all signs are getting worse.
– IPsec makes IP connection-oriented, so much for resiliency to failure.
– Everything does their own, so very expensive.
• Privacy? Can’t fix it, so same reaction as for QoS
– You don’t need it in the brave new world.
• They say the Reason is that Security was Never Considered at the Beginning.
– Later we will see how ignoring security can lead to better security.
• There have been a lot of “after the fact” attempts to improve it.
– With the usual results: greater complexity, overhead, new threats.
© John Day, 2014
Rights Reserved
Taking Stock
• The Internet has:
– Botched the protocol design
– Botched the architecture
– Botched the naming and addressing
– When they had an opportunity to move in the right direction with application names, they didn’t. They did DNS.
– When they had an opportunity to move in the right direction with node addresses, they didn’t. They did IPv6.
– More than Botched Network Management
– Botched the Congestion Control twice
– Once so bad it probably cannot be fixed.
– Botched Security!
• By my count this makes them 0 for 9!
• It defies reason! Do these guys have an anti-Midas touch or what!?
• So Much For Building on a Strong Foundation.
© John Day, 2014
Rights Reserved
But It’s a Triumph!
(By that argument, so was DOS)
• But It Works!
• So did DOS. Still does.
• ‘With Sufficient Thrust even Pigs Can Fly!’ - RFC 1925
• As long as fiber and Moore’s Law stayed ahead of Internet Growth, there was no need to confront the mistakes.
– Or even notice that they were mistakes.
• Now it is catching up to us, is limiting, and it can’t be fixed.
– Fundamentally flawed from the start, a dead end.
– Any further effort based on IP is a waste of time and effort.
• Throwing good money after bad
– Every patch (and that is all we are seeing) is making it more expensive and
less predictable and taking us further from where we need to be.
• This is not the solid foundation we need. What do we do?
© John Day, 2014
Rights Reserved
The Internet Never Got Past
[Figure: the Internet stack - Application (“or do whatever you want”), TCP, IP, Subnet (“or do whatever you want”)]
We Were Missing Something, but What was It?
Were Layers the Wrong Model?
They Weren’t Sure, But 7 Looked Good
But As Things Developed Things got more complex
[Figure: the 7-layer OSI stack (Application, Presentation, Session, Transport, Network, Data Link, Physical) beside a refined version with the Network Layer split into SNIC, SNDC, SNAC and the Data Link Layer into LLC and MAC]
And While Better It Wasn’t The Full Answer
A Quick Review
1: Start with the Basics
Two applications communicating in the same system.
[Figure: two Application Processes, A and B, with an Application Flow over an IPC Facility via Port-Ids - communication within a Single Processing System]
This establishes the API. The Application
should not be able to distinguish a slow
correspondent from operating over the Network.
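The kind of API this implies can be sketched as follows. The function names, signatures, and the socketpair stand-in for the IPC facility are assumptions made for illustration; the point is only that the application names the destination application, receives a local port-id, and cannot tell a local correspondent from a remote one.

/* Sketch of an IPC-style API: the application names the destination
 * application, gets a local port-id back, and reads/writes on it.
 * The names are hypothetical; the "facility" is faked locally. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

typedef int port_id_t;   /* local handle: all the application ever sees */

/* Pretend to find dest_app and allocate a flow; return a port-id.
 * For the sketch the "peer" is just the other end of a socketpair. */
static port_id_t ipc_allocate(const char *dest_app, port_id_t *peer)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;
    (void)dest_app;            /* a real facility would search for it */
    *peer = sv[1];
    return sv[0];
}

int main(void)
{
    port_id_t peer, port = ipc_allocate("video-server", &peer);
    char buf[64];

    if (port < 0)
        return 1;
    write(port, "hello", 5);                  /* application side     */
    ssize_t n = read(peer, buf, sizeof buf);  /* correspondent side   */
    printf("peer got %zd bytes: %.*s\n", n, (int)n, buf);
    close(port);
    close(peer);
    return 0;
}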
How Does It Work Now?
[Figure: Application Processes in two systems bound by Port-Ids to Flow Allocators (FA) and EFCP instances, forming a Distributed IPC Facility]
• Turns out that Management is the first capability needed to find the
other application. Then of course to do that one needs,
• Some sort of error and flow control protocol to transfer information
between the two systems.
Simultaneous Communication
Between Two Systems
i.e. multiple applications at the same time
• Requires two new capabilities
[Figure: multiple EFCP instances in each system feeding a Mux]
• First, Will have to add the ability in EFCP to distinguish one flow from another.
• Typically use the port-ids of the source and destination.
[Figure: PDU format - Connection-id (Dest-port, Src-port), Op, Seq #, CRC, Data]
• Will also need an application to manage multiple users of a single resource.
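As a sketch, the PDU format in the figure above might be written down like this in C; the field widths are assumptions, since the slide only names the fields.

/* Sketch of the per-flow PDU header in the figure: the connection is
 * identified by the (destination, source) port-id pair, plus an opcode,
 * sequence number and CRC. Field widths are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

struct efcp_pdu_header {
    uint16_t dest_port;   /* connection-id: destination port-id */
    uint16_t src_port;    /* connection-id: source port-id      */
    uint8_t  opcode;      /* e.g. data, ack, flow control       */
    uint32_t seq_num;     /* sequence number                    */
    uint32_t crc;         /* SDU protection                     */
    /* data follows */
};

int main(void)
{
    struct efcp_pdu_header h = { .dest_port = 80, .src_port = 4321,
                                 .opcode = 1, .seq_num = 7, .crc = 0 };
    printf("flow (%u,%u) seq %u\n", h.dest_port, h.src_port, h.seq_num);
    return 0;
}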
4: Communication with N Systems
A Little Re-organizing
A Virtual IPC Facility?
[Figure: a Virtual IPC Facility with Finder, Res Alloc, IAP, and Dir components]
So we have a Distributed IPC Facility for each Interface, then to
maintain the API we need an application over all of them to
manage their use.
5: Communicating with N Systems
(On the Cheap)
[Figure: Host Systems connected through Dedicated IPC Systems]
By dedicating systems to IPC, we reduce the number of lines required and even out usage by recognizing that not everyone talks to everyone else the same amount.
Communications on the Cheap
• But relaying systems over a wider scope requires carrying addresses
• And creates problems too.
– Can’t avoid transient congestion and bit errors in their memories.
• Will have to have an EFCP operating over the relays to ensure the
requested QoS reliability parameters
[Figure: EFCP at each end, a Common Relaying and Multiplexing Application Header carrying Dest Addr and Src Addr, a Relaying Application/PM, and Interface IPC Processes on each side]
The IPC Model
(A Purely CS View)
[Figure: User Applications over Distributed IPC Facilities, each with EFCP instances, a Mux, and a Relaying Application]
The Implications
• Networking is IPC and only IPC.
– We had been concentrating on the differences, rather than the similarities.
• All layers have the same functions, with different scope and range.
– Not all instances of layers may need all functions, but don’t need more.
• A Layer is a Distributed Application that provides and manages IPC.
– A Distributed IPC Facility (DIF)
• This yields a theory and an architecture that is simple, elegant, and
scales indefinitely,
– i.e. any bounds imposed are not a property of the architecture itself.
– And capabilities we didn’t expect.
• As I said before, The Network is Overhead. It shouldn’t get in the way.
– Our time should be spent on Communities and not connecting.
– This approach does that and then some.
• Let’s See What the Theory Tells Us
What a Layer Looks Like
[Figure: a layer with IPC Transfer (Delimiting, Relaying/Muxing, PDU Protection), IPC Control, and IPC Management application-entities (a Common Application Protocol for routing, resource allocation, access control, etc.)]
• Processing at 3 timescales, decoupled by either a State Vector or a Resource Information Base:
  – IPC Transfer actually moves the data ( ≈ IP + UDP)
  – IPC Control (optional) for retransmission (ack) and flow control, etc.
  – IPC Layer Management for routing, resource allocation, locating applications, access control, monitoring the lower layer, etc.
• Remember that within a scope if there is a partitioning of functions, it will be orthogonal? Well, here it is.
43
What are the Protocols?
[Figure: an IPC Process with a RIB Daemon, Flow Allocator, EFCP, Resource Allocation, RMT, IPC Management, IRM, and CDAP]
• Only two
– A data transfer protocol, EFCP, based on delta-t with mechanism and
policy separated. This provides both unreliable and reliable flows.
• Good Examples of separating mechanism and policy
– The common application protocol based on CDAP:
• Transition from an IPC Model to a Programming Model
• 6 Fundamental Operations on Objects.
• Assembler for Distributed Applications
© John Day, 2013
All Rights Reserved
44
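A hedged sketch of what “6 Fundamental Operations on Objects” looks like as code. The operation names follow the create/delete, read/write, start/stop convention used in descriptions of CDAP; treat the exact names and the message fields as assumptions, since the slide does not enumerate them.

/* Sketch of a common application protocol built from six operations on
 * objects in a peer's RIB. Names and fields are illustrative. */
#include <stdio.h>

enum cdap_opcode {
    OP_CREATE,   /* create an object in the peer's RIB  */
    OP_DELETE,   /* remove an object                    */
    OP_READ,     /* read an object's value              */
    OP_WRITE,    /* change an object's value            */
    OP_START,    /* start an action on an object        */
    OP_STOP      /* stop it                             */
};

struct cdap_message {
    enum cdap_opcode op;
    const char      *object_class;   /* e.g. "flow", "neighbor" */
    const char      *object_name;    /* instance within the RIB */
    const void      *object_value;   /* encoded value, if any   */
};

int main(void)
{
    struct cdap_message m = { OP_READ, "flow", "/dif/flows/42", NULL };
    printf("op=%d on %s %s\n", m.op, m.object_class, m.object_name);
    return 0;
}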
Only Three Kinds of Systems
[Figure: Hosts, Border Routers, and Interior Routers]
• Middleboxes? We don’t need no stinking middleboxes!
• NATs: either nowhere or everywhere.
• NATs only break broken architectures.
• The Architecture may have more layers, but no box need
have more than the usual complement.
– Hosts may have more layers, depending on what they do.
How Does It Work?
Enrollment or Joining a Layer
[Figure: IPC Process A joining an (N)-DIF over an (N-1)-DIF]
• Do what the Model Says:
• Nothing more than Applications establishing communication (for management)
– Authenticating that A is a valid member of the (N)-DIF
– Initializing it with the current information on the DIF
– Assigning it a synonym to facilitate finding IPC Processes in the DIF, i.e. an address
• More than one way to complete enrollment.
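A minimal sketch of the enrollment steps just listed: authenticate the joining application, initialize it with the DIF’s current information, and assign it an address synonym. The functions and the trivial authentication policy are hypothetical placeholders.

/* Sketch of enrollment: authenticate, initialize, assign an address.
 * The policy and address assignment here are illustrative only. */
#include <stdio.h>

struct member { const char *apn; unsigned address; };

static int authenticate(const char *apn, const char *credential)
{
    (void)apn;
    /* policy-specific; here any non-empty credential passes */
    return credential && credential[0] != '\0';
}

static unsigned assign_address(unsigned *next_free)
{
    return (*next_free)++;    /* a real DIF may structure addresses */
}

int main(void)
{
    unsigned next_free = 16;
    struct member a = { "A", 0 };

    if (!authenticate(a.apn, "shared-secret"))
        return 1;
    /* ... initialize A with the DIF's current information here ... */
    a.address = assign_address(&next_free);
    printf("%s enrolled, address %u\n", a.apn, a.address);
    return 0;
}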
How Does It Work?
Establishing Communication
[Figure: A asks its DIF, which says “Look over there!” toward B]
• Simple: do what the model says.
– A asks IPC to allocate comm resources to B.
– Determine that B is not local to A; use search rules to find B.
– Keep looking until we find it.
– Actually go see if it is there and whether we have access.
– Then tell A the result.
• This has multiple advantages.
– We know it is really there.
– We can enforce access control.
– We can return B’s policy and port-id choices.
– If B has moved, we find out and keep searching.
– Decentralizes name resolution, rather than the centralized approach of DNS.
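The allocation steps above can be sketched as a search that keeps looking until the destination is found, then checks access before reporting back. The DIF chain, directory check, and access-control stub below are hypothetical stand-ins, not RINA’s actual flow-allocation machinery.

/* Sketch: keep looking for the destination application outward through
 * DIFs, then verify access. Structures and rules are illustrative. */
#include <stdio.h>
#include <string.h>

struct dif {
    const char *name;
    const char *registered_app;   /* one app per DIF, for the sketch */
    struct dif *parent;           /* "search rules": where to look next */
};

static int app_is_here(const struct dif *d, const char *app)
{
    return d->registered_app && strcmp(d->registered_app, app) == 0;
}

static int access_allowed(const char *src, const char *dst)
{
    (void)src; (void)dst;         /* access-control policy goes here */
    return 1;
}

/* Keep looking until we find it (or run out of places to look). */
static const struct dif *find_app(struct dif *start, const char *app)
{
    for (struct dif *d = start; d; d = d->parent)
        if (app_is_here(d, app))
            return d;
    return NULL;
}

int main(void)
{
    struct dif backbone = { "backbone", "B", NULL };
    struct dif metro    = { "metro",    NULL, &backbone };

    const struct dif *where = find_app(&metro, "B");
    if (where && access_allowed("A", "B"))
        printf("B found in DIF %s, access granted\n", where->name);
    else
        printf("allocation failed\n");
    return 0;
}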
Naming in RINA
[Figure: an IPC Process containing a Flow Allocator, EFCP, and IRM, alongside a DIF Allocator]
• IPC Process-name is just an application-process-name
– An address is a synonym that names an IPC Process with scope restricted to the DIF and
maybe structured to facilitate use within the DIF.
• A port-id is a Flow-Allocator-Instance-Id (local scope).
• A connection-endpoint-id (CEP-id) is an EFCP-instance-id (local scope).
• Note that these are local to the IPC Process.
• A connection-id is created by concatenating source and destination CEP-ids.
• That’s It!
Implications of the Model & Names
(Routing Table Size)
• Recursion either reduces the number of routes or shortens them.
[Figure: recursive layers - Metros over Regionals over a Backbone]
This Bounds Router Table Size
• There will be Natural Subnets within a layer around the Central Hole.
• Each can be a routing domain; each Subnet is one hop across the Hole.
  – The hole is crossed in the layer below.
[Figure: (N)-Routing Domains (Metros, Regionals) above (N-1)-Routing Domains (Backbone)]
© John Day, All Rights Reserved, 2009
Implications of the Model & Names
(Multihoming)
[Figure: an (N)-IPC-Process with an (N)-Address and port-ids, bound to two (N-1)-IPC-Processes]
• Yea, so? What is the big deal?
  – It just works.
• An IPC-Process inspects the destination address of the (N)-PDU and forwards it to its next hop. The current forwarding tables intend to deliver to the left-hand (N-1)-IPC Process, but it goes down. There is a routing update that now recognizes that the path to that address is through the right-hand (N-1)-IPC Process.
– Normal operation. Nothing special to do. Even uses 50% to 75% fewer addresses
and forwarding tables are commensurately smaller as well.
– Yes, as we saw the (N-1)-bindings may fail from time to time.
• Not a big deal. Because that is
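Why this “just works” can be sketched as a forwarding entry keyed on the node address with more than one (N-1) binding: when the preferred binding is down, the same entry simply uses the other. The structures and names below are illustrative assumptions.

/* Sketch: forwarding on a node address with two (N-1) bindings; if one
 * fails, the entry falls back to the other. Names are illustrative. */
#include <stdio.h>

struct lower_flow { const char *name; int up; };

struct fwd_entry {
    unsigned           dest_addr;     /* (N)-address of the next hop  */
    struct lower_flow *bindings[2];   /* two (N-1) flows to that node */
};

/* Pick any working binding for the destination's entry. */
static struct lower_flow *next_hop(struct fwd_entry *e)
{
    for (int i = 0; i < 2; i++)
        if (e->bindings[i] && e->bindings[i]->up)
            return e->bindings[i];
    return NULL;    /* routing will advertise a new path */
}

int main(void)
{
    struct lower_flow left  = { "left (N-1) flow",  0 };   /* went down */
    struct lower_flow right = { "right (N-1) flow", 1 };
    struct fwd_entry  e     = { 0x0a02, { &left, &right } };

    struct lower_flow *f = next_hop(&e);
    printf("forwarding to %u via %s\n", e.dest_addr, f ? f->name : "(none)");
    return 0;
}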
Implications of the Model & Names
(Mobility)
[Figure: an (N)-IPC-Process whose (N-1)-IPC-Process attachments change, acquiring a New Address alongside the old Address]
• Yea, so? What is the big deal?
  – It just works, just like multihoming, only the (N-1)-port-ids come and go a bit more frequently.
• O, worried about having to change address if it moves too far? Easy.
• Assign a new synonym to it. Put it in the source address field on all outgoing PDUs. Stop
advertising the old address as a route to this IPC-Process. Advertise the new one.
• Want to renumber the DIF for some reason? Same procedure.
• Again, no special configuration to do. It just works.
The Skewed Necklace
(DIF view)
[Figure: a mobile host reaching a Base Station, Metro Subnet, and Regional Subnet of a Mobile Infrastructure Network, beside a traditional ISP provider network with the normal necklace of DIFs and an e-mall top layer]
• Notice: No special mobility protocols. No concept of a Home Router, No Foreign Routers, No Tunnels to set up. It just works.
• Clearly more layers could be used to ensure the scope allows sufficient time for updating relative to the time to cross the scope of the layer.
  – Space does not permit drawing full networks.
© John Day, 2013 53
Rights Reserved
Implications of the Model & Names
(Choosing a Layer)
• In building the IPC Model, the first time there were multiple DIFs (data link layers in that case), to maintain the API a task was needed to determine which DIF to use.
A Virtual IPC Facility?
[Figure: Finder, Res Alloc, and IAP components over the per-interface DIFs]
– User didn’t have to see all of the wires
– But the User shouldn’t have to see all of the “Nets” either.
• This not only generalizes but has major implications.
Implications of the Model & Names
(A DIF-Allocator)
• The IPC Resource Manager (IRM) determines on what DIFs applications are available.
  – If this system is not a member, it either joins the DIF as before
  – or creates a new one.
  – This is the generator function.
• IOW, a Global Address Space is Not Necessary.
• Which implies that the largest address space only has to be large enough for the largest e-mall.
• Given the structure, 32 or 48 bits is probably more than enough.
• This is a Powerful New Tool for scaling and security.
• Layers never need to get too big.
So a Global Address Space is Not Required but
Neither is a Global Application Name Space
[Figure: a DIF-Allocator connecting to peers in other DIFs]
Actually one could still have distinct name spaces within a DIF (synonyms) with its own directory database.
• Not all names need be in one Global Directory.
• Coexisting application name spaces and directories of distributed databases are not only possible, but useful.
• Needless to say, a global name space can be useful, but it is not a requirement imposed by the architecture.
• The scope of the name space is defined by the chain of databases that point to each other.
Scope is Determined by the
Chain of Places to Look
• The chain of databases to look for names determines the scope of the
name space.
– Here there are 2 non-intersecting chains of systems, that could be using
the same wires, but would be entirely oblivious to the other.
Stop For a Moment
• In case you hadn’t noticed:
• For us, testing the model, constantly challenging, questioning whether it
is right, teasing out new principles is as important as finding something
that works.
– What we don’t understand is as important as what to build, if anything more.
• If we don’t get the fundamentals right, they will come back to haunt us!
• As you have seen, it has been quite successful, most everything just
works and problems we didn’t consider initially have simply fallen out.
– It is amazing what a scientific approach, rather than a craft tradition can do.
• Like Security
How Does It Work?
Security
[Figure: Hosts and an ISP - Hosts and ISPs do not share DIFs (the ISP may have more layers)]
• Security by isolation (not obscurity)
• Hosts cannot address any element of the ISP.
• No user hacker can compromise ISP assets.
• Unless the ISP is physically compromised.
How Does It Work?
Security
[Figure: Port := Allocate(Dest-Appl, params), with Access Control Exercised by the layer]
• Do What the Model Tells Us:
• Application only knows the Destination Application name and its local port.
• The layer ensures that the Source has access to the Destination
  – All members of the layer are authenticated within policy.
• Minimal trust: Only that the lower layer will deliver something to someone.
• PDU Protection can provide protection from eavesdropping, etc.
  – Application must ensure the Destination is who it purports to be.
• Complete architecture does not require a security connection, a la IPsec.
• The DIF is a securable container. The DIF is secured, not each component separately.
RINA is Inherently More Secure
and Does It With Less Cost
• A DIF is a Securable Container.
(Small, 2011)
– What info required to mount an attack, How to get the info
– Small does a threat analysis at the architecture level
• Implies that Firewalls are Unnecessary,
– The DIF is the Firewall!
• RINA Security is considerably Less Complex than the Current Internet
Security (Small, 2012)
– Only do a rough estimate counting protocols and mechanisms.
To Add Security            Internet    RINA
Protocols                  8           0
Non-Security Mechanisms    59          0
Security Mechanisms        28          7
Copyright © 2012, Jeremiah Small. All Rights Reserved.
“Internet” Congestion Control
• As we saw a few minutes ago, the Internet didn’t realize this could happen, and they put it in the worst possible place: an avoidance scheme that works by causing congestion.
• TCP congestion control “pulses” the network with a saw-tooth
oscillation based on randomly distributed round-trip times.
• QoS must be done in the network and enforced all the way to the wire.
– TCP thwarts any attempt at QoS that tries to do low jitter.
• Can’t be fixed without inflicting great pain.
How Does It Work?
“Congestion Control”
• In RINA, congestion control is not only where it belongs;
• Because Congestion Control is done where it belongs, the congestion control strategy can be different for different layers and for different QoS classes within layers.
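Because mechanism and policy are separated, each layer, and each QoS class within it, can plug in its own congestion policy. A minimal sketch of that idea, with made-up policy names and thresholds:

/* Sketch: congestion policy as a pluggable function chosen per QoS
 * class within a layer. Policies and thresholds are illustrative. */
#include <stdio.h>

typedef int (*cc_policy_fn)(unsigned queue_len, unsigned threshold);

static int policy_drop_tail(unsigned q, unsigned t)  { return q > t; }
static int policy_early_mark(unsigned q, unsigned t) { return q > t / 2; }

struct qos_class {
    const char  *name;
    unsigned     threshold;
    cc_policy_fn should_signal;   /* raise an explicit congestion signal? */
};

int main(void)
{
    struct qos_class classes[] = {
        { "bulk",       100, policy_drop_tail  },
        { "low-jitter",  20, policy_early_mark },
    };
    unsigned queue_len = 15;

    for (unsigned i = 0; i < 2; i++)
        printf("%s: signal congestion = %d\n", classes[i].name,
               classes[i].should_signal(queue_len, classes[i].threshold));
    return 0;
}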
How Does It Work?
The Internet and ISPs
• The Internet floats on top of ISPs, an “e-mall.”
  – One in the seedy part of town, but an “e-mall.”
  – Not the only e-mall and not one you always have to be connected to.
[Figure: the Public Internet over ISP 1, ISP 2, and ISP 3]
How Does It Work?
The Internet and ISPs
• But there does not need to be ONE e-mall.
– Notice all the layers are private. Public layers are a form of private.
[Figure: many e-malls - Facebook Boutique, Public Internet, My Net, Utility SCADA, Internet Rodeo Drive, Internet Mall of America - over ISP 1, ISP 2, and ISP 3]
How Does It Work?
The User’s Perspective
[Figure: Local Customer Networks joined to Provider Networks by Peering DIFs, with e-common DIFs on top]
A Customer Network has a border router that makes several
e-malls available. A choice can be made whether the entire
local network joins, a single host or a single application.
In this case, one host on the local network chooses to join
one of the available e-malls.
Not Just a Network Model
• A Layer is a Distributed Application that Does IPC
• That Forced Us to Answer: What is a Distributed Application?
• We now are working with a Unified Model for:
[Figure: Application Processes (Tasks), Operating Systems (Task sched, Mem Mngt, IPC, Disk as an OS-DAF on a Laptop and Printer), Distributed Applications, and Networks (WiFi-DIF, USB-DIF, IRMs)]
Distributed Application Facility (DAF)
[Figure: a DAP with Tasks, a RIB Daemon, DAF Management, Distributed Resource Allocation, IPC Management, an IRM, and a Common Application Protocol]
• A DAF consists of cooperating DAPs. Each one has the Basic Infrastructure:
  – Resource Management - Cooperates with its peers to manage load, generate forwarding tables, etc.
  – RIB Daemon - ensures that information for tasks is available on whatever period or event required; a schema pager.
  – IPC Management - manages the supporting DIFs, multiplexing and SDU Protection.
  – Tasks - the real work specific to why the DAF exists.
• Which provide Enrollment and SDU Protection as well.
• DAPs might be assigned synonyms with scope within the DAF and structured to facilitate use within the DAF. (!)
© John Day, 2014
All Rights Reserved
69
Why Do You Care?
• The overhead of using a network like this is far lower both in equipment
and personnel. Management costs are lower and less error prone.
– The Network provides more, is more predictable, and costs less.
• Your focus should be on Communities and what makes them good places
to live, not fighting the infrastructure.
– This structure does that, it provides the strong foundation.
• And, the Infrastructure for Distributed Applications will make both
experimenting and deploying new applications faster, easier and less
costly. Far easier to just “try something.”
• We have already seen interesting new insights with the DAF structure,
just as we have seen for networks and operating systems and there are
undoubtedly more to find.
“But You Can’t
Replace the WHOLE Internet!”
• Wish I had a dollar for every time I have heard that.
– What are they putting in the water these days?
• They told us we would never replace the PSTN and IBM’s SNA.
– Even in the late 1980s, people said data would never exceed voice. (!!)
• Of course, it won’t be replaced overnight. Perhaps never. Does it matter?
– You have already seen the transition plan.
• The Internet is just another e-mall: A good place to test malware, conduct
cyberwarfare, steal credit cards, find drug dealers, sacrifice your privacy,
etc. . . . . . All sorts of useful things!
• We build over it, under it, around it. Use it for what you want.
• We build other e-malls alongside it.
– Give people a choice, after all competition is good, right?
Transition? No, Adoption
[Figure: RINA-supported Applications and RINA Applications over a RINA Network, the Public Internet, and a RINA Provider]
• Adopt. Don’t transition.
– If the old stuff is okay in the Internet e-mall, leave it there.
– Do the new capabilities in RINA
• Operate RINA over, under, around and through the Internet.
– The Internet can’t be fixed, but it will run better over RINA.
– New applications and new e-malls will be better without the legacy and run better alongside or over the Internet.
• The Microsoft Approach or the Apple approach?
– Microsoft tried to prolong the life of DOS. It still haunts them.
• A clean break with the past. The legacy is just too costly.
• We need engineering based on science, not myth and tradition.
© John Day, 2010
All Rights Reserved
72
Ongoing implementations
• Three implementations: Java, C (user-space), C++/C (user-space and kernel)
• Java: Boston University, Mostly for education and academic purposes
• C: TRIA Network Systems, RINA software in user-space (Linux)
• C/C++ version by the IRATI and PRISTINE consortiums1 split between
user/kernel space on Linux over TCP/UDP, 802.1q (VLANs) and shared memory
for Hypervisor/VM communication
• Status: first version of the high-level software architecture design document about to be released (will be published at the IRATI website during May 2013)
1 More details in the RINA session after the break!
There is Much More,
And Much More to Discover!
• An Invitation: Come explore it with us.
– There is much to explore:
• Believe it or not, this talk has left out a lot!
• How it applies to different environments, especially wireless.
• What are the dynamic properties?
– Routing, congestion control
• Start with Patterns in Network Architecture, Prentice Hall
– Then the “Reference Model” (4 sections) and
– Check out related work at:
– www.pouzinsociety.org or
– www.irati.eu or
– csr.bu.edu/rina
Thank You