01AppsModels - Computer Science Division
Internet Service Models:
Toward Active Networks?
CS 241 Internet Services
© 1999-2000 Armando Fox
[email protected]
Administrivia
HW1 due today
Enrollment
mailing lists: please subscribe!
Future homeworks and readings to be posted later today
Midterm logistics
Technical Advisory Board being finalized
Outline
The Post-PC World
Heterogeneous devices and networks
Adaptation by proxy
Proxies and the end-to-end argument
Do proxy-based services have a future?
Active Networks
The Post-PC World?
The “Post-PC” world
Moore’s Law has made computing power smaller, cheaper, faster, lower-power
Wireless communication is pervasive
and inexpensive enough for
consumers
The Internet has become a mass
market phenomenon: “Access Is the
Killer App”
How do we support these?
Client and Service Variation
Client & network variation spans orders of magnitude
Property  | Desktop + Fast LAN | Smart phone / smart pager
Bandwidth | 10-100 Mb/s        | 4800 bps-19.2 Kb/s
Display   | 1280x1024x16       | 320x240x2
CPU       | 266 MHz Pentium    | 20-40 MHz ARM
Memory    | 10’s of MB         | 2-4 MB
Ad hoc, per-client services should be unified
Email-to-pager gateways
Fax and faxback gateways
“Traditional” services (email, web, …)
Application-Level Proxying
[Figure: proxy interposed between clients and existing servers]
Adaptation performed on the fly by the proxy
Compression, filtering, device-specific transformations…
Servers don’t change
Clients don’t change
Content Adaptation
Tailor content for each user, device, network
TranSend, an early example (details later)
[Figure: TranSend adapting a sample page (a paper excerpt, “1.2 The Remote Queue Model: We introduce Remote Queues (RQ), ….”), with compression factors of 6.8x, 65x, and 10x]
Aggressive Content Adaptation
Get content from origin Internet servers
Separate by datatype
Perform device-specific transformations
Compression is a nice side effect...
[Figure: a sample page (Armando Fox, PhD Candidate, UC Berkeley Computer Science Division; Advisor: Eric Brewer) flowing through the pipeline]
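The fetch/separate/transform pipeline above can be sketched in a few lines. This is an illustrative sketch only, not TranSend’s implementation; the client-profile fields (`supports_deflate`, `screen`) and the specific transforms are assumptions made for the example.

```python
import zlib

def transform_text(body: bytes, profile: dict) -> bytes:
    # Lossless compression for text: the "nice side effect" of adaptation.
    return zlib.compress(body) if profile.get("supports_deflate") else body

def transform_image(body: bytes, profile: dict) -> bytes:
    # Stand-in for lossy distillation: a real proxy would re-encode the
    # pixels for the device's screen; here we only tag the intent.
    w, h = profile["screen"]
    return b"<scaled to %dx%d>" % (w, h) + body

TRANSFORMS = {
    "text/html": transform_text,
    "image/jpeg": transform_image,
}

def adapt(content_type: str, body: bytes, profile: dict) -> bytes:
    # Separate by datatype, then perform the device-specific transformation.
    # Unknown types pass through untouched, so servers and clients need not change.
    handler = TRANSFORMS.get(content_type)
    return handler(body, profile) if handler else body
```

The dispatch table is the point: adding support for a new datatype or device means registering one more transform at the proxy, with no change to origin servers or clients.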
Network Adaptation
Transform content and repackage for a different
network
[Figure: proxy repackaging content from existing servers for a different network]
What’s a Proxy-Based Application?
Intelligent level of indirection
mitigation of variation
offloading complexity from clients and servers
support for legacy servers and weird clients & networks
Generalization: intelligence in the infrastructure
Client → Proxy → Server
Why Proxied Services are Good
[Figure: proxy in the public Internet]
Aggressive client targeting
Network adaptation, even on the fly
(Legacy) servers don’t change and don’t absorb additional
computational load
Economy of scale if many users can share computing
resource
Some Things You Can Build
Groupware
Group Annotation for the Web
Collaborative filtering over content from third-party servers
Teach old clients new tricks
Top Gun Wingman: rich Web browsing on your PDA
Top Gun MediaBoard: collaborative whiteboard extended to
PDA
Generalize application partitioning
Tweak your Internet experience
Filtering (SurfWatch)
Translation (Babelfish)
Acceleration (TranSend)
Why Are Proxies Bad?
Break end-to-end foo for various foo
Security (a big one)
Reliability
Server’s control over content presentation
Most proxied tasks can be accommodated by evolving end-to-end networking
HTTP content negotiation
Progressive encodings
Proxies are an interim hack (not fundamentally useful long
term) and violate the end-to-end argument.
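To make the “end-to-end alternative” concrete: HTTP content negotiation lets the server pick a representation from the client’s `Accept` header, with no proxy in the path. A minimal sketch of q-value negotiation (simplified: no wildcards or media-type parameters other than `q`):

```python
def negotiate(accept_header, available):
    # Parse "type;q=0.8, type2, ..." and return the highest-q type the
    # server can actually serve, or None if nothing matches.
    prefs = []
    for item in accept_header.split(","):
        parts = item.strip().split(";")
        mtype = parts[0].strip()
        q = 1.0  # per HTTP, a missing q-value means q=1
        for p in parts[1:]:
            p = p.strip()
            if p.startswith("q="):
                q = float(p[2:])
        prefs.append((q, mtype))
    for q, mtype in sorted(prefs, reverse=True):
        if mtype in available:
            return mtype
    return None
```

The limitation the slides point at is visible here: negotiation only chooses among representations the origin server already produces, whereas a proxy can synthesize a new one per device.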
Hypotheses
Proxies are consistent with the end-to-end argument.
Proxies have long-term fundamental value as well as
immediate practical value in providing networking
services to mobile hosts.
End-to-End Arguments and Users
Users care about application behavior. To them, the
application is the “network service.”
Network performance?
Protocol reduction/compression?
Encryption/lossless compression?
Lossy compression?
Semantic filtering?
Client device adaptation?
Yes…all of the above
End-to-End Arguments and Proxies
The end-to-end argument is about placement of
functionality.
semantic filtering/compression, content reordering,
extraction and presentation of structure from
content...Where to put them?
At the server?
Need “critical mass” of users/devices/etc. to make economic
sense
Doesn’t scale as diversity of clients (& user desires for
customization of presentation) broadens
At the client?
You try it.
Long Term Value: When Will t = ∞?
“Legacy” systems accumulating faster than ever
Pentium 100 with 14.4 modem is already legacy
“Legacy” can also mean “no administrative control” (servers)
Obvious example: SQL-to-Web gateways
Client & server diversity increasing faster than ever
Convergent (smart phone) devices, graphical pagers, PDA’s
…even as WWW/Internet “standards” are still evolving
Who serves the emerging client and server types?
Conclusion: Until t reaches ∞, I’ll stay in the proxy camp.
Transient vs. Fundamental Benefits
Useful now
Client adaptation (vs. targeting specific high-volume clients)
Protocol conversion (vs. IP everywhere; e.g. WAP)
Generic content adaptation (vs. HTTP/HTML(?) content negotiation)
Replication/caching (vs. multicast content dissemination)
Note: Nontrivial practical obstacles to achieving these goals...
Fundamentally useful
Client adaptation (clients proliferating faster than content or
formatting standards can keep up)
Protocol conversion (“read me my email”)
Application-specific content adaptation
Leveraging existing content and services rapidly
Proxies Considered Harmful?
Proxies break end-to-end security
Does every byte need to be end-to-end secure? If not, who
gets to decide which ones?
Proxies enhance anonymity (remailers/rewebbers)
Server loses control over content presentation
What about content hints? (<LINK> already used this way)
ThirdVoice and similar annotation services
Flip side: proxy reduces burden on server
Proxies introduce a new element into the critical path
(reduce reliability/availability)
Proxies can be engineered for high availability (in many
cases, more available than overwhelmed servers)
Proxies Break End-to-End Security
Who do you trust? And why?
A public infrastructure proxy?
A merchant who takes your credit card number?
VeriSign?
Do you need authenticity? Integrity? Privacy?
Transformations that preserve invariants, which can be checked
using secure hashes
Not every byte needs to be encrypted
Proxies enhance some aspects of security
(anonymity/untraceability, as in remailers/rewebbers)
A cheesy analogy: Web caches and hit counting
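The invariant-checking idea above can be sketched concretely: authenticate a *property* of the content rather than its raw bytes, so a proxy may rewrite the bytes as long as the property survives. Everything here is illustrative, not a real protocol: the shared key, the MAC scheme, and the crude “visible text” invariant are all assumptions.

```python
import hashlib, hmac

SECRET = b"shared-key"  # illustrative; a real system would use public-key signatures

def text_invariant(html: bytes) -> bytes:
    # The invariant: the visible text, crudely stripped of markup.
    out, in_tag = [], False
    for ch in html.decode():
        if ch == "<":
            in_tag = True
        elif ch == ">":
            in_tag = False
        elif not in_tag:
            out.append(ch)
    return "".join(out).encode()

def tag(html: bytes) -> bytes:
    # Server attaches a MAC over the invariant, not over the raw bytes.
    return hmac.new(SECRET, text_invariant(html), hashlib.sha256).digest()

def verify(transformed: bytes, mac: bytes) -> bool:
    # Client checks that the proxy's transformation preserved the invariant.
    return hmac.compare_digest(tag(transformed), mac)
```

A proxy can then reformat markup for a small screen while the client still gets a cryptographic check that the words were not altered.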
Proxies Break Reliability Semantics
Do end-to-end reliability semantics belong in TCP?
At the network level…yes (“the bytes arrived/send more”)
At the app level…no (“the transaction committed”)
Already a problem for e-commerce and Web database front
ends…being solved end-to-end
Impedance matching may improve TCP performance
Link 1 is 9600 bps. Link 2 is >100 Mb/s. What does TCP do?
Just move the buffering to the application level…
Peter Danzig’s (NetApp) excellent talk on Web caching
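Moving the buffering to the application level means the proxy terminates the fast connection, accepts data at full rate, and feeds the slow link at its own pace, so the fast side’s TCP sender never stalls on the 9600 bps client. A minimal sketch of that split-connection buffer (link speeds and MTU are simulated, not real sockets):

```python
from collections import deque

class SplitConnectionBuffer:
    def __init__(self):
        self.buf = deque()  # application-level buffer between the two links

    def recv_from_fast_link(self, data: bytes, mtu: int = 1460) -> None:
        # Accept large segments immediately; the fast side sees full throughput.
        for i in range(0, len(data), mtu):
            self.buf.append(data[i:i + mtu])

    def send_to_slow_link(self, budget: int) -> bytes:
        # Drain only as many bytes as the slow link can carry this interval.
        out = bytearray()
        while self.buf and budget > 0:
            chunk = self.buf.popleft()
            take, rest = chunk[:budget], chunk[budget:]
            out += take
            budget -= len(take)
            if rest:
                self.buf.appendleft(rest)  # put the unsent remainder back
        return bytes(out)
```

Each TCP connection is then tuned to one link’s characteristics instead of straddling both, which is the impedance-matching win.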
Proxies Break Content Presentation
Describing structure and presentation separately
Content hinting (“Don’t shrink this image”)
Recent W3C proposal from IBM?
XML-based structure-describing languages, e.g. WIDL
Cascading Style Sheets
Some content will break
Is that better or worse than none at all?
Ad-insertion services on the horizon already do this
Proxies Reduce Reliability/Availability
Proxied services can be engineered for high
availability
Inktomi Web caches
TranSend prototype at UCB
Proxies may increase overall availability
E.g. caching offloads traffic from overwhelmed servers
Automatic replication/mirroring
Aggressive compression reduces backhaul traffic and
congestion
Infrastructure proxies may have better availability than
servers
Moral: an end-to-end problem, after all
A RISC Approach to Internet Services
Keep networks simple
Active Networks? Well, sort of…
Keep TCP/IP and other network-level protocols simple and
fast, to improve wide-area routing, congestion behavior, etc.
Keep the servers and clients “simple”
Don’t burden servers to deal with network and client
variation, mirroring, etc.
Don’t burden clients with matching “least common
denominator”
and the related reliability & administration burdens
Proxies can be the “middle tier”
Content tier; client tier; intelligent delivery tier
Active Networks
Generalization of proxies
Observation: we’re all putting computing into the network
anyway, so let’s solve the general problem
Original proposal: MIT LCS and others
Every packet can carry code
Every router can (potentially) execute such code
Packets can leave state around at routers, for subsequent
packets to pick up and use (variables, etc.)
Possible uses
What application proxies do today
Deploying new network protocols
Compression, encryption, etc.
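The three properties above (packets carry code, routers execute it, state is left behind for later packets) can be shown in a toy sketch. The handler name, packet fields, and state API are invented for illustration; real active-network proposals differ in how code is named, shipped, and sandboxed.

```python
def count_handler(packet, state):
    # Leave a per-flow counter at the router for subsequent packets to use.
    key = ("count", packet["flow"])
    state[key] = state.get(key, 0) + 1
    packet["hops"] = packet.get("hops", 0) + 1
    return packet

# The router's "sandbox": only pre-registered handlers may run,
# standing in for language-level restrictions on mobile code.
HANDLERS = {"count": count_handler}

class ActiveRouter:
    def __init__(self):
        self.state = {}  # soft state deposited by packets

    def forward(self, packet):
        handler = HANDLERS.get(packet["code"])
        if handler is None:
            return packet  # unknown code: plain IP-style forwarding
        return handler(packet, self.state)
```

Even this toy exposes the problems listed on the next slide: the router must bound and expire `state`, decide which code is safe to run, and keep the handler off the per-packet critical path.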
Problems?
What are the problems with Active Networks?
State management
Security
Critical paths
But the idea seems good...How might we start
solving them?
Restrict state management to well-defined subset of nodes
Use sandboxing and language techniques to restrict what
code can do (e.g. Java)
Put only the simplest, IP-level functionality in routers; the
rest is at application level
Observations
Proxies are likely here to stay
Will have a lot to say about building them later on
Lots of examples in practice today
End-to-end networking is one extreme; active
networks is the other extreme
Are there other interesting points in between?
If so, how might we build those?
A Couple of Different Approaches
SNS/TACC (1997)
Restricted application model: Web-like/proxy-like
applications
No type system, composition is Unix-pipeline-style
Focus was on separation of *ility from application logic
Ninja (1999)
Abstractions for scalable persistent state, super-high
availability, restricted failure modes, etc. provided in bases
Everything else in active routers
Service modules compose if their type interfaces match
Both have a number of interesting deployed apps
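The contrast between the two composition styles can be made concrete. Unix-pipeline-style composition, as in TACC, chains workers purely by position; nothing checks that one stage’s output matches the next stage’s input, so mismatches surface only at runtime. A minimal sketch (the stage functions are illustrative):

```python
from functools import reduce

def compose(*stages):
    # Chain workers left to right, Unix-pipeline style. There is no type
    # system: a stage that emits bytes feeding a stage that expects text
    # fails only when data actually flows through.
    def pipeline(x):
        return reduce(lambda acc, f: f(acc), stages, x)
    return pipeline
```

Ninja’s typed composition would instead refuse to connect two service modules whose interfaces don’t match, catching the mismatch before any request is served.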
Conclusion: Use the right tool...
End-to-end networking will take over some functions
that are proxied today…
but not all of them.
Where Are You?
Answer:
Santiago de Compostela,
Galicia, Spain
What distinction does this
city share with only two
other places in the world?
What are those two other
cities?