Slides from review session
COMS 414 - Prelim 2
Review Session
Yejin Choi
[email protected]
Daniel Williams
[email protected]
< DNS >
DNS == Domain Name System
Why do we need domain names?
– Domain names are easier to remember than IP addresses
– IP addresses can change dynamically
– IP addresses may not be unique
Why do we need DNS?
– Mapping between domain names and IP addresses
< DNS >
By maintaining a distributed host table
– Scalability !!!
How do changes to the [domain name – IP address] mapping get propagated?
– Caching
– TTL (time to live)
< DNS >
Name resolution commands
– nslookup [ipaddress | sitename]
– ping -a
Query scheme is simple
– Query( domain name, RR type )
– Answer( values, additional RRs )
– RR == Resource record
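As a rough illustration of this query/answer exchange, a stub-resolver call such as getaddrinfo() issues the query for A records and walks the answers that come back. A minimal sketch in C; the hostname www.cornell.edu is just an illustrative choice, not something from the slides.

/* Minimal sketch: resolve a hostname to A records with getaddrinfo().
 * The hostname "www.cornell.edu" is only an example. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;        /* ask for IPv4 (A) records */
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo("www.cornell.edu", NULL, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        char buf[INET_ADDRSTRLEN];
        struct sockaddr_in *sin = (struct sockaddr_in *)p->ai_addr;
        inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
        printf("%s\n", buf);          /* one printed address per answer record */
    }
    freeaddrinfo(res);
    return 0;
}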
DNS tree structure
NS RR “pointers”
[Figure: DNS tree — root (.), with TLDs edu., com., jp., us.; under edu.: cornell.edu., cmu.edu., mit.edu.; under cornell.edu.: cs.cornell.edu., eng.cornell.edu.; cs.cornell.edu. holds A records for foo.cs.cornell.edu and bar.cs.cornell.edu, e.g. 10.1.1.1]
< CDN >
CDN == Content Distribution Networks
Replication of web servers
CDN vs. centralized server
– Lower latency, better performance
– More robust service availability
[Figure: Content Distribution Network — hosting centers connected through backbone ISPs and internet exchanges (IX) down to site/access ISPs, with replica servers (S) placed at the edge sites]
< CDN >
Cached CDN – cache contents on cache miss
Pushed CDN – push contents up-front
Issues
– Difficulty with dynamic contents
– Cache performance vs. content synchronization
What if lots of clients try to access the same CS?
[Figure: the same CDN topology, with content/cache servers (CS) placed at the hosting centers and alongside the backbone, IX, and site ISPs, and many clients (C) all directed at the same CS]
DNS & CDN together…
DNS load balancer
Picks a server that is least loaded and closest to the client
DNS answers with a small TTL
– 30 seconds to one minute, for fine-grained load decisions
– Quickly offload a busy or even crashed content server
< UDP >
Unreliable / Out-of-order message delivery.
Connection-less.
Datagram based
– Messages larger than the MTU may be fragmented or dropped
– MTU == Maximum Transmission Unit
– Default ~1460 bytes with Cisco routers
No flow control
No congestion control
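A minimal UDP sender sketch in C: no connection setup, one datagram per sendto(), and nothing below the application guarantees delivery or ordering. The destination 127.0.0.1:9999 is an arbitrary assumption.

/* Minimal UDP sender sketch: connectionless, one datagram per sendto().
 * No handshake, no ACKs -- delivery and ordering are not guaranteed.
 * Address 127.0.0.1 and port 9999 are arbitrary examples. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);   /* datagram socket */
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    const char *msg = "hello";
    /* A datagram larger than the path MTU may be fragmented or dropped. */
    sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&dst, sizeof(dst));
    close(s);
    return 0;
}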
< TCP >
Reliable / In-order message delivery.
Connection-oriented.
Stream based
- thus no restriction on transmission size
Flow control
Congestion control
TCP connection establishment
Three-way handshake
– 1. SYN (client → server): SeqNum=x
– 2. SYN+ACK (server → client): SeqNum=y, Ack=x+1
– 3. ACK (client → server): Ack=y+1
Connection established only after all three steps; if not, time-out.
[Figure: client (active) and server (passive) exchanging the three handshake segments]
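From the application's point of view the handshake is hidden inside connect(). A minimal client sketch in C; the address 127.0.0.1:8080 is an arbitrary example.

/* Minimal TCP client sketch: connect() performs the three-way handshake
 * (SYN, SYN+ACK, ACK) before any data is exchanged.
 * Address 127.0.0.1 and port 8080 are arbitrary examples. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0);  /* stream socket */
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(8080);
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    /* Blocks until the handshake completes or times out. */
    if (connect(s, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");
        close(s);
        return 1;
    }
    write(s, "hello", 5);   /* byte stream: no message boundaries */
    close(s);
    return 0;
}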
TCP-SYN Attack
Classic DoS (Denial of Service) attack
Attack by creating myriads of half-established connections
TCP Sliding Window
This is how the TCP properties below come to life
– Reliable delivery
– In-order delivery
– Any size message (stream based)
– Flow control
The sliding window can’t slide if messages in the window didn’t get through.
TCP Sliding Window
Window size is advertised via ACKs
Small sliding window
– Low performance due to delay waiting on ACKs
– Bad on networks with large RTT (Round Trip Time)
Large sliding window
– Data is sent in bulk, ACKs awaited in bulk
– Bad under network congestion, as bulk transfers make things worse
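One way to quantify the small-window problem: at most one window of data can be in flight per round trip. The worked numbers (64 KB window, 100 ms RTT) are illustrative assumptions, not from the slides:

\text{Throughput} \le \frac{\text{WindowSize}}{\text{RTT}}, \qquad \text{e.g. } \frac{64\,\text{KB}}{100\,\text{ms}} = 640\,\text{KB/s} \approx 5\,\text{Mbit/s}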
TCP Congestion Control
Interpret dropped packets as congestion
Maintain congestion window size
Additive Increase/Multiplicative Decrease
[Figure: TCP sawtooth pattern — congestion window (KB) vs. time (seconds)]
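A toy AIMD sketch in C that reproduces the sawtooth: grow the window by one segment per RTT, halve it on a loss. The loss threshold of 16 segments and the 40-RTT run length are arbitrary assumptions chosen only to make the pattern visible.

/* Toy AIMD sketch: additive increase of one segment per RTT,
 * multiplicative decrease (halving) on a simulated loss. */
#include <stdio.h>

int main(void) {
    double cwnd = 1.0;                 /* congestion window, in segments */
    for (int rtt = 0; rtt < 40; rtt++) {
        int loss = (cwnd >= 16.0);     /* pretend a loss occurs at 16 segments */
        if (loss)
            cwnd = cwnd / 2.0;         /* multiplicative decrease */
        else
            cwnd = cwnd + 1.0;         /* additive increase */
        printf("rtt %2d  cwnd %.1f\n", rtt, cwnd);
    }
    return 0;
}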
Wireless environment
Issues
– High RTT (Round Trip Time)
– Message loss pattern differs from wired networks
What do ‘dropped packets’ indicate?
– TCP assumes congestion
– But it could be just a lossy medium
How will UDP/TCP behave on wireless?
VPN == Virtual Private Network
A remote client can communicate with the company network securely over the public network as if it resided on the internal LAN
NAT == Network Address Translation
Allows an IP-based network to manage its public (Internet) addresses separately from its private (intranet) addresses
A popular technology for DSL or cable LANs
Network Failure
Packet drop or packet delay
System Crash / halt
Byzantine failure
– Some systems behave incorrectly or unexpectedly
– Could be a malicious attacker
Network Partition
– Also known as “Split Brain Syndrome”
– Some nodes in a cluster can no longer communicate with each other
IP Multicast
Reduces overhead for sender
Reduces bandwidth consumption in network
Useful in a small subnet
– e.g., a virtual meeting broadcast within a corporate network
Multicast over the internet?
– Mbone (buried in history…)
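A minimal receiver sketch in C showing how a host opts into a multicast group with IP_ADD_MEMBERSHIP; the group address 239.0.0.1 and port 5000 are arbitrary examples.

/* Minimal sketch: join an IP multicast group and receive one datagram.
 * Group 239.0.0.1 and port 5000 are arbitrary examples. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); return 1;
    }

    /* Ask the kernel (and local routers, via IGMP) for this group's traffic. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        perror("setsockopt"); return 1;
    }

    char buf[1500];
    ssize_t n = recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);
    printf("received %zd bytes\n", n);
    return 0;
}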
< Virtual Memory Overview >
[Figure: CPU issues virtual addresses (0..N-1); the page table maps them to physical addresses (0..P-1) in memory, with some pages held on disk]
Address Translation: hardware converts virtual addresses to physical addresses via an OS-managed lookup table (the page table)
Virtual Memory yet another picture..
[Figure: memory-resident page table — each entry, indexed by virtual page number, has a valid bit and a physical page number or disk address; valid entries point into physical memory, invalid ones into disk storage (a swap file or regular file system file)]
Multi-Level Page Tables
– Level 1 table: 1024 entries, each of which points to a Level 2 page table
– Level 2 table: 1024 entries, each of which points to a page
[Figure: Level 1 table with entries pointing to many Level 2 tables]
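A small C sketch of how a 32-bit virtual address would be split for this layout: 10 bits for the Level 1 index and 10 bits for the Level 2 index (1024 entries each), plus a 12-bit page offset. The 4 KB page size is a common assumption, not stated on the slide.

/* Sketch: splitting a 32-bit virtual address for a two-level page table
 * with 1024-entry tables and 4 KB pages (10 + 10 + 12 bits). */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t va = 0x00403ABC;                  /* example virtual address */

    uint32_t l1_index = (va >> 22) & 0x3FF;    /* bits 31..22: Level 1 index */
    uint32_t l2_index = (va >> 12) & 0x3FF;    /* bits 21..12: Level 2 index */
    uint32_t offset   =  va        & 0xFFF;    /* bits 11..0 : page offset   */

    printf("L1 index = %u, L2 index = %u, offset = 0x%03X\n",
           (unsigned)l1_index, (unsigned)l2_index, (unsigned)offset);
    /* Translation: Level 1 entry -> Level 2 table; Level 2 entry -> physical
     * frame; physical address = frame base + offset. */
    return 0;
}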
Page Faults
PTE == Page Table Entry
– Each entry is (pointer to physical address, flags)
If a process tries to access a page that is not in memory:
– Page fault interrupt
– OS exception handler: the “page-fault trap”
Paging and swapping
[Figure: before the fault, the faulting virtual address maps to a page that exists only on disk; after the fault, the page has been brought into physical memory and the page table entry updated]
< Page replacement schemes >
FIFO – first in first out
OPT – (or MIN) optimal page replacement
LRU – least recently used
LRU approximation
– Mimics LRU when there is no hardware support for LRU
– Reference bits
– Additional-reference-bits algorithm
– Second-chance algorithm
LFU – least frequently used
MFU – most frequently used
FIFO and Belady's Anomaly
For some page replacement algorithms, the page fault rate may increase as the number of allocated frames increases.
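A small C simulation makes the anomaly concrete: on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5, FIFO takes 9 faults with 3 frames but 10 faults with 4 frames.

/* Sketch: FIFO page replacement on the classic reference string that
 * exhibits Belady's anomaly (9 faults with 3 frames, 10 with 4 frames). */
#include <stdio.h>

static int fifo_faults(const int *refs, int n, int nframes) {
    int frames[16], used = 0, next = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (used < nframes)
                frames[used++] = refs[i];           /* free frame available */
            else {
                frames[next] = refs[i];             /* evict oldest page */
                next = (next + 1) % nframes;
            }
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof(refs) / sizeof(refs[0]);
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
    return 0;
}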
OPT (or MIN)
Assumes knowledge of future requests
– Replace the page that will not be used for the longest period of time
Doesn’t show Belady’s anomaly
But practically too difficult to implement !
LRU
Assumes pages used recently will be used again
– Throw away the page not used for the longest time
A popular policy in practice
Doesn’t show Belady’s anomaly
Implementation options
– Counters
– Stack
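A sketch of the stack option in C: keep the resident pages ordered by recency, move a page to the top on every reference, and evict from the bottom. This is a user-level simulation, not kernel code; the reference string is reused from the Belady example above.

/* Sketch of stack-style LRU: the most recently used page sits at the top,
 * the least recently used page is evicted. Illustrative simulation only. */
#include <stdio.h>

#define NFRAMES 3

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof(refs) / sizeof(refs[0]);
    int stack[NFRAMES], used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int pos = -1;
        for (int j = 0; j < used; j++)
            if (stack[j] == refs[i]) { pos = j; break; }
        if (pos < 0) {                       /* page fault */
            faults++;
            if (used < NFRAMES) used++;
            pos = used - 1;                  /* bottom slot: the LRU victim */
        }
        /* Shift everything above 'pos' down one, then put the page on top. */
        for (int j = pos; j > 0; j--)
            stack[j] = stack[j - 1];
        stack[0] = refs[i];
    }
    printf("LRU with %d frames: %d faults\n", NFRAMES, faults);
    return 0;
}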
Second-chance
LRU Approximation
Reference bits + FIFO
– If the bit is set, the page is granted a second chance
– If a page is used often enough, it will never be replaced
Implemented with a “circular queue”
Bad if all bits are set – degenerates to FIFO
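A simplified C simulation of the circular-queue (clock) form: on a fault, the hand sweeps the frames, clearing set reference bits (the second chance) and replacing the first page whose bit is already clear. The reference string and frame count are arbitrary.

/* Sketch of the second-chance (clock) algorithm: on replacement, sweep a
 * circular queue; a set reference bit is cleared and the page is skipped,
 * and the first page found with a clear bit is replaced. */
#include <stdio.h>

#define NFRAMES 4

static int frame[NFRAMES];    /* page number in each frame, -1 if empty */
static int refbit[NFRAMES];   /* hardware-set reference bit (simulated) */
static int hand = 0;          /* clock hand */

static void reference(int page) {
    for (int i = 0; i < NFRAMES; i++)
        if (frame[i] == page) { refbit[i] = 1; return; }   /* hit */

    /* Page fault: advance the hand until a victim with refbit == 0 is found. */
    while (refbit[hand]) {
        refbit[hand] = 0;                 /* give this page a second chance */
        hand = (hand + 1) % NFRAMES;
    }
    printf("fault on %d, replacing frame %d (page %d)\n",
           page, hand, frame[hand]);      /* page -1 means the frame was empty */
    frame[hand] = page;
    refbit[hand] = 1;
    hand = (hand + 1) % NFRAMES;
}

int main(void) {
    for (int i = 0; i < NFRAMES; i++) { frame[i] = -1; refbit[i] = 0; }
    int refs[] = {1, 2, 3, 4, 1, 5, 2, 6};
    for (int i = 0; i < (int)(sizeof(refs) / sizeof(refs[0])); i++)
        reference(refs[i]);
    return 0;
}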
LFU
Assumes pages used actively will be used again.
What about a page used heavily only in the beginning?
– Aging: shift the count right by 1 at regular intervals
Virtual Memory – Programmer’s view
Large “flat” address space
– Can allocate large blocks of contiguous addresses
Process “owns” the machine
– Has a private address space
– Unaffected by the behavior of other processes
Virtual Memory – System’s view
Virtual address space created by page mapping
– Address space need not be contiguous
– Allocated dynamically
– Enforce protection during address translation
Multi-processing for performance
– Switch to other processes when servicing disk I/O for a page fault
Levels in Memory Hierarchy
              Register     Cache         Memory       Disk
size:         32 B         32 KB-4MB     128 MB       20 GB
speed:        1 ns         2 ns          50 ns        8 ms
$/Mbyte:                   $100/MB       $1.00/MB     $0.006/MB
line size:    8 B          32 B          4 KB
(larger, slower, and cheaper moving from registers toward disk; the cache sits between the CPU registers and memory, and virtual memory spans the memory–disk boundary)
Virtual Memory + Cache
[Figure: CPU issues a VA; address translation produces a PA, which is looked up in the cache; on a hit, data is returned, and on a miss the access goes to main memory]
Problem?
Performs address translation before each cache lookup
– Which may itself involve a memory access (to fetch the PTE)
– We could cache page table entries…
Virtual Memory + Cache + TLB
[Figure: CPU issues a VA; on a TLB hit the PA comes straight from the TLB lookup, on a miss the full translation runs; the PA then goes to the cache and, on a cache miss, to main memory]
Speeds up address translation
< How to Prepare Prelim >
Make sure to review homework problem sets.
Practice writing synchronization code on your own !!
Sleep well and have your brain ready to think !
http://www.cs.cornell.edu/Courses/cs414/2003fa/
http://www.cs.cornell.edu/Courses/cs414/2002fa/