BGRP: (Border Gateway Reservation Protocol) A Tree
Xin Wang
Internet Real-Time Laboratory
Columbia University
(Joint work with Henning Schulzrinne, Dilip Kandlur, and Dinesh Verma)
http://www.cs.columbia.edu/~xinwang
Outline
• Introduction to LDAP
• Motivation
• Background
• Experimental Setup
• Test Methodology
• Result Analysis
• Related Work
• Conclusion
What is LDAP?
• Directory Service
– A simplified database, optimized for high-volume, efficient reads; no database mechanisms such as transaction rollback
• LDAP: Lightweight Directory Access Protocol
– A distributed client-server model over TCP/IP
– Can access stand-alone directory servers or X.500
directories
Motivation
• Wide use of LDAP
– personnel databases for administration, tracking
schedules, address translation for IP telephony,
storage of network configuration, etc.
• Performance of LDAP?
– relatively static data, caching to improve performance
– Can LDAP be used in a dynamic environment with
frequent searches?
Background: LDAP Structure
• Tree structure: entry, attributes, values
• Operations: add, delete, modify, compare, and
search.
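The entry/attribute/value model maps directly onto the C client library used throughout this talk. Below is a minimal, hypothetical sketch of the add operation using the legacy OpenLDAP 1.2-era calls; the host, bind DN, entry DN, attribute names, and values are illustrative placeholders, not data from the experiments.

/* Hypothetical sketch: one entry = a DN plus attributes, each with values.
 * Uses the legacy synchronous libldap calls (OpenLDAP 1.2 era).
 * Build: cc add_entry.c -lldap -llber
 */
#define LDAP_DEPRECATED 1          /* expose ldap_open/ldap_add_s on newer libldap */
#include <ldap.h>
#include <stdio.h>

int main(void)
{
    LDAP *ld = ldap_open("localhost", LDAP_PORT);              /* illustrative server */
    if (ld == NULL ||
        ldap_simple_bind_s(ld, "cn=admin,o=example", "secret") != LDAP_SUCCESS)
        return 1;

    /* Each LDAPMod describes one attribute and its list of values. */
    char *oc_vals[] = { "top", "person", NULL };
    char *cn_vals[] = { "Jane Doe", NULL };
    char *sn_vals[] = { "Doe", NULL };

    LDAPMod oc, cn, sn;
    oc.mod_op = LDAP_MOD_ADD; oc.mod_type = "objectClass"; oc.mod_values = oc_vals;
    cn.mod_op = LDAP_MOD_ADD; cn.mod_type = "cn";          cn.mod_values = cn_vals;
    sn.mod_op = LDAP_MOD_ADD; sn.mod_type = "sn";          sn.mod_values = sn_vals;
    LDAPMod *attrs[] = { &oc, &cn, &sn, NULL };

    int rc = ldap_add_s(ld, "cn=Jane Doe,o=example", attrs);   /* add one entry */
    printf("add: %s\n", ldap_err2string(rc));

    ldap_unbind(ld);
    return rc == LDAP_SUCCESS ? 0 : 1;
}

Delete, modify, compare, and search follow the same pattern with the corresponding ldap_*_s calls.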
Background (cont’d.)
• LDAP for SLS Administration
– A better-than-best-effort service (e.g., int-serv, diff-serv) requires a service level specification (SLS) between the network and the customer
– The SLS specifies the type of service, user traffic constraints, expected quality, etc., and may be dynamically negotiated
– LDAP directory contains: SLS, policy rules, network
provisioning information
LDAP Structure for SLS Management
– Management tools are used to populate and maintain
LDAP directory
– Decision entities download classification rules and service specifications, and poll the directory periodically (a poll-loop sketch follows this list)
– Enforcement entities query rules from the decision entities
and enforce them
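As a rough illustration of the decision-entity polling described above, the sketch below re-reads rule entries over a single open connection. The host name, base DN, filter, attribute name, and poll interval are hypothetical placeholders, not the schema or settings used in the experiments.

/* Hypothetical sketch: a decision entity polling the directory for rules.
 * Keeping one connection open across polls avoids repeated connect/bind cost.
 */
#define LDAP_DEPRECATED 1
#include <ldap.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    LDAP *ld = ldap_open("policy-server.example.com", LDAP_PORT);   /* illustrative */
    if (ld == NULL || ldap_simple_bind_s(ld, NULL, NULL) != LDAP_SUCCESS)   /* anonymous bind */
        return 1;

    char *attrs[] = { "policyRule", NULL };           /* hypothetical attribute name */
    for (;;) {
        LDAPMessage *res = NULL;
        if (ldap_search_s(ld, "ou=policies,o=example", LDAP_SCOPE_SUBTREE,
                          "(objectClass=*)", attrs, 0, &res) == LDAP_SUCCESS)
            printf("downloaded %d rule entries\n", ldap_count_entries(ld, res));
        if (res != NULL)
            ldap_msgfree(res);
        sleep(60);                                    /* illustrative poll interval */
    }
    /* not reached; a real entity would ldap_unbind(ld) on shutdown */
}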
LDAP Tree Structure in the Experiments
Experimental Setup
• Hardware:
– Server: dual-processor Ultra-2, 200 MHz CPUs, 256 MB main memory; the server was bound to one of the CPUs
– Clients: Ultra1, 170 MHz CPU, 128 MB main memory
– 10 Mb/s Ethernet
• LDAP server:
– OpenLDAP 1.2, Berkeley DB 2.4.14
– Stand-alone LDAP daemon (slapd): front end handling communication with LDAP clients, and backend handling database operations
– LDBM backend: a high-performance disk-based database
– cachesize: size, in entries, of the in-memory entry cache (varied across experiments)
– dbcachesize: size, in bytes, of the in-memory cache associated with each open index file; 10 MB
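For reference, the two tuning knobs above are ordinary slapd.conf directives. The fragment below is a hypothetical sketch of an LDBM database section with the cache sizes named on this slide; the suffix, directory path, and index line are illustrative, not the configuration actually used in the experiments.

# Hypothetical slapd.conf fragment (OpenLDAP 1.2-style LDBM backend)
database    ldbm
suffix      "o=example"           # illustrative naming context
directory   /var/ldap/example     # illustrative on-disk location
cachesize   10000                 # in-memory entry cache, in entries
dbcachesize 10000000              # per-index-file cache, in bytes (10 MB)
index       objectclass eq        # illustrative equality index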
Experimental Setup (cont’d)
• LDAP Client
Test Methodology
• Search is likely to dominate server operations, so the tests mainly measure search performance for downloading policy rules
• Search filter: an interface address; the search returns the corresponding policy object
• Default parameters:
– Directory size: 10,000 entries
– Entry size: 488 bytes
• Search operation steps:
– ldap_open, ldap_bind, ldap_search, ldap_unbind
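The four steps above correspond one-to-one to libldap calls. The following hypothetical sketch performs one such search; the host, base DN, and the interface-address and policy attribute names are placeholders standing in for the experiment's actual schema.

/* Hypothetical sketch of one search operation: open, bind, search, unbind.
 * Attribute names (ifAddress, policyObject) are placeholders, not the real schema.
 */
#define LDAP_DEPRECATED 1
#include <ldap.h>
#include <stdio.h>

int main(void)
{
    LDAP *ld = ldap_open("ldap-server.example.com", LDAP_PORT);   /* 1. ldap_open   */
    if (ld == NULL ||
        ldap_simple_bind_s(ld, NULL, NULL) != LDAP_SUCCESS)        /* 2. ldap_bind   */
        return 1;

    char *attrs[] = { "policyObject", NULL };                      /* value we want  */
    LDAPMessage *res = NULL;
    int rc = ldap_search_s(ld, "o=example", LDAP_SCOPE_SUBTREE,    /* 3. ldap_search */
                           "(ifAddress=10.0.0.1)", attrs, 0, &res);
    if (rc == LDAP_SUCCESS) {
        LDAPMessage *e = ldap_first_entry(ld, res);
        if (e != NULL) {
            char **vals = ldap_get_values(ld, e, "policyObject");
            if (vals != NULL) {
                printf("policy: %s\n", vals[0]);
                ldap_value_free(vals);
            }
        }
    }
    if (res != NULL)
        ldap_msgfree(res);

    ldap_unbind(ld);                                               /* 4. ldap_unbind */
    return rc == LDAP_SUCCESS ? 0 : 1;
}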
Search Sequences
Performance Measures and Objectives
• Latencies:
– Connect: ldap_open + ldap_bind
– Processing: ldap_search + result transmission
– Response: ldap_open -> ldap_unbind (≈ connect + processing; see the timing sketch below)
• Server throughput: requests served per second
• Objectives: use latencies and throughput to evaluate
– Overall LDAP performance
– Effect of individual system components on performance
– System scalability and performance limits
– Performance under update load
– Measures to improve system performance
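To make the three latency definitions concrete, here is a hedged sketch of how a client could timestamp the call sequence and derive connect, processing, and response time. The timing points follow the definitions above; the server name and filter are placeholders.

/* Sketch: decomposing one request's latency into connect, processing, response.
 *   connect    ~= (after bind)   - (before open)
 *   processing ~= (after search) - (after bind)
 *   response   ~= (after unbind) - (before open)
 */
#define LDAP_DEPRECATED 1
#include <ldap.h>
#include <stdio.h>
#include <sys/time.h>

static double ms_since(struct timeval a, struct timeval b)    /* b - a, in ms */
{
    return (b.tv_sec - a.tv_sec) * 1000.0 + (b.tv_usec - a.tv_usec) / 1000.0;
}

int main(void)
{
    struct timeval t0, t1, t2, t3;
    LDAPMessage *res = NULL;

    gettimeofday(&t0, NULL);
    LDAP *ld = ldap_open("ldap-server.example.com", LDAP_PORT);   /* placeholder host */
    if (ld == NULL || ldap_simple_bind_s(ld, NULL, NULL) != LDAP_SUCCESS)
        return 1;
    gettimeofday(&t1, NULL);                                      /* connect done */

    ldap_search_s(ld, "o=example", LDAP_SCOPE_SUBTREE,
                  "(ifAddress=10.0.0.1)", NULL, 0, &res);         /* placeholder filter */
    gettimeofday(&t2, NULL);                                      /* results received */
    if (res != NULL)
        ldap_msgfree(res);

    ldap_unbind(ld);
    gettimeofday(&t3, NULL);

    printf("connect %.1f ms, processing %.1f ms, response %.1f ms\n",
           ms_since(t0, t1), ms_since(t1, t2), ms_since(t0, t3));
    return 0;
}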
Overall Performance
(Figures: average connection time, processing time, and response time; average server throughput)
Components of LDAP Search Latency
(Figures: client-side and server-side components of search latency)
Components of LDAP Connect Latency
Effect of Nagle Algorithm
(Figures: average server connection, processing, and response time; average server throughput)
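The Nagle algorithm delays small TCP segments, which can add noticeable latency to short LDAP responses (the conclusion puts the effect at roughly 50 ms here). The sketch below shows the standard, generic way a server can disable it on an accepted socket with TCP_NODELAY; it is not slapd's actual code.

/* Generic sketch: disabling the Nagle algorithm on an accepted TCP socket.
 * (Not slapd source; just the standard setsockopt() call involved.)
 */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <stdio.h>

/* Call on each accepted connection's file descriptor. */
int disable_nagle(int connfd)
{
    int one = 1;
    if (setsockopt(connfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0) {
        perror("setsockopt(TCP_NODELAY)");
        return -1;
    }
    return 0;   /* small responses are now sent without Nagle's coalescing delay */
}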
Effect of Caching Entries
(Figures: average connection, processing, and response time with a 10,000-entry cache and with no cache; average server throughput)
Single vs. Dual Processor
(Figures: average server connection, processing, and response time; average server throughput)
Single vs. Dual Processor (cont’d)
(Figure: read and write throughput)
Scaling of Directory Size
(a) 10,000 entries in DB, 10,000 in cache; (b) 50,000 in DB, 50,000 in cache; (c) 100,000 in DB, 50,000 in cache
(Figures: average connection and processing time; average server throughput)
Scaling of Directory Entry Size (in-memory)
(488 bytes vs. 4880 bytes)
(Figures: average connection and processing time; average server throughput)
Scaling of Directory Entry Size (out-of-memory)
(488 bytes vs. 4880 bytes)
(Figures: average server connection and processing time; average server throughput)
Connection Reuse
(no reuse, 25% reuse, 50% reuse, 75% reuse, 100% reuse)
(Figures: average server processing time; average server throughput)
Latency and Throughput for Search and Add
(Figures: average server connect, processing, and response time; average server throughput)
Related Work
• Mindcraft
– Netscape Directory Server 3.0 (NSD3), Netscape
Directory Server 1.0 (NSD1), Novell LDAP services
(NDS)
– 10,000 entry personnel DB
– Pentium Pro 200 MHz, 512 MB RAM
– All experiments are in memory
– Throughput:
• NSD3: 183 requests/second
• NSD1: 38.4 requests/second
• NDS: 0.8 requests/second
• CPU is found to be the bottleneck
Conclusion
• General Results:
– Response latency of about 8 ms at loads up to 105 requests/second
– Maximum throughput of 140 requests/second
– 5 ms processing latency: 36% from the backend, 64% from the front end
– Connect time dominates at high load and limits the throughput
• Disabling the Nagle algorithm reduces latency by about 50 ms
• Entry caching:
– For a 10,000-entry directory, caching all entries gives a 40% improvement in processing time and a 25% improvement in throughput
Conclusion (cont’d)
• Scaling with Directory Size: determined by back-end processing
– In-memory operation, 10,000 -> 50,000 entries: processing time increases by 60%, throughput drops by 21%
– Out-of-memory, 50,000 -> 100,000 entries: processing time increases by another 87%, and throughput drops by a further 23%
• Scaling with Entry Size (488 -> 4880 bytes):
– In-memory: mainly an increase in front-end processing, i.e., time for ASN.1 encoding; processing time increases by 8 ms (88% of it due to ASN.1 encoding), and throughput drops by 30%
– Out-of-memory: throughput drops by 70%, mainly due to increased data transfer time
Conclusion (cont’d)
• CPU:
– During in-memory operation, dual processors improve
performance by 40%.
• Connection Re-use:
– 60% performance gain when the connection is left open