Hash-Based Indexes - University of Houston

B+-Trees and Hashing
Techniques for Storage and Index Structures
Covers Chapters 8, 10, 11 (Third Edition)
Last updated: January 30, 2003
B+-Trees and Hashing, R. Ramakrishnan and J. Gehrke; extended and significantly revised by Ch. Eick
Alternative File Organizations
Many alternatives exist, each ideal for some situations and not so good in others:
• Heap files: Suitable when typical access is a file scan retrieving all records.
• Sorted Files: Best if records must be retrieved in some order, or only a `range' of records is needed.
• Hashed Files: Good for equality selections.
  • File is a collection of buckets. Bucket = primary page plus zero or more overflow pages.
  • Hashing function h: h(r) = bucket in which record r belongs. h looks at only some of the fields of r, called the search fields.
Index Classification
• Primary vs. secondary: If the search key contains the primary key, then it is called a primary index.
  • Unique index: Search key contains a candidate key.
• Clustered vs. unclustered: If the order of data records is the same as, or `close to', the order of data entries, then it is called a clustered index.
  • Alternative 1 implies clustered, but not vice-versa.
  • A file can be clustered on at most one search key.
  • Cost of retrieving data records through an index varies greatly based on whether the index is clustered or not!
Clustered vs. Unclustered Index
• Suppose that Alternative (2) is used for data entries, and that the data records are stored in a Heap file.
  • To build a clustered index, first sort the Heap file (with some free space on each page for future inserts).
  • Overflow pages may be needed for inserts. (Thus, order of data recs is `close to', but not identical to, the sort order.)

[Figure: in a CLUSTERED index, the index entries direct the search to data entries whose order matches the order of the data records; in an UNCLUSTERED index, the data entries of the index file point to data records of the data file in an unrelated order.]
Index Classification (Contd.)
• Dense vs. Sparse: If there is at least one data entry per search key value (in some data record), then dense.
  • Alternative 1 always leads to a dense index.
  • Every sparse index is clustered!
  • Sparse indexes are smaller; however, some useful optimizations are based on dense indexes.

[Figure: a data file with records (Ashby, 25, 3000), (Basu, 33, 4003), (Bristow, 30, 2007), (Cass, 50, 5004), (Daniels, 22, 6003), (Jones, 40, 6003), (Smith, 44, 3000), (Tracy, 44, 5004); a sparse index on Name with entries Ashby, Cass, Smith; and a dense index on Age with entries 22, 25, 30, 33, 40, 44, 44, 50.]
Index Classification (Contd.)
• Composite Search Keys: Search on a combination of fields.
  • Equality query: Every field value is equal to a constant value. E.g., w.r.t. a <sal, age> index: age=20 and sal=75.
  • Range query: Some field value is not a constant. E.g.: age=20; or age=20 and sal > 10.
• Data entries in the index are sorted by search key to support range queries.
  • Lexicographic order, or
  • Spatial order.

[Figure: examples of composite key indexes using lexicographic order. Data records sorted by name: (bob, 12, 10), (cal, 11, 80), (joe, 12, 20), (sue, 13, 75), with fields name, age, sal. Index <age, sal> holds 11,80 / 12,10 / 12,20 / 13,75; index <sal, age> holds 10,12 / 20,12 / 75,13 / 80,11; index <age> holds 11, 12, 12, 13; index <sal> holds 10, 20, 75, 80.]
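A small illustration (not from the slides) of how lexicographic order on a composite <age, sal> key supports such queries; Python tuples compare field by field, so sorting them gives exactly the <age, sal> entry order shown above. The range query uses age=12 since that value occurs in the example data:

    # Data entries of the <age, sal> index, in lexicographic order
    entries = sorted([(12, 10), (11, 80), (12, 20), (13, 75)])
    print(entries)                # [(11, 80), (12, 10), (12, 20), (13, 75)]

    # Range query "age=12 and sal > 10": a contiguous run of entries
    print([e for e in entries if e[0] == 12 and e[1] > 10])   # [(12, 20)]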
Physical Database Design for Relational Databases
1. Select Storage Structures (determine how the particular relation is physically stored)
2. Select Index Structures (to speed up certain queries)
3. Select …
…
to minimize the runtime for a certain workload (e.g., a given set of queries)
Introduction to Indexing Techniques
• As for any index, 3 alternatives for data entries k*:
  1. Data record with key value k
  2. <k, rid of data record with search key value k>
  3. <k, list of rids of data records with search key k>
• Hash-based indexes are best for equality selections. They cannot support range searches.
• B+-trees are best for sorted access and range queries.
Static Hashing
• # primary pages fixed, allocated sequentially, never de-allocated; overflow pages if needed.
• h(k) mod M = bucket to which the data entry with key k belongs. (M = # of buckets)

[Figure: a key is fed through h and taken mod M, selecting one of the primary bucket pages 0 .. M-1; full buckets chain to overflow pages.]
Static Hashing (Contd.)
• Buckets contain data entries.
• Hash fn works on the search key field of record r. Must distribute values over range 0 ... M-1.
  • h(key) = (a * key + b) usually works well.
  • a and b are constants; lots known about how to tune h.
• Long overflow chains can develop and degrade performance.
  • Two approaches:
    • Global overflow area
    • Individual overflow areas for each bucket (assumed in the following; see the sketch below)
  • Extendible and Linear Hashing: dynamic techniques to fix this problem.
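A minimal sketch of the static scheme just described: M fixed primary buckets, per-bucket overflow chains, and a hash function of the form h(key) = a*key + b. The capacities and constants are made up for the demo, not taken from the slides:

    M = 4            # number of primary buckets, fixed at file creation
    PAGE_CAP = 2     # data entries per page (unrealistically small, for the demo)
    A, B = 3, 1      # the constants a and b of h(key) = a*key + b

    # bucket = list of pages; page 0 is the primary page, the rest are overflow
    buckets = [[[]] for _ in range(M)]

    def h(key):
        return (A * key + B) % M          # distributes values over 0 .. M-1

    def insert(key):
        pages = buckets[h(key)]
        if len(pages[-1]) == PAGE_CAP:    # last page full:
            pages.append([])              # chain a new overflow page
        pages[-1].append(key)

    def lookup(key):
        return any(key in page for page in buckets[h(key)])

    for k in [5, 9, 13, 17, 21]:          # all happen to hash to bucket 0 ...
        insert(k)
    print(buckets[0])                     # ... [[5, 9], [13, 17], [21]]
    print(lookup(13), lookup(6))          # True False

The three-page chain in bucket 0 is exactly the degradation the slide warns about; the dynamic schemes below avoid it.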
Range Searches
• ``Find all students with gpa > 3.0''
  • If the data is in a sorted file, do a binary search to find the first such student, then scan to find the others.
  • Cost of binary search can be quite high.
• Simple idea: Create an `index' file.

[Figure: an index file with one entry k1, k2, ..., kN per page of the data file (Page 1, Page 2, Page 3, ..., Page N).]

→ Can do binary search on (smaller) index file!
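A small sketch of that idea using Python's standard bisect module; the page keys here are made up for illustration:

    from bisect import bisect_right

    page_low_keys = [0.0, 1.8, 2.6, 3.2]   # k1..kN: lowest gpa stored on each data page

    def first_page_for(gpa):
        """Page where a scan for records with key >= gpa must start."""
        return max(bisect_right(page_low_keys, gpa) - 1, 0)

    print(first_page_for(3.0))   # 2 -- binary search over the N index entries,
                                 # then scan the data file from the third page onward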
B+ Tree: The Most Widely Used Index
• Insert/delete at log_F N cost; keep tree height-balanced. (F = fanout, N = # leaf pages)
• Minimum 50% occupancy (except for root).
• Supports equality and range-searches efficiently.

[Figure: index entries in the upper levels (direct search); data entries in the leaves (the "sequence set").]
Example B+ Tree (order p=5, m=4)
• Search begins at root, and key comparisons direct it to a leaf (as in ISAM).
• Search for 5*, 15*, all data entries >= 24* ...
• p=5 because the tree can have at most 5 pointers in an intermediate node; m=4 because at most 4 entries fit in a leaf node.

[Figure: root with keys 7, 16, 22, 29 over leaves 2* 3* 5* 7* | 14* 16* | 19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*.]

→ Based on the search for 15*, we know it is not in the tree!
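A minimal sketch of that search, under the convention used in these slides (an intermediate node stores the maximum key of each of its first q-1 subtrees, so we descend into the first subtree whose stored key is >= the search key). The Node class is illustrative, not from the slides:

    class Node:
        def __init__(self, keys, children=None, entries=None):
            self.keys = keys          # separator keys (index node)
            self.children = children  # child nodes; None for a leaf
            self.entries = entries    # data entries; None for an index node

    def search(node, k):
        while node.children is not None:   # descend until we reach a leaf
            i = 0
            while i < len(node.keys) and k > node.keys[i]:
                i += 1
            node = node.children[i]
        return k in node.entries           # equality check at the leaf

    # The example tree: root keys 7, 16, 22, 29 over the five leaves
    leaves = [Node([], entries=e) for e in
              [[2, 3, 5, 7], [14, 16], [19, 20, 22], [24, 27, 29], [33, 34, 38, 39]]]
    root = Node([7, 16, 22, 29], children=leaves)
    print(search(root, 5), search(root, 15))   # True False -- 15* is not in the tree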
B+ Trees in Practice
• Typical order: 200. Typical fill-factor: 67%.
  • average fanout = 133
• Typical capacities:
  • Height 4: 133^4 = 312,900,721 records
  • Height 3: 133^3 = 2,352,637 records
• Can often hold top levels in buffer pool:
  • Level 1 = 1 page = 8 KBytes
  • Level 2 = 133 pages = 1 MByte
  • Level 3 = 17,689 pages = 138 MBytes
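The capacity arithmetic is easy to verify directly; this quick check uses only the slide's numbers (fanout 133, 8 KB pages):

    F = 133
    print(F ** 3, F ** 4)                       # 2352637 and 312900721 records
    print([F ** (lvl - 1) for lvl in (1, 2, 3)])   # pages per level: [1, 133, 17689]
    print(17689 * 8 // 1024)                    # 138 -- level 3 is about 138 MBytes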
Inserting a Data Entry into a B+ Tree
• Find correct leaf L.
• Put data entry onto L.
  • If L has enough space, done!
  • Else, must split L (into L and a new node L2):
    • Redistribute entries evenly, copy up the middle key.
    • Insert index entry pointing to L2 into parent of L.
  • This can happen recursively.
    • To split an index node, redistribute entries evenly, but push up the middle key. (Contrast with leaf splits.)
• Splits "grow" the tree; a root split increases the height.
  • Tree growth: gets wider or one level taller at top.
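A minimal sketch of the two split rules, showing the copy-up vs. push-up difference. The parent-key convention (an intermediate node stores the maximum key of each of its first q-1 subtrees) follows the "Clarifications" slide below; everything else is illustrative:

    def split_leaf(entries):
        """Overfull leaf -> (left, right, separator key COPIED up to the parent)."""
        mid = (len(entries) + 1) // 2
        left, right = entries[:mid], entries[mid:]
        return left, right, left[-1]   # the copied-up key still appears in a leaf

    def split_index(keys, children):
        """Overfull index node -> (left, right, middle key PUSHED up to the parent)."""
        mid = len(keys) // 2
        push = keys[mid]               # the pushed-up key leaves this level entirely
        left = (keys[:mid], children[:mid + 1])
        right = (keys[mid + 1:], children[mid + 1:])
        return left, right, push

    # Inserting 4* into the leaf 2* 3* 5* 7* of the example tree (m=4):
    print(split_leaf(sorted([2, 3, 5, 7] + [4])))   # ([2, 3, 4], [5, 7], 4)

    # The resulting overfull index node (p=5) then splits, pushing up 16:
    l, r, up = split_index([4, 7, 16, 22, 29], list("abcdef"))
    print(up, l, r)   # 16 ([4, 7], ['a', 'b', 'c']) ([22, 29], ['d', 'e', 'f'])

This matches the figures on the next two slides: 4 is copied up yet stays in a leaf, while the pushed-up 16 appears only once in the index.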
Inserting 4* into Example B+ Tree
• Observe how minimum occupancy is guaranteed in both leaf and intermediate node splits.
• Note the difference between copy-up and push-up; be sure you understand the reasons for this.

[Figure: splitting the leaf yields 2* 3* 4* and 5* 7*, with the entry "4" to be inserted in the parent node (note that 4 is copied up and continues to appear in the leaf). The overfull index node with keys 4, 7, 16, 22, 29 then splits, with the entry "16" to be inserted in the parent node (note that 16 is pushed up and only appears once in the index; contrast this with a leaf split).]
Example B+ Tree After Inserting 4*

[Figure: new root with key 16; left child with keys 4, 7 over leaves 2* 3* 4* | 5* 7* | 14* 16*; right child with keys 22, 29 over leaves 19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*.]

→ Notice that the root was split, leading to an increase in height.
→ In this example, we can avoid the split by re-distributing entries; however, this is usually not done in practice.
Deleting a Data Entry from a B+ Tree
• Start at root, find leaf L where entry belongs.
• Remove the entry.
  • If L is at least half-full, done!
  • If L has only d-1 entries:
    • Try to re-distribute, borrowing from a sibling (adjacent node with the same parent as L).
    • If re-distribution fails, merge L and sibling.
• If a merge occurred, must delete the entry (pointing to L or sibling) from the parent of L.
• Merges could propagate to the root, decreasing height.
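A minimal sketch of this logic for the leaf level of a two-level tree (one parent over a list of leaves), with D = minimum occupancy; the flat-list representation and the helper name are illustrative, not from the slides:

    D = 2   # minimum entries per leaf (half of capacity m=4)

    def delete(parent_keys, leaves, entry):
        """parent_keys[i] is the max key of leaves[i] (the slides' convention)."""
        i = 0
        while i < len(parent_keys) and entry > parent_keys[i]:
            i += 1
        leaf = leaves[i]
        leaf.remove(entry)
        if len(leaf) >= D:                          # still at least half-full: done
            return
        if i + 1 < len(leaves) and len(leaves[i + 1]) > D:
            leaf.append(leaves[i + 1].pop(0))       # re-distribute: borrow from right
            parent_keys[i] = leaf[-1]               # fix separator key in parent
            return
        if i > 0 and len(leaves[i - 1]) > D:
            leaf.insert(0, leaves[i - 1].pop())     # re-distribute: borrow from left
            parent_keys[i - 1] = leaves[i - 1][-1]
            return
        j = i + 1 if i + 1 < len(leaves) else i - 1 # re-distribution failed:
        left, right = min(i, j), max(i, j)
        leaves[left].extend(leaves[right])          # merge L and sibling, then delete
        del leaves[right]                           # the separating entry from the
        del parent_keys[left]                       # parent

    keys = [7, 16, 24]   # max keys of the first three leaves
    lvs = [[2, 3, 5, 7], [14, 16], [22, 24], [27, 29]]
    delete(keys, lvs, 24)   # [22] is under-full and no sibling can lend an entry:
    print(keys, lvs)        # [7, 16] [[2, 3, 5, 7], [14, 16], [22, 27, 29]]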
Example Tree (after inserting 4*, and deleting 19* and 20*) before deleting 24*

[Figure: root with key 16; left child with keys 4, 7 over leaves 2* 3* 4* | 5* 7* | 14* 16*; right child with keys 24, 29 over leaves 22* 24* | 27* 29* | 33* 34* 38* 39*.]

• Deleting 19* is easy.
• Deleting 20* is done with re-distribution. Notice that the intermediate node key had to be changed to 24.
... And Then Deleting 24*
• Must merge.
• Observe the `toss' of an index entry (on the right), and the `pull down' of an index entry (below).

[Figure, right: after merging the leaves, the subtree holds key 29 over leaves 22* 27* 29* | 33* 34* 38* 39* (the separating key 24 is tossed).]

[Figure, below: the root now holds keys 4, 7, 16, 29 (16 is pulled down) over leaves 2* 3* 4* | 5* 7* | 14* 16* | 22* 27* 29* | 33* 34* 38* 39*.]
Example of Non-leaf Redistribution
• Tree is shown below during deletion of 24*. (What could be a possible initial tree?)
• In contrast to the previous example, we can re-distribute an entry from the left child of the root to the right child.

[Figure: root with key 21; left child with keys 4, 7, 16, 18 over leaves 2* 3* 4* | 5* 7* | 14* 16* | 17* 18* | 20* 21*; right child with key 29 over leaves 22* 27* 29* | 33* 34* 38* 39*.]
After Re-distribution
• Intuitively, entries are re-distributed by `pushing through' the splitting entry in the parent node.
• It suffices to re-distribute the index entry with key 20; we've re-distributed 17 as well for illustration.

[Figure: root with key 16; left child with keys 4, 7 over leaves 2* 3* 4* | 5* 7* | 14* 16*; right child with keys 18, 21, 29 over leaves 17* 18* | 20* 21* | 22* 27* 29* | 33* 34* 38* 39*.]
Clarifications B+ Tree
• B+ trees can be used to store relations as well as index structures.
• In the drawn B+ trees we assume (this is not the only scheme) that an intermediate node with q pointers stores the maximum key of each of the first q-1 subtrees it is pointing to; that is, it contains q-1 keys.
• Before a B+-tree can be generated, the following parameters have to be chosen (based on the available block size; it is assumed one node is stored in one block):
  • the order p of the tree (p is the maximum number of pointers an intermediate node might have; if it is not the root, it must have between (p+1)/2 and p pointers; `/' is integer division)
  • the maximum number m of entries a leaf node can hold (in general, leaf nodes (except the root) must hold between (m+1)/2 and m entries)
• Intermediate nodes usually store more entries than leaf nodes.
Why is the minimum number of pointers in an intermediate node (p+1)/2 and not p/2 + 1?

• (p+1)/2: Assume p=10; then the number of pointers is between 5 and 10; in the case of underflow without borrowing, 4 pointers have to be merged with 5 pointers, yielding a node with 9 pointers.
• p/2 + 1: Assume p=10; then the number of pointers is between 6 and 10; in the case of underflow without borrowing, 5 pointers have to be merged with 6 pointers, yielding 11 pointers, which is one too many.
• If p is odd: Assume p=11; then the number of pointers is between 6 and 11; in the case of an underflow without borrowing, a 5-pointer node has to be merged with a 6-pointer node, yielding an 11-pointer node.

Conclusion: We infer from the discussion that the minimum / maximum numbers of entries for a tree
• of height 2 are: 2*((p+1)/2)*((m+1)/2) / p*p*m
• of height 3 are: 2*((p+1)/2)*((p+1)/2)*((m+1)/2) / p*p*p*m
• of height n+1 are: 2*((p+1)/2)^n*((m+1)/2) / p^(n+1)*m

Remark: Therefore the correct answer for the homework problem (p=10; m=100) should be: 2*5*50 / 10*10*100.
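A quick check of these formulas for p=10, m=100 (Python's // is the integer division the slide's `/' denotes):

    p, m = 10, 100
    for n in (1, 2):   # tree heights 2 and 3
        minimum = 2 * ((p + 1) // 2) ** n * ((m + 1) // 2)
        maximum = p ** (n + 1) * m
        print(n + 1, minimum, maximum)   # height 2: 500 10000; height 3: 2500 100000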
What order p and leaf entry maximum m should I choose?

Idea: One B+-tree node is stored in one block; choose maximal m and p without exceeding the block size!

Example 1: Want to store the tuples of a relation E(ssn, name, salary) in a B+-tree using ssn as the search key; ssn and salary take 4 bytes each; name takes 12 bytes. B+-tree pointers take 2 bytes; the block size is 2048 bytes, and the available space inside a block for B+-tree entries is 2000 bytes. Choose p and m!

p*2 + (p-1)*4 <= 2000  =>  p <= 2004/6 = 334
m <= 2000/20 = 100

Answer: Choose p=334 and m=100!

[Figure: a block consists of the B+-tree block meta data (neighbor pointers, #entries, parent pointer, sibling bits, ...) plus the storage for the B+-tree node entries.]
Choosing p and m (continued)

Example 2: Want to store an index for a relation E(ssn, name, salary) in a B+-tree using ssn as the search key; stored ssn's take 4 bytes; index pointers take 4 bytes. B+-tree pointers take 4 bytes; the block size is 2048 bytes, and the available space inside the block for B+-tree entries is 2000 bytes. Choose p and m!

p*4 + (p-1)*4 <= 2000  =>  p <= 2004/8 = 250
m <= 2000/8 = 250

Answer: Choose p=250 and m=250.
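The same sizing arithmetic as a quick computation (all sizes in bytes; the helper name is mine, not from the slides):

    def choose_p_m(avail, tree_ptr, key, leaf_entry):
        """Max p with p*tree_ptr + (p-1)*key <= avail; max m with m*leaf_entry <= avail."""
        p = (avail + key) // (tree_ptr + key)
        m = avail // leaf_entry
        return p, m

    print(choose_p_m(2000, 2, 4, 20))   # Example 1: (334, 100), 20-byte tuples as entries
    print(choose_p_m(2000, 4, 4, 8))    # Example 2: (250, 250), 4-byte ssn + 4-byte rid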
Coping with Duplicate Keys in B+ Trees

Possible approaches:
1. Just allow duplicate keys. Consequences:
  • Search is still efficient.
  • Insertion is still efficient (but could create "hot spots").
  • Deletion faces a lot of problems: we have to follow the leaf pointers to find the entry to be deleted, and then updating the intermediate nodes might get quite complicated (this can partially be solved by creating two-way node pointers).
2. Just create unique keys by using key+data (key*). Consequences:
  • Deletion is no longer a problem.
  • p (because of the larger key size) is significantly lower, and therefore the height of the tree is likely higher.
Summary B+ Tree
• Most widely used index in database management systems because of its versatility. One of the most optimized components of a DBMS.
• Tree-structured indexes are ideal for range-searches, and also good for equality searches (log_F N cost).
• Inserts/deletes leave the tree height-balanced; log_F N cost.
• High fanout (F) means depth is rarely more than 3 or 4.
• Almost always better than maintaining a sorted file.
• Self-reorganizing data structure.
• Typically 67%-full pages on average.
Extendible Hashing
• Situation: Bucket (primary page) becomes full. Why not re-organize the file by doubling the # of buckets?
  • Reading and writing all pages is expensive!
  • Idea: Use a directory of pointers to buckets; double the # of buckets by doubling the directory, splitting just the bucket that overflowed!
  • Directory is much smaller than the file, so doubling it is much cheaper. Only one page of data entries is split. No overflow page!
  • The trick lies in how the hash function is adjusted!
Example

• Directory is an array of size 4.
• To find the bucket for r, take the last `global depth' # bits of h(r); we denote r by h(r).
• If h(r) = 5 = binary 101, it is in the bucket pointed to by 01.

[Figure: GLOBAL DEPTH = 2; directory entries 00, 01, 10, 11 point to the data pages Bucket A = 4* 12* 32* 16*, Bucket B = 1* 5* 21* 13*, Bucket C = 10*, Bucket D = 15* 7* 19*, each with LOCAL DEPTH 2.]

→ Insert: If bucket is full, split it (allocate new page, re-distribute).
→ If necessary, double the directory. (As we will see, splitting a bucket does not always require doubling; we can tell by comparing global depth with local depth for the split bucket.)
Insert h(r)=20 (Causes Doubling)

[Figure: inserting 20* overflows Bucket A = 4* 12* 32* 16* (local depth 2 = global depth 2). The directory doubles to entries 000 .. 111 with GLOBAL DEPTH 3; Bucket A splits into Bucket A = 32* 16* and Bucket A2 = 4* 12* 20* (the `split image' of Bucket A), both with LOCAL DEPTH 3, while Buckets B = 1* 5* 21* 13*, C = 10*, and D = 15* 7* 19* keep LOCAL DEPTH 2.]
Points to Note
• 20 = binary 10100. The last 2 bits (00) tell us r belongs in A or A2. The last 3 bits are needed to tell which.
  • Global depth of directory: max # of bits needed to tell which bucket an entry belongs to.
  • Local depth of a bucket: # of bits used to determine if an entry belongs to this bucket.
• When does a bucket split cause directory doubling?
  • Before the insert, local depth of bucket = global depth. The insert causes local depth to become > global depth; the directory is doubled by copying it over and `fixing' the pointer to the split image page. (Use of least significant bits enables efficient doubling via copying of the directory!)
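A minimal sketch of these rules. The bucket capacity, class names, and the use of the keys themselves as hash values (as the slides do when they "denote r by h(r)") are illustrative:

    CAP = 4                         # data entries per bucket

    class Bucket:
        def __init__(self, depth):
            self.depth = depth      # local depth
            self.items = []

    global_depth = 2
    directory = [Bucket(2) for _ in range(4)]    # entries 00, 01, 10, 11

    def insert(k):
        global global_depth, directory
        while True:
            b = directory[k % (1 << global_depth)]  # last `global depth' bits of h(k)
            if len(b.items) < CAP:
                b.items.append(k)
                return
            if b.depth == global_depth:   # split would exceed global depth:
                directory = directory * 2 # double the directory by copying it over
                global_depth += 1
            d = b.depth                   # split b on one more bit
            b0, b1 = Bucket(d + 1), Bucket(d + 1)
            for x in b.items:
                (b1 if (x >> d) & 1 else b0).items.append(x)
            for i in range(len(directory)):         # fix pointers to the split image
                if directory[i] is b:
                    directory[i] = b1 if (i >> d) & 1 else b0

    for k in [4, 12, 32, 16, 1, 5, 21, 13, 10, 15, 7, 19]:   # the example directory
        insert(k)
    insert(20)                                   # causes the doubling shown above
    print(global_depth)                          # 3
    print(directory[0b000].items)                # [32, 16]     (Bucket A)
    print(directory[0b100].items)                # [4, 12, 20]  (split image A2)

Note how a bucket may have to split more than once (the while loop), and how the directory is doubled only when the overflowing bucket's local depth already equals the global depth.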
Directory Doubling
Why use least significant bits in the directory?
→ Allows for doubling via copying!

[Figure: the directory grows from global depth 2 (entries 00, 01, 10, 11) to depth 3 (entries 000 .. 111) for an entry 6 = 110. With least-significant bits, the new directory is just the old one copied over itself, and 6* is reached via 10 and then 110. With most-significant bits, the old entries would have to be interleaved, so the directory cannot be doubled by a simple copy.]
Comments on Extendible Hashing
• If the directory fits in memory, an equality search is answered with one disk access; else two.
  • A 100 MB file with 100 bytes/record and 4 KB pages contains 1,000,000 records (as data entries) and 25,000 directory elements; chances are high that the directory will fit in memory.
  • Directory grows in spurts and, if the distribution of hash values is skewed, the directory can grow large.
  • Multiple entries with the same hash value cause problems!
• Delete: If removal of a data entry makes a bucket empty, it can be merged with its `split image'. If each directory element points to the same bucket as its split image, we can halve the directory.
Linear Hashing
• This is another dynamic hashing scheme, an alternative to Extendible Hashing.
• LH handles the problem of long overflow chains without using a directory, and handles duplicates.
• Idea: Use a family of hash functions h0, h1, h2, ...
  • h_i(key) = h(key) mod (2^i * N); N = initial # buckets
  • h is some hash function (its range is not 0 to N-1)
  • If N = 2^d0 for some d0, h_i consists of applying h and looking at the last d_i bits, where d_i = d0 + i.
  • h_{i+1} doubles the range of h_i (similar to directory doubling)
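A minimal sketch of this hash family (N and the base hash function are illustrative; the identity keeps the bit patterns visible):

    N = 4                              # initial # buckets, so d0 = 2

    def h(key):                        # some base hash function; for readability
        return key                     # we use the identity here

    def h_i(i, key):
        return h(key) % (2 ** i * N)   # i.e., the last d0 + i bits of h(key)

    print(h_i(0, 45), h_i(1, 45))      # 1 5 -- h1 doubles the range of h0:
                                       # 45 = 101101, last 2 bits 01, last 3 bits 101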