Straightforward* Lock Free B-tree Design
Locality-Conscious
Lock-Free Linked Lists
Anastasia Braginsky & Erez Petrank
1
Lock-Free Locality-Conscious Linked Lists
[Diagram: a linked list of chunks holding the sorted keys 3, 7, 9, 12, 18, 25, 26, 31, 40, 52, 63, 77, 89, 92.]
A list of constant-size "containers", with minimal and maximal bounds on the number of elements per container
Traverse the list quickly to the relevant container
Lock-free, locality-conscious, fast access, scalable
2
Non-blocking Algorithms
Ensures progress in a finite number of steps.
A non-blocking algorithm is:
◦ wait-free if there is guaranteed per-thread
progress in a bounded number of steps
◦ lock-free if there is guaranteed system-wide
progress in a bounded number of steps
◦ obstruction-free if a single thread executing in
isolation for a bounded number of steps will make
progress.
3
Existing Lock-Free Lists Designs
J. D. Valois, Lock-free linked lists using
compare-and-swap, in Proc. PODC, 1995.
T. L. Harris, A pragmatic implementation of
non-blocking linked-lists, in DISC, 2001.
M. M. Michael, Hazard pointers: Safe memory
reclamation for lock-free objects, in IEEE TPDS, 2004.
M. Fomitchev and E. Ruppert, Lock-free linked
lists and skip lists, in Proc. PODC, 2004.
4
Outline
Introduction
A list of memory chunks
Design of in-chunk list
Merges & Splits via freezing
Empirical results
Summary
5
The List Structure
A list consists of
◦ A list of memory chunks
◦ A list in each chunk (chunk implementation)
When a chunk gets too sparse or dense,
the update operations on the list are
stopped and the chunk is split or merged
with its preceding chunk.
6
An Example of a List of Fixed-Sized
Memory Chunks
[Diagram: HEAD → Chunk A → Chunk B → NULL. Chunk A holds the entries 3/G, 14/K, 25/A and Chunk B the entries 67/D, 89/M; each chunk has an EntriesHead and a NextChunk pointer.]
7
When No More Space for Insertion
[Diagram: Chunk A, now full with the entries 3/G, 6/B, 9/C, 12/H, 14/K, is frozen; Chunk B (25/A, 67/D, 89/M) follows, ending in NULL.]
8
Split
[Diagram: the frozen Chunk A (keys 3, 6, 9, 12, 14) is replaced by two new chunks: Chunk C with the entries 3/G and 6/B, and Chunk D with the entries 9/C, 12/H and 14/K; Chunk B (25/A, 67/D, 89/M) is unchanged, and HEAD now reaches Chunk C → Chunk D → Chunk B → NULL.]
9
When a Chunk Gets Sparse
[Diagram: after deletions, Chunk C (entries 3/G, 6/B) and Chunk D (entries 9/C, 14/K) have grown sparse; one is frozen as the freeze master and its buddy as the freeze slave; Chunk B (25/A, 67/D, 89/M) follows, ending in NULL.]
11
Merge
[Diagram: the two frozen chunks are replaced by a single new Chunk E holding the entries 3/G, 6/B, 9/C and 14/K; HEAD now reaches Chunk E → Chunk B (25/A, 67/D, 89/M) → NULL.]
12
Outline
Introduction
A list of memory chunks
Design of in-chunk list
Merges & Splits via freezing
Empirical results
Summary
14
A List of Fixed-Sized Memory Chunks
[Diagram: HEAD → Chunk A (entries 3/G, 14/K, 25/A) → Chunk B (entries 67/D, 89/M) → NULL; each chunk has an EntriesHead and a NextChunk pointer.]
15
The Structure of an Entry
Two machine words
Freeze bit: used to mark a chunk's entries as frozen.
A ⊥ (bottom) value is not allowed as a key; it means the entry is not allocated.
KeyData word: Key (31 bits) | Data (32 bits) | Freeze bit (1 bit)
NextEntry word: Next entry pointer (62 bits) | Delete bit (1 bit) | Freeze bit (1 bit)
16
The Structure of a Chunk
[Diagram: an array of entries of size MAX, headed by a dummy entry with key ⊥. Allocated entries (7/89, 14/9, 11/13, 22/13, 24/78, 23/53) form the in-chunk list; two of them (24/78 and 23/53) have their deleted bit set; unallocated entries hold key ⊥. The chunk also carries a counter (here 4), a NextChunk pointer, a MergeBuddy pointer, a "new" pointer, and a 2-bit FreezeState.]
17
Initiating a Freeze
When a process p realizes that
◦ A chunk is full, or
◦ A chunk is sparse, or
◦ A chunk is in the process of being frozen,
Then p starts a freeze or p helps another
process that has already started a freeze.
18
The Freeze Process Starts by:
Going over all the entries in the array and
setting their freeze bits
Finishing
◦ insertions of all currently allocated entries that
are not yet in the list
◦ deletions of entries already marked as deleted
but still in the list
19
Chunk List is Different from Known
Lock-Free Linked Lists
Non-private insertion: an entry is visible once
allocated, even before it is linked into the
list.
Allows helping with insertion.
Boundary conditions causing merges and
splits.
20
Entry Allocation
[Diagram: an entry array; frozen entries (f:1) cannot be claimed, and a free entry with key ⊥ is claimed by writing key 6, data 2 into its KeyData word.]
1. An entry is allocated at the beginning of the insertion process.
2. Find a zeroed entry, with ⊥ key value.
3. Allocate by swapping the KeyData word to the desired value.
◦ Upon a failure of the CAS command, go to 2.
◦ A frozen entry cannot be allocated.
4. If no free entry is found, a freeze starts.
Next, use the allocated entry for list insertion…
21
Insertion Algorithm
[Diagram: the freshly allocated entry (key 6, data 2) is linked into the sorted in-chunk list between its predecessor and its successor.]
1. Record the entry's next pointer value in savedNext.
2. Find a location for adding the new entry.
◦ If the key already exists (in a different entry), free the allocated entry by clearing it and return.
3. CAS the entry's next pointer from savedNext to the next entry in the list.
4. CAS the previous entry's next pointer to the newly allocated entry.
◦ If any CAS fails, go to 1 (restarting from the beginning of the chunk).
5. Increase the counter and return.
23
Deletion
Standard implementation, except for taking care not to
drop below the minimum number of entries
The counter always holds a lower bound on the actual
number of entries:
◦ increased after an actual insert
◦ decreased before an actual delete
Decrementing the counter below the minimum allowed
number initiates a freeze
A frozen entry cannot be marked as deleted
26
Outline
Introduction
A list of memory chunks
Design of in-chunk list
Merges & Splits via freezing
Empirical results
Summary
28
Freezing
Phase I: Marking entries with frozen bits
◦ Non-frozen entries can still change concurrently
Phase II: List stabilization
◦ Everything frozen, now finish all incomplete
operations.
Phase III: Decision
◦ Split, merge, or copy.
Phase IV: Recovery
◦ Implementation of the above decision
29
Phase IV - Recovery
Allocate new chunk or chunks locally
Copy the frozen data to the new chunk
Execute the operation that initially caused the
freeze
Attach the new chunk to the frozen one
Replace the frozen chunk(s) with the new chunk(s) in
the enclosing list structure
30
Remarks
Search can run on a frozen chunk (and is
not delayed).
◦ Wait-free except for the use of the hazard
pointer mechanism
A chunk can never be unfrozen
31
Outline
Introduction
A list of memory chunks
Design of in-chunk list
Merges & Splits via freezing
Empirical results
Summary
32
The Test Environment
Platform: SUN FIRE with UltraSPARC T1
8-core processor, each core running 4
hyper-threads.
OS: Solaris 10
Chunk size set to the virtual page size: 8KB.
◦ All accesses inside a chunk are on the same
page
33
Workload
Each test had two stages:
◦ Stage I:
Insertions (only) of N random keys (in order to obtain a
substantial list)
N = 10³, 10⁴, 10⁵, 10⁶
◦ Stage II:
Insertions, deletions and searches in parallel
N operations overall, of which 15% are insertions, 15% deletions,
and 70% searches.
Reporting results for runs of 32 concurrent
threads.
34
Reference for Comparison
Michael’s lock-free linked list implemented in C
according to the pseudo-code from
◦ M. M. Michael, Hazard pointers: Safe memory
reclamation for lock-free objects, in IEEE TPDS, 2004.
◦ Uses hazard pointers.
A Java implementation of the lock-free linked list
provided in the book “The Art of Multiprocessor
Programming”
◦ Garbage collection is assumed.
35
Comparison with Michael’s List
Total Time
[Charts: Stage I and Stage II total time in seconds over N, on logarithmic scales, Original List vs. Chunk List. In Stage I the chunk list reaches the same performance already at N = 20,000; in Stage II it is consistently better, and for substantial lists more than 10 times faster.]
36
Comparison with Michael’s List
Single Operation Average
[Charts: average time of a single operation, Original List vs. Chunk List. Performance is again consistently better, and the advantage grows as the lists become more substantial.]
37
Comparison with Lock-Free List in Java
Total Times
38
Comparison with Lock-Free List in Java
Single Operation Average
39
Outline
Introduction
A list of memory chunks
Design of in-chunk list
Merges & Splits via freezing
Empirical results
Summary
40
Conclusion
New lock-free algorithm for a chunked linked list
Fast due to:
◦ Skipping over whole chunks during traversal
◦ Restarting failed operations from the beginning
of a chunk rather than of the list
◦ Locality-conscious memory layout
May be useful for other structures that can build on
the chunks
Good empirical results for substantial lists
41
Questions?
42
Thank you!
43