MetaTM/TxLinux: Transactional Memory
For An Operating System
Hany E. Ramadan, Christopher J. Rossbach, Donald E.
Porter and Owen S. Hofmann
Presenter: Zhong Zhou
CSE Department
University of Connecticut
1
Outline
Introduction & Background
Architectural Model
Interrupts and Transactions
Stack memory and Transactions
Modifying Linux to use HTM
Evaluation
Conclusions
2
Introduction
What is a Transaction?
A finite sequence of machine instructions, executed by a single process, satisfying the following two conditions (a sketch of the race that atomicity rules out follows this slide):
Serializability: transactions appear to execute serially; the steps of one transaction never appear to be interleaved with the steps of another.
Atomicity: each transaction makes a sequence of changes to shared memory; when it completes, it either commits, making all of its changes visible to other processes, or aborts, causing its changes to be discarded.
3
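A minimal C sketch of the lost-update race that atomicity rules out; the two plain read-modify-write sequences below can interleave, so increments are lost. If each loop body were a transaction, the updates would appear to execute serially and the final count would always be 200000.

    /* Two threads increment a shared counter without any atomicity.
     * The load/add/store sequences can interleave, so one increment can
     * overwrite another (a lost update). */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;            /* shared memory */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            long tmp = counter;         /* load */
            counter = tmp + 1;          /* store: may clobber a concurrent update */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%ld\n", counter);       /* often less than 200000 due to the race */
        return 0;
    }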
Transactional memory
A multiprocessor architecture intended to make lock-free synchronization of shared memory as efficient as conventional mutual-exclusion techniques
Basic instructions for manipulating transaction state
Commit
Abort
Validate
Basic primitives for accessing memory (see the sketch after this slide)
Load-transactional
Load-transactional-exclusive
Store-transactional
4
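A minimal sketch of how these primitives compose, assuming hypothetical C stand-ins load_transactional(), store_transactional(), tx_commit(), tx_abort() and tx_validate() for the hardware instructions (the names are illustrative, not the real ISA):

    /* Hedged sketch: atomically move money between two accounts using
     * Herlihy/Moss-style primitives. The tx_* functions are hypothetical
     * stand-ins for the hardware instructions listed above. */
    extern long load_transactional(long *addr);          /* Load transactional  */
    extern void store_transactional(long *addr, long v); /* Store transactional */
    extern int  tx_commit(void);    /* Commit: nonzero on success            */
    extern void tx_abort(void);     /* Abort: discard tentative updates      */
    extern int  tx_validate(void);  /* Validate: is the transaction still ok */

    void transfer(long *from, long *to, long amount)
    {
        for (;;) {
            long f = load_transactional(from);
            long t = load_transactional(to);
            if (!tx_validate())      /* working set already invalidated */
                continue;            /* retry from a clean state        */
            if (f < amount) {        /* insufficient funds: give up     */
                tx_abort();
                return;
            }
            store_transactional(from, f - amount);
            store_transactional(to,   t + amount);
            if (tx_commit())         /* both stores become visible      */
                return;              /* atomically, or neither does     */
            /* commit failed because of a conflict: loop and retry */
        }
    }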
Why Transactional Memory?
Problems with lock-based code
Deadlock
Convoying
Priority inversion
Transactional memory reduces programming complexity without sacrificing performance
STM: software transactional memory
HTM: hardware transactional memory
5
Main Contribution
Examination of the architectural support
necessary for an operation system to
use HTM
Creation of a transactional operating
system
Examination of the characteristics of
transactions in TxLinux
6
Architecture Model
Transaction semantics
7
Managing multiple transactions
Stack-based, independent-transaction model (see the sketch after this slide)
XPUSH: suspends the current transaction, saving its state
XPOP: restores the most recently xpushed transaction, allowing the suspended transaction to resume
8
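A rough software model of this stack discipline, assuming an illustrative per-CPU array and tx_state_t type; the real xpush/xpop are hardware instructions that save and restore transactional state:

    /* Conceptual model only: xpush suspends the current transaction and
     * pushes its state onto a per-CPU LIFO; xpop resumes the most recently
     * suspended one. tx_state_t and the array are not MetaTM's actual
     * hardware structures. */
    #define MAX_NESTING 8

    typedef struct { int id; /* saved transactional state ... */ } tx_state_t;

    static tx_state_t suspended[MAX_NESTING];  /* per-CPU LIFO of suspended txs */
    static int        depth;                   /* current stack depth           */
    static tx_state_t current;                 /* currently active transaction  */

    void xpush_model(void)
    {
        suspended[depth++] = current;   /* save and suspend the current tx */
        current.id = 0;                 /* no transaction active now       */
    }

    void xpop_model(void)
    {
        current = suspended[--depth];   /* resume the most recently suspended tx */
    }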
Contention management
Many contention management strategies have been implemented
e.g., Polite, Karma, Eruption
A new policy: SizeMatters (see the sketch after this slide)
Favors the transaction with the larger number of unique bytes read or written in its working set
9
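A sketch of the arbitration decision SizeMatters implies, assuming each transaction exposes the number of unique bytes in its working set (the struct and field names are illustrative):

    /* Illustrative arbitration for the SizeMatters policy: on a conflict,
     * the transaction with the larger working set (unique bytes read or
     * written) wins and the other is restarted. */
    struct tx_info {
        int           id;
        unsigned long ws_bytes;   /* unique bytes in the working set */
    };

    /* Returns the transaction that should be aborted. */
    struct tx_info *size_matters_loser(struct tx_info *a, struct tx_info *b)
    {
        if (a->ws_bytes > b->ws_bytes)
            return b;                        /* a has done more work: keep it */
        if (b->ws_bytes > a->ws_bytes)
            return a;
        return (a->id < b->id) ? b : a;      /* tie-break deterministically   */
    }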
Backoff
If a conflict occurs between transactions, it will, with high probability, happen again if the transactions are restarted immediately, so a backoff mechanism is needed (see the sketch after this slide)
Exponential backoff
Linear backoff
Random backoff
10
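A sketch of the three backoff flavors as C helpers returning the delay before the n-th restart; BASE_DELAY and MAX_SHIFT are placeholder tuning constants:

    /* Illustrative backoff delays (in cycles) before restarting an
     * aborted transaction. */
    #include <stdlib.h>

    #define BASE_DELAY 64UL
    #define MAX_SHIFT  16

    unsigned long exponential_backoff(unsigned int nth_retry)
    {
        unsigned int shift = nth_retry < MAX_SHIFT ? nth_retry : MAX_SHIFT;
        return BASE_DELAY << shift;            /* delay doubles each retry */
    }

    unsigned long linear_backoff(unsigned int nth_retry)
    {
        return BASE_DELAY * (nth_retry + 1);   /* delay grows linearly */
    }

    unsigned long random_backoff(unsigned int nth_retry)
    {
        (void)nth_retry;
        return BASE_DELAY + (rand() % (8 * BASE_DELAY));  /* random jitter */
    }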
Interrupts and Transactions
Interrupt handling background (see the sketch after this slide)
The top-half interrupt handler
Disables all interrupts of equal and lower priority
Runs briefly, pushing as much work as possible to the bottom half
The bottom-half interrupt handler
11
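A hedged sketch of the Linux top-half/bottom-half split using the classic tasklet API of that era; the device, IRQ number and names are hypothetical:

    /* The top half acknowledges the device and defers the bulk of the
     * work; the tasklet (bottom half) runs later with interrupts enabled. */
    #include <linux/interrupt.h>

    static void mydev_bottom_half(unsigned long data)
    {
        /* lengthy processing deferred out of interrupt context */
    }
    static DECLARE_TASKLET(mydev_tasklet, mydev_bottom_half, 0);

    static irqreturn_t mydev_top_half(int irq, void *dev_id)
    {
        /* minimal work: ack the hardware, capture urgent state ... */
        tasklet_schedule(&mydev_tasklet);   /* hand off to the bottom half */
        return IRQ_HANDLED;
    }

    /* registration elsewhere (MYDEV_IRQ is hypothetical):
     *   request_irq(MYDEV_IRQ, mydev_top_half, 0, "mydev", NULL);
     */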
Observations for interrupt handling
Observations
Transaction length: usually large
Interrupt frequency: relatively high
Interrupt routing: limited flexibility
12
Interrupt handling in TxLinux (see the sketch after this slide)
The interrupt path starts with an XPUSH to suspend the currently running transaction
The interrupt handler is allowed to start new, independent transactions
The interrupt return path ends with an XPOP instruction
13
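A sketch of this pattern, with xpush(), xpop(), xbegin() and xend() written as hypothetical intrinsics standing in for the MetaTM instructions:

    /* Hedged sketch of the TxLinux interrupt path; handle_device() is a
     * placeholder for real handler work. */
    extern void xpush(void);   /* suspend the interrupted transaction  */
    extern void xpop(void);    /* resume the suspended transaction     */
    extern void xbegin(void);  /* start a new, independent transaction */
    extern void xend(void);    /* commit it                            */

    void txlinux_interrupt_entry(void)
    {
        xpush();               /* interrupted transaction is saved, not aborted */

        /* The handler may itself use transactions; they are independent
         * of the one that was suspended. */
        xbegin();
        /* handle_device(); */
        xend();

        xpop();                /* interrupt return path: resume the suspended tx */
    }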
Stack memory and Transactions
Stack memory is shared within the Linux kernel: on the x86 architecture, Linux threads share their kernel stack with interrupt handlers
This sharing of kernel stack addresses requires stack addresses to be part of transaction working sets to ensure isolation
14
Transactions that span activation frames
MetaTM supports independent xbegin and xend instructions: xbegin and xend can be called in different stack frames
Example: see the sketch after this slide
15
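A sketch of a transaction spanning activation frames, assuming lock acquire/release have been mapped onto xbegin/xend as described later in the talk (the wrapper names and intrinsics are illustrative):

    /* xbegin and xend are issued in different stack frames, as happens
     * when spinlock acquire/release are mapped to transactions. */
    extern void xbegin(void);
    extern void xend(void);

    static void tx_spin_lock(void)   { xbegin(); }  /* frame A: tx starts */
    static void tx_spin_unlock(void) { xend();   }  /* frame B: tx ends   */

    static int shared_count;

    void add_item(void)
    {
        tx_spin_lock();        /* xbegin executes in tx_spin_lock's frame   */
        shared_count++;        /* transactional update in add_item's frame  */
        tx_spin_unlock();      /* xend executes in tx_spin_unlock's frame   */
    }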
Live stack overwrite problem (1)
One example:
16
Live stack overwrite problem (2)
This problem stems from:
Calls to xbegin and xend occurring in different stack frames
The x86 trap architecture reusing the current stack on interrupts that do not change privilege level
A suspended transaction being able to restart
Solution (see the sketch after this slide):
Change the interrupt handler's stack to avoid the overwrite problem
17
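A commented sketch of the interleaving that triggers the overwrite; all names are illustrative and the steps follow the three causes listed above:

    /* Illustrative timeline of the live stack overwrite problem:
     * 1. A transaction is active and helper() writes a local variable, so
     *    that stack address enters the transaction's write set.
     * 2. helper() returns; its frame is dead but the transaction is still
     *    active because xend is issued in a different (caller) frame.
     * 3. An interrupt arrives; on x86 it runs on the current kernel stack,
     *    so the handler's frame reuses those same addresses while the
     *    transaction is suspended via xpush.
     * 4. If the suspended transaction later aborts and restarts, its
     *    rollback restores the old contents of those addresses,
     *    overwriting the handler's live stack data.
     * TxLinux avoids this by switching interrupt handlers to a different
     * stack. */
    extern void xbegin(void);
    extern void xend(void);

    static void helper(void)
    {
        volatile int scratch = 42;   /* transactional write to the stack */
        (void)scratch;
    }                                /* frame popped, transaction still open */

    void vulnerable_path(void)
    {
        xbegin();
        helper();
        /* an interrupt arriving here reuses helper()'s dead stack region;
           an abort of this transaction would roll that region back */
        xend();
    }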
Transactional dead stack problem (1)
One example:
18
Transactional dead stack problem (2)
Solution
Stack-based early release: during an active transaction, any time the stack pointer is incremented, if the new value of the stack pointer is on a different cache line from the old one, the processor releases the old cache lines from the transaction's working set (see the sketch after this slide)
19
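A conceptual model of the check performed when the stack pointer is incremented; the cache-line size and release_line() helper are assumptions standing in for the hardware action:

    /* When the stack pointer moves up (a frame is popped), cache lines
     * that now lie entirely in dead stack are dropped from the
     * transaction's working set, so dead stack data can no longer cause
     * conflicts or be rolled back. */
    #define CACHE_LINE 64UL

    extern void release_line(unsigned long line_addr);  /* hypothetical */

    void on_stack_pointer_increment(unsigned long old_sp, unsigned long new_sp)
    {
        unsigned long old_line = old_sp & ~(CACHE_LINE - 1);
        unsigned long new_line = new_sp & ~(CACHE_LINE - 1);

        /* Only act when the move crosses at least one cache-line boundary;
         * the line holding new_sp still contains live data and is kept. */
        for (unsigned long l = old_line; l < new_line; l += CACHE_LINE)
            release_line(l);    /* drop dead stack lines from the tx */
    }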
Modifying Linux to use HTM (spinlock conversion sketched after this slide)
Spinlocks
Atomic instructions
Seqlocks
Lock->Xbegin, Unlock->Xend
Regions protected by seqlock loops are
protected by a transaction in TxLinux
Read-copy-update
20
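A sketch of the lock -> xbegin, unlock -> xend mapping for a spinlock-protected critical section; the intrinsics and example structure are illustrative, and TxLinux's actual conversion involves more machinery:

    /* Illustrative mapping of a spinlock critical section onto a
     * transaction; the lock-based version is shown in comments. */
    extern void xbegin(void);
    extern void xend(void);

    struct stats { long packets; long bytes; };

    void stats_update(struct stats *s, long len)
    {
        /* lock-based:   spin_lock(&s->lock);   */
        xbegin();                 /* lock   -> xbegin */

        s->packets++;             /* the whole critical section becomes */
        s->bytes += len;          /* one atomic transaction             */

        /* lock-based:   spin_unlock(&s->lock); */
        xend();                   /* unlock -> xend   */
    }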
Performance evaluation
Simulation settings
Split instruction/data L1 caches: 16 KB, 4-way associative, 64-byte cache lines
Unified L2 cache: 4 MB, 8-way associative, 64-byte cache lines
Main memory: 1 GB
IPC: 1 instruction per cycle
21
Cache miss rates
22
System time
23
Contention management
24
Backoff policy
25
Conclusions
HTM has the potential to greatly simplify operating system synchronization while retaining a high degree of concurrency
Examined several aspects of HTM implementation and policy
The Polka contention management policy is effective
The backoff policy is important; both linear and exponential backoff work well
26
Thank You!
Q & A?
27