Microsoft SQL Server 2014: In-Memory OLTP End-to-End


Applications hitting a wall today with SQL Server
• Locking/latching
• Scale-up
• Throughput or latency SLA
Common Patterns
o High Data Input Rate – “Shock Absorber”
o Read Performance and Scale
o Compute Heavy Data Processing
Applications which do not use SQL Server today
• Key/value pair stores where relational characteristics are desired
• Scenarios where a database might previously not have been placed in the critical path
o Latency-critical OLTP
o Session State
Implementation Scenario – Pattern Characteristics and Challenge – Main Benefits of In-Memory OLTP

High Data Insert Rate (“Shock Absorber”)
 Characteristics and challenge: primarily append-only store; inability to ingest the write workload
 Main benefits: eliminate contention; minimize IO logging
Read Performance and Scale
 Characteristics and challenge: high-performance read operations; unable to meet scale-up requirements
 Main benefits: eliminate contention; efficient data retrieval; minimize code execution time; CPU efficiency for scale
Compute Heavy Data Processing
 Characteristics and challenge: insert/update/delete workload; heavy computation inside the database; read and write contention
 Main benefits: eliminate contention; minimize code execution time; efficient data processing
Low Latency
 Characteristics and challenge: require low-latency business transactions; high concurrency exacerbates latency
 Main benefits: eliminate contention; minimize code execution time; efficient data retrieval
Session State Management
 Characteristics and challenge: heavy insert, update, and point lookups; user scale under load from multiple stateless web servers
 Main benefits: eliminate contention; efficient data retrieval; IO reduction/removal
Traditional Execution Stack vs. In-Memory OLTP Execution Stack – Gains
Client Connectivity → Client Connectivity: no improvement
Query Execution → Procedure Execution: 2X to 30X improvement (including concurrency gains)
Data Access (Buffer Pool) → Data Access (Memory): 2X to 30X improvement (including concurrency gains)
Transaction Logging → Transaction Logging: same latency; potentially less volume for durable tables; I/O eliminated for non-durable tables
Invest in workloads that are OLTP in nature
Use manual analysis via DMVs (queries below) if MDW (the Management Data Warehouse) is too heavy
SELECT TOP (5) b.name AS TableName, a.database_id, a.singleton_lookup_count, a.range_scan_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS a
INNER JOIN sys.objects AS b ON a.object_id = b.object_id
WHERE b.type <> 'S' AND (a.singleton_lookup_count > 0 OR a.range_scan_count > 0)
ORDER BY a.singleton_lookup_count DESC
GO
-- Latch and lock waits
SELECT TOP (5) a.database_id,
    so.object_id, so.name AS TableName,
    a.page_latch_wait_count,
    a.page_latch_wait_in_ms,
    a.page_lock_wait_count,
    a.page_lock_wait_in_ms
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS a
INNER JOIN sys.objects AS so
    ON a.object_id = so.object_id
WHERE so.type = 'U' AND a.page_io_latch_wait_count > 0
ORDER BY a.page_latch_wait_count DESC
-- T-SQL stored procedure resource consumption
SELECT TOP (10) sp.database_id,
    so.name AS StoredProcName,
    sp.total_worker_time,
    sp.execution_count,
    sp.total_logical_reads,
    sp.total_logical_writes
FROM sys.dm_exec_procedure_stats AS sp
INNER JOIN sys.objects AS so
    ON (sp.object_id = so.object_id)
WHERE so.type = 'P' AND sp.database_id = DB_ID()
ORDER BY sp.total_worker_time DESC;
ostress.exe -S. -E -dAdventureWorks2012
-Q"EXEC Demo.usp_DemoInsertSalesOrders @use_inmem = <0,1>, @order_count = 100000" -n<varies>
http://msdn.microsoft.com/en-us/library/dn511655(v=sql.120).aspx
Taking an Iterative Approach to Migration
• Identify bottlenecks in tables
• Address unsupported constructs and migrate data
• Perform minimal changes for interpreted access
• Identify performance critical transaction code
• Address surface area limitations and migrate code
Memory Allocation
Memory Size = Table Size + SUM(Index Size)
Table Size = Row Size * Row Count
Row Size = SUM(Column Sizes) + Row Header Size
Row Header Size = 24 + 8 * Index Count
Column Size = Dependent on column type and associated padding/overhead
Hash Index Size = Bucket_Count * 8 bytes
Nonclustered Index Size = Row Count * (Key Size + 8) bytes
Details available in Books Online
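A minimal sketch of the sizing arithmetic above for a hypothetical table with two BIGINT columns, one hash index, and one nonclustered index (all names and values below are illustrative, not from the session):

-- Illustrative memory estimate based on the formulas above
DECLARE @row_count        bigint = 10000000;          -- expected row count
DECLARE @index_count      int    = 2;                 -- one hash + one nonclustered index
DECLARE @column_bytes     int    = 16;                -- e.g., two BIGINT columns
DECLARE @key_bytes        int    = 8;                 -- nonclustered index key size
DECLARE @bucket_count     bigint = 16777216;          -- hash index bucket count

DECLARE @row_header_bytes int    = 24 + 8 * @index_count;             -- Row Header Size
DECLARE @row_bytes        int    = @column_bytes + @row_header_bytes; -- Row Size
DECLARE @table_bytes      bigint = @row_count * @row_bytes;           -- Table Size
DECLARE @hash_index_bytes bigint = @bucket_count * 8;                 -- Hash Index Size
DECLARE @nc_index_bytes   bigint = @row_count * (@key_bytes + 8);     -- Nonclustered Index Size

SELECT (@table_bytes + @hash_index_bytes + @nc_index_bytes) / 1048576.0 AS estimated_memory_MB;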
Nonclustered memory-optimized index:
• Execution similar to a nonclustered index on disk-based tables
• Choose an index direction that conforms to the seek direction
• The column ordering defined in the index determines the sort order available during a seek – it should match the ORDER BY
Nonclustered hash memory-optimized index:
• Bucket_count must be specified
• A bucket_count that is too low or too high hurts both read and write performance
Operation: Specify bucket count
 Nonclustered: Not required
 Nonclustered Hash: Required
Operation: Index scan, retrieve all table rows
 Nonclustered: Supported
 Nonclustered Hash: Supported
Operation: Index seek on equality predicate(s) (=)
 Nonclustered: Supported*
 Nonclustered Hash: Performs better than Nonclustered; the predicate must include all columns in the hash key, otherwise it will result in a scan
Operation: Index seek on inequality predicates (>, <, <=, >=), BETWEEN
 Nonclustered: Performs better than Nonclustered Hash*
 Nonclustered Hash: Will result in a scan
Operation: Retrieve rows in a sort order matching the index definition
 Nonclustered: Supported
 Nonclustered Hash: Not supported
Operation: Retrieve rows in a sort order matching the reverse of the index definition
 Nonclustered: Not supported
 Nonclustered Hash: Not supported
* The predicate does not need to include all key columns, but it must include at least the leading key columns, otherwise it will result in a scan
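A minimal sketch of how the two index types are declared on a memory-optimized table (the table, columns, and bucket count are hypothetical, chosen only to illustrate the syntax):

CREATE TABLE dbo.SessionState
(
    SessionId  BIGINT          NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576), -- equality point lookups
    UserId     INT             NOT NULL
        INDEX ix_UserId NONCLUSTERED,                                -- inequality/ordered seeks
    Payload    VARBINARY(7000) NOT NULL,
    LastAccess DATETIME2       NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);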
Lack of parallelism can be problematic for some queries
Consider the trade-off between faster execution (native) and parallel operation
Joins between memory-optimized tables and columnstore can be problematic
Statistics creation and updates can be problematic for large tables
The requirement of FULLSCAN can take a long time
If a query has many attributes, creating statistics on individual columns can have an impact
Consider the impact of no auto-update statistics
For interop queries: determining how often and when to update statistics can affect plans
For native procedures: load the data, update statistics, then create the natively compiled procedure (see the sketch below)
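Following that order (load, then update statistics, then compile), a minimal sketch (the table name is hypothetical; FULLSCAN and NORECOMPUTE are required for memory-optimized tables in SQL Server 2014):

-- After the data load and before creating natively compiled procedures
UPDATE STATISTICS dbo.SalesOrder_InMem WITH FULLSCAN, NORECOMPUTE;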
A few scenarios are difficult to convert to natively compiled queries:
• Disjunction (NOT/OR)
• Outer join
• Interpreted T-SQL Access
  – Access both memory- and disk-based tables
  – Less performant
  – Virtually full T-SQL surface
• When to use
  – Ad hoc queries
  – Surface area not available in native
  – Compilation at runtime can impact the plan (positively)
• Natively Compiled Procs (see the sketch after this list)
  – Access only memory-optimized tables
  – Maximum performance
  – Limited T-SQL surface area
• When to use
  – Well defined OLTP-style operations
  – Optimize performance critical business logic
  – Query plan unlikely to change based on statistics
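For the natively compiled side, a minimal sketch using SQL Server 2014 syntax (the procedure and the dbo.SessionState table from the earlier sketch are hypothetical):

CREATE PROCEDURE dbo.usp_UpdateSessionState
    @SessionId BIGINT,
    @Payload   VARBINARY(7000)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- simple point update against a memory-optimized table
    UPDATE dbo.SessionState
    SET Payload = @Payload, LastAccess = SYSUTCDATETIME()
    WHERE SessionId = @SessionId;
END;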
Review
In-Memory OLTP Common Workloads and Migration Considerations
In-Memory OLTP Internals Overview
In-Memory OLTP Sample Database and Workload (based on AdventureWorks)
In-Memory OLTP Blog Series
Books Online: In-Memory OLTP
Breakout Sessions
http://www.trySQLSever.com
http://www.powerbi.com
http://microsoft.com/bigdata
http://channel9.msdn.com/Events/TechEd
www.microsoft.com/learning
http://microsoft.com/technet
http://microsoft.com/msdn