Peace of Mind: Making DBCC CHECKDB Go Faster


Argenis Fernandez
Senior Database Engineer
[email protected]
About your Speaker Today…
• Member of the Database Engineering team
at SurveyMonkey
• Formerly: Senior Consultant, Microsoft
Consulting Services & Senior DBA at
Outerwall and Ultimate Software
• Microsoft Certified Master: SQL Server 2008,
Microsoft Certified Solutions Master: Charter
- Data Platform
• 2014 Friend of Red Gate Software
• 2012-2013 Idera SQL Server ACE
• DBA/Dev/SysAdmin for 15 years
• Loves Windows and SQL Server Internals
• Regular Speaker (PASS Summit, PASS
SQLRally, SQLBits XI, TechEd 2013, and others)
• Twitter enthusiast; infrequent blogger.
Why Am I Qualified To Speak About This?
• My name is not Paul Randal
• I took consistency checks on our 8.4TB database from 9+ days down
to 10 hours on slower storage, 6 hours on an all-flash array
• Note: This session lasts < 60 minutes, so we are not going to watch
paint dry
Making DBCC CHECKDB Go Faster
• Get the biggest server you can find with the fastest CPUs, all the RAM
you can pack on it and all PCI-Express based storage
• Tune Windows
• Tune SQL Server
• You’re Done!
Consistency Checks?
Why Should I Care?
What Causes Corruption?
What Should Be My Goal?
What Am I Trying To Achieve?
What Should Be My Goal?
• To make DBCC CHECKDB run as fast as possible?
• No: "as fast as possible" is not something you can really measure
• Use a baseline instead, but which one?
• Whatever baseline you pick becomes your theoretical "limit"
What Kind Of Hardware Do I Need?
• Obviously the faster and bigger, the better, but:
• It’s perfectly okay to offload consistency checks to a smaller, slower
box, knowing that it’ll take longer; you will not impact your
production instance AS LONG AS you run WITH PHYSICAL_ONLY
checks on the primary/prod instance
• In that case, it can be commodity hardware
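The split described above might look like the following sketch (database name is a placeholder):

```sql
-- On the production instance: fast, I/O-focused checks only
DBCC CHECKDB (N'ProdDB') WITH PHYSICAL_ONLY, NO_INFOMSGS;

-- On the offload box, against a restored or snapshot copy: the full logical checks
DBCC CHECKDB (N'ProdDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```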
Offloading Consistency Checks
Where Is It Okay To Offload Consistency
Checks To?
• Log Shipping Secondary on STANDBY?
• Snapshot on a Mirroring Partner?
• Replication Subscriber?
• AlwaysOn Availability Groups Readable Secondary?
• Restored copy of the prod database from a fresh FULL backup?
• SAN Snapshot?
What Does CHECKDB Actually Do?
Breakdown of DBCC CHECKDB Tasks
• Runs the allocation checks of DBCC CHECKALLOC
• Runs the logical checks of DBCC CHECKTABLE against every table and indexed view
• Runs the metadata checks of DBCC CHECKCATALOG
• Checks metadata and filesystem directories/files for FILESTREAM
• Checks Service Broker structures
• Breaking down these processes separately might be a good choice if
you have extremely large databases
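A rough sketch of running the component checks separately (database and table names are placeholders; you would spread the CHECKTABLE calls across multiple maintenance windows):

```sql
-- Night 1: allocation structures only
DBCC CHECKALLOC (N'BigDB') WITH NO_INFOMSGS;

-- Night 1: system catalog metadata
DBCC CHECKCATALOG (N'BigDB') WITH NO_INFOMSGS;

-- Nights 2..N: logical checks, one batch of tables per night
DBCC CHECKTABLE (N'dbo.Orders') WITH NO_INFOMSGS;
DBCC CHECKTABLE (N'dbo.OrderLines') WITH NO_INFOMSGS;
```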
Great, But How Do I Make It Go
Check Your Power Plan On Windows
• Default is “Balanced” – awful
• Set to “High Performance”
• Use powercfg.cpl or powercfg.exe to correct
• Consider disabling all power saving features in your server’s
BIOS/UEFI settings
• Verify you are okay by checking actual core clock speeds with CPU-Z, from CPUID
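From an elevated command prompt, the plan switch can be scripted; `SCHEME_MIN` is the built-in alias for the High Performance plan:

```bat
:: List the available power schemes and mark the active one
powercfg /list

:: Activate the built-in High Performance plan
powercfg /setactive SCHEME_MIN
```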
tempdb Defaults Suck. Fix them.
• tempdb’s default data file size sucks (8MB data/1MB log)
• Its autogrowth settings suck (10% for both files)
• The number of tempdb files is usually not optimal
• PFS/SGAM/GAM contention can be observed in a lot of systems
• Start with # of files = # of cores for <= 8 cores
• For > 8 cores, add files in groups of 4 as long as you still see contention
• For a good brain-melting discussion on tempdb, watch Bob Ward’s “Inside
Tempdb” talk from the 2011 PASS Summit
• Run DBCC CHECKDB WITH ESTIMATEONLY to get an idea of how much
tempdb space consistency checks will require
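A sketch of both fixes, with placeholder sizes you would tune to your own workload:

```sql
-- Presize the default tempdb files and give them fixed-size autogrowth
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 8GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2GB, FILEGROWTH = 1GB);

-- Estimate the tempdb space the consistency checks will need
DBCC CHECKDB (N'ProdDB') WITH ESTIMATEONLY;
```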
The Default MAXDOP Sucks. Fix It.
• CHECKDB can and will go parallel on Enterprise Edition
• MAXDOP default is zero
• Change to number of cores per NUMA node
• Use SysInternals’ COREINFO.EXE to find out how many Cores/NUMA node in
your system
• Check out Paul Randal’s blog post on CHECKDB DOP scalability
• You can choose to turn off parallelism for CHECKDB with Trace Flag 2528
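Changing the instance-wide default might look like this (8 cores per NUMA node is a placeholder; substitute what COREINFO reports):

```sql
-- MAXDOP is an advanced option; expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Set instance-wide MAXDOP to the number of cores per NUMA node
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;

-- Newer builds (SQL Server 2014 SP2 and later) also accept a per-run option:
-- DBCC CHECKDB (N'ProdDB') WITH MAXDOP = 8;
```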
Consider BYOS: Bring Your Own Snapshot
• Applies to Enterprise Edition only
• CHECKDB creates an internal snapshot to obtain a consistent view of
the database before it performs any checks
• You can choose to roll your own snapshot and run CHECKDB against it
• Unlike the internal snapshot’s NTFS alternate streams, your own
snapshot’s sparse (copy-on-write) files can be placed on different storage
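A minimal sketch of rolling your own snapshot; the snapshot name, logical file name, and path are placeholders, and the logical name must match the source database’s data file:

```sql
-- Create our own snapshot, placing its sparse file on separate storage
CREATE DATABASE ProdDB_Check ON
    (NAME = ProdDB_Data, FILENAME = 'S:\Snapshots\ProdDB_Check.ss')
AS SNAPSHOT OF ProdDB;

-- Run the checks against the snapshot; CHECKDB skips its internal snapshot
DBCC CHECKDB (N'ProdDB_Check') WITH NO_INFOMSGS;

-- Clean up when done
DROP DATABASE ProdDB_Check;
```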
Consider Using WITH TABLOCK, go SingleUser, or Read-Only
• Probably not feasible in your environment
• Database is not available to regular users
• No snapshot is needed, and none is created
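If the outage is acceptable, the check might be run like this (database name is a placeholder):

```sql
-- Take locks instead of creating a snapshot; concurrent users will block
DBCC CHECKDB (N'ProdDB') WITH TABLOCK, NO_INFOMSGS;

-- Alternatively, go single-user for the duration of the check
ALTER DATABASE ProdDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'ProdDB') WITH NO_INFOMSGS;
ALTER DATABASE ProdDB SET MULTI_USER;
```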
Consider Using Trace Flags
• Trace Flags 2549 and 2562 were specifically created to increase
performance of CHECKDB on very large databases
• Documented in Microsoft KB 2634571
• Aaron Bertrand has a list of builds that support these trace flags; the entire article is a must-read
• Although developed specifically to increase WITH PHYSICAL_ONLY
performance, the KB acknowledges that they can also increase the
performance of full checks – and we have observed that they do help a bit
in our environment
• [CAUTION] Test carefully – these might have a negative impact on performance in some configurations
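Enabling the flags globally for the duration of a check might be sketched as:

```sql
-- Enable globally (-1) before the check; test carefully first
DBCC TRACEON (2549, -1);
DBCC TRACEON (2562, -1);

DBCC CHECKDB (N'ProdDB') WITH PHYSICAL_ONLY, NO_INFOMSGS;

-- Turn them back off afterwards
DBCC TRACEOFF (2549, -1);
DBCC TRACEOFF (2562, -1);
```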
Consider Disabling/Dropping Certain NonClustered Indexes
• On Computed Columns
• Filtered on Sparse Columns
• Blame the expression evaluator
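The disable/check/rebuild cycle might look like this (index and table names are placeholders):

```sql
-- Disable the problematic nonclustered index before the check
ALTER INDEX IX_Computed ON dbo.SomeTable DISABLE;

DBCC CHECKDB (N'ProdDB') WITH NO_INFOMSGS;

-- Rebuild to bring the index back online afterwards
ALTER INDEX IX_Computed ON dbo.SomeTable REBUILD;
```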
Consider Limiting The Memory Available to CHECKDB
• Wait, WHAT???
• CHECKDB grossly overestimates the amount of memory it will need to
execute, stealing pages from the Buffer Pool and decreasing overall
performance
• Use Resource Governor to limit it. Enterprise Edition only
• Our tests suggest that about 1GB of execution memory is usually optimal
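A Resource Governor sketch, created in master; the pool, group, login, and percentage values are placeholders you would size so the resulting grant lands near the 1GB sweet spot mentioned above:

```sql
-- Pool and workload group capping memory for the checking session
CREATE RESOURCE POOL CheckdbPool WITH (MAX_MEMORY_PERCENT = 10);
CREATE WORKLOAD GROUP CheckdbGroup
    WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 10)
    USING CheckdbPool;
GO
-- Classifier routing a dedicated login into the limited group
CREATE FUNCTION dbo.fn_checkdb_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'checkdb_login' RETURN N'CheckdbGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_checkdb_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```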
Thank You!
• Ola Hallengren’s Maintenance Solution
• Paul Randal’s blog, CHECKDB category
Contact Information
[email protected]