SQL Server Performance – configure your Temp Db

Tomasz Modelski

In this post I want to give you a very simple but powerful performance tip for MS SQL Server:
Configure your Temp Db properly!!
– even on your development environment!


Temp Db is one of MS SQL Server’s bottlenecks and it is badly configured by default!!
If you don’t want your queries to crawl, try the simple steps below.

I’ll briefly explain why the Temp Db configuration is so important later in this post, but all you need to know for now is that:

every single MS SQL Server instance contains only one ‘Temp’ database,
which is used in many situations by all databases on that instance,

so you need to do this optimization on every SQL Server instance.
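
Before changing anything, it’s worth checking how your instance’s Temp Db is configured right now. Here’s a minimal T-SQL sketch (sizes are stored in 8 KB pages, so they are converted to MB here for readability):

    -- List tempdb's data/log files with their size, autogrowth and location.
    SELECT name,
           physical_name,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' %'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS autogrowth
    FROM tempdb.sys.database_files;

If you see a single small data file with ‘10 %’ autogrowth, you’re looking at exactly the default configuration this post is about.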

Solution:

Apply these simple configuration steps to Temp Db to improve performance (a T-SQL version of all three steps follows below):

Select the ‘tempdb’ database -> Properties -> Files page and:

  1. Enlarge the initial size of tempdb.mdf to a minimum of 128 – 512 MB (from the default of just a few MB). 512 MB is usually the optimal size, but it depends on the number of files (point 3) and on your disk size, e.g. on a developer / virtual machine.
    A very easy but important step.
  2. Set the Autogrowth property of that file to the ‘In Megabytes’ option and to a minimum of 128 MB.
    A very easy but important step.
  3. Split tempdb into multiple data files, based on the number of processor cores on your system.
    Create multiple data files, but WITH the SAME SIZE (as in point 1) and the same Autogrowth values.

    How many files should you create? The general simple ‘heuristic’ rule is:

– On a system with 8 cores or fewer, tempdb should be split into a number of data files equal to the number of cores of your system.

– If you have more than 8 cores, use 8 files, and if you’re still seeing contention, add 4 more files at a time. This rule is described in MS KB article 2154845.

Or in other words: on a system with more than 8 cores, the number of files should be no less than 8 and, if more files are needed, somewhere between 1/4 and 1/2 of the number of cores. On a heavy production environment you should do some experiments to find the optimal value. There’s a production example of the best performance on a 32-core server with Temp Db split into 64 files!
You can read about determining the number of files for Temp Db here:
http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-1230-tempdb-should-always-have-one-data-file-per-processor-core/
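
If you prefer a script over clicking through Management Studio, the same three steps can be applied with T-SQL. This is a minimal sketch assuming a 4-core machine; ‘tempdev’ is the default logical name of the primary tempdb data file, while ‘D:\SqlData\’ and the extra file names are placeholders – adjust the sizes, names and paths to your system:

    -- How many files? Check the number of cores first.
    SELECT cpu_count FROM sys.dm_os_sys_info;

    -- Steps 1 & 2: enlarge the primary data file and set a fixed autogrowth.
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, SIZE = 512MB, FILEGROWTH = 128MB);

    -- Step 3: add one data file per core (3 extra files for a 4-core box),
    -- all with the SAME size and the SAME autogrowth as the primary file.
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2, FILENAME = 'D:\SqlData\tempdb2.ndf',
                  SIZE = 512MB, FILEGROWTH = 128MB);
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev3, FILENAME = 'D:\SqlData\tempdb3.ndf',
                  SIZE = 512MB, FILEGROWTH = 128MB);
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev4, FILENAME = 'D:\SqlData\tempdb4.ndf',
                  SIZE = 512MB, FILEGROWTH = 128MB);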

Why is it so important?

A lot of T-SQL operations (parts of T-SQL queries) are executed within the Temp database,
which is only ONE and SHARED by ALL databases on a single MS SQL Server instance.
Klaus Aschenbrenner, at the latest Polish SQLDay 2015 conference, called Temp Db ‘a public toilet for all of SQL Server’.
It’s a good analogy – it shows the importance of the configuration of Temp Db. Imagine what the only available public toilet looks like when it is ‘not maintained correctly’ after one day with a few thousand ‘visitors’!

T-SQL operations that use / store results in / run with the help of Temp Db include, for example (a few of them are shown in the snippet after this list):

  • Storing user objects like local temp tables (prefix #), global temp tables (prefix ##) and table variables.
  • Tables returned from table-valued functions.
  • Work tables for hash join operations (‘large’ joins when there are no proper indexes) & hash aggregations.
  • Work tables for many GROUP BY, ORDER BY, UNION, SORT and SELECT DISTINCT operations.
  • Work tables for ‘internal’ purposes: DBCC CHECKDB, DBCC CHECKTABLE, processing cursors, processing Service Broker objects, work files for sorts that result from creating or rebuilding indexes (SORT_IN_TEMPDB).
  • The version store – Snapshot & Read Committed Snapshot Isolation (!!), triggers like ‘on insert, on update’.
  • … and more.

As you can see, Temp Db is used ‘internally’ in many situations.
Also keep in mind that I/O operations are expensive.
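
To make the list above more concrete, here are a few everyday statements that all touch Temp Db behind the scenes (a sketch – the table and column names are made up for illustration):

    -- A local temp table: physically created in tempdb.
    CREATE TABLE #RecentOrders (OrderId int, Total money);

    -- A table variable: despite the different syntax, it also lives in tempdb.
    DECLARE @Ids TABLE (Id int);

    -- A big DISTINCT/sort without a supporting index may spill
    -- its work table into tempdb as well.
    SELECT DISTINCT CustomerId
    FROM dbo.Orders;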

So, why will those 3 simple configuration tips improve SQL Server performance?

  1. Temp Db is always recreated from scratch with its configured size on SQL Server restart.
    A few MB is never enough, so the file must sooner or later be expanded – and this operation is expensive at the I/O level.
    Why lose time expanding? Let it have enough space from the beginning!
    It’s also connected with tip #2.
  2. Autogrowth of 10% or 1 MB? It’s ridiculous.
    With the default initial size of a few MB, such autogrowth will occur OVER and OVER again – and it’s an expensive operation!!
    Also a tip from Klaus Aschenbrenner:
    NEVER ever configure autogrowth with a percentage value, only with a fixed value!!
    Even a 10-30% value can be unacceptably big when your actual file size is big.
    50% can lead to a catastrophe and a server crash in some scenarios! Misconfiguration in points 1 & 2 often leads to an effect I call ‘database warm-up’: the database becomes fully responsive only after some period of time (e.g. a few minutes).
  3. Temp Db split into many files based on the number of processor cores.
    This is related to how allocations and page writes are synchronized at the ‘physical’, low level of data files: only one thread at a time can modify a file’s allocation pages, so under load a single file becomes a point of contention.
    So – there are scenarios where a database split into many files will perform a lot better, because of the possibility of concurrent I/O operations and less contention on the allocation pages (see the check after this list).
    But you should also not create too many files :-).
    Details are explained here: http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-1230-tempdb-should-always-have-one-data-file-per-processor-core/
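
If you want to verify whether splitting actually helped, a common check (a sketch based on the standard DMVs) is to look for PAGELATCH waits on Temp Db pages while the server is under load – database_id 2 is always tempdb, and many sessions waiting there suggest allocation contention that more files can relieve:

    -- Sessions currently waiting on page latches in tempdb (database_id = 2).
    -- resource_description is 'dbid:fileid:pageid'; waits concentrated on
    -- PFS/GAM/SGAM allocation pages are the classic tempdb symptom.
    SELECT session_id, wait_type, wait_duration_ms, resource_description
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type LIKE 'PAGELATCH%'
      AND resource_description LIKE '2:%';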

