Some tips & tricks for the transaction log

It has been a while since I last blogged, but lately I really have little time.

But that doesn’t mean I don’t keep up with blogs, topics and so on. Lately I found some rather interesting things (though I don’t remember on which blog) about transaction log performance.

 

First I will talk a little about transaction log fragmentation.

No, NOT about the physical fragmentation. I think that issue is well-known by now (at least, I hope so).

 

I want to talk about internal fragmentation!

Internally, the physical logfiles are divided into VLFs (Virtual Log Files), and these too can become fragmented by shrinking and growing the logfiles all the time. The same happens when you grow your logfiles in small increments. (In the second part of this blog, I will talk about a trick to avoid this kind of fragmentation.)

The rule of thumb is that you shouldn’t have more than 20-30 VLFs.

Now how do you know how many VLFs you have?

Easy: run “DBCC LOGINFO” in SQL Server Management Studio. Each row in the result is a VLF in your current database.
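
For example (the database name below is just a placeholder; run it in the database you want to check):

    USE MyNavDb;   -- hypothetical database name, use your own
    GO
    -- Each row in the output is one VLF; the number of rows is your VLF count
    DBCC LOGINFO;
    GO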

 

The first customer where I checked this had over 10,000 VLFs, plus heavy physical fragmentation (and a logfile ten times the size of the DB, but that is another story).

Now how did I eliminate these?

If possible, it is best that no-one is working. That makes it easier.

First I made a full backup of the DB.

Then I set the recovery model to simple to clear the logfile.

I created an extra logfile of 10MB.

Then I shrank the logfile, alternating between “Release unused space” and “Reorganize pages before releasing unused space” with a shrink target of 10MB (or lower). The problem is that you can’t remove the primary logfile; if you could, this would be a lot easier.

Once I got as low as possible (about 12MB), I checked the VLFs and there were 4. Then I grew the logfile to 4GB (the DB was about 3GB and I had the space, so why not use it? And 4GB is still smaller than the 26GB they had before!). After this I had 22 VLFs, nicely inside my range of 20-30.

I DIDN’T delete the secondary logfile (more on that in the second part).
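
For reference, here is a rough T-SQL sketch of those steps (the database name, logical file names and backup/file paths are made up, and the shrink can just as well be done from the Management Studio GUI as described above):

    -- 1. Full backup for safety
    BACKUP DATABASE MyNavDb TO DISK = N'D:\Backup\MyNavDb_full.bak';

    -- 2. Switch to the simple recovery model so the log can be cleared
    ALTER DATABASE MyNavDb SET RECOVERY SIMPLE;

    -- 3. Add a small secondary logfile of 10MB
    ALTER DATABASE MyNavDb
        ADD LOG FILE (NAME = MyNavDb_log2, FILENAME = N'L:\MyNavDb_log2.ldf', SIZE = 10MB);

    -- 4. Shrink the primary logfile as far as it will go (this may take several passes)
    USE MyNavDb;
    DBCC SHRINKFILE (MyNavDb_log, 10);   -- target size in MB

    -- 5. Check the VLF count, then grow the primary logfile back in one step
    DBCC LOGINFO;
    ALTER DATABASE MyNavDb MODIFY FILE (NAME = MyNavDb_log, SIZE = 4GB);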

I left the recovery model on simple and implemented (in agreement with their IT guy) a differential backup every hour (for differential backups you don’t need the full recovery model). In the end they didn’t need the full-recovery-model strategy (they had it without knowing it, made backups with a third-party backup tool and didn’t make transactionlog backups [do I need to explain why they had a 26GB transactionlog file?]).
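
The differential backup itself is nothing more than the following (paths are examples; however you schedule it, with a SQL Agent job or a maintenance plan, is up to you):

    -- Full backup that serves as the base for the differentials
    BACKUP DATABASE MyNavDb TO DISK = N'D:\Backup\MyNavDb_full.bak' WITH INIT;

    -- Hourly differential: only the changes since the last full backup
    BACKUP DATABASE MyNavDb TO DISK = N'D:\Backup\MyNavDb_diff.bak' WITH DIFFERENTIAL, INIT;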

 

After creating both logfiles, I just stopped the SQL Server service, defragged the transaction-log disk (the disk fragmentation analysis showed me a blood-red display!) and restarted the service.

 

Why is it important NOT to have too many VLFs in the transaction log files? Always the same reason: performance!

The speed at which SQL Server can write to the transaction log file determines the performance of writing data to the DB (remember that posting in NAV is quite write-intensive).

This way you can increase the write performance of the transaction log (I don’t know by what percentage, but remember that a lot of small gains add up to one big one!) without throwing hardware at it (or reconfiguring it).

 

 

Now the second part of the blogpost.

How do you avoid getting too many VLFs in your transaction log?

The trick I read was this:

Create a big enough transaction log from the beginning, taking into account how big the DB will get, the recovery model, and so on.

For this customer I made the transactionlog 4GB (for a current DB of about 3GB). In this customer’s case, that size is enough to contain a full index rebuild (I had to create a maintenance plan for this), also in the near (and not-so-near) future.

I created the file with a FIXED SIZE! So NO autogrowth!

 

To avoid the DB getting completely blocked if the transaction log fills up, I created a secondary logfile of 10MB (not a typo: it really is ten megabytes!) and set its autogrowth to 1GB.
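
In T-SQL this configuration looks roughly like this (the primary logfile was already grown to 4GB in the first part; the logical file names are again made up):

    -- Primary logfile: stays at its fixed 4GB size, autogrowth disabled
    ALTER DATABASE MyNavDb MODIFY FILE (NAME = MyNavDb_log, FILEGROWTH = 0);

    -- Secondary logfile: only 10MB, but allowed to grow in 1GB steps as a safety net
    ALTER DATABASE MyNavDb MODIFY FILE (NAME = MyNavDb_log2, FILEGROWTH = 1GB);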

 

What is the impact of this when all goes well?

The transaction log is always written sequentially. So first the 4GB file will be written, and then SQL Server will switch to the 10MB file. Writing to that file is probably slower than the other one, but it is only 10MB; SQL Server will almost immediately switch back to the 4GB file, so the impact on performance is very small.

 

What is the impact of this when something goes wrong and the logfile fills up completely?

The primary logfile will fill up, and then the secondary logfile will fill up until all log space is used. At that moment the secondary logfile will start to grow until the problem is fixed (or until the drive is full).

To fix the problem, you first have to know WHY it happened (maybe a restore of a NAV backup; the transactionlog backups didn’t work anymore [WHY?!!!]; …). After you have fixed the problem, you can shrink the secondary logfile with the option “Empty file by migrating the data to other files in the same filegroup”, remove the file and then recreate it with the same parameters as before.
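
A rough T-SQL equivalent of that fix (the SSMS option “Empty file by migrating the data to other files in the same filegroup” corresponds to the EMPTYFILE argument of DBCC SHRINKFILE; database, file and path names are again made up):

    USE MyNavDb;

    -- Make sure no active log records remain in the secondary logfile
    DBCC SHRINKFILE (MyNavDb_log2, EMPTYFILE);

    -- Drop the (now empty) secondary logfile
    ALTER DATABASE MyNavDb REMOVE FILE MyNavDb_log2;

    -- Recreate it with the same parameters as before
    ALTER DATABASE MyNavDb
        ADD LOG FILE (NAME = MyNavDb_log2, FILENAME = N'L:\MyNavDb_log2.ldf', SIZE = 10MB, FILEGROWTH = 1GB);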

5 Responses to “Some tips & tricks for the transaction log”

  1. Thanks for your informative blog. I’m not sure you accept questions but I’ll try. I have a vendor application using SQL 2005. The original autogrowth for the trans log was set to 1MB and as a result we now have 34,000 VLFs. We take hourly trans log backups. If I run DBCC SQLPERF(LOGSPACE) immediately after a trans log backup, the app db log still shows %used = 20% (~2GB). It never goes below 20%. DBCC OPENTRAN says there is no open active transaction, and when I run DBCC LOGINFO very few VLFs have status = 2.
    My questions are:
    1. If there are no open transactions and only a few VLFs are active, why does %used show 20%?
    2. What can I do to get rid of the huge number of VLFs and start the log from scratch with a more reasonable initial size?

  2. First check whether you have logshipping or other things that require the full recovery model. If you do, you will have to fix those afterwards.
    These are the steps I would try:
    1) everyone stops working
    2) make a full backup (for safety!)
    3) put recovery model to simple to clean the logfile
    4) create an extra logfile of 10MB.
    5) Shrink the logfile, alternating between “Release unused space” and “Reorganize pages before releasing unused space” with a shrink target of 10MB (or lower). The problem is that you can’t remove the primary logfile; if you could, this would be a lot easier.
    6) Get as low as possible (in my example I arrived at 12MB).
    7) Grow the primary log file to a size roughly equal to your DB and disable autogrowth.
    8) Leave the extra logfile at a size of a few MB with a growth increment of 1GB.

  3. Hi Alain,

    Actually, a simpler way is just to delete the log file. Basically you detach the database from SQL Server, delete the log file, then reattach. When you reattach, a small clean log file is created that you can easily manipulate and clean up.

  4. That’s a faster way but not exactly a clean way.
    And considering that SQL gurus (I follow a lot of SQL blogs) don’t even like switching the recovery model to simple and back to full just to clear the logfile…

  5. Read also this if you want to delete the transaction log: http://sqlserverpedia.com/blog/sql-server-bloggers/deleting-the-transaction-log/
