It has been a while since I last blogged, but lately I really have had little time.
That doesn't mean I don't follow blogs and topics anymore, though. Recently I found some rather interesting things (I don't remember on which blog) about transaction log performance.
First I will talk a little about transaction log fragmentation.
No, NOT about the physical fragmentation. I think that issue is well-known by now (at least, I hope so).
I want to talk about internal fragmentation!
Internally, a physical log file is divided into virtual log files (VLFs), and these too can become fragmented by shrinking and growing the log file all the time. The same happens when you grow your log file in small increments. (In the second part of this post, I will talk about a trick to avoid this kind of fragmentation.)
The basic idea is that you shouldn't have more than 20-30 VLFs.
Now how do you know how many VLFs you have?
Easy: run "DBCC LOGINFO" in SQL Server Management Studio. Each row it returns is a VLF in your current database.
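As a quick check, something like this shows the VLFs of a database; the database name is just an example, replace it with your own:

```sql
-- Switch to the database you want to inspect (example name)
USE MyNavDatabase;
GO
-- Each row in the result is one VLF in the current database's log
DBCC LOGINFO;
GO
```

If the result grid scrolls on for hundreds or thousands of rows, you have the internal fragmentation I'm talking about.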
The first customer I checked this for had over 10,000 VLFs, plus heavy physical fragmentation (and a log file ten times the size of the DB, but that is another story).
Now how did I eliminate these?
If possible, it is best that no one is working in the database. That makes it easier.
First I made a full backup of the DB.
Then I set the recovery model to simple to clear the log file.
I created an extra logfile of 10MB.
Then I shrank the log file, alternating between "Release unused space" and "Reorganize pages before releasing unused space" (shrink file to 10MB or lower). The problem is that you can't remove the primary log file; being able to would make this a lot easier.
Once I had it as small as possible (about 12MB), I checked the VLFs in it and had 4. Then I grew the log file to 4GB (the DB was about 3GB and I had the space, so why not use it? And 4GB is still smaller than the 26GB they had before!). After this I had 22 VLFs, nicely inside my range of 20-30.
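The steps above could be scripted roughly like this in T-SQL. This is only a sketch: the database name, logical file names, and paths are example values (I did most of it through the Management Studio dialogs):

```sql
-- 1. Full backup first (path and names are examples)
BACKUP DATABASE MyNavDatabase TO DISK = N'D:\Backup\MyNavDatabase.bak';
GO
-- 2. Switch to simple recovery to clear the log
ALTER DATABASE MyNavDatabase SET RECOVERY SIMPLE;
GO
USE MyNavDatabase;
GO
-- 3. Shrink the primary log file as far as possible (target size in MB)
DBCC SHRINKFILE (N'MyNavDatabase_log', 10);
GO
-- 4. Grow it back to its final size in one step
ALTER DATABASE MyNavDatabase
    MODIFY FILE (NAME = N'MyNavDatabase_log', SIZE = 4GB);
GO
```

Afterwards, run DBCC LOGINFO again to verify the VLF count landed in the 20-30 range.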
I DIDN'T delete the secondary log file (more on that in the second part).
I left the recovery model on simple and implemented (in agreement with their IT guy) a differential backup each hour (differential backups don't need the full recovery model). In the end, they didn't need a full-recovery strategy anyway: they had it without knowing it, made backups with a third-party backup tool, and never made transaction log backups (do I need to explain why they had a 26GB transaction log file?).
After creating both log files, I stopped the service, defragmented the transaction-log disk (the disk fragmentation analysis showed me a blood-red display!) and restarted the service.
Why is it important NOT to have too many VLFs in the transaction log files? Always the same reason: performance!
The speed at which SQL Server can write to the transaction log determines the performance of writing data to the DB (remember that posting in NAV is quite write-intensive).
This way you can increase the write performance of the transaction log (I don't know by what percentage, but remember that a lot of small gains add up to one big one!) without throwing hardware at it (or reconfiguring it).
Now the second part of the blogpost.
How do you avoid getting too many VLFs in your transaction log?
The trick I read was this:
Create a big enough transaction log from the beginning, considering how big the DB will get, the recovery model, and so on.
For this customer I made the transaction log 4GB (for a current DB of about 3GB). In this customer's case, that size is enough to contain a full index rebuild (I had to create a maintenance plan for this), also in the near (and not-so-near) future.
I created the file with a FIXED size, so NO autogrowth!
To avoid the DB getting completely blocked in case the transaction log is full, I created a secondary log file of 10MB (not a typo: it really is ten megabytes!) and set its autogrowth to 1GB.
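In T-SQL, that layout could be set up roughly like this. Again a sketch with example names and paths, not the exact script I ran:

```sql
-- Primary log: fixed 4GB, autogrowth disabled (FILEGROWTH = 0)
ALTER DATABASE MyNavDatabase
    MODIFY FILE (NAME = N'MyNavDatabase_log',
                 SIZE = 4GB,
                 FILEGROWTH = 0);
GO
-- Secondary "overflow" log: starts at 10MB, grows in 1GB steps
ALTER DATABASE MyNavDatabase
    ADD LOG FILE (NAME = N'MyNavDatabase_log2',
                  FILENAME = N'L:\Logs\MyNavDatabase_log2.ldf',
                  SIZE = 10MB,
                  FILEGROWTH = 1GB);
GO
```

The fixed-size primary file is the one doing the real work; the tiny secondary file exists only as a safety valve.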
What is the impact of this when all goes well?
The transaction log is always written sequentially. So first the 4GB file is written, and then SQL Server switches to the 10MB file. Writing to this one is probably slower than the other file, but it is only 10MB; SQL Server swaps back to the 4GB file almost immediately, so the impact on performance is very small.
What is the impact of this when something goes wrong and the logfile fills up completely?
The primary log file fills up, and then the secondary log file fills up until all log space is used. At that moment, the secondary log file starts growing until the problem is fixed (or until the drive is full).
To fix the problem, you first have to know WHY it happened (maybe a restore of a NAV backup; the transaction log backups stopped working [WHY?!!!]; and so on). After you fix the problem, you can shrink the secondary log file with the option "Empty file by migrating the data to other files in the same filegroup", remove the file, and then recreate it with the same parameters as before.
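Once the root cause is fixed, the cleanup could look roughly like this (a sketch with the same example names as above; the EMPTYFILE option stops SQL Server from writing new log records to the file, so it can be removed once it no longer contains active log):

```sql
USE MyNavDatabase;
GO
-- Stop new log records going to the secondary file and empty it
DBCC SHRINKFILE (N'MyNavDatabase_log2', EMPTYFILE);
GO
-- Remove the now-empty secondary log file
ALTER DATABASE MyNavDatabase REMOVE FILE MyNavDatabase_log2;
GO
-- Recreate it with the original parameters (10MB, 1GB autogrowth)
ALTER DATABASE MyNavDatabase
    ADD LOG FILE (NAME = N'MyNavDatabase_log2',
                  FILENAME = N'L:\Logs\MyNavDatabase_log2.ldf',
                  SIZE = 10MB,
                  FILEGROWTH = 1GB);
GO
```

If the REMOVE FILE step complains that the file is still in use, the file still contains active log; a log backup (or, in simple recovery, waiting for a checkpoint) usually clears it.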