Multiple SQL Data Files and NAV

gwmadre Member Posts: 13
edited 2009-05-19 in SQL Performance
We have a RAID 10 Array that is almost full. We would like to add an additional RAID 10 array to the drive enclosure and add another secondary data file to store on this array.

We haven't split the NAV database into multiple data files before and are not sure if this is recommended or supported. Has anyone done this before?

Comments

  • stryk Member Posts: 645
    Hi!

    Yes, of course it is possible to create additional database files, e.g. for filegroup "Data Filegroup 1". This is not a NAV feature, it's SQL "standard".
    Splitting the DB into multiple files pays off if you can store each file on a different physical drive; that way you split the I/O and gain performance.

    But even if you have all files on the same volume, there could be benefits regarding performance, depending on the "strategy" you have in mind (e.g. different filegroups for different purposes).

    Hence, in your case, adding another file which is placed on its own RAID10 should work OK! BUT: SQL will try to fill all files equally. Thus, once you have added the new file, SQL will write almost exclusively to the new file until both files are filled up to the same degree - there will be no "load balancing" at first ...

    Maybe you want to look into the DBCC SHRINKFILE EMPTYFILE feature, which can be used to redistribute the existing data over multiple files.
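    For example, assuming the NAV database is called MyNAVDB, the existing secondary file is MyNAVDB_1_Data and the new array is mounted as E: (all names here are placeholders), the two steps would look roughly like this:

        -- Add a second data file to NAV's data filegroup, placed on the new array:
        ALTER DATABASE [MyNAVDB]
        ADD FILE (
            NAME = N'MyNAVDB_2_Data',
            FILENAME = N'E:\Data\MyNAVDB_2_Data.ndf',
            SIZE = 50GB,
            FILEGROWTH = 1GB
        ) TO FILEGROUP [Data Filegroup 1];

        -- Only if you later want to retire the old file/array: empty it first.
        -- Its data is redistributed to the remaining files of the same filegroup,
        -- and SQL will not place new data in the emptied file afterwards.
        DBCC SHRINKFILE (N'MyNAVDB_1_Data', EMPTYFILE);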
    Regards,
    Jörg
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • David_Singleton Member Posts: 5,479
    Is there a big benefit to splitting the SQL database into multiple files? If so, how big is the difference?

    If you had say 10 drives, would you create
    • 5 x RAID1 and 5 data files
    • 1 x RAID10 and 1 data file
    • 1 x RAID10 and 5 data files (does this even make sense?)
    • Something else?

    Also, would you match the number of data files to the number of drives or to the number of CPUs/cores?
    David Singleton
  • gwmadre Member Posts: 13
    We are planning to do a 2 x RAID10 with 2 secondary data files. This is so we can add another RAID 10 and continue to use our existing storage.

    I'm hoping that this increases performance as well.

    However, I noticed that when you install NAV it typically creates one primary data file (nav_data.mdf) and one secondary data file (nav_1_data.ndf). The primary file stays small while the secondary grows. Can someone educate me on what is happening here as I haven't seen this with other applications?
  • bbrown Member Posts: 3,268
    SQL will stripe the data across the disk arrays (assuming they're balanced), so it is also important that they be of similar performance. You don't want to match a newer SAS array with an older SCSI array. You may be better off purchasing 2 new arrays and using DBCC SHRINKFILE EMPTYFILE to move the data, then using the old array elsewhere.
    There are no bugs - only undocumented features.
  • DenSter Member Posts: 8,304
    NAV uses the primary data file to store table definitions and objects. The secondary file is used for the actual data. At some point in the past, someone decided that this was the optimal way to split the data files, and this has been the standard way for NAV to create the files. I have not yet found any confirmation that this is actually the best way, but I have also not found anything to suggest that it hurts anything.
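    If you want to see that split for yourself, this standard query (run in the NAV database, nothing NAV-specific about it) shows how full each file actually is; the .mdf typically stays almost empty while the .ndf holds the data:

        SELECT name,
               physical_name,
               size / 128                            AS size_mb,   -- size is reported in 8 KB pages
               FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
        FROM   sys.database_files;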
  • Skeeve Member Posts: 33
    Multiple DB files are definitely beneficial for performance.

    In fact, it makes a huge difference and it is one of the "low hanging fruits" to improve NAV's performance on SQL.

    As a general best practice (SQL in general), you should have 0.25 to 1 file per CPU core. HyperThreading does not count, but dual core CPUs do.

    The same is true for the TEMPDB by the way.

    All the files should be the same size; otherwise SQL will favor the file(s) with more free space, defeating the purpose.
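    As an illustration only (file paths and sizes below are made up - adjust to your own drives), equally sized tempdb files on a 4-core box could be set up like this:

        -- Grow the existing tempdb file to the target size, then add three more of the same size:
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 2GB, FILEGROWTH = 512MB);
        ALTER DATABASE tempdb ADD FILE    (NAME = tempdev2, FILENAME = N'T:\tempdb2.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
        ALTER DATABASE tempdb ADD FILE    (NAME = tempdev3, FILENAME = N'T:\tempdb3.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
        ALTER DATABASE tempdb ADD FILE    (NAME = tempdev4, FILENAME = N'T:\tempdb4.ndf', SIZE = 2GB, FILEGROWTH = 512MB);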
  • David_Singleton Member Posts: 5,479
    Skeeve wrote:
    Multiple DB files are definitely beneficial for performance.

    In fact, it makes a huge difference and it is one of the "low hanging fruits" to improve NAV's performance on SQL.

    Have you got any before/after stats on this? I would be very interested to hear numbers if you have them.
    David Singleton
  • David_Singleton Member Posts: 5,479
    Skeeve wrote:
    ... you should have 0.25 to 1 file per CPU core. ...

    Don't you mean the other way around? Or are you basically saying that if you had 8 DB parts then you would want 32 CPUs? I would have thought 4 DB files to one CPU, not 4 CPUs to one DB file?

    I am about to do some tests to compare the number of files to check what performance difference it makes, so any feedback would be much appreciated.
    David Singleton
  • Skeeve Member Posts: 33
    .25 to 1 is correct.

    So basically, if you have 4 cores (it doesn't matter whether that's 4 sockets, 2 dual-core CPUs or one quad), you would have:

    - 1 file in case of .25 per core
    - 2 files for .5 per CPU core
    - and 4 files for 1 per CPU core

    I usually go with .5 or 1 DB file per CPU core and performance is absolutely better that way. However, it all depends on many other components as well:

    - You want to apply the same to the TEMPDB.
    - Keep the TLOG on a separate dedicated logical drive, ideally also RAID10
    - Better yet, also keep the TEMPDB on a dedicated RAID 10 (this is really for larger implementations)
    - Make sure SQL can actually use all the memory in the system, not just 2GB (see the sketch below) ;)
    .....and many other things

    And for the skeptics, go check out http://www.microsoft.com/technet/prodte ... op-10.mspx which is Microsoft's own SQL Server Storage Top 10 Best Practices 8)
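    For the memory point above, a minimal sketch (the 28672 MB value is just an example for a 32 GB box; on 32-bit you additionally need AWE/PAE, which is a separate topic):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- Cap SQL Server a few GB below the physical total so the OS keeps breathing room:
        EXEC sp_configure 'max server memory (MB)', 28672;
        RECONFIGURE;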
  • David_Singleton Member Posts: 5,479
    In your original post you said 0.25 files per core, which means 1 file per 4 CPUs, so I guess you made a typo in the original. :wink:

    Also in a "Typical SQL implementation" for sure splitting the files will give benefits. The question is; will there also be a benefit in a NAV environment. I was really hoping from the post you made that you have done direct comparisons between single and multiple files WITH NAVISION. And would be very interested to hear your results.
    David Singleton
  • Skeeve Member Posts: 33
    The .25 to 1 per core is what's in the original post as well as the follow-up post. Yes, you are right: if you choose to go with .25 per core, you end up with only one file on a 4-core machine (which is the overwhelming majority out there). Better to go with .5 or 1, which gives you 2 or 4 files on a 4-core machine. No typo there ;-) I guess .25 shouldn't even be mentioned because on a 4-core machine, you end up with 1 file.

    Anyhow, I have not done a precise benchmark on this, but several clients have confirmed the noticeable performance improvement after I added DB files of EQUAL size to the one that existed.

    Problem is, NAV defaults to:

    - 1 actual DATA file (should be 2 to 4, depending on the number of CPUs)
    - no rowlocking (should be on)
    - only 501 records cached for FINDSET (should be 1024)
    - only 10 seconds lock timeout (should be more like 30)
    - auto update and create statistics (should be off and done at night via maintenance job)
    - 10% file growth (should be higher - see the T-SQL sketch below)
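    For the statistics and file-growth items, a hedged T-SQL sketch (database and file names are placeholders; rowlocking, the record set size and the lock timeout are NAV-side settings changed via File, Database, Alter in the Classic client, not via T-SQL):

        -- Switch off automatic statistics and maintain them via a nightly job instead:
        ALTER DATABASE [MyNAVDB] SET AUTO_CREATE_STATISTICS OFF;
        ALTER DATABASE [MyNAVDB] SET AUTO_UPDATE_STATISTICS OFF;

        -- Give the data file a fixed growth increment instead of 10%:
        ALTER DATABASE [MyNAVDB] MODIFY FILE (NAME = N'MyNAVDB_1_Data', FILEGROWTH = 1GB);

        -- Nightly maintenance job step, executed in the NAV database:
        EXEC sp_updatestats;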

    But again, this is only a part of the overall picture - I am just mentioning the low-hanging fruits.

    Great performance of a NAV DB depends on many more things.
  • David_Singleton Member Posts: 5,479
    Skeeve wrote:
    I guess .25 shouldn't even be mentioned because on a 4-core machine, you end up with 1 file.

    Yes, that makes sense, because you were contradicting yourself by suggesting one file with a quad core. Now I understand what you were attempting to say.
    David Singleton
  • DenSter Member Posts: 8,304
    The difficult thing here is not whether these are good suggestions. They all are, but you really can't tell for sure which one is contributing how much unless you implement them one at a time. Have you actually gone in and done a measurement, put in multiple files, measured again, implemented the next change, measured again, etc.? My guess is that most of the time, many changes are introduced at once, and it is not really possible to pinpoint how much each thing is contributing.

    Couple of things I want to add here:
    • Always rowlock: don't just turn this on unless you have plenty of RAM and you are on 64bit. Rowlocking is a memory hog, so be careful with this. If you are on 32bit, then this is definitely not a good idea, as the extra RAM is not used for lock caching.
    • It's not the file size that SQL Server looks at, it's how the files are filled. When you just add files, SQL Server will not use the original file until the new ones are about as full as the original one. So if you go from 1 to 2 files, theoretically you won't notice a difference until file number 2 is filled up as much as file number 1.
    • Number of files per core is highly debatable. This makes a big difference when you have long running queries that SQL Server processes in parallel across CPUs. In NAV implementations the queries are so small that some people would argue that expanding the number of files will not make much of a difference. I would really be interested in implementations where this particular thing is the only one that is implemented, with measurements before and after.
    • File growth has absolutely nothing to do with performance, unless the autogrow is constantly triggered. 10% is a perfectly acceptable number. The DBA has to keep an eye on the file however, and expand it manually before it hits the limit, and prevent an auto grow event from happening.
    • Lock timeout depends on user preference. I just had a go-live where they tried 30 seconds and were very unhappy about it; they tried 20 seconds, and ended up putting it back to 10 seconds. The timeout itself does not affect performance, other than the fact that the same transaction might be started more than once during peak hours.
    One thing I don't like about blanket statements like this is that it doesn't always work that way. What I have found is that each situation calls for its own unique approach. Just slapping everything in there in one standard way is not always the right thing.

    Other than that, good advice, all well worth investigating :mrgreen:
  • DenSter Member Posts: 8,304
    By the way, the biggest impact low hanging fruit is still index tuning. Compared to the impact of index tuning, everything else is small potatoes.
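    As a hedged starting point only (this shows what SQL Server 2005 and later itself thinks it is missing; it knows nothing about NAV's keys and SIFT design, so treat it as a hint, not a tuning plan):

        SELECT TOP 20
               d.statement                AS table_name,
               d.equality_columns,
               d.inequality_columns,
               d.included_columns,
               s.user_seeks,
               s.avg_user_impact
        FROM   sys.dm_db_missing_index_details d
        JOIN   sys.dm_db_missing_index_groups g      ON g.index_handle = d.index_handle
        JOIN   sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
        ORDER BY s.user_seeks * s.avg_user_impact DESC;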
  • Skeeve Member Posts: 33
    Wow - I'm afraid I don't have the time to respond properly to this and the previous posting, but you should realize that you are questioning the storage best practices as published by Microsoft.

    I mean, how can you possibly say that file growth has nothing to do with performance?

    And then you are saying that "number of files is highly debatable" - Dude, check the MS best practice for SQL.

    But let me tell you that index tuning is not the low-hanging fruit. Index tuning is what takes the most time and improves performance in small increments, unless of course you have some very stupid code/index combination, but then it only applies to maybe one or two specific processes.

    If you have a poorly performing DB across the entire application (not just ILE or CLE or some other process), you will gain the most improvement and immediate customer satisfaction by looking at those "low-hanging fruits".

    If, on the other hand, you have ruled out all the low-hanging fruits, and the DB is still slow, THEN I agree that index tuning is the next step ;-)

    Now, what I do completely agree with you on is that every implementation is different. Some of these things might be called for in one implementation, and not in another. However, I was trying to come up with a best-practices type of posting here. All we see in these postings is that someone is having performance issues. We don't have any detail on the number of users, customizations, integrations, server architecture, SQL installation and configuration, verticals, etc.

    Again, following the storage best practices published by MS has improved performance considerably in ALL cases where I have done it - and that's NAV implementations ranging from 10 to several hundred users.
  • DenSter Member Posts: 8,304
    edited 2008-11-15
    Sorry you feel offended, and I don't mean for this to turn into a pissing match, but advice like this should not be given without putting things into perspective.
    Skeeve wrote:
    you are questioning the storage best practices as published by Microsoft.
    No, I am questioning just blindly following those guidelines without thinking about them. You should look at each bullet point and think about whether it is a valid point specifically for NAV. You should also read what I added to each bullet point instead of simply dismissing what I say.
    Skeeve wrote:
    I mean, how can you possibly say that file growth has nothing to do with performance?
    Because it doesn't. What causes a problem in performance is the autogrow event itself, not by how much the file size grows. The PROPER WAY to take care of that is to expand the file size BEFORE it hits the limit. Increasing the increment only makes the step bigger. The autogrow event itself will occur less frequently, but when it does, the bigger the growth the longer it takes to perform the task. As long as there is free space in the data file it doesn't make a damn bit of difference in performance whether the file growth is 10% or 50%.
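    A minimal sketch of that manual expansion (logical file name and target size are placeholders; the new SIZE must be larger than the current one):

        -- Grow the data file yourself during a quiet period instead of waiting for autogrow:
        ALTER DATABASE [MyNAVDB] MODIFY FILE (NAME = N'MyNAVDB_1_Data', SIZE = 120GB);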
    Skeeve wrote:
    And then you are saying that "number of files is highly debatable" - Dude, check the MS best practice for SQL.
    I said it is debatable, meaning we can have a debate about its merits, based on actual measurements done before and after making that one change. I happen to have seen very bad performance on a database that had many files, and when I put the whole thing in one file, the performance was much better. So my personal experience tells me the exact opposite of what this link of yours says. Again, I am not dismissing it, I am questioning it, based on my personal experience and the experience of people whose opinion I value highly, in NAV implementations. Because I don't just take my one experience as gospel, I actually verify what I find with some experts that I work with. Those recommendations are made for the 'average SQL Server implementation', and are not NAV specific.

    I also said that I am interested in hearing about actual measurements where multiple files are the only change, because (apparently unlike you) I am willing to consider the possibility that my opinion might not be the right one, look at things and possibly adjust my opinion. Just because someone at MSFT put a best practices article on MSDN doesn't make it true for every NAV database out there.
    Skeeve wrote:
    But let me tell you that index tuning is not the low-hanging fruit. Index tuning is what takes the most time and improves performance in small increments, unless of course you have some very stupid code/index combination, but then it only applies to maybe one or two specific processes.
    Using a specific toolset (I think we all know which one), I do index tuning projects in 2 days, with up to 8 hours of implementation assistance follow-up. I would definitely say that, using this approach, index tuning is not a slow incremental process.

    What I think you missed in my previous post is that I think all of these recommendations are worth investigating; I am not dismissing any of it at face value. I just put some comments around some of the things I think you don't quite get, that's all.
  • Skeeve Member Posts: 33
    The SQL Perform Tools are fantastic - and I love to implement them.

    Let's say you have a crappy server, with everything storage-wise completely wrong. Now you go in and use the Perform-Tools to tune indexes. Yes, you will improve the performance considerably, within your 2 days.

    Let's take all the changes you are making and reverse them.

    Now I am going in and I am fixing all the storage problems (and other configuration issues) in 60 minutes and the result will be a considerable performance improvement.

    Now you combine both and the client goes to heaven.

    That's my point though: in my opinion, if the storage and SQL are not configured correctly, this should be the first thing you do. And then you go in and apply your tools to increase performance even further; in fact, all of the things I mentioned would be discovered during a 1 or 2 day performance analysis engagement and would be part of my recommendations document.

    Anyhow - I am pretty sure Hynek will agree with that ;-)
  • DenSter Member Posts: 8,304
    There, see how putting a little context in there makes a difference? :mrgreen:

    Considering that the 60 minute change you are proposing is the result of a 2-3 day investigation, I'd say that, coming from a crappy implementation, your changes and my changes would probably tie as far as amount of work goes :mrgreen:. I do agree, in your work (because I used to work there too), you see a lot of crappy implementations due to very bad initial recommendations, and somehow someone always needs to go in and fix the first mistakes. So yes, of course, if there are glaring hardware faults, those are easiest to fix. I am just questioning the general validity of those specific recommendations.

    Coming from a situation in which the hardware is already pretty well configured though, which is quite common in my experience, index tuning will make a MUCH bigger impact. I do hardware checks, but rarely find big fat glaring faults at work. If you're already on a well performing SAN, on separate RAID10 arrays for data and transaction log, I can guarantee you that splitting up the data files will make a minimal impact compared to what you can accomplish with index tuning.
  • Skeeve Member Posts: 33
    So now you know where I work, huh? :roll:

    I agree with most of what you said, with one exception. It does not take me 2-3 days to find the hardware problems. I usually do that initial investigation in a one-hour desktop sharing session, which results in an 8-hour desktop sharing session with the person responsible for the DB to correct these things.

    If performance is still unacceptable after that, I start looking at indexes, running SQL traces, etc.

    Hey, but that's only true if hardware/configuration problems were found during the first 60 minutes. Otherwise, I go down the same path you do :mrgreen:
  • DenSter Member Posts: 8,304
    Skeeve wrote:
    So now you know where I work, huh? :roll:
    That's right, you even sat in the cube next to mine, you forgot about that? #-o
  • Skeeve Member Posts: 33
    Man, that was a long time ago. Time flies :roll:
  • Skeeve Member Posts: 33
    David_Singleton wrote:
    Skeeve wrote:
    Multiple DB files are definitely beneficial for performance.

    In fact, it makes a huge difference and it is one of the "low hanging fruits" to improve NAV's performance on SQL.

    Have you got any before/after stats on this? I would be very interested to hear numbers if you have them.

    David,

    I just happened to help a client during an upgrade go-live. They were starting out with one file (well, two, but the primary doesn't count as it doesn't hold any actual data).

    So I used the lovely procedure pt_filestats2 to see what's happening storage-wise. The one and only data file had an average wait time of 120ms. Pretty bad.

    So I created a second file of roughly equal size. Now they started to post go-live batches again and I kept running pt_filestats2.

    After just a minute, the average wait on the first file went down to 80ms. At the same time, the wait increased on the second file.

    HOWEVER, the total average wait time of both files was way lower than the 120ms, and it kept falling. With two files, it seems to average out at about 80ms. This server has 8 cores, so the long-term plan is to go to 4 files of equal size; they just can't do it right now because there's not enough space.

    Long story short, even adding only one file to the DB has decreased the average wait time on the DB (all files) from 120ms to 80ms :mrgreen:
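    For anyone without pt_filestats2, the standard DMV gives comparable per-file numbers (cumulative since the last restart of the SQL Server service, so sample it twice and compare if you want a "current" value):

        SELECT f.name,
               f.physical_name,
               (vfs.io_stall_read_ms + vfs.io_stall_write_ms)
                   / NULLIF(vfs.num_of_reads + vfs.num_of_writes, 0) AS avg_stall_ms
        FROM   sys.dm_io_virtual_file_stats(DB_ID(), NULL) vfs
        JOIN   sys.database_files f ON f.file_id = vfs.file_id;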
  • air_wolf Member Posts: 13
    Interesting topic.
    I have the worst case scenario: 32bit, 4GB RAM, RAID 5, all files on the same partition, a 60GB NAV database, a lot of transactions, never tuned anything. Performance is below zero, if I can say so (even filtering on the primary key in small tables will lock up the system).
    I will switch to the following configuration: 64bit, RAID10, multiple files on separate disks, index tuning.
    Before the upgrade I will do some benchmarks against the old database, and afterwards against the new database. That way we will see the improvement...

    Of course, any improvement idea will be appreciated :)
  • Belias Member Posts: 2,998
    Skeeve wrote:
    - only 501 records cached for FINDSET (should be 1024)
    I'm not a SQL expert, but I'm trying to learn something about it, as I'm a NAV developer.
    I'd like to point out something about this statement: Microsoft lowered the record set size from 500 to 50 in NAV 2009, and they did this because you usually loop through a small set of records (like order and invoice lines).
    For larger sets, you should then use the old FIND('-').
    -Mirko-
    "Never memorize what you can easily find in a book".....Or Mibuso
    My Blog
  • davmac1 Member Posts: 1,283
    Not everyone has under 50 lines on a sales order.
    Some companies may have hundreds of lines - sometimes over 1000.
    This really should be a database setting.
  • stryk Member Posts: 645
    Belias wrote:
    Skeeve wrote:
    - only 501 records cached for FINDSET (should be 1024)
    I'm not a SQL expert, but I'm trying to learn something about it, as I'm a NAV developer.
    I'd like to point out something about this statement: Microsoft lowered the record set size from 500 to 50 in NAV 2009, and they did this because you usually loop through a small set of records (like order and invoice lines).
    For larger sets, you should then use the old FIND('-').
    OK, this is going to be a little "off topic", but anyway ... That's the story about the "Cache Record Set Size":

    The old FIND('-') declared a SQL cursor which loaded the WHOLE record set into it - regardless of whether it was 1 record or 1 million. This resulted in heavy load on the Server AND the Client.
    Thus, FINDSET was introduced, which creates sort of packages: the first one with 500 records (the Cached Record Set size), then smaller ones if the result exceeds 500 recs. This created smaller cursors, reducing the load on Server and Client - and especially the "Client" is the important thing!
    With NAV 2009 come the RTC and the Service Tier. This ST now has to process MULTIPLE users! So, simplified: previously we had one fat client processing ONE cursor at 500 recs. Now the ST would process MULTIPLE cursors at 500 records each, which could again result in overloading the ST. That's why MS reduced the Cache Record Set down to 50 - simply to save RAM.
    Hence, if you are running NAV 2009 with the Classic Client, you should set it back to 500. On the other hand, if you are running any "classic" NAV on Citrix/TS it might be feasible to reduce the Cache Record Set, e.g. to 100, too - for the same reasons!
    Last but not least, there's always the discussion about "FIND vs. FINDSET". The general advice is: ALWAYS use FINDSET! If your actual result set exceeds the 500 records, well, so be it. Yes, this will result in cursor re-creation and "repositioning" etc., which is sub-optimal, but normally current hardware can compensate for this. The old FIND should only be used in very specific cases, and only if FINDSET doesn't work.
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • David_Singleton Member Posts: 5,479
    davmac1 wrote:
    Not everyone has under 50 lines on a sales order.
    Some companies may have hundreds of lines - sometimes over 1000.
    This really should be a database setting.

    :-k It is a database setting ... isn't it :?:
    David Singleton
  • David_Singleton Member Posts: 5,479
    stryk wrote:
    OK, this is going to be a little "off topic", but anyway ... That's the story about the "Cache Record Set Size":

    The old FIND('-') declared a SQL cursor which loaded the WHOLE record set into it - regardless of whether it was 1 record or 1 million. This resulted in heavy load on the Server AND the Client.
    Thus, FINDSET was introduced, which creates sort of packages: the first one with 500 records (the Cached Record Set size), then smaller ones if the result exceeds 500 recs. This created smaller cursors, reducing the load on Server and Client - and especially the "Client" is the important thing!
    With NAV 2009 come the RTC and the Service Tier. This ST now has to process MULTIPLE users! So, simplified: previously we had one fat client processing ONE cursor at 500 recs. Now the ST would process MULTIPLE cursors at 500 records each, which could again result in overloading the ST. That's why MS reduced the Cache Record Set down to 50 - simply to save RAM.
    Hence, if you are running NAV 2009 with the Classic Client, you should set it back to 500. On the other hand, if you are running any "classic" NAV on Citrix/TS it might be feasible to reduce the Cache Record Set, e.g. to 100, too - for the same reasons!
    Last but not least, there's always the discussion about "FIND vs. FINDSET". The general advice is: ALWAYS use FINDSET! If your actual result set exceeds the 500 records, well, so be it. Yes, this will result in cursor re-creation and "repositioning" etc., which is sub-optimal, but normally current hardware can compensate for this. The old FIND should only be used in very specific cases, and only if FINDSET doesn't work.

    Thanks Jörg, that's a very clear explanation. :D
    David Singleton
  • Alex_Chow Member Posts: 5,063
    David_Singleton wrote:
    davmac1 wrote:
    Not everyone has under 50 lines on a sales order.
    Some companies may have hundreds of lines - sometimes over 1000.
    This really should be a database setting.

    :-k It is a database setting ... isn't it :?:

    I think on one of the threads, most people wanted a FINDSET(50) or FINDSET(500) - or the option to specify the number of records to fetch beforehand, instead of just having a fixed number in the setup.

    I wonder if Microsoft was listening... :-k