SQL 2005 64bit- Nav 5sp1 SLOW

fredp1 Member Posts: 86
edited 2010-03-24 in SQL Performance
Hi,
I am implementing NAV 5 SP1 with the Tectura Life Science add-on.
Go-live is in April 2009.
I have been having slow performance on a beefy SQL Server 2005 box: when I refresh a Production Order, it was taking 5 minutes to run! Ouch.
The same database on the native NAV database server took 10 seconds on a laptop!
The consultant cannot provide SQL advice, but mentioned that the BOMs are small and it should be much faster (compared to other, more complex sites).

The hardware has plenty of grunt: Windows 2003 64-bit, 12 GB RAM, Xeon quad-core 3 GHz. It's an IBM HS21 blade.
The disks are mirrored 73 GB 15K rpm SAS drives for the OS, while the data sits on a SAN - fibre channel, 12x 300 GB 15K rpm SAS drives in RAID 5.
The SAN and server only run/host NAV.
SQL Server 2005 64-bit SP2.

I created a SQL maintenance plan today to reindex, defragment and update statistics in SQL, and this made a big difference in speed.
The job now takes 20 seconds to run, which is much better, but still not as good as the native db on a laptop!
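In case it helps others searching later, the same maintenance steps can also be run as plain T-SQL instead of a maintenance plan. This is only a sketch; the database and table names below are placeholders, not objects from this system.

```sql
-- Rebuild all indexes on one heavily-used table (repeat per table as needed),
-- then refresh out-of-date statistics for the whole database.
-- [NavDemo] and the table name are placeholders.
USE [NavDemo];
GO
ALTER INDEX ALL ON [dbo].[MyCompany$Prod_ Order Line] REBUILD;
GO
EXEC sp_updatestats;
GO
```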

I've just ordered the NAV/SQL Performance Field Guide today, so if anyone can provide some further help, I'd appreciate it.

I'll keep this thread open and post my findings when I find more improvements.

Thanks for any advice

Fred

Answers

  • bbrown Member Posts: 3,268
    First, in the world of SQL, that's not a beefy box.

    1. Get the database off that RAID 5 array. RAID 10 ONLY. I am not a big fan of using a SAN for NAV. I don't think NAV alone justifies the purchase of a SAN. You can get much more bang for your buck with DASD. Not to say that there aren't situations where a SAN is the better choice.

    2. Where's the transaction log? Get it on a dedicated RAID 1.

    3. You don't mention how large the database is.

    4. Is "Dynamic Low-Level Code" turn on in setup? That can increase the time to refresh an order.
    There are no bugs - only undocumented features.
  • stryk Member Posts: 645
    Hi!
    You should also make sure to have the latest version of NAV 5.0 SP1 installed - please refer to Waldo's BLOG: http://dynamicsuser.net/blogs/waldo/archive/2009/01/24/writing-platform-updates-overview-3-70-b-nav2009-rtm.aspx
    (your NAV build no. should be 27368)

    Once setup & configuration is OK, you should make sure that "Index Hinting" is disabled (depends on NAV build).
    Most problems like the ones you describe are related to missing or insufficient indexes. You should investigate the problematic process with SQL Profiler and search for "bad queries", e.g. queries using more than 1,000 "Reads" or taking longer than 50 msec in "Duration" or "CPU". High "Reads" often indicate "Index Scans" - usually due to the lack of an optimal index. Once you add a sufficient index, the process should speed up.
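    On SQL Server 2005 you can also pull the heavy readers straight from the plan cache with the DMVs, without running a trace. A sketch (the 1,000-read threshold simply mirrors the rule of thumb above):

    ```sql
    -- Cached statements ranked by logical reads
    SELECT TOP 20
           qs.total_logical_reads,
           qs.execution_count,
           qs.total_elapsed_time / 1000 AS total_elapsed_ms,
           st.text AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE qs.total_logical_reads > 1000
    ORDER BY qs.total_logical_reads DESC;
    ```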
    I've just ordered the NAV/SQL Performance Field Guide today, so if anyone can provide some further help, I'd appreciate it.
    :D Thanks. Hope it will help you to fix some issues!

    Kind regards,
    Jörg
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • davmac1 Member Posts: 1,283
    Are you running serial number tracking and adjusting costs real-time?
    Look at the total number of rows read, inserted, and updated. NAV can do some things that would tax any machine with certain settings.
    Average costing and serial number tracking of items in inventory can cause massive slowdowns.
  • fredp1 Member Posts: 86
    Hi
    Thanks for your reply,
    Please bear with me, as I have only very limited experience with performance tuning on SQL, and I've got a lot of questions!
    bbrown wrote:
    1. Get the database off that RAID 5 array. RAID 10 ONLY. I am not a big fan of using a SAN for NAV. I don't think NAV alone justifies the purchase of a SAN. You can get much more bang for your buck with DASD. Not to say that there aren't situations where a SAN is the better choice.
    Can you quantify what the performance difference would be between RAID 5, 1 and 10?
    Microsoft's recommendation was RAID 10, but some of the people I spoke to mentioned that, bang for buck, RAID 10 was overkill.
    What sort of increase would one expect?
    Can you recommend any DASD storage that you have used?
    bbrown wrote:
    2. Where's the transaction log? Get it on a dedicated RAID 1
    The transaction log is with the data on the RAID 5.
    The mirror is C: for the OS, partitioned with a D: for the programs.
    E: is the SAN, where the data and transaction log file are kept.
    I'll move the transaction log file to the D: and report back the difference in speed.
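    For reference, moving a log file in SQL 2005 does not require a detach; a sketch with placeholder database and file names:

    ```sql
    -- Point SQL Server at the new log location (applied when the db comes back online)
    ALTER DATABASE [NavDemo]
        MODIFY FILE (NAME = NavDemo_log, FILENAME = 'D:\SQLLogs\NavDemo_log.ldf');
    ALTER DATABASE [NavDemo] SET OFFLINE WITH ROLLBACK IMMEDIATE;
    -- ...move the existing .ldf to D:\SQLLogs\ at the OS level, then:
    ALTER DATABASE [NavDemo] SET ONLINE;
    ```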
    bbrown wrote:
    3. You don't mention how large the database is.
    The database is very small, 2 GB, as we have only migrated master data - vendors, customers, G/L opening balances, items, BOMs, inventory with lot numbers.
    The db is actually 4 GB, but it has 50% free space.
    Next month we will do parallel runs, so I want to make sure that we do not hit an I/O bottleneck.

    bbrown wrote:
    4. Is "Dynamic Low-Level Code" turn on in setup? That can increase the time to refresh an order.
    Yes, it is on - the consultant insists that it stays that way.
  • bbrown Member Posts: 3,268
    If you are having these performance issues with a 2 GB database, then you've got a serious issue somewhere. You could run this on a laptop and get good performance. The hardware issues will likely not make a difference with such a small database. However, they will become more apparent as the database grows.
    There are no bugs - only undocumented features.
  • fredp1 Member Posts: 86
    stryk wrote:
    Hi!
    You should also make sure to have the latest version of NAV 5.0 SP1 installed - please refer to Waldo's BLOG: http://dynamicsuser.net/blogs/waldo/archive/2009/01/24/writing-platform-updates-overview-3-70-b-nav2009-rtm.aspx
    (your NAV build no. should be 27368)
    How can you check? Help/About doesn't say.
    I know we used the NAV 5 SP1 DVD, then we applied platform update rollup 956161.
    stryk wrote:
    Once setup & configuration is OK, you should make sure that "Index Hinting" is disabled (depends on NAV build).
    Where do I find this?

    Most problems like the ones you describe are related to missing or insufficient indexes. You should investigate the problematic process with SQL Profiler and search for "bad queries", e.g. queries using more than 1,000 "Reads" or taking longer than 50 msec in "Duration" or "CPU". High "Reads" often indicate "Index Scans" - usually due to the lack of an optimal index. Once you add a sufficient index, the process should speed up.
    When running Profiler, which events do I include?
    I'll have a look at Profiler again and see if I can spot bad queries.
    I ran the performance tuning wizard today, and it suggested a number of new indexes with a possible 30% increase in performance.
    Can you provide some feedback on how long to run a trace, and should I accept the new indexes the tuning wizard wanted to add?

    I can't wait to get your book.

    Thanks
  • fredp1 Member Posts: 86
    davmac1 wrote:
    Are you running serial number tracking and adjusting costs real-time?
    Look at the total number of rows read, inserted, and updated. NAV can do some things that would tax any machine with certain settings.
    Average costing and serial number tracking of items in inventory can cause massive slowdowns.

    Hi,
    We are using Standard costing with Lot numbers. No serial numbers.

    Thanks
  • fredp1 Member Posts: 86
    bbrown wrote:
    If you are having these performance issues with a 2 GB database, then you've got a serious issue somewhere. You could run this on a laptop and get good performance. The hardware issues will likely not make a difference with such a small database. However, they will become more apparent as the database grows.

    I have even turned off the change log to try and track it down.
    Any help I can get would be appreciated!

    I'm going to try to access the same db on SQL 2005 32-bit and SQL 2000 32-bit, as I've heard some negative feedback on SQL 2005.
    I've run the same procedure in NAV on a desktop Core 2 Duo PC (2 GB RAM, SATA drive) with Server 2003 64-bit and SQL 2005 64-bit. It was taking 30 seconds vs 20 seconds on the server with the SAN.

    Kind Regards
    Fred
  • DenSter Member Posts: 8,304
    fredp1 wrote:
    access the same db on a SQL 2005 32bit and SQL 2000 32bit as I've heard some negative feedback on SQL 2005
    That is not going to tell you anything. With your database on SQL Server, you first need to make sure that your server is set up properly. With a RAID 5 storing your data as well as your transaction log, it doesn't matter at all which version of SQL Server you are on; performance is always going to suck.

    Your hardware must be set up properly (RAM, storage, CPU, settings), your OS must be set up properly (memory management, optimized for performance), and your SQL Server installation must be set up properly (degree of parallelism, memory settings, etc.). Then your database must be created with the proper file sizes, on the proper drives.

    THEN you can start comparing SQL Server versions. You should use the latest version, at least SQL Server 2005. Don't use SQL Server 2000; support for that version has been discontinued. If you have the choice, use 64-bit; memory management is much better on that platform. It's not whether it's 32 or 64 bits that kills performance, it's people logging into the SQL Server and running 32-bit apps (like NAV) on your 64-bit box. If you are letting people RDP right into SQL Server, then I bet that is a large part of your problem.
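    The instance-level settings mentioned above (parallelism, memory) are changed with sp_configure. A sketch; the values are examples for discussion, not a recommendation for any particular box:

    ```sql
    -- Make advanced options visible
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Cap query parallelism (NAV workloads are often run with MAXDOP 1)
    EXEC sp_configure 'max degree of parallelism', 1;
    -- Leave memory headroom for the OS instead of letting SQL Server take it all
    EXEC sp_configure 'max server memory (MB)', 10240;
    RECONFIGURE;
    ```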
  • fredp1 Member Posts: 86
    DenSter wrote:
    Your hardware must be set up properly (RAM, storage, CPU, settings), your OS must be set up properly (memory management, optimized for performance), and your SQL Server installation must be set up properly (degree of parallelism, memory settings, etc.). Then your database must be created with the proper file sizes, on the proper drives.

    I take your point on RAID 5. I'll set up some new drives for the transaction log file (RAID 1/10) and new drives for the data (RAID 10) on the SAN.

    Can you share your experience on what exactly to change?
    I'd like to know what to change for the OS and SQL.
    I'm not sure where to look or what to change.
    Any links with a walkthrough would be much appreciated.
    DenSter wrote:
    THEN you can start comparing SQL Server versions. You should use the latest version, at least SQL Server 2005. Don't use SQL Server 2000; support for that version has been discontinued. If you have the choice, use 64-bit; memory management is much better on that platform. It's not whether it's 32 or 64 bits that kills performance, it's people logging into the SQL Server and running 32-bit apps (like NAV) on your 64-bit box. If you are letting people RDP right into SQL Server, then I bet that is a large part of your problem.
    All connections are through the client on the users' workstations. There is no RDP onto the SQL box.
  • tim6661234 Member Posts: 12
    All the above advice is valid, but don't forget that the client does the vast majority of the processing in NAV, so the client machines need to be of a very good specification.

    We recently had a system that took 5 minutes to certify a BOM. When this was run on the server it took 30 seconds. It was nothing to do with the network; the server, even while running SQL, simply performed better than the client hardware.

    We recommended a P4 3.2 GHz with 1 GB RAM if on XP, but the machines they had were much slower and had 256 MB of RAM.

    To get round this we turned "Dynamic Low-Level Code" off. As they didn't use MRP this did not matter, and it can be run as a batch job instead.

    I'd switch it off to see what the results are. It can easily be switched back on, and the calculation can be run as a batch routine.

    We also found that running the same routine on a laptop under a native database was instant.
  • fredp1 Member Posts: 86
    Hi

    I've tried turning off the dynamic low-level code calculation and it didn't make a notable difference.
    Were you using NAV 5 SP1?

    My understanding is that 5 SP1 is different from previous versions on SQL 2005, as it doesn't have a table for the SIFT data.
    It uses a view in SQL Server to emulate the SIFT tables used in previous versions of SQL and/or NAV. I have read about significant performance degradation until the customer performs some database tuning.
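    To illustrate what "a view instead of a SIFT table" means: NAV 5.0 SP1 maintains SIFT totals as indexed views, roughly of the shape sketched below. Every object and column name here is invented for illustration, not taken from a real NAV schema:

    ```sql
    -- Hypothetical shape of a NAV 5.0 SP1 SIFT indexed view
    CREATE VIEW [dbo].[MyCompany$Item Ledger Entry$VSIFT$1]
    WITH SCHEMABINDING
    AS
    SELECT [Item No_],
           SUM([Quantity]) AS [SUM$Quantity],
           COUNT_BIG(*) AS [Cnt]   -- COUNT_BIG(*) is required in an indexed view
    FROM dbo.[MyCompany$Item Ledger Entry]
    GROUP BY [Item No_];
    GO
    -- A unique clustered index materializes the view, so the totals are maintained
    CREATE UNIQUE CLUSTERED INDEX [VSIFTIDX]
        ON [dbo].[MyCompany$Item Ledger Entry$VSIFT$1] ([Item No_]);
    ```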

    If anyone has first-hand experience with performance tuning SQL 2005 and NAV 5 SP1, I'd like to hear from you, please.

    Someone mentioned not to use a SAN as it may be slow.
    I moved the data from the SAN to the mirror (where the OS and programs are located) and it was slower.
    I kept the transaction log file on the mirror and moved the data back to the SAN, and it was still slower than keeping both the data and the transaction log file on the SAN.

    I'll still try to get the SAN rebuilt with RAID 10, but I'm not sure the improvement will be that great.

    I've got Microsoft involved, and one thing that I found interesting is the difference in speed the trace flags made.
    Trace flag 4616 needs to be set, but I also found trace flag 4119 set.
    When I removed T4119, the refresh of a production order slowed down from 55 sec to 80 sec on my server.
    When I originally installed NAV, I only configured the SQL Server startup to have trace flag 4616. I'm not sure what added T4119... but I'm just sharing this information for others to be aware of.
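    A quick way to see which trace flags are active server-wide (which is how a stray flag like 4119 can be spotted):

    ```sql
    -- Lists all globally enabled trace flags on the instance
    DBCC TRACESTATUS(-1);
    ```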

    Fred
  • David_Singleton Member Posts: 5,479
    Hi Fred, and welcome to the wonderful world of Navision performance tuning. I have been doing Navision tuning for about 15 years now, and last week I learnt something new on a tune-up project. It is a complex task, and we are all learning.

    Firstly, there is no silver bullet. If you go around hunting for that single magic problem to fix the issue, you will never find it. All performance issues come down to a combination of problems that individually seem insignificant, but together make one big problem. Even if you do find the mythical silver bullet and fix the system, you will probably find that you actually fixed 20 other minor issues which, combined with the last tweak, solved the problem - and you will probably think you know, but never really know, what the true underlying problem was.

    You can make a NAV/SQL combination fly on a SAN; RAID 5 can be the perfect choice for performance; you can run lots of FlowFields. Everything can be made to work, but it takes time, patience and experience.

    Good luck.

    The point of this is that you need to take measurements of every single thing you change, and that needs tools - not "Hmm, that looks faster". You need to write down numbers: how many milliseconds before and after.

    But most importantly: take advice, test it, measure it, and then decide if it helps or not.
    David Singleton
  • fredp1 Member Posts: 86
    Hi all,

    Thanks for everyone that replied to my post.

    We got a great solution ('our silver bullet') from our implementers... it was some bad code that took too long to process with OUR data.
    Microsoft also helped out.

    The problem was that the data had zero values in routings, which caused the code to take longer than it should.
    It goes to show how important it is to have the right tools to pinpoint the bottleneck. It's also important to be able to interpret the results of the performance stats!
    I believe they used the code coverage tool to pinpoint the problem.

    With a typical BOM/routing it would have been fine, but with your own data, things sometimes go wrong.
    So from a 30-second delay, it is now less than 3 seconds to release the production order.
    We also uncovered some other performance issues with the code, which were fixed too. Now we can focus on testing/configuring NAV for our go-live date.

    With the help of the NAV/SQL Performance Field Guide, I have a better understanding of reading and measuring the stats.
    I'll end up rebuilding the SAN from RAID 5 to RAID 10.
    The setup will be simple: RAID 10 (6 drives, 15,000 rpm) for the data, transaction log file and tempdb.
    I hope this will be better than the current setup of RAID 5 for the data/log file with tempdb on the RAID 1 OS system disk.

    Thanks again.
  • kriki Member, Moderator Posts: 9,094
    fredp1 wrote:
    The setup will be simple: RAID 10 (6 drives, 15,000 rpm) for the data, transaction log file and tempdb.
    Do you mean to put the data, TL file and tempdb all on that 6-disk RAID 10?
    It is better than putting everything on a 6-disk RAID 5, but not the best for performance.
    Your TL file should be on dedicated disks!
    If you only have 6 disks, I would make a RAID 1 for the TL file and a 4-disk RAID 10 for the DB and tempdb.
    Best would be an extra RAID 1 for the TL and another one for tempdb.
    Regards, Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!


  • Bibo Member Posts: 22
    kriki wrote:
    Best would be an extra RAID1 for TL

    For performance reasons, wouldn't it be better to put the TL on RAID 0 instead of RAID 1? Of course with an appropriate TL backup strategy.
  • bbrownbbrown Member Posts: 3,268
    I would not consider that a good trade-off. The performance increase would not be worth the data loss if the drive failed.
    There are no bugs - only undocumented features.
  • DenSter Member Posts: 8,304
    If you back up the TL to a different drive every 15 minutes, that might not be a bad choice for some companies. As long as the choice is made clear to the customer, and it's the customer that decides, not some consultant.
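    The 15-minute log backup mentioned above is plain T-SQL, usually scheduled as a SQL Agent job; a sketch with placeholder names:

    ```sql
    -- Back up the transaction log to a drive other than the one holding the log
    BACKUP LOG [NavDemo]
        TO DISK = 'E:\Backups\NavDemo_log.trn'
        WITH NOINIT, NAME = 'NavDemo t-log backup';
    ```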

    Try to set up your data drives with a number that divides evenly, so the RAID controller will be able to cut up the data without having to perform big calculations. Instead of a 6-disk RAID 10, I'd suggest putting 2 more disks in there and making it an 8-disk RAID 10. It's easier to split a 64 KB chunk of data across 4 spindles (4 pieces of 16 KB, an even split) than across 3 (64 doesn't divide by 3, so you end up with wasted space).
  • bbrown Member Posts: 3,268
    I was referring to the changes since the last t-log backup. Of course, this also depends on having access to the DB catalog. That's why I like to have the primary and secondary data files on different arrays: if I lose the secondary data file, I can do a tail-log backup and recover the changes made since the last t-log backup.
    There are no bugs - only undocumented features.
  • DenSter Member Posts: 8,304
    bbrown wrote:
    I was referring to the changes since the last T-log backup
    I know what you mean; I am just saying that for some companies, losing 15 minutes' worth of transactions is acceptable, and for other companies it is not. The point is that it is the company that should make that decision, not the consultant. The consultant should explain the pros and cons to the company, and they then make the decision on what to do. It mostly comes down to a cost/benefit analysis. Yes, you can have a complete recovery plan, which comes at a certain hardware price that some companies might find cost-prohibitive. In that case they might consider what it means to lose 15 minutes of data and decide it's an acceptable risk. Other companies might make a different choice.

    I just don't think the decision should be based on the consultant saying "that is an unacceptable trade-off"; it is simply not the consultant's job to make that decision.
  • bbrown Member Posts: 3,268
    The consultant is not making a decision; they are simply presenting an opinion based on experience. The ultimate decision is the customer's. But isn't it the consultant's role to present information and options so the customer is able to make an informed decision?
    There are no bugs - only undocumented features.
  • DenSter Member Posts: 8,304
    Well you said:
    bbrown wrote:
    I would not consider that a good trade-off. The performance increase would not be worth the data loss if the drive failed.
    Which to me sounds like an absolute. Personally I think it depends on the situation, and in some situations it might be an excellent trade-off, depending on what the customer wants.

    Maybe I'm misunderstanding, but I've seen too many times that systems were set up without actually giving the customer the opportunity to make those decisions.
  • bbrown Member Posts: 3,268
    If these decisions were left to the customers, 90% of the systems I deal with would be running on RAID 5 if even that.
    There are no bugs - only undocumented features.
  • DenSter Member Posts: 8,304
    No, I don't agree; properly informed customers would not make that choice. If a customer decides to use RAID 5 after being properly informed, then they deserve all the problems that come with that decision.

    Your assumption is that consultants know better than customers, which is not something I would say. You would be surprised at how many systems are set up VERY poorly where the consultant made all the decisions, so that argument does not hold water in my book. For years a big part of my job was fixing setup mistakes that my fellow consultants had made because they were not properly informed. Setting up RAID 5 was THE most common of those mistakes.
  • DenSter Member Posts: 8,304
    Why are we even arguing about this? We probably agree about most of this type of thing :mrgreen:
  • fredp1 Member Posts: 86
    Hi everyone,

    We ended up creating a RAID 10 volume for the TL and tempdb.
    The data is on a RAID 1 volume.
    All of the volumes are on the SAN. The SAN is just for NAV!

    The major improvement we had was the fix for some bad code.
    I know I still have to defragment the db and the TL, so when I do that I'll run some benchmarks.

    My gut feeling tells me that there is a performance problem when logged on as a Windows user vs a database user.
    Again, I'll do some tests and raise a new post.

    Thank you kindly for everyone's advice.
  • fredp1 Member Posts: 86
    HI,

    Our partner came back with a solution. There was some inefficient code when using Windows login vs database login (it was something to do with Windows groups).
    Because this partner also has an add-on that allows very fine control of permissions, including field- and inventory-type posting permissions, the problem was unique to their codeunit.
    It looks like the system is now performing with a "snappy" response, i.e. the address details of a customer/vendor are populated without delay.

    Thanks to all that replied.

    Fred
  • Hannes Member Posts: 19
    fredp1 wrote:
    [...]
    I'll end up rebuilding the SAN from Raid 5 to Raid 10.
    The setup will be simple, Raid 10(6 drives 15000rpm) for the data, transaction log file and the tempdb.
    [...]

    Hello!
    I'm interested to know if rebuilding your SAN from RAID 5 to RAID 10 made a difference - any information really appreciated. I'm having an issue where the company that hosts the database doesn't want to go down the path of changing its SAN configuration from RAID 5 to RAID 10. They are running NAV 4, and probably SQL 2005 (not sure though). We have been advising them to try RAID 10, but they don't want to do that due to cost.
  • fredp1 Member Posts: 86
    Sorry, I didn't do any comparisons of RAID 5 vs RAID 10.

    Maybe someone else has some measurements?
    Fred
  • stryk Member Posts: 645
    Hi!

    Well, the problem with RAID 5 is writing the "parity value". Each write turns into extra I/O - read old data, read old parity, write new data, write new parity - which takes more time.
    As RAID 10 performs "pure" striping over mirrors, writing is much faster. There is not much difference in reading data.
    See also: http://weblogs.sqlteam.com/billg/archive/2007/06/18/RAID-10-vs.-RAID-5-Performance.aspx

    RAID 5 is often "sold"/used because it's cheaper: RAID 5 requires just 3 HDDs, while RAID 10 needs a minimum of 4.
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool