Locking Issue again

xenon Member Posts: 30
edited 2008-10-08 in SQL Performance
Hello experts. Trust me, I have searched, but I'm not sure that I've found (or understood) the answer I'm looking for. :?

I have Navision 4.0 SP3, running on 64-bit SQL Server 2005, with about 60 users. I am having problems (as most seem to) with records (or tables) being locked and causing user problems. Most often it is the "Reservation Entry" table, which, I assume, is continually being written to: every time a sales line is entered, a production order is released, and so on, something writes to the reservation table. This causes even more frustration than just poor performance - if you are entering a sales line and you get the deadlock error, pressing OK on the error returns you to the sales order with the line deleted, and you have to type it all in again, only to get another lock!

Initially I was convinced this was a setup / optimisation problem, as there must be a solution; otherwise the system is next to useless. I never had this issue with other ERP systems I have used. However, seeing all the posts on here about locks makes me wonder whether it is a system weakness.

I am an end-user, not a reseller / partner.

If anyone could give me any pointers I would really appreciate it. ](*,)

Thank you!

Comments

  • kapamarou Member Posts: 1,152
    Since you are an end user, things are tough. (I suppose you are not a developer either.)

    In order to avoid deadlocks you need to do several things. On the customer's side, you will need to optimize your system to keep transactions as short as possible. Then you'll need a developer to investigate the situation (analyze the code) and see whether there are locking-order violations. Your problems can be solved, but it requires some work from both your IT department and your provider...
  • hs_mann Member Posts: 17
    Hi

    First of all, you should check the Lock Timeout Duration in Database Setup in Navision.
    By default it is set to 10 seconds. Increasing this may help, as the system will wait longer before throwing the error message.

    Also check the Always Rowlock property. It should be ticked for SQL Server, as otherwise the system tries to take a table lock (TABLELOCK).
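
    For reference, this NAV setting presumably just controls the lock timeout the client sets on its SQL connection; the raw SQL equivalent of a 20-second timeout would be something like:

        -- Value is in milliseconds; 20000 = 20 seconds.
        -- A value of -1 means wait indefinitely (no lock timeout).
        SET LOCK_TIMEOUT 20000;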


    Harjot
  • xenon Member Posts: 30
    Hi

    I am the IT department, but not a Navision developer. I have already set the lock timeout to 20 seconds and ticked Always Rowlock. Maybe that's all I can do at my level.
  • kapamarou Member Posts: 1,152
    Has your provider advised you to perform database optimizations (in non-working hours... :D)? (File -> Database -> Information -> Tables -> Optimize). Search the forum and read the help on optimization, because it could help a lot...
  • kriki Member, Moderator Posts: 9,094
    [Topic moved from 'NAV/Navision' forum to 'SQL Performance' forum]

    Your partner should also do some SQL tuning on your DB. This includes disabling certain indexes/SIFT levels so the server has to write less information and thus speeds up a lot (this HAS to be done in NAV and NOT in SQL!).

    Some questions:
    - How big is your DB?
    - How much memory has been assigned to SQL?

    As you are in the IT department, you can do some things:
    - Make sure your transaction log is on a dedicated, fast (15,000 RPM) drive (NO OTHER FILES SHOULD BE THERE!). If there are other things, move them to other drives. Your transaction log should be on RAID1 or RAID10 and definitely NOT on RAID5.
    - Make sure the TL is not fragmented, and never shrink the file, to avoid fragmentation and the loss of time when it has to grow again.
    - Each night (if possible) do an index rebuild. If this is not possible, then at least recalculate the statistics each night (see the T-SQL sketch after this list).
    - Set the fill factor of the server to 95%.
    - Make sure that the server is NOT swapping memory to disk.
    - Set the DB properties (in SQL) auto create statistics and auto update statistics to FALSE (but only if you can regularly do an index rebuild).
    - Install the latest hotfix of NAV 4.0 SP3 (before doing this, test it in a test environment!).
    - Install SQL 2005 SP2 with at least hotfix 4 (best take the latest) (before doing this, test it in a test environment!).
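
    A minimal T-SQL sketch of those nightly steps; the database and table names below are placeholders for your own (NAV tables on SQL are named [CompanyName$Table Name]):

        USE [NavisionDB];  -- placeholder database name
        GO
        -- Rebuild all indexes on a table with a 95% fill factor
        -- (repeat per table, or script it over all user tables):
        ALTER INDEX ALL ON [CRONUS$Reservation Entry]
            REBUILD WITH (FILLFACTOR = 95);
        GO
        -- If a full rebuild does not fit in the window, at least refresh statistics:
        EXEC sp_updatestats;
        GO
        -- Disable automatic statistics ONLY if you rebuild/update them regularly:
        ALTER DATABASE [NavisionDB] SET AUTO_CREATE_STATISTICS OFF;
        ALTER DATABASE [NavisionDB] SET AUTO_UPDATE_STATISTICS OFF;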
    Regards, Alain Krikilion
    No PM, please use the forum. || May the <SOLVED>-attribute be in your title!


  • xenon Member Posts: 30
    Thanks for the help thus far:

    I have a SQL maintenance plan to optimize and rebuild indexes, and I run the Navision "optimize tables" from time to time.

    The DB is 10 GB.

    The transaction log is on a pair of mirrored (RAID1) 15k drives; no other files
    The database files are on a RAID10 15k volume

    - how do I make sure the TL is not fragmented? Simply by defragging the volume with the database services stopped?

    Statistics are recalculated and indexes rebuilt each Sunday. I could change to each night, but will it matter if users are on the system? We have some users on night shift.

    - what is the "fill factor" of the server and how do I set it?

    The server is not swapping memory.

    I can set the auto create stats to FALSE, pending the answer to the above (!)

    The latest hotfix for NAV is on.

    I've not applied SQL SP2 yet, but I will.

    Many thanks for the help so far - appreciated.
  • DenSter Member Posts: 8,304
    Many performance issues are being addressed here, and some of the suggestions being made are good ones, but I want to make sure you don't forget about the deadlocks. There was a comment about releasing a production order when a sales line is entered, and that is probably a customization. I think someone should go in and check the code, debug the process that deadlocks, and rewrite the code to lock resources differently.
  • xenon Member Posts: 30
    I didn't mean the same user was releasing a prod order at the same time as entering a SO - I just meant that while some users are entering sales orders, another user elsewhere on the system is releasing a production order or posting production - all of which write to the reservation table.
  • DenSter Member Posts: 8,304
    Right, gotcha :mrgreen: Still though, in addition to addressing the performance suggestions, don't forget to have the deadlock (or deadlocks) itself (or themselves) debugged and see if there's a code issue that could be solved.
  • kriki Member, Moderator Posts: 9,094
    - how do I make sure the TL is not fragmented? Simply by defragging the volume with the database services stopped?
    => With defrag you can check whether the TL is fragmented. If it is, defragment it after stopping the DB services.
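
    Note that Windows defrag only shows file-level fragmentation. Internal log fragmentation (too many virtual log files, caused by repeated shrink/grow cycles) can be checked with a one-liner:

        -- Returns one row per virtual log file (VLF); a very high row
        -- count (hundreds or more) means the log is internally fragmented.
        DBCC LOGINFO;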

    -Statistics recalculated and index rebuilt each sunday. I could change to each night but will it matter if users are on the system? We have some users on night shift.
    => You first have to check how long a rebuild takes. If you have a window of that length when no one works, you can run it then.
    => If not, you cannot launch the rebuild, because it locks the tables. You could try an SQL defrag (reorganize) instead, which does NOT block users, but it doesn't update the SQL statistics either. In that case, you should also update the SQL statistics each night in that window and do a full rebuild once a week (see the sketch below).
    => Depending on what you can do, you can disable the autostats or not.
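
    A minimal sketch of that alternative, with a placeholder table name:

        -- REORGANIZE is an online operation: it defragments the index without
        -- the long-held locks a REBUILD takes, but does not refresh statistics.
        ALTER INDEX ALL ON [CRONUS$Reservation Entry] REORGANIZE;
        -- So refresh the statistics separately in the same window:
        UPDATE STATISTICS [CRONUS$Reservation Entry] WITH FULLSCAN;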

    - what is the "fill factor" of the server and how do I set it?
    The fill factor defines how full the pages are (think of a telephone book). If a page is 100% full, reading is at its best performance, but inserting a record takes a lot of time, because you need to insert a page in the book and that takes a while. If a page is 1% full, inserting new records is no problem at all, but reading will be very slow (like turning a page before you can see the next phone number). My experience is that 95% is better than 90% for a NAV DB (MS advises 90%, but the SQL default is 100% [which is the same as 0%]).
    => You can find it in the properties of the SQL server => Database Settings => Default index fill factor. You can change it without restarting the server. The rebuild will then use this value (if not overridden in the maintenance plan).
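
    You can also check how full the leaf pages actually are, and set the fill factor explicitly per rebuild rather than relying on the server default; a sketch with a placeholder table name:

        -- avg_page_space_used_percent is only populated in SAMPLED/DETAILED mode:
        SELECT OBJECT_NAME(object_id) AS table_name, index_id,
               avg_fragmentation_in_percent, avg_page_space_used_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED');
        -- Explicit fill factor for one table's indexes:
        ALTER INDEX ALL ON [CRONUS$Reservation Entry]
            REBUILD WITH (FILLFACTOR = 95);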



    All these performance improvements will also diminish the deadlock problems. If there are still deadlocks after this (and the index+SIFT tuning), there will be fewer of them, and one can start searching for them and solving them (in general, the searching is not easy and it is slow).

    PS: a deadlock is NOT the same as a lock timeout! The messages you get are completely different.
    Regards, Alain Krikilion
    No PM, please use the forum. || May the <SOLVED>-attribute be in your title!


  • DenSter Member Posts: 8,304
    kriki wrote:
    All these performance improvements will also diminish the deadlock problems.
    If the deadlock is because individual processes are relatively slow, yes, but I don't think that is a reason not to look at the code. There is also a possibility that the code is written in such a way that it causes deadlocks through the order in which resources are locked. You can write a bit of code that deadlocks in a split second, even on the fastest supercomputers. This is why in deadlock situations I always say the deadlocking process needs to be analyzed itself, in addition to increasing performance.
  • kriki Member, Moderator Posts: 9,094
    DenSter wrote:
    kriki wrote:
    All these performance improvements will also diminish the deadlock problems.
    If the deadlock is because individual processes are relatively slow, yes, but I don't think that is a reason not to look at the code. There is also a possibility that the code is written in such a way that it causes deadlocks through the order in which resources are locked. You can write a bit of code that deadlocks in a split second, even on the fastest supercomputers. This is why in deadlock situations I always say the deadlocking process needs to be analyzed itself, in addition to increasing performance.
    True, but when I have a performance problem (and generally that is the case), I leave that for last. It generally takes a lot of time to analyse/fix/test, and all that time the customers don't see a result (and mostly they want a positive result fast [at least in Italy]). In less than a day I can increase performance quite a bit, so they are happy because they have increased performance, and fewer deadlocks because of it.

    If I have a system that generally performs well but still has deadlocks, then I check the procedures involved to find the problem and fix it.
    Regards, Alain Krikilion
    No PM, please use the forum. || May the <SOLVED>-attribute be in your title!


  • xenon Member Posts: 30
    Thanks for all the help thus far - some good tips. On further investigation I find my maintenance plan to optimise indexes / stats has stopped running, with a server out-of-memory error (32 GB RAM....). I now see I need the latest cumulative hotfix for SQL to stop this.
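
    For anyone else chasing a plan that stops silently, a sketch of checking the Agent job history (this assumes the plan runs as a SQL Server Agent job):

        -- Most recent failed job steps, newest first (run_status 0 = failed):
        SELECT TOP 20 j.name, h.step_name, h.run_date, h.run_time, h.message
        FROM msdb.dbo.sysjobhistory AS h
        JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
        WHERE h.run_status = 0
        ORDER BY h.instance_id DESC;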

    I'll try out the suggestions and report back. 8)
  • DenSter Member Posts: 8,304
    kriki wrote:
    True, but when I have a performance problem (and generally that is the case), I leave that for last.
    In this case though, the original question was about deadlocks. When the issue is "my system doesn't run very fast, and occasionally I have a deadlock", then yes, by all means leave the deadlocks for last. But when the issue is "I am having deadlocks all over the place", you must address them specifically right away. You would still do the performance tweaks of course, because they could solve (or alleviate) the deadlocks, but you can't ignore them in that case.
  • xenon Member Posts: 30
    Update.

    Sincere thanks to everyone who responded. I have implemented all of the suggested changes and things are much improved. It's difficult to say exactly which enhancement made the biggest difference - with this machine being in a production environment I didn't really have time to mess about - I just took it off-line for a while and implemented the suggested changes.

    With regard to "Always Rowlock": shouldn't SQL 2005 do that natively? I.e. the SQL server itself will try to row-lock unless told otherwise?

    Anyway, I can have a bit of peace and quiet from the users now! =D>
  • David_Singleton Member Posts: 5,479
    xenon wrote:
    ... It's difficult to say exactly which enhancement made the biggest difference - with this machine being in a production environment I didn't really have time to mess about - I just took it off-line for a while and implemented the suggested changes....

    That is a HUGE mistake, and one that is going to come back and bite you in the future.

    You do not (and did not) have a performance problem with your system. You had many small issues that needed to be resolved (not one big one). You have fixed some of them and probably also made some worse.

    In addition, the biggest issue (the customized code) you probably have not even addressed.

    10 GB for 80 users in manufacturing is a small database, so either you are just starting out and it's going to grow, or there are some major design issues with the code.

    I fully understand the issue of "fix it now", but you really need to identify what was fixed and what was not, so that as the DB grows and the problems come back, you are able to solve them the next time.

    Sorry to be the bearer of bad news when all seems to be going well, but do be prepared for the future.
    David Singleton
  • xenon Member Posts: 30
    Hmm. The squeaky wheel gets the grease?

    I can't see how simply doing the optimisations is a *HUGE* mistake, given that they should be carried out anyway. Everyone agrees that regular rebuilding of indexes and stats is a good thing, and that's really all I've done. The only difference I've made to the actual database itself is changing the fill factor to 95%.

    If running database optimisations is "going to make matters worse" I'd be grateful if you could elaborate.

    How do you know the customised code is the biggest issue? I didn't even refer to any customised code.
  • David_Singleton Member Posts: 5,479
    xenon wrote:
    Hmm. The squeaky wheel gets the grease?

    I can't see how simply doing the optimisations is a *HUGE* mistake,

    noooo you completely missed the point. :(

    The huge mistake is that you don't know what worked. I am sure you did all the right steps, BUT you must ALWAYS make small changes and measure their effect. Then document each individual change so you know what worked and what didn't.

    In your case you should have made one change each night over a week, so that you would know which change worked. That way, next time you have issues, you know where to start, instead of just guessing.
    David Singleton
  • xenon Member Posts: 30
    I didn't completely miss the point; I understand exactly what you're saying. But, in summary, all I did was this:

    Set the fill factor to 95%
    Re-indexed the database
    Rebuilt the stats.

    That's it. I couldn't really be more granular. Not only that, setting the fill factor to 95% would have no effect without doing the other two steps anyway.

    If I had done stuff like moving my transaction logs to another drive, changing a load of codeunits, upgrading to SQL 2008 and so on, all at once, then I agree it would be impossible to say what the root cause and solution were, or even whether that 'solution' simply masked the problem for a time.

    But all I did was use the normal optimise routines (which were on a maintenance plan that had stopped running because of the memory leak fixed in the roll-up).
  • xenon Member Posts: 30
    As a further question, David / anyone - during the update-statistics task, the transaction log grew from the 200 MB (approx) that it was to 28 GB, nearly three times the database size.

    Question:

    1) Is that to be expected?
    2) Am I right in thinking NOT to shrink the TL?
    3) Now it has grown to that size (of which 99% is now free following a backup), will it have sufficient space to run the update stats next time, or will it get bigger still?

    Thanks #-o
  • kine Member Posts: 12,562
    The best approach is to switch the recovery model to Bulk-logged when the job starts, to prevent this. After the job has finished, you can switch back to the Full recovery model.

    Of course, you can shrink the file to a size that covers the needs of common DB usage. The free space will be reused within the file...
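
    A minimal sketch of that switch; the database name and backup path are placeholders:

        -- Before the statistics/index job (bulk operations are then minimally logged):
        ALTER DATABASE [NavisionDB] SET RECOVERY BULK_LOGGED;
        -- ... run the maintenance job here ...
        -- Afterwards, return to full logging:
        ALTER DATABASE [NavisionDB] SET RECOVERY FULL;
        -- Then take a log backup so point-in-time recovery applies again:
        BACKUP LOG [NavisionDB] TO DISK = 'D:\Backup\NavisionDB_log.bak';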
    Kamil Sacek
    MVP - Dynamics NAV
    My BLOG
    NAVERTICA a.s.