700 GB database size NAV - big SQL DB - millions of records

kamalbaaklini Member Posts: 37
edited 2015-03-26 in NAV Three Tier
Hello, we have a 700 GB Dynamics NAV 2009 R2 database running on SQL Server, of course.

There are 80+ million records in "Value Entry", "Item Ledger Entry", and related tables.

My problem is that running the "Adjust Cost - Item Entries" batch job takes too long to finish (16 hours) and blocks everything else on the system, so we have to run it during the weekend.

Question 1: Any ideas on how to make this batch job run faster?
Question 2: Has anyone else reached this database size in Dynamics NAV?

Thank you

Comments

  • bbrown Member Posts: 3,268
    Which table ("Item Ledger Entry" or "Value Entry") does the 80+ million refer to? I'd be very surprised to see a database where those tables have the same number of records. Although not entirely impossible, it's just not common.

    Without more details I can't really provide you any specific solutions, other than to search the performance related postings on this site and then post more specific details and questions on your situation. From my experience with a large database (while not quite 700 GB), that 16 hours to run Adjust Cost just seems way out of whack. But again, that's just a general observation based only on known information.

    The details of the large database to which I refer:

    Size: 520 GB used (645 GB total data file size)
    Item Ledger Entry: 11 million records
    Value Entry: 137 million records
    GL Entry: 264 million records
    Ledger Entry Dimensions: 1.7 Billion records

    Typical Adjust Cost runtime = 20 minutes (run daily)

    In terms of what's been done, I'd have to say nothing out of the ordinary. By that I mean nothing that is not discussed on this site many times, or in sources referenced on this site. I'm sure this is not the only example of a large database out there.
    There are no bugs - only undocumented features.
  • ara3n Member Posts: 9,255
    edited 2013-08-13
    What costing method? And are you using stockkeeping units?

    Are you using manufacturing?
    Ahmed Rashed Amini
    Independent Consultant/Developer


    blog: https://dynamicsuser.net/nav/b/ara3n
  • thegunzo Member Posts: 274
    Hi

    I have a database with "only" 40+ million records in "Value Entry". Over the years I have had to make a few changes and add-ons to the inventory adjustment.

    One example is a batch report based on the Item table; it resets "Cost is Adjusted" and runs the adjustment for one item at a time:
    Item - OnPreDataItem()
    PostingDate := TODAY;

    Item - OnAfterGetRecord()
    // Clear the flag so the adjustment will pick this item up again
    "Cost is Adjusted" := FALSE;
    MODIFY;

    // Take locks on the entry tables up front and exit if there is nothing to adjust
    // (Avg. Cost Adjmt. Entry Point may be empty, so only the FINDLAST is done to place the lock)
    ItemApplnEntry.LOCKTABLE;
    IF NOT ItemApplnEntry.FINDLAST THEN
      EXIT;
    ItemLedgEntry.LOCKTABLE;
    IF NOT ItemLedgEntry.FINDLAST THEN
      EXIT;
    AvgCostAdjmtEntryPoint.LOCKTABLE;
    IF AvgCostAdjmtEntryPoint.FINDLAST THEN;
    ValueEntry.LOCKTABLE;
    IF NOT ValueEntry.FINDLAST THEN
      EXIT;

    // Run the inventory adjustment (codeunit 5895 "Inventory Adjustment") for this one item only,
    // then COMMIT so the locks are released before the next item is processed
    Item2 := Item;
    Item2.SETRECFILTER;
    InvtAdjmt.SetProperties(FALSE,FALSE);
    InvtAdjmt.SetFilterItem(Item2);
    InvtAdjmt.MakeMultiLevelAdjmt;
    COMMIT;

    Item - OnPostDataItem()
    UpdateItemAnalysisView.UpdateAll(0,TRUE);


    Some others are:
    - Walk the application path and check the adjustment
    - Reopen all applications
    - Application batch
    - Repair cost amount by walking the application path, where the standard adjustment does not do the job
    ________________________________
    Gunnar Gestsson
    Microsoft Certified IT Professional
    Dynamics NAV MVP
    http://www.dynamics.is
    http://Objects4NAV.com
  • kamalbaaklini Member Posts: 37
    edited 2013-08-13
    Hi, thank you. I am a little bit relieved to see someone with a somewhat similar DB size.

    My DB is 551 GB used, with a total data file size of 703 GB.
    Item Ledger Entry: 55 million records
    Value Entry: 86 million records
    GL Entry: 34 million records
    Ledger Entry Dimensions: 303 Million records

    Item table: 98,000 items, of which only 40,000 are active

    Using the FIFO costing method.

    Would you please share the hardware specifications you are using: RAM, how many disks and files for SQL, how many CPUs...

    Thanks!
    bbrown wrote:
    Which table ("Item Ledger Entry" or "Value Entry") does the 80+ million refer to? I'd be very surprised to see a database where those tables have the same number of records. Although not entirely impossible, it's just not common.

    Without more details I can't really provide you any specific solutions, other than to search the performance related postings on this site and then post more specific details and questions on your situation. From my experience with a large database (while not quite 700 GB), that 16 hours to run Adjust Cost just seems way out of whack. But again, that's just a general observation based only on known information.

    The details of the large database to which I refer:

    Size: 520 GB used (645 GB total data file size)
    Item Ledger Entry: 11 million records
    Value Entry: 137 million records
    GL Entry: 264 million records
    Ledger Entry Dimensions: 1.7 Billion records

    Typical Adjust Cost runtime = 20 minutes (run daily)

    In terms of what's been done, I'd have to say nothing out of the ordinary. By that I mean nothing that is not discussed on this site many times, or in sources referenced on this site. I'm sure this is not the only example of a large database out there.
  • kamalbaaklini Member Posts: 37
    ara3n wrote:
    What costing method? And are you using stockkeeping units?

    Are you using manufacturing?


    Not using SKUs, and not using manufacturing.

    We are using the LS Retail add-on, which is where the sales entries are coming from...
  • davmac1 Member Posts: 1,283
    Why not share your hardware specs with us?
    Version of SQL Server
    RAM assigned to SQL Server
    disk drive configuration

    Have you tuned your indexes?
    Have you run resource monitor while this process is running to see which resources are most heavily used?
    Are you running this task on the SQL Server?
    If not, what is your network traffic load?
    How many active items do you have?
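
    On the index question above: if you are not sure where to start, SQL Server's missing-index DMVs give a rough list of candidates. A minimal sketch (run it in the NAV database; treat the numbers as hints to review, not as indexes to create blindly):

    -- Candidate indexes SQL Server itself has recorded since the last service restart
    SELECT TOP 25
        d.statement AS [table],
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_total_user_cost * s.avg_user_impact * s.user_seeks AS rough_benefit
    FROM sys.dm_db_missing_index_details d
    JOIN sys.dm_db_missing_index_groups g ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
    WHERE d.database_id = DB_ID()
    ORDER BY rough_benefit DESC;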
  • bbrown Member Posts: 3,268
    kamalbaaklini wrote:
    Would you please share the hardware specifications you are using: RAM, how many disks and files for SQL, how many CPUs...

    Processors: (4) Quad-Core AMD 3.10 Ghz
    Memory: 192 GB (server supports 2 SQL Instances. SQL Max Memory is set to 90 GB on each)
    Windows O/S: Windows 2008 R2 - Enterprise
    SQL: SQL 2008 Enterprise

    Disk arrays (all of the following are located on the SAN; I don't list the O/S array, which is on the server itself):

    Array 1: 4x146 GB RAID 10 SQL System databases
    Array 2: 14x146 GB RAID 10 SQL Data 1
    Array 3: 14x146 GB RAID 10 SQL Data 2
    Array 4: 14x146 GB RAID 10 SQL Data 3
    Array 5: 8x146 GB RAID 10 SQL Transaction log (live DB only)
    Array 6: 4x146 GB RAID 10 SQL Transaction log (others)
    Array 7: 5x450 GB RAID 5 Backups

    The NAV database has 3 files in "Data File Group 1", each on a separate array (2, 3, 4). Those arrays also host databases for the other SQL instance, although those see only light to moderate use.
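
    For reference, spreading the database over additional files is plain ALTER DATABASE work. A minimal sketch with made-up database, file, and path names (the filegroup NAV creates is normally called "Data Filegroup 1", but check yours under Database Properties > Files):

    -- Add another data file on a new array so SQL Server can spread I/O across it
    ALTER DATABASE [NavLive]
    ADD FILE
    (
        NAME       = N'NavLive_Data4',               -- logical name (example)
        FILENAME   = N'H:\SQLData\NavLive_Data4.ndf',
        SIZE       = 100GB,
        FILEGROWTH = 10GB
    )
    TO FILEGROUP [Data Filegroup 1];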

    There is also a second identical server to which the live DB is mirrored.

    Don't look at this and think hardware is the whole solution; it's just part of the puzzle. And yes, this config might be considered a bit extreme in some ways, but that's another topic. We've done some index tuning and we run index and statistics updates daily. We pay close attention to the performance impact when designing modifications, and we have used ADO in a few areas to help boost performance.
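
    A minimal sketch of that kind of nightly maintenance (not necessarily what runs here; the company prefix in the table names is made up - NAV names its SQL tables [Company$Table Name]):

    -- Reorganize the indexes on the big posting tables, then refresh statistics
    ALTER INDEX ALL ON [MyCompany$Value Entry] REORGANIZE;
    ALTER INDEX ALL ON [MyCompany$Item Ledger Entry] REORGANIZE;
    ALTER INDEX ALL ON [MyCompany$Item Application Entry] REORGANIZE;

    EXEC sp_updatestats;   -- refresh out-of-date statistics across the whole database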
    There are no bugs - only undocumented features.
  • skulla Member Posts: 140
    You can replicate the Adjust Cost - Item Entries batch job in SQL and run it as a SQL Agent job; we did this for a few of our clients and the performance increased drastically. Although we do not generally recommend updating or creating records directly from SQL, in this scenario we did it to improve performance.
  • davmac1 Member Posts: 1,283
    I assume these are all 64 bit editions?

    One problem I ran into with NAV and SQL Server was a case where it turned out I had a filter that covered 254,000 rows.
    It built a huge tempdb, and in this case it never finished; I had to kill the SQL process. I don't think it got beyond the initial FIND('-').
    That is why you should look at all the performance tools and see what is going on.
    It could be something as simple as a bad index, with SQL Server performing table scans for each entry processed.
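
    A quick way to check for that is the index usage DMV: a large and growing user_scans count on a 100-million-row table is the red flag. A sketch, assuming the usual NAV [Company$Table] naming:

    -- How are the indexes on the big tables actually being used since the last restart?
    SELECT OBJECT_NAME(us.object_id) AS table_name,
           i.name                    AS index_name,
           us.user_seeks,
           us.user_scans,
           us.user_lookups,
           us.user_updates
    FROM sys.dm_db_index_usage_stats us
    JOIN sys.indexes i ON i.object_id = us.object_id AND i.index_id = us.index_id
    WHERE us.database_id = DB_ID()
      AND OBJECT_NAME(us.object_id) LIKE '%$Value Entry'
    ORDER BY us.user_scans DESC;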
  • bbrown Member Posts: 3,268
    davmac1 wrote:
    ...It could be something as simple as a bad index, with SQL Server performing table scans for each entry processed.

    Speaking of bad indexes, one of the places we tuned the system I've mentioned was the part where it retrieves the "Value Entries" for a specific ILE to determine the current posted cost and calculate the adjustment. The indexes were just wrong for such a large table.
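
    For illustration, the index that lookup needs looks roughly like this on the SQL side. In NAV you would normally add or enable the corresponding key in the table designer so NAV maintains it, rather than creating it directly in SQL; the table, index, and included columns below are placeholders:

    -- Index to support "fetch all value entries for one item ledger entry"
    CREATE NONCLUSTERED INDEX [IX_ValueEntry_ILENo]
    ON [MyCompany$Value Entry] ([Item Ledger Entry No_])
    INCLUDE ([Entry Type], [Cost Amount (Actual)], [Cost Amount (Expected)], [Valued Quantity]);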
    There are no bugs - only undocumented features.
  • kamalbaaklini Member Posts: 37
    davmac1 wrote:
    Why not share your hardware specs with us?
    Version of SQL Server
    RAM assigned to SQL Server
    disk drive configuration

    Have you tuned your indexes?
    Have you run resource monitor while this process is running to see which resources are most heavily used?
    Are you running this task on the SQL Server?
    If not, what is your network traffic load?
    How many active items do you have?


    The hardware is:

    64 GB of RAM, upgrading soon to 126 GB
    5 disks and 5 files for SQL
    The batch runs on the SQL Server machine
    40,000 active items

    Can you please explain how to use Resource Monitor while this process is running, to see which resources are most heavily used?
  • bbrown Member Posts: 3,268
    The hardware information is too vague. What are the disks and how are they configured? See the hardware info I posted for a guide to the level of detail we would like to have.
    There are no bugs - only undocumented features.
  • kamalbaaklini Member Posts: 37
    bbrown wrote:
    The hardware information is too vague. What are the disks and how are they configured? See the hardware info I posted for a guide to the level of detail we would like to have.


    1 array of 5 disks, RAID 5, 300 GB each, 15K RPM
    1 array of 4 disks, RAID 5, 600 GB each, 10K RPM

    The OS is on a different hard drive.
  • bbrown Member Posts: 3,268
    Well, based on the information so far, it would seem that hardware is at least part of the issue. RAID 5 is probably the worst thing to use, especially with small arrays like these: every random write on RAID 5 costs four physical I/Os (read old data, read old parity, write new data, write new parity), which is painful for a write-heavy workload like NAV posting. But a system of this size will likely still need some performance tuning in the areas of database and code, even with good (or even over-spec'd) hardware.
    There are no bugs - only undocumented features.
  • davmac1 Member Posts: 1,283
    Resource Monitor can be opened from a link in Task Manager.
    You can also run Perfmon.
    Jorg Stryk has a book called "The NAV/SQL Performance Guide" - you can check whether it is still available.
    There are a variety of SQL monitoring and performance tools that you can find with the help of Google.
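
    To add to that: SQL Server's own wait statistics give a quick first answer about whether the server is mostly waiting on disk (PAGEIOLATCH_*), on locks (LCK_M_*), or on CPU (SOS_SCHEDULER_YIELD) while the batch runs. A rough sketch (the excluded wait types are just common background noise):

    -- Top waits accumulated since the last service restart
    SELECT TOP 10
           wait_type,
           waiting_tasks_count,
           wait_time_ms / 1000 AS wait_time_sec,
           signal_wait_time_ms / 1000 AS signal_wait_sec
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'SQLTRACE_BUFFER_FLUSH',
                            'WAITFOR', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH')
    ORDER BY wait_time_ms DESC;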
  • Miklos_Hollender Member Posts: 1,598
    skulla wrote:
    You can replicate the Adjust Cost - Item Entries batch job in SQL and run it as a SQL Agent job; we did this for a few of our clients and the performance increased drastically. Although we do not generally recommend updating or creating records directly from SQL, in this scenario we did it to improve performance.

    This sounds incredibly difficult. Adjust Cost is very difficult to understand, even if you understand FIFO logic and the whole cost-forwarding principle; 90% of the people I know are baffled by the whole idea, because they are used to doing FIFO backward at year end, not forward. There are so many exceptions and special cases in that codeunit that I don't think it is really possible for anyone to fully understand it.

    Frankly, in this case I would just go with standard costing and make one backwards-FIFO report at year end (i.e. take current stock and match it backwards in time to purchase invoices, which is easy) to determine the closing stock value, then book that manually...
  • kamalbaaklini Member Posts: 37
    bbrown wrote:
    The hardware information is too vague. What are the disks and how are they configured? See the hardware info I posted for a guide to the level of detail we would like to have.


    1 array of 5 disks, RAID 5, 300 GB each, 15K RPM
    1 array of 4 disks, RAID 5, 600 GB each, 10K RPM

    The OS is on a different hard drive.


    OK, now I have the following:

    OS on separate disks
    3 disks x 2 on RAID 10

    An array of 5 disks on RAID 5, shared with another BI application

    Planning to add an array of 4 x 2 disks on RAID 5
  • kamalbaaklini Member Posts: 37
    bbrown wrote:
    Well, based on the information so far, it would seem that hardware is at least part of the issue. RAID 5 is probably the worst thing to use, especially with small arrays like these. But a system of this size will likely still need some performance tuning in the areas of database and code, even with good (or even over-spec'd) hardware.


    Yep.

    I am increasing the vdisks and have put the SQL log file on a dedicated one. Performance is much better now.
    Still need to work on the keys...
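
    For anyone doing the same, relocating the log file to a new drive is only a couple of statements plus a file copy. A sketch with made-up database, logical file, and path names:

    -- Point the existing log file at the new dedicated drive
    ALTER DATABASE [NavLive]
        MODIFY FILE (NAME = N'NavLive_Log', FILENAME = N'L:\SQLLogs\NavLive_Log.ldf');

    ALTER DATABASE [NavLive] SET OFFLINE WITH ROLLBACK IMMEDIATE;
    -- ...move the physical .ldf file to L:\SQLLogs\ in the file system...
    ALTER DATABASE [NavLive] SET ONLINE;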
  • bbrown Member Posts: 3,268
    ...I am increasing the Vdisks...


    This is the first time you've mentioned this was a virtual server. Is the hardware you mentioned the host?
    There are no bugs - only undocumented features.
  • kamalbaaklini Member Posts: 37
    bbrown wrote:
    ...I am increasing the Vdisks...


    This is the first time you've mentioned this was a virtual server. Is the hardware you mentioned the host?


    It is not a virtual server!!
    "Vdisk" is the naming used in the RAID setup; another name for it is "disk array".
  • Miklos_Hollender Member Posts: 1,598
    The weird thing is that the whole point of Landsteinar Retail - or at least it was back when it was still called that and I implemented it around 2005 - was NOT to have to create an item, value, customer, etc. ledger entry for every POS transaction, but to have them summed up, for example one per shop per item. You would then analyse the more detailed stuff, like sales per hour or sales per till, from its own entry tables, not the standard ones. I guess this is configurable.

    I would recommend getting in contact with LS consultants about how to change the settings so that your ledgers are more summarized, and also writing batch jobs (and/or checking the standard date compression batch jobs) to sum them up a bit.

    It's simply not normal to have such sizes, in my opinion. I find our own 12 GB already too much: I can't just copy-paste entries into Excel and make an ad-hoc inventory value report, because it takes too long. This is not good.
  • David_Singleton Member Posts: 5,479
    kamalbaaklini,

    Actually, your implementation is more common than you may think. We have a couple of LS Retail systems in the Gulf that are larger than yours (~1 TB). There is a lot that can be done: faster hardware, tuning, etc. But in the end it comes down to the sheer volume of data that needs to be processed to adjust costs. That means either modifying the Adjust Cost routine, or reducing (by compression) the number of records processed.

    But you can feel safe in knowing that you are not unique, and there are others out there with similar LS Retail systems of about the same size as yours.
    David Singleton
  • vgvozdev Member Posts: 29
    Hi,

    We finished the implementation of our solution for a company with a ~2 TB database a few months ago: >200M item entries, growing by 200K-500K entries a day. Do not worry - NAV can handle much bigger databases.
    http://www.microsoft.com/en-us/dynamics/customer-success-stories-detail.aspx?casestudyid=710000002119

    Try to understand which items take a long time to adjust. NAV has a bug in the cost calculation that causes it to recalculate all entries for an item in some situations (this was fixed in 2013 R2). If you have some commonly used items with many entries, you will see that some items take much longer to calculate than others.
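
    One quick way to spot those items is to count value entries per item directly in SQL. A sketch, with the company prefix as a placeholder:

    -- Which items own the most value entries? They are the usual suspects when
    -- one Adjust Cost run takes far longer than another.
    SELECT TOP 20
           [Item No_],
           COUNT(*) AS value_entries
    FROM [MyCompany$Value Entry]
    GROUP BY [Item No_]
    ORDER BY COUNT(*) DESC;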
    Valentin Gvozdev
    Adaptive Business Solutions
    http://justjewelrysoftware.com
  • kamalbaaklini Member Posts: 37
    vgvozdev wrote:
    Hi,

    We finished the implementation of our solution for a company with a ~2 TB database a few months ago: >200M item entries, growing by 200K-500K entries a day. Do not worry - NAV can handle much bigger databases.
    http://www.microsoft.com/en-us/dynamics/customer-success-stories-detail.aspx?casestudyid=710000002119

    Try to understand which items take a long time to adjust. NAV has a bug in the cost calculation that causes it to recalculate all entries for an item in some situations (this was fixed in 2013 R2). If you have some commonly used items with many entries, you will see that some items take much longer to calculate than others.

    That is a big relief, knowing that there are other situations like ours. Really :)
    Regarding the fix to the cost calculation in 2013 R2: is it in the Adjust Cost batch routine? And is it possible to adapt it to the 2009 R2 version?
  • rmv_RU Member Posts: 119
    vgvozdev wrote:
    Hi,
    We finished the implementation of our solution for a company with a ~2 TB database a few months ago: >200M item entries, growing by 200K-500K entries a day. Do not worry - NAV can handle much bigger databases.
    http://www.microsoft.com/en-us/dynamics/customer-success-stories-detail.aspx?casestudyid=710000002119
    Sounds great! We are focused on the same issue - allowing multi-posting wherever possible. Do you divide the "Entry No." ranges by non-sharable objects (locations, for example) or use the AutoIncrement property?
    Looking for part-time work.
    Nav, T-SQL.
  • kamalbaaklini Member Posts: 37
    bbrown wrote:
    Well, based on the information so far, it would seem that hardware is at least part of the issue. RAID 5 is probably the worst thing to use, especially with small arrays like these. But a system of this size will likely still need some performance tuning in the areas of database and code, even with good (or even over-spec'd) hardware.

    Is a RAID 0+1 array of 8 disks (4x2) considered good, or not enough?
    Then I can start building my arrays with that profile.
  • Miklos_Hollender Member Posts: 1,598
    David_Singleton wrote:
    kamalbaaklini,

    Actually, your implementation is more common than you may think. We have a couple of LS Retail systems in the Gulf that are larger than yours (~1 TB). There is a lot that can be done: faster hardware, tuning, etc. But in the end it comes down to the sheer volume of data that needs to be processed to adjust costs. That means either modifying the Adjust Cost routine, or reducing (by compression) the number of records processed.

    But you can feel safe in knowing that you are not unique, and there are others out there with similar LS Retail systems of about the same size as yours.

    David, it's been 7 years since I last touched Landsteinar or LS Retail, but even back then the whole idea was that while anyone could make a nice touchscreen form that posts customer and item ledger entries and call it a POS solution, the whole point of a _real_ POS solution is that it has its own entry tables, and the item and customer entries are then generated from those by a - if I remember correctly, configurable - compression, such as 1 entry per item per till per day, i.e. avoiding making these standard tables huge. One could put the retail entry tables on a different hard disk or something and then, for all practical purposes, from the viewpoint of the standard tables, have a small system. This was the general idea even then...
  • kamalbaaklini Member Posts: 37
    An update: the database has now reached 1.2 terabytes.

    It is working fine after adding disks and files and removing some indexes.

    But because of our business expansion and 24-hour working shifts, I cannot find a big enough time gap to run the Adjust Cost batch :( (currently it runs 3 hours per day, and 10 hours at the weekend, to cover the whole item range).

    We now have 4 years of data, so my plan now is to start with a fresh new database...