speed up filling DMC_INDXCL

Former Member
0 Kudos

Hello,

We are trying to replicate a transparent table with 2.7 billion records using reading type 5.

We followed the blog

http://scn.sap.com/community/replication-server/blog/2013/09/26/how-to-improve-the-initial-load-by-r...

and entered the record in table IUUC_PERF_OPTION as described there.

The problem is that the job which fills table DMC_INDXCL is very slow (about 500 million records per 24 hours), so it will take 5-6 days just to fill the index table. Is there a way to make it faster?

Thanks,

Amir

Accepted Solutions (1)

Former Member
0 Kudos

Hello Amir,
the intention of the record you entered in table IUUC_PERF_OPTION is to facilitate a parallelization of the job that fills table DMC_INDXCL. If multiple jobs are running to do this (say, 6 jobs), you should be done within about one day (2.7 billion records / (6 jobs × ~500 million records per day) is roughly 0.9 days). However, it seems that this did not work in your case? Then we would need to understand what went wrong. Did you find any error messages in the application logs (in the SLT system, use transaction SLG1 for object DMC to look for error messages)?
One reason why this might fail could be missing authorizations. Another reason might be that we exceed the number 2^31 here, which is a limit in some cases (depending on the DB release and/or SAP Basis release; see note 1766433, "Open SQL restrictions for very large tables", for details).
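As a quick sanity check of the numbers discussed here, the following minimal Python sketch works through the arithmetic (the 2.7 billion and 500 million per day figures come from the question above; the 6 parallel jobs are only an example):

```python
# Back-of-the-envelope check of the numbers discussed in this thread.
# Assumptions: 2.7 billion source records (from the question) and roughly
# 500 million records written to DMC_INDXCL per job per day.

TOTAL_RECORDS = 2_700_000_000
RECORDS_PER_JOB_PER_DAY = 500_000_000

# Single job: how long does filling DMC_INDXCL take?
print(TOTAL_RECORDS / RECORDS_PER_JOB_PER_DAY)        # 5.4 days

# With 6 parallel jobs (parallelization via IUUC_PERF_OPTION):
print(TOTAL_RECORDS / (6 * RECORDS_PER_JOB_PER_DAY))  # 0.9 days

# The 2^31 record limit mentioned above (see note 1766433):
LIMIT = 2 ** 31
print(LIMIT)                  # 2147483648
print(TOTAL_RECORDS > LIMIT)  # True -> this table exceeds the limit
```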

In this case, it might be worthwhile to consider yet another approach to parallelize this step. The blog http://scn.sap.com/community/replication-server/blog/2014/02/25/how-to-filter-on-the-initial-load-pa... describes an approach to parallelize the REPLICATION, but you can also use it to parallelize the INITIAL LOAD (more precisely, this first step, prior to the actual load, that fills table DMC_INDXCL). You can proceed as described in the blog section "Process: Parallelize Replication" with only one deviation: VALIDITY needs to be set to 2 (initial load) instead of 3 (replication). The prerequisite is that you know, for the first key field of the table (after the client field), how the values are distributed, so that you can define subsets.
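To illustrate the prerequisite, here is a minimal, hypothetical sketch of how such subsets could be computed for a numeric first key field. The field name, the number range, and the number of subsets are assumptions for illustration only; the actual filter conditions are maintained in the SLT system as described in the blog (with VALIDITY = 2 for the initial load):

```python
# Minimal sketch: compute equally sized value ranges for the first key field
# (after the client field) so that the fill of DMC_INDXCL can be split into
# several parallel subsets. The field name BELNR and the number range below
# are purely hypothetical; in a real system they would come from your
# knowledge of the value distribution.

def split_number_range(low: int, high: int, subsets: int):
    """Split [low, high] into `subsets` contiguous, roughly equal ranges."""
    total = high - low + 1
    size, rest = divmod(total, subsets)
    ranges = []
    start = low
    for i in range(subsets):
        end = start + size - 1 + (1 if i < rest else 0)
        ranges.append((start, end))
        start = end + 1
    return ranges

if __name__ == "__main__":
    # Hypothetical example: document numbers 1 .. 2,700,000,000, 6 subsets.
    for lo, hi in split_number_range(1, 2_700_000_000, 6):
        # Each boundary pair would become one filter condition, e.g. a
        # "BELNR between lo and hi" style range (field name assumed).
        print(f"{lo:>12} .. {hi:>12}")
```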
Kind regards,
Guenter Weber

Former Member
0 Kudos

Hi Gunter,

We did exceed the 2^31 record limit, but our kernel version is higher than the one referenced in note 1766433.

What I've found now, when checking the dumps in the source system, is that a short dump occurred when the job filling table DMC_INDXCL started.

The dump is enclosed.

Thanks,

Amir

Former Member
0 Kudos

Hello Amir,
indeed this is very suspicious; we would of course not expect this job to terminate with a short dump. Please open a service ticket for this issue and enclose the complete short dump. Only if we can see all the details of this short dump / SQL exception can we understand what actually happened.
Kind regards,

Guenter

Former Member
0 Kudos

Hi Guenter,

You were right about the 2^31 limitation. The job was cancelled with a communication error after about 2.147 billion records, which is exactly 2^31 (2,147,483,648)!

Is there a way to continue filling the index table after this dump occurred?

Thanks!

Amir

Former Member
0 Kudos

Hi Amir,
unfortunately there is no way to make this processing continue from where it stopped. The only other approach I can imagine now is the one I mentioned before, see http://scn.sap.com/community/replication-server/blog/2014/02/25/how-to-filter-on-the-initial-load-pa...
In case the first key field of your huge table is a document number, and you have some knowledge about the value distribution (for example, multiple number ranges, or just one number range of which you know how many numbers have been used so far), you can manually set up value ranges - which is exactly what the report that failed with the short dump should have done. If the first key field is an organizational unit, like company code, and you know how many records you have for each organizational unit, you can likewise define such a "manual" parallelization.
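To make the company code variant more concrete, here is a minimal, hypothetical sketch that distributes organizational units into roughly equally sized subsets based on their record counts. The company codes and counts are invented example data; the resulting subsets would then be entered manually as filter ranges as described above:

```python
# Minimal sketch of the "manual parallelization" idea for the case where the
# first key field is an organizational unit such as company code. The company
# codes and record counts below are invented example data; in a real system
# they would come from counting the records per company code.

from typing import Dict, List

def balance_into_buckets(counts: Dict[str, int], buckets: int) -> List[List[str]]:
    """Greedily assign each value to the currently smallest bucket so that the
    record counts per bucket (and thus per parallel load) are roughly equal."""
    assignment: List[List[str]] = [[] for _ in range(buckets)]
    totals = [0] * buckets
    # Assigning the largest counts first gives a better greedy balance.
    for value, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        idx = totals.index(min(totals))
        assignment[idx].append(value)
        totals[idx] += count
    return assignment

if __name__ == "__main__":
    # Hypothetical record counts per company code.
    per_company_code = {"1000": 900_000_000, "2000": 650_000_000,
                        "3000": 600_000_000, "4000": 350_000_000,
                        "5000": 200_000_000}
    for i, codes in enumerate(balance_into_buckets(per_company_code, 3), start=1):
        # Each subset would correspond to one manually defined filter range.
        print(f"subset {i}: {codes}")
```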
Kind regards,
Guenter

Answers (0)