on 10-22-2014 8:01 AM
Hello,
We are trying to replicate a transparent table with 2.7 billion records using reading type 5.
We followed the blog and did the following:
The problem is that the job which fills table DMC_INDXCL is very slow (about 500 million records per 24 hours), so it will take 5-6 days just to fill the index table. Is there a way to make it faster?
Thanks,
Amir
Hello Amir,
the intention of the record you entered in table IUUC_PERF_OPTION is to facilitate parallelization of the job that fills table DMC_INDXCL. If multiple jobs are running to do this (say, 6 jobs), you should be done within one day. However, it seems this did not work in your case, so we would need to understand what went wrong. Did you find any error messages in the application logs? (In the SLT system, use transaction SLG1 with object DMC to look for error messages.)
One reason why this might fail could be missing authorizations. Another reason might be that we exceed the number 2^31 (about 2.15 billion), which is a limit in some cases depending on the DB release and/or SAP Basis release; with 2.7 billion records, this limit would indeed be exceeded. See note 1766433 ("Open SQL restrictions for very large tables") for details.
In this case, it might be worthwhile to consider yet another approach to parallelize this step. The blog http://scn.sap.com/community/replication-server/blog/2014/02/25/how-to-filter-on-the-initial-load-pa... describes an approach to parallelize the REPLICATION, but you can use it also to parallelize the INITIAL LOAD (more precisely, this first step, prior to the actual load, which fills table DMC_INDXCL). You can proceed as described in the blog section "Process: Parallelize Replication" with only one deviation: VALIDITY needs to be set to 2 (initial load) instead of 3 (replication). The prerequisite is that you know, for the first key field of the table (after the client field), how the values are distributed, so that you can define subsets.
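To illustrate the idea of defining subsets (a hypothetical Python sketch, not SLT code; the function name and the example numbers are made up for illustration): given a known value range for the first key field, split it into contiguous subranges, one per parallel job.

```python
# Hypothetical sketch: split a known key-value range into N contiguous
# subranges, one per parallel initial-load job. Not SLT code.
def split_ranges(low, high, n_jobs):
    """Return n_jobs (from, to) pairs covering [low, high] with no gaps."""
    total = high - low + 1
    size = total // n_jobs
    ranges = []
    start = low
    for i in range(n_jobs):
        # the last range absorbs any remainder from integer division
        end = high if i == n_jobs - 1 else start + size - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# Example: 2.7 billion key values split across 6 jobs
for lo, hi in split_ranges(1, 2_700_000_000, 6):
    print(lo, "..", hi)
```

Each resulting (from, to) pair would correspond to one filter entry, so that each job processes only its own slice of the table.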
Kind regards,
Guenter Weber
Hello Amir,
indeed this is very suspicious; we would of course not expect this job to terminate with a shortdump. Please open a service ticket for this issue and attach the complete shortdump. Only if we can see all details of this shortdump / SQL exception can we understand what actually happened.
Kind regards,
Guenter
Hi Amir,
unfortunately, there is no way to make this processing continue from where it stopped. The only other approach I can imagine is the one I mentioned before, see http://scn.sap.com/community/replication-server/blog/2014/02/25/how-to-filter-on-the-initial-load-pa... In case the first key field of your huge table is a document number and you have some knowledge of the value distribution (for example, multiple number ranges, or just one number range of which you know how many numbers have been used so far), you can manually set up value ranges, which is exactly what the report that failed with the shortdump should have done. If the first key field is an organizational unit, like company code, and you know how many records you have for each organizational unit, you could likewise define such a "manual" parallelization.
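To illustrate the "manual" parallelization by organizational unit (again a hypothetical Python sketch, not anything SLT provides; the function name and figures are invented): if the record count per company code is known, the units can be distributed across jobs so that the load per job is roughly balanced.

```python
# Hypothetical sketch: distribute organizational units (e.g. company codes)
# across N parallel jobs so record counts per job are roughly balanced.
# Uses a greedy "largest unit to least-loaded job" strategy.
import heapq

def balance(counts, n_jobs):
    """counts: dict unit -> record count. Returns [(total, [units]), ...]."""
    heap = [(0, i) for i in range(n_jobs)]   # (current total, job index)
    heapq.heapify(heap)
    buckets = [[] for _ in range(n_jobs)]
    totals = [0] * n_jobs
    # assign the biggest units first to keep the final totals close
    for unit, cnt in sorted(counts.items(), key=lambda kv: -kv[1]):
        total, i = heapq.heappop(heap)
        buckets[i].append(unit)
        totals[i] = total + cnt
        heapq.heappush(heap, (totals[i], i))
    return list(zip(totals, buckets))

# Example: record counts (in millions) per company code, across 2 jobs
jobs = balance({'1000': 9, '2000': 7, '3000': 5, '4000': 5, '5000': 4}, 2)
for total, units in jobs:
    print(total, units)
```

Each bucket would then become one manually defined filter range, replacing the subset calculation that the failed report was supposed to perform.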
Kind regards,
Guenter