
There is already an SAP Note about this scenario; however, it has been the root cause of so many OSS messages that I think an SDN blog entry may be useful.

 

What also makes it relevant is that this limitation still surprises some consultants. Some don't believe it is the root cause until they stop the load and watch the BW system suddenly start working properly again.

 

SAP Note 892513 states that you should not have more than 1,000 data packages in a single loading request in BW. It also instructs you not to have more than 10 million records in a single request.

 

892513 - Consulting: Performance: Loading data, no of pkg, req size.
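
As a rough sanity check on those two limits: 10 million records divided by 1,000 packages implies an average package size of at least 10,000 records. The sketch below (my own illustration, not part of the note; the report name and defaults are made up) estimates how many data packages a load will produce for a given source volume and package size:

REPORT z_estimate_packages.

PARAMETERS: p_recs TYPE p DEFAULT 10000000, " expected records in the source
            p_size TYPE i DEFAULT 10000.    " package size in records

DATA lv_packages TYPE i.

START-OF-SELECTION.
* Number of packages = records / package size, rounded up.
  lv_packages = p_recs DIV p_size.
  IF p_recs MOD p_size > 0.
    lv_packages = lv_packages + 1.
  ENDIF.
  WRITE: / 'Expected data packages:', lv_packages.
  IF lv_packages > 1000.
    WRITE: / 'Above the ~1,000-package guideline of SAP Note 892513:',
           / 'split the load or increase the package size.'.
  ENDIF.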

 

I can say, though, that this SAP Note is not dramatic enough compared to some situations I have seen as an SAP Support Engineer. Maybe I can help by illustrating how a BW system can end up when a request has too many data packages and ALL the loads start to fail:

 

 

 

In case that didn't seem chaotic enough, this video may give you a better idea of what a monster request can do to a BW system:

 

 

 

I have seen a situation where every new request in BW stayed yellow, showing no progress. Everything else in the system looked fine, so there was nothing obvious to correct. The culprit was a 2LIS_0X_XXXXX extraction that had been running for around 4 days, stacking up data packages. It took 3 days before it started to make the other requests get stuck and performance unbearable. This kind of load can occupy many resources: RFCs, work processes, locks, etc., slowly disrupting the whole load scheduling strategy.

 

I recently analyzed another situation where a load had been running for more than 7 days with red status. Nobody cared about it because they assumed it was no longer running. That was not the case: the monster load was already around its 14,000th data package, holding exclusive locks on the monitor tables. Any other load that was triggered made no progress. The extraction jobs finished successfully on the ECC side, but everything after that was stuck. I would not have figured this out either if I hadn't taken a look at SM12 and confirmed in SM66 that a work process was still inserting records into the PSA table.
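
If you suspect something similar, SM12 is the quickest way to confirm it. For completeness, locks can also be read programmatically through the standard function module ENQUEUE_READ; the sketch below is only a rough illustration, and the monitor table name RSMONICDP is an assumption based on what this particular case showed in SM12:

REPORT z_check_monitor_locks.

DATA: lt_enq TYPE STANDARD TABLE OF seqg3,
      ls_enq TYPE seqg3.

* Read the lock entries held on one of the BW monitor tables.
* An empty GUNAME returns the locks of all users.
CALL FUNCTION 'ENQUEUE_READ'
  EXPORTING
    gclient = sy-mandt
    gname   = 'RSMONICDP'  " monitor table to inspect (assumption)
    guname  = space
  TABLES
    enq     = lt_enq.

LOOP AT lt_enq INTO ls_enq.
  " Owner, lock mode and lock argument of each entry:
  WRITE: / ls_enq-guname, ls_enq-gmode, ls_enq-garg.
ENDLOOP.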

 

 

What can you do if you have such a large request running and it is affecting other loads?

 

Unfortunately, you will have to kill it!

 

If you are able to estimate the number of records currently present in the source, you may decide to take the gamble and wait for it to finish. However, note that the performance degrades along a curve! The last 10 data packages may take far longer than the first 100. Meanwhile, other loads and deltas will be on the way, stacking up or getting bigger.
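
To make "degrades along a curve" concrete, here is a back-of-the-envelope sketch. The constants are a loose fit to the note's table reproduced below (roughly flat at ~28.5 seconds per package up to ~1,000 packages, then growing roughly quadratically); they are illustrative only, not a prediction for any real system:

REPORT z_estimate_remaining.

CONSTANTS: lc_base TYPE p DECIMALS 1 VALUE '28.5',     " base seconds/package
           lc_rate TYPE p DECIMALS 6 VALUE '0.000012'. " quadratic overhead

PARAMETERS: p_done TYPE i DEFAULT 1000, " packages already loaded
            p_left TYPE i DEFAULT 2300. " packages still expected

DATA: lv_pkg   TYPE i,
      lv_secs  TYPE p DECIMALS 2,
      lv_total TYPE p DECIMALS 2,
      lv_mins  TYPE p DECIMALS 1.

START-OF-SELECTION.
  DO p_left TIMES.
    lv_pkg = p_done + sy-index.
    lv_secs = lc_base.
    IF lv_pkg > 1000.
      " Past ~1,000 packages the cost grows with the square of the
      " package number (loosely fitted to the table below):
      lv_secs = lv_secs + lc_rate * ( lv_pkg - 1000 ) * ( lv_pkg - 1000 ).
    ENDIF.
    lv_total = lv_total + lv_secs.
  ENDDO.
  lv_mins = lv_total / 60.
  WRITE: / 'Estimated remaining run time:', lv_mins, 'minutes'.

With these made-up constants, the first 1,000 packages take about 8 hours, while the next 2,300 take roughly four times as long. That is the trade-off you are weighing when you decide whether to wait.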

 

The following table is also attached to SAP Note 892513 and shows, in a very simple (and optimistic) example, how the performance degrades along a curve. Note how the time difference grows significantly with each additional 500 data packages. And again, this example is not even a dramatic one.

Packet #   Difference of packets   Difference of times (h:mm)   Packet # (again)   Time per packet (mm:ss,s)
       5                       -                            -                  -                           -
     505                     500                         3:59                505                     00:28,7
    1005                     500                         3:57               1005                     00:28,5
    1505                     500                         4:10               1505                     00:30,1
    2005                     500                         5:15               2005                     00:37,9
    2505                     500                         7:22               2505                     00:53,1
    3005                     500                        10:02               3005                     01:12,3
    3305                     300                         7:39               3305                     01:31,9

 
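
Reading the table: the 500 packets between #5 and #505 took 3 hours 59 minutes, i.e. 239 minutes / 500 ≈ 28.7 seconds per packet. The 300 packets between #3005 and #3305 took 7 hours 39 minutes, i.e. about 1 minute 32 seconds per packet: more than three times slower, with the curve still steepening.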

After the load is killed, how can you load the data once more?

 

Each scenario tends to be different, especially because each extraction allows different types of selections. But here are a few ideas:

  • The first option to try is increasing the package size.
  • If the package size is already as large as it can be, you may split the load into different selections. This decreases the number of records per request and, consequently, the number of data packages (see the sketch after this list).
  • If you are running a DTP load and the problem is actually the number of requests in the source, you may choose the option "Get All New Data Request By Request". This makes the DTP create a separate request for each request in the source.
  • If you didn't face this problem when loading into an object, but now have issues when loading it into a subsequent data target via DTP, you may create a semantic group and change the package size to a larger value in order to reduce the number of data packages.
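
To put some (entirely hypothetical) numbers on the first two ideas, the sketch below shows how the same 70 million records can produce anywhere from 3,500 packages in a single request down to 100 packages per request, depending on the package size and on how the load is split:

REPORT z_split_example.

DATA: lv_records  TYPE p VALUE 70000000, " hypothetical source volume
      lv_splits   TYPE i VALUE 7,        " e.g. one selection per period
      lv_packages TYPE i.

START-OF-SELECTION.
* One monster request at a 20,000-record package size:
  lv_packages = lv_records DIV 20000.
  WRITE: / '1 request, 20k per package:  ', lv_packages, 'packages'.      " 3500

* The same request with a larger package size:
  lv_packages = lv_records DIV 100000.
  WRITE: / '1 request, 100k per package: ', lv_packages, 'packages'.      " 700

* Larger packages AND the load split into 7 selections:
  lv_packages = ( lv_records DIV lv_splits ) DIV 100000.
  WRITE: / '7 requests, 100k per package:', lv_packages, 'packages each'. " 100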

 

 

You may need to perform gigantic loads once in a while, and sometimes they will not cause any harm. But be aware, and always monitor the loads and process chains in your BW system while they are running.
