
jdbc (sql) to idoc (sap) high volume transfer

Former Member
0 Kudos

Dear all,

I have a JDBC to IDoc scenario. I am executing a stored procedure on an MS SQL database to extract line items for GL postings in SAP.

The stored procedure seems to pick up the data alright, and in transaction SXMB_MONI I see one successful message from SQL to SAP. I cannot open this message payload because the size is too big.

When I go to transaction IDX5, I see that around 33,000 IDocs were generated for SAP.

The problem is that PI is not sending all these IDocs to SAP at once. It sends them 200-300 at a time and takes a few hours to transfer all of them from PI to SAP.

Is there any way or setting in PI to transfer all these IDocs to SAP at once?

Thanks,

Teresa

Accepted Solutions (1)


prateek
Active Contributor
0 Kudos

There are two things here:

1. Speed with which PI processes the messages

2. Capacity of your SAP system to receive messages.

The first point depends largely on your hardware configuration and your system tuning. With the help of a tuning guide, a Basis person can pretty much handle this. The second point is more important, as changes on the SAP system are not that easy. SAP (ECC/R3) has a specific number of threads that can be used to receive IDocs. Ask your Basis team if they can modify the settings.

Regards,

Prateek Raj Srivastava

Former Member
0 Kudos

Hi Prateek,

Thanks for the info. Do you have any suggestions about which parameters need to be tuned in PI and SAP? I asked the Basis guy here and he does not know any of them. Maybe I can point him in the right direction.

Thanks,

Hari

prateek
Active Contributor
0 Kudos

For the PI part, the JDBC-specific settings are:

1. You may increase the thread count for the JDBC-related queues. This has to be done in accordance with SAP Note 1084161.

2. There is a parameter in the JDBC communication channel called Maximum Concurrency. It specifies how many connections one communication channel can make to the database. It is 1 by default and can be increased to a value like 3-4.

3. In Visual Admin / NWA, there is a parameter called queueParallelism.maxReceivers, which defines the number of parallel worker threads for one receiver channel instance. This should be changed following SAP Note 1136790, and it can be done along with the first point.

For a generic PI performance check, refer to SAP Note 812158 and the documents referenced within.
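As a rough mental model of what raising Maximum Concurrency (point 2) changes, the sketch below simulates N workers draining the extracted rows in parallel instead of through a single connection. This is purely illustrative: the worker/queue structure, batch size, and names are assumptions, not PI internals.

```python
import queue
import threading

# Hypothetical model: each "connection" is a worker thread pulling
# batches from the source. Maximum Concurrency = number of workers.
MAX_CONCURRENCY = 4   # channel setting (1 by default)
BATCH_SIZE = 250      # rows pulled per poll (assumed value)

def drain(source_rows, max_concurrency=MAX_CONCURRENCY):
    """Split source_rows into batches and process them with N workers."""
    batches = queue.Queue()
    for i in range(0, len(source_rows), BATCH_SIZE):
        batches.put(source_rows[i:i + BATCH_SIZE])

    processed = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                batch = batches.get_nowait()
            except queue.Empty:
                return  # no batches left; this "connection" is done
            with lock:
                processed.extend(batch)  # stand-in for sending to SAP

    threads = [threading.Thread(target=worker)
               for _ in range(max_concurrency)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed
```

With max_concurrency=1 the batches are handled strictly one at a time, which mirrors the default channel behaviour; with 3-4 workers the same volume is drained in overlapping polls.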

I am not an expert on ECC-related settings, but I know of this blog

Regards,

Prateek Raj Srivastava

Answers (3)


rajasekhar_reddy14
Active Contributor
0 Kudos

This has nothing to do with the JDBC adapter settings in your case; the problem is only with the PI-to-SAP connectivity.

We are able to process 10k IDocs (and vice versa) in my current project, so it is better to check with the Basis team and request that they monitor the performance.

Most SAP PI to SAP ECC integrations give very good performance, but in your case something is going wrong; maybe some Basis setting is required.

Regards,

Raj

Former Member
0 Kudos

Hi Raj,

I asked the Basis team here and they are not aware of any such parameter. Can you give me some direction as to where I can get more information, such as a blog, article, or book?

Thank you ,

Teresa

baskar_gopalakrishnan2
Active Contributor
0 Kudos

Why not consider pulling the data from the database with the stored procedure in multiple chunks of fewer records each time? That way, each run retrieves data that generates fewer IDocs (proportional to the number of records in the table), and the message size would be smaller too. This would not overload the Integration Engine during processing.
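The chunked-extraction idea can be sketched with keyset pagination: instead of one huge SELECT, pull fixed-size slices keyed on the last id seen, so each poll stays small. The table and column names (gl_line_items, id, amount) and the chunk size are hypothetical stand-ins for whatever the stored procedure actually reads; sqlite3 stands in for the real MS SQL source.

```python
import sqlite3

CHUNK_SIZE = 500  # assumed; tune so each message stays under the heap limit

def fetch_chunks(con, chunk_size=CHUNK_SIZE):
    """Yield rows in fixed-size slices, resuming after the last id seen."""
    last_id = 0
    while True:
        rows = con.execute(
            "SELECT id, amount FROM gl_line_items "
            "WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, chunk_size),
        ).fetchall()
        if not rows:
            break  # table exhausted
        yield rows
        last_id = rows[-1][0]  # keyset pagination: resume after last row

# Usage against a toy in-memory table with 1,200 line items:
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE gl_line_items (id INTEGER PRIMARY KEY, amount REAL)")
con.executemany("INSERT INTO gl_line_items VALUES (?, ?)",
                [(i, i * 1.5) for i in range(1, 1201)])
chunks = list(fetch_chunks(con))  # three slices: 500, 500, 200 rows
```

Keying on the id rather than using an OFFSET keeps each query cheap even late in the table, and a status or timestamp column can serve the same role when rows must not be re-read.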

Former Member
0 Kudos

Hi Baskar,

We are already extracting smaller volumes of data from SQL to avoid the Java heap space error. I am sure PI can push out more IDocs in a given time frame than just 200 or 300.

Thanks,

Teresa

Former Member
0 Kudos

Hi

I had an IDoc scenario with 70,000 IDocs, and it was taking 12 minutes to process.

The first thing we experienced was a lock table size problem; check your queues for any failed messages. The default setting was around 3600, and we had lock table errors in the queue.

You need to have that parameter increased; that helps.

Regards.