
Job Process could not communicate with the data flow

SandeshK
Participant

Hello Experts,

Lately I have been facing an issue with the batch jobs in our BODS production environment. We have a common data flow that is used by nearly 12 jobs, and this particular data flow keeps failing intermittently in any one of those jobs. If I re-execute the job with the same source data, the data flow does not fail.

Below is the error log we get:

Data flow < > received a bad system message. Message text from the child process is

  <==========================================================

  Collect the following and send to Customer Support:

  1. Log files(error_*, monitor_*, trace_*) associated with this failed job.

  2. Exported ATL file of this failed job.

  3. DDL statements of tables referenced in this failed job.

  4. Data to populate the tables referenced in the failed job. If not possible, get the last few rows (or sample of them) when the job failed.

  5. Core dump, if any, generated from this failed job.

  ==========================================================>. The process executing data flow < > has died abnormally.

The job process could not communicate with the data flow < >  process. For details, see previously logged error <50406>.

The source in the data flow is a HANA table, and the target is also a HANA table.

Please let me know how to identify the root cause of this issue. I am unable to understand what this error actually means. Is it a problem with the job server memory or with the HANA DB memory?

The data flow's cache type is Pageable and its DOP is set to 4.

Thanks,

San


Answers (3)


former_member254877
Participant

Hello San,

Try to optimize your job with the performance optimization techniques described in the other answers.

Along with that, make sure to clean up all the old error log files and free some space in the installation directory.

jean_machado
Explorer

Hi San,

I suggest trying to optimize your job with push-down operations, optimized SQL, advanced tuning options, and a pageable cache. See the Performance Optimization Guide: http://help.sap.com/businessobject/product_guides/sbods42/en/ds_42_perf_opt_en.pdf

Hugs

former_member187605
Active Contributor

Most probably job server memory. Check the logs for:


For details, see previously logged error <50406>.
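If it helps to locate that entry, here is a minimal Python sketch that scans the job server's error logs for the 50406 code. The log directory is an assumption (a default $LINK_DIR/log layout on Linux), so adjust the path and pattern to your installation.

  # Minimal sketch: search the job server log directory for error 50406.
  # LOG_DIR is an assumption (default $LINK_DIR/log on Linux); adjust to your install.
  import glob
  import os

  LOG_DIR = "/opt/sap/dataservices/log"

  for path in sorted(glob.glob(os.path.join(LOG_DIR, "**", "error_*"), recursive=True)):
      if not os.path.isfile(path):
          continue
      with open(path, errors="replace") as f:
          for line_no, line in enumerate(f, start=1):
              if "50406" in line:
                  print(f"{path}:{line_no}: {line.strip()}")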

If that doesn't help,

Collect the following and send to Customer Support:

  1. Log files(error_*, monitor_*, trace_*) associated with this failed job.

  2. Exported ATL file of this failed job.

  3. DDL statements of tables referenced in this failed job.

  4. Data to populate the tables referenced in the failed job. If not possible, get the last few rows (or sample of them) when the job failed.

  5. Core dump, if any, generated from this failed job.

SandeshK
Participant

Hi Dirk,

How can I check the memory statistics of the job server at the time the job ran? Is there any checkbox I need to enable before I run the job?

Thanks for your help.

- San

former_member187605
Active Contributor

You'll have to monitor at OS level.
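For example, a rough Python sketch like the one below can sample the memory of the engine processes while the job runs. It assumes the psutil package is installed and that the data flow processes are named al_engine (typical for a Data Services job server); both are assumptions to adjust for your environment.

  # Rough sketch: sample memory of the al_engine processes at OS level every 30 seconds.
  # Assumes psutil is installed; the "al_engine" process name is an assumption.
  import time

  import psutil

  while True:
      for proc in psutil.process_iter(["name", "memory_info"]):
          name = proc.info["name"] or ""
          mem = proc.info["memory_info"]
          if "al_engine" in name.lower() and mem is not None:
              print(f"{time.strftime('%H:%M:%S')} pid={proc.pid} rss={mem.rss / 1024**2:.0f} MB")
      time.sleep(30)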

You may reduce the risk of hitting memory limits by using pageable cache. You can do so by:

  • setting Cache Type to Pageable in data flow properties
  • not caching the comparison table in Table_Comparison transforms
  • specifying NO_CACHE for lookup functions
  • ...

But, obviously, those measures may all have a negative impact on performance.

And note that the error may have nothing to do with your DS job structure and contents, but just be the result of a software bug.

0 Kudos

We've been facing the same issue for 2-3 months now. Sometimes jobs just hang, and after a few hours we get the same error message.

Did you find a solution for your problem?

Pawel

SandeshK
Participant

Hello Pawel,

We are still trying to find the root cause of this issue. I shall update this thread once we find a concrete solution. Please also post here if you find a solution.

Thanks,

San

SandeshK
Participant

Hello Dirk,

The data flow's cache type is already set to "Pageable". Only this data flow has its DOP set to 4, and it contains Firm Name Cleansing and Global Address Cleansing. Could this DOP value also affect the memory limits? Is it better to change the DOP back to the default?

- San

former_member187605
Active Contributor

Not sure. But you'll have to try in order to isolate the problem.