on 09-23-2015 12:11 PM
Hello Experts,
Lately I have been facing an issue with batch jobs in our BODS production environment. We have a common dataflow that is used by nearly 12 jobs, and this particular dataflow keeps failing intermittently in one or another of those jobs. If I re-execute the failed job with the same source data, the dataflow does not fail.
Below is the error log we get:
Data flow < > received a bad system message. Message text from the child process is
<==========================================================
Collect the following and send to Customer Support:
1. Log files(error_*, monitor_*, trace_*) associated with this failed job.
2. Exported ATL file of this failed job.
3. DDL statements of tables referenced in this failed job.
4. Data to populate the tables referenced in the failed job. If not possible, get the last few rows (or sample of them) when the job failed.
5. Core dump, if any, generated from this failed job.
==========================================================>. The process executing data flow < > has died abnormally.
The job process could not communicate with the data flow < > process. For details, see previously logged error <50406>.
Both the source and the target of the dataflow are HANA tables.
Please let me know how I can identify the root cause of this issue; I am unable to understand what this error actually means. Is it a problem with the Job Server memory or the HANA DB memory?
The dataflow's cache type is Pageable and its DOP is set to 4.
Thanks,
San
Hello San,
Try to optimize your job with the performance optimization techniques described in the other replies. Along with that, make sure to clean up old error log files to release some space in the installation directory.
Hi San,
I suggest trying to optimize your job with push-down operations, optimized SQL, the advanced tuning options, and a pageable cache. See the Performance Optimization Guide: http://help.sap.com/businessobject/product_guides/sbods42/en/ds_42_perf_opt_en.pdf
Hugs
Most probably Job Server memory. Check the error log for the entry referenced by:
For details, see previously logged error <50406>.
If that doesn't help,
Collect the following and send to Customer Support:
1. Log files(error_*, monitor_*, trace_*) associated with this failed job.
2. Exported ATL file of this failed job.
3. DDL statements of tables referenced in this failed job.
4. Data to populate the tables referenced in the failed job. If not possible, get the last few rows (or sample of them) when the job failed.
5. Core dump, if any, generated from this failed job.
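To speed up collecting item 1 on a Unix/Linux Job Server, a minimal sketch, assuming LINK_DIR (the standard Data Services install-directory variable) and the usual per-Job-Server log layout under LINK_DIR/log/; the job name and output path here are hypothetical:

```shell
#!/bin/sh
# Bundle the log files SAP support asks for (item 1 above).
# Assumptions: Unix/Linux Job Server; LINK_DIR points at the Data
# Services installation; logs live under LINK_DIR/log/<job server>/.
JOB=MyFailedJob                          # hypothetical job name
OUT="/tmp/${JOB}_support_logs.tar.gz"    # hypothetical output path

tar czf "$OUT" \
    "$LINK_DIR"/log/*/error_* \
    "$LINK_DIR"/log/*/monitor_* \
    "$LINK_DIR"/log/*/trace_*
echo "Created $OUT; add the exported ATL, DDL and sample data separately."
```

The ATL export, DDL statements, and sample data (items 2-4) come from the Designer and the database, so they still need to be gathered by hand.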
You'll have to monitor memory at the OS level.
You may reduce the risk of hitting memory limits by using a pageable cache; this is set in the data flow's properties (Cache type: Pageable). Lowering the Degree of Parallelism can also reduce the memory footprint. But, obviously, those measures may all have a negative impact on performance.
And note that the error may have nothing to do with your DS job structure and contents, but just be the result of a software bug.
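As a starting point for the OS-level monitoring, a minimal sketch for a Linux Job Server that samples engine-process memory while the job runs (assuming the engine processes are named al_engine, the default on Unix/Linux; the sample count, interval, and log path are arbitrary choices):

```shell
#!/bin/sh
# Sample Job Server engine memory while the job runs.
# Assumption: BODS engine processes are named "al_engine" (Unix/Linux).
LOG=/tmp/al_engine_mem.log   # hypothetical output path
SAMPLES=120                  # about an hour at 30 s per sample
i=0
while [ "$i" -lt "$SAMPLES" ]; do
    {
        date '+%Y-%m-%d %H:%M:%S'
        # Resident set size (KB) per engine process, largest first
        ps -C al_engine -o pid=,rss=,args= | sort -k2 -rn
        # Overall memory on the host (second line of "free")
        free -m | sed -n '2p'
    } >> "$LOG"
    i=$((i + 1))
    sleep 30
done
```

If an al_engine RSS climbs toward the host's limits just before the failure, the cache and DOP settings are the first things to revisit.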