Non-SAP Job issue

Former Member

Hi,

In our CPS system we have non-SAP jobs (database jobs) scheduled to run. In one of the DB jobs, one step fails at the DB backend server, but in CPS it shows Completed status, even though the job step's Final Status Handlers are maintained correctly. Please help me find out what exactly is causing the issue, and how CPS gets the log when the job errors at the backend server (DB job).

Regards,

Shwetha Houde

Accepted Solutions (0)

Answers (1)

nanda_kumar21
Active Contributor

Hi Shwetha,

What type of job definition is it? I'm assuming it's of type JDBC.

Does the DB job inside the job chain step fail?

If not, proper error handling should be used in the query/stored procedure of that particular DB job.

If it's of type bash or cmd, then use proper error-handling code and logic in those job definitions (see the sketch below).

The job step's final status handler cannot know whether the backend DB job failed or not; the job within that step must itself reflect the status from the backend DB.
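
For example, a minimal sketch of such error handling for a cmd-type job definition (the command here is only a placeholder for whatever the step actually runs; CPS treats a nonzero return code from the script as an error):

REM Run the actual work of the step (placeholder command).
your_db_command.exe
REM Propagate any failure to CPS as a nonzero return code.
IF %ERRORLEVEL% NEQ 0 (
    ECHO Step failed with exit code %ERRORLEVEL%
    EXIT /B %ERRORLEVEL%
)
EXIT /B 0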

Thanks

Nanda

Former Member

Hi Nanda,

The job definition script type is CMD, the parameters for this job are pErrorString and PDATABASE, and the source is maintained as:

SQLCMD -S %PDATABASE% -E -d Attendance -Q "exec Attendance.corpuser.spATTUpdLeave '01-DEC-2014','09-DEC-2014','BATCH','A'"

When this job definition with the DB package (Attendance.corpuser.spATTUpdLeave) is scheduled and its execution at the backend server runs for more than 30 minutes, it should be killed; that is how it is configured at the DB server. After 30 minutes the execution does get killed at the DB server, but in CPS the job chain shows as Completed. Kindly help us understand how CPS will pick up the killed transaction status from the backend server, and whether there is any mistake in the job scheduling or an issue at the DB server package level (Attendance.corpuser.spATTUpdLeave).
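
For reference, we can check the return code that CPS receives by running the same call manually on the agent host (a quick check; %PDATABASE% would need to be set or replaced by the actual server name):

SQLCMD -S %PDATABASE% -E -d Attendance -Q "exec Attendance.corpuser.spATTUpdLeave '01-DEC-2014','09-DEC-2014','BATCH','A'"
ECHO SQLCMD exit code: %ERRORLEVEL%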

Regarding the point below, please suggest how and where it needs to be taken care of:

"If it's of type bash or cmd, then use proper error-handling code and logic in those job definitions."

gmblom
Active Contributor

Hello,

Return codes higher than 0 are always treated as an error and will force the job to error. In this case, the SQLCMD executable probably returns 0. So I would start there.
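
As a sketch of that idea: SQLCMD has a documented -b switch that makes it exit with a nonzero ERRORLEVEL when a T-SQL error occurs, which the job source can then propagate to CPS. Whether the kill on the DB side actually raises such an error is something you would need to verify:

REM -b makes SQLCMD return a nonzero ERRORLEVEL on a T-SQL error.
SQLCMD -S %PDATABASE% -E -d Attendance -b -Q "exec Attendance.corpuser.spATTUpdLeave '01-DEC-2014','09-DEC-2014','BATCH','A'"
REM Hand any failure back to CPS as the step's return code.
IF %ERRORLEVEL% NEQ 0 (
    ECHO SQLCMD failed with exit code %ERRORLEVEL%
    EXIT /B %ERRORLEVEL%
)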

Regards Gerben

Former Member

Thanks Blom and Nanda for the quick reply :) We will check with the DB team on how the error is handled and keep you updated.

We have found one more thing: when the same job ran at 2:00 AM yesterday, the particular step went into error due to exceeding 30 minutes on the DB server side, and ERROR was reflected in CPS. But when the same job ran today at 2:00 AM, it shows Completed in CPS, whereas on the DB server the execution was killed.

Is there an issue related to CPS, or do we need to check the code on the DB server end?

Regards,

Shwetha Houde

nanda_kumar21
Active Contributor

What is the difference in the logs between those two jobs? That might give you more clarity.

An alternative, and better, option I can think of: in the job definition's edit mode, go to the Runtime tab, set the maximum runtime to 30 minutes, and select the option to kill the job. This way the job on the CPS side will definitely fail if it crosses 30 minutes.

Optionally, you can make use of events to send notifications.

Thanks

Nanda