
Cube Compression problems

Former Member

Hi

We are having problems on our BW system when trying to compress requests.

At the moment we're compressing a single request at a time, but more often than not we end up using secondary logs, and when those are all used up, the job is cancelled and a database rollback happens. During the rollback, the system is unavailable for 2-3 hours, so batch jobs are put on hold.

This is not ideal.

Is there a way to cancel a compress job, so that a database rollback doesn't happen?

Also, should we reasonably expect a compression of 13 million records to complete in an acceptable amount of time, or is this unreasonable?

Basis have already given us the maximum number of logs, so we can't ask for more. We will also look at breaking the requests up into smaller pieces, along the lines of the sketch below.
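To make the idea concrete, here is a rough sketch of what we have in mind (plain Python for illustration only; compress_requests is a made-up placeholder, not the actual SAP collapse step):

```python
# Hypothetical sketch: split one large compression into smaller batches
# so each unit of work stays within the secondary-log budget.

def compress_requests(batch):
    # Placeholder for the real BW collapse job; here it just reports.
    print(f"compressing requests {batch} ... committed")

def compress_in_batches(request_ids, batch_size=3):
    """Compress a few requests at a time instead of all at once."""
    for start in range(0, len(request_ids), batch_size):
        batch = request_ids[start:start + batch_size]
        compress_requests(batch)
        # Each batch ends with its own commit, so the database can
        # release the log space it used before the next batch starts.

if __name__ == "__main__":
    compress_in_batches(list(range(101, 114)))  # 13 requests, 3 at a time
```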

Regards,

Andrew

Accepted Solutions (0)

Answers (5)


Former Member

Hi Andrew,

We had similar issues in the past with regard to compression; here are the activities worth mentioning:

1. In the past, compression was mainly handled via a process chain on a daily basis, which caused many issues. We then split the compression jobs into parallel runs, but this did not resolve our issues either.

2. We now execute compression only on weekends (see the sketch after this list). To make this happen, we created a condition in the process chain, with PSA deletions on a 2-3 week time frame (best practice).

3. 10 million records can easily be compressed, but the job will dump if there are more than that, and Basis have to roll back, which is time consuming. Extending memory is another solution, but it is system dependent.

4. To minimize business impact, do not schedule compression jobs during working hours, as there is a direct impact on the reporting process.
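As an illustration of point 2, here is a minimal sketch of the weekend-only condition (plain Python; in the real system this is a decision step in the process chain, and your scheduling logic may differ):

```python
# Rough illustration of the "compress only on weekends" condition we
# built into the process chain. Pure illustration, not SAP code.
from datetime import date

def should_compress(today: date) -> bool:
    """True only on Saturday (weekday 5) or Sunday (weekday 6)."""
    return today.weekday() >= 5

if should_compress(date.today()):
    print("weekend: trigger the compression job")
else:
    print("weekday: skip compression and continue with the rest of the chain")
```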

Hope this helps

WIDYL

timkorba
Participant

You can definitely work with the Basis team to clean the logs up more often during this large job. You can also compress smaller groups of records manually, then set up a daily compression job that compresses data that is 7 days or older. This should remove the long run times unless you are loading 13 million records every 7 days. Hope this helps.
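As a small illustration of the 7-day rule, here is a sketch with made-up request IDs and load dates (in BW itself you would simply set the age threshold in the collapse settings of the process chain variant):

```python
# Hypothetical sketch: pick out requests loaded more than 7 days ago.
from datetime import date, timedelta

today = date.today()
# Made-up request IDs and load dates for illustration.
requests = {
    "REQU_001": today - timedelta(days=30),
    "REQU_002": today - timedelta(days=10),
    "REQU_003": today - timedelta(days=2),
}

cutoff = today - timedelta(days=7)
to_compress = sorted(rid for rid, loaded in requests.items() if loaded <= cutoff)
print("requests due for compression:", to_compress)  # REQU_001, REQU_002
```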


Hi,

My first suggestion is to check with the Basis people, as only they can work on the database side.

1. Ask the Basis people to check the DB stats; I believe there is a space issue. If the problem is not on your end, try loading data with a limited filter (I mean one year of data) for every load interval and compressing that.

If you still have the same problem, only the Basis people can find the root cause.

If my answer is useful, please give some points.

Regards,

satya.

abhishek_shanbhogue2
Contributor

Hi Andrew

When your compression jobs are running, can you ask your BASIS & DB team to closely monitor the system? I believe the logs are piling up in the system.

Thanks

Abhishek Shanbhogue

RafkeMagic
Active Contributor

is it possible to have your system admins "clear" the secondary logs a bit faster?

compressing 13*10^6 records should not be that unreasonable... our latest "big" compression was 36*10^6 and that didn't take too long (and our system is currently "under"sized)

Former Member

How can the secondary logs be cleared?

I think they (Basis) will come back to me and say that if our job committed more frequently, those logs would be released/cleared and we wouldn't hit this problem. The trouble is we don't control the SAP standard compress functionality, so we're not sure how often it is or isn't committing.
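As a back-of-the-envelope illustration of why commit frequency matters (the per-record log footprint and the commit interval below are invented numbers, not measurements from our system):

```python
# Work done between commits must be held in active log space until the
# commit; more frequent commits cap the peak log usage.
RECORDS = 13_000_000            # size of the request being compressed
COMMIT_EVERY = 500_000          # hypothetical commit interval (rows)
LOG_BYTES_PER_RECORD = 200      # hypothetical log footprint per row

single_txn_log = RECORDS * LOG_BYTES_PER_RECORD
peak_log = COMMIT_EVERY * LOG_BYTES_PER_RECORD

print(f"one big transaction holds ~{single_txn_log / 2**30:.1f} GiB of log")
print(f"committing every {COMMIT_EVERY:,} rows caps it near {peak_log / 2**30:.2f} GiB")
```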