Performance Issues with BW Compression

Former Member
0 Kudos

Hey All,

I've begun my QA testing with Compression, and I'm noticing it takes a long time to compress. Not sure if this is normal, but here are some timings.

Compress 50 days of requests: 31 hours (definitely not normal)

Compress 10 days of requests: 3 hours

Compress 20 days of requests: 6 hours

Compress 1 day of requests: 30 minutes

And the cube we're compressing isn't even our biggest cube. Is there a way to look at the performance and see why it is taking so long, or is this normal (other than the 31 hours)?

Please help... Points will be rewarded!!!!!

Accepted Solutions (1)

Former Member
0 Kudos

Exactly. If you plan your jobs, you can work the compression in.

Also, try to explore other performance-enhancing options.

Ravi Thothadri

Former Member
0 Kudos

Well, we have some really well-defined aggregates in place for query performance. But we were expecting to recover a decent amount of disk space through compression. We may have to settle for less if we don't have a window of opportunity to run it.

Former Member
0 Kudos

Just curious, what is the compression ratio?

I mean, by how much does the volume come down when the records are moved from the F table to the E table?

Also, is the time spent on compression of the cube, on the aggregates, or on the deletion of records from the F table?

What is the ratio of aggregate volume to cube volume?

You might want to set a trace on the compression session to find out where it is spending most of its time.

Compression is equivalent to executing a query on the F table that summarises the results over the request ID column, appending the results to the E table, and deleting the compressed requests from the F table.
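To make that concrete, here is a minimal, hypothetical sketch of the same logic in plain SQL (run through Python's sqlite3). The table and column names are made up for illustration; this is not the actual BW fact-table structure or the code BW runs.

```python
# Minimal, hypothetical illustration of the compression logic described above.
# Table and column names are invented for this example; they are not the real BW objects.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# F table: one row per request ID per dimension combination
cur.execute("CREATE TABLE f_fact (request_id INTEGER, dim_key TEXT, key_figure REAL)")
cur.executemany(
    "INSERT INTO f_fact VALUES (?, ?, ?)",
    [(1, "A", 10.0), (2, "A", 5.0), (1, "B", 7.0), (2, "B", 3.0)],
)

# E table: request ID collapsed to 0, rows summed per dimension combination
cur.execute("CREATE TABLE e_fact (request_id INTEGER, dim_key TEXT, key_figure REAL)")

# 1) Query the F table, summarising over the request ID column,
# 2) append the summarised rows to the E table,
cur.execute(
    """INSERT INTO e_fact (request_id, dim_key, key_figure)
       SELECT 0, dim_key, SUM(key_figure) FROM f_fact GROUP BY dim_key"""
)

# 3) delete the compressed requests from the F table.
cur.execute("DELETE FROM f_fact WHERE request_id IN (1, 2)")
con.commit()

print(cur.execute("SELECT * FROM e_fact").fetchall())         # summed rows, request_id = 0
print(cur.execute("SELECT COUNT(*) FROM f_fact").fetchone())  # (0,) -- compressed requests removed
```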

Thanks.

Former Member
0 Kudos

Hi Suresh,

We can tell how much time is spent on each step from the background job log of the process chain.

For example, I compressed 10 days of requests, and it took a total of 3 hours. The first 2 hours were spent on the compression steps, and the last hour was spent removing those records from the F table.

The compression ratio is about 23%. The cube originally had 185 million records in the F table; now the E table has 92 million and the F table has 65 million, for a total of 157 million. In other words, the 120 million records moved out of the F table collapsed into 92 million records in the E table.
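(As a quick sanity check of those figures, assuming the E table held no records before this run:)

```python
# Back-of-the-envelope check of the figures quoted above
# (assumes the E table was empty before this compression run).
f_before = 185_000_000           # F table rows before compression
f_after  = 65_000_000            # F table rows remaining (uncompressed requests)
e_after  = 92_000_000            # E table rows after compression

moved = f_before - f_after       # rows taken out of the F table: 120 million
reduction = 1 - e_after / moved  # those rows collapsed to 92 million in E
print(f"{reduction:.0%}")        # -> 23%
```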

Former Member
0 Kudos

Hi,

How do you calculate the compression ratio for a cube?

Former Member
0 Kudos

The size of the uncompressed Unicode tables (without their indexes) in the database system, divided by the amount of memory needed for these tables, gives the compression ratio.
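For example (purely illustrative numbers, not from this thread):

```python
# Compression ratio as described above: uncompressed table size (without indexes)
# divided by the space the tables actually need. The numbers are made up.
uncompressed_size_gb = 400   # size of the uncompressed tables, no indexes
needed_gb = 100              # memory/space actually needed for these tables
print(uncompressed_size_gb / needed_gb)  # -> 4.0, i.e. a 4:1 compression ratio
```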

Answers (4)

Former Member
0 Kudos

I suggest you consider compressing often, as a job in a process chain.

You seem to have a heavy load of data and requests to be compressed.

Also, work with your Basis team to schedule these jobs off-peak, when no other jobs are running.

Ravi Thothadri

Former Member
0 Kudos

The plan was to have it at the end of each of our transactional data loads. For example, our Billing Conditions Process Chain would have a local subchain at the end that calls the Compression Process Chain for Billing Conditions. We would do that for all 6 transactional loads. But if it adds 30 minutes to an hour to each load, then it is not feasible.

So I will have to sit down with Basis and figure out what window we have to run the Compression Process Chains on a daily basis.

Thanks!!!

Former Member
0 Kudos

The practice is to do periodic compression. You now have 50 days of data to be compressed, and in my experience that is not too bad.

Ravi Thothadri

Former Member
0 Kudos

In total, we have around 100 days of data to compress across our cubes. We will have to break it down into chunks and compress around 5 or 10 days of requests at a time. This will take time.

1 day of requests takes between 30 minutes and 1 hour to compress. We have 6 loads that run nightly. It is almost impossible to add that amount of time to each transactional load, because the data needs to be available by a certain time in the morning.

If we cannot do the compression at the end of each transactional load due to the time constraint, then I don't know when we'll find the time to do it.
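(Rough planning math, using the 3-hour timing quoted earlier for a 10-day chunk; actual runtimes will of course vary per cube:)

```python
# Rough planning arithmetic based on timings quoted earlier in this thread.
backlog_days  = 100   # days of requests still to compress
chunk_days    = 10    # days of requests compressed per run
hours_per_run = 3     # observed runtime for a 10-day chunk

runs = backlog_days / chunk_days
print(f"{runs:.0f} runs, about {runs * hours_per_run:.0f} hours of compression windows")
# -> 10 runs, about 30 hours
```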

Former Member
0 Kudos

Hi, what is the version of the system?

Check OSS note 375132 for that.

Also, compression takes time depending on the data volume, as it transfers the data from the F table to the E table and deletes the requests.

30 minutes for 1 day of requests sounds normal.

But I haven't compressed 50 days of requests, so I'm not sure about that.

Thanks

Former Member
0 Kudos

BW 3.5 is the version we currently have.

Former Member
0 Kudos

Certainly, compression time depends on the volume and the number of requests to be compressed. It may take a long time if you have active aggregates on the cube.

Ravi Thothadri

Former Member
0 Kudos

The aggregates are already active on the cube, so there's no need to do a rollup.