on 06-13-2007 3:08 PM
Hey All,
I've begun my QA testing with Compression, and I'm noticing it takes a long time to compress. Not sure if this is normal, but here are some timings.
Compress 1 day of requests: 30 minutes
Compress 10 days of requests: 3 hours
Compress 20 days of requests: 6 hours
Compress 50 days of requests: 31 hours (definitely not normal)
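Broken down per day, those timings look like this (a quick back-of-the-envelope check, nothing SAP-specific):

```python
# Per-day compression cost from the timings above (minutes).
timings = {1: 30, 10: 180, 20: 360, 50: 1860}

for days, minutes in sorted(timings.items()):
    print(f"{days:2d} days: {minutes / days:.1f} min/day")
# 10 and 20 days hold steady at 18 min/day, while the 50-day run
# jumps to ~37 min/day, roughly double the per-day cost.
```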
And the cube we're compressing isn't even our biggest cube. Is there a way to analyze the performance and find out why it is taking so long, or is this normal (other than the 31 hours)?
Please help... points will be awarded!
Exactly. If you plan your jobs, you can work them out.
Also, try to explore other performance-enhancing options.
Ravi Thothadri
Just curious, what is the compression ratio?
I mean, by how much does the volume come down when the records are moved from the F to the E table?
Also, is the time spent on compression of the cube, on the aggregates, or on the deletion of records from the F table?
What is the ratio of aggregate volume to cube volume?
You might want to set a trace on the compression session to see where it is spending most of its time.
Compression is equivalent to executing a query on the F table that summarises the results over the request ID column, appending the results to the E table, and deleting the compressed requests from the F table.
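As a rough sketch of that process (plain Python with made-up row shapes, not the actual BW implementation or schema):

```python
from collections import defaultdict

def compress(f_table, e_table):
    """Toy model of cube compression: summarise F-table rows by their
    dimension key (collapsing the request ID away), merge the totals
    into the E table, then delete the compressed rows from F.
    Rows here are (request_id, dim_key, amount) tuples, which is an
    illustrative simplification of the real fact-table layout."""
    totals = defaultdict(float)
    for request_id, dim_key, amount in f_table:
        totals[dim_key] += amount          # request ID drops out of the key
    for dim_key, amount in totals.items():
        e_table[dim_key] = e_table.get(dim_key, 0.0) + amount
    f_table.clear()                        # compressed requests are deleted from F
    return e_table

f = [(1, "A", 10.0), (1, "B", 5.0), (2, "A", 7.0)]
e = {}
compress(f, e)   # three F rows collapse to two E rows
```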
Thanks.
Hi Suresh,
We can tell how much time is spent on each step from the background job log in the process chain.
For example, I compressed 10 days of requests and it took a total of 3 hours. The first 2 hours were spent on the compression steps, and the last hour on removing those records from the F table.
The compression ratio is about 23%. For example, the cube originally had 185 million records in the F table; now the E table has 92 million and the F table has 65 million, for a total of 157 million.
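One way those numbers reconcile with the ~23% figure is to measure the reduction against just the requests that were compressed, rather than the whole cube (a quick check; the interpretation is my assumption):

```python
# Reconstructing the numbers above (in millions of records).
f_before = 185                           # F table before compression
f_after = 65                             # uncompressed requests still in F
e_after = 92                             # summarised rows now in E

moved = f_before - f_after               # 120M records were compressed
saved = moved - e_after                  # 28M records eliminated
ratio_on_compressed = saved / moved      # ~0.23, the quoted figure
ratio_on_whole_cube = saved / f_before   # ~0.15, against the full cube
```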
I suggest you consider compressing often, as a job in a process chain.
You seem to have a heavy load of data and requests to be compressed.
Also, work with your Basis team to schedule these jobs off-peak, when no other jobs are running.
Ravi Thothadri
The plan was to have it at the end of each of our transactional data loads. For example, our Billing Conditions Process Chain would have a local subchain at the end that calls the Compression Process Chain for Billing Conditions. We would do that for all 6 transactional loads. But if it adds 30 minutes to an hour to each load, then it is not possible.
So I will have to sit with Basis and figure out what window we have to run the Compression Process Chains on a daily basis.
Thanks!!!
The practice is to do periodic compression. You have 50 days of data to be compressed, and in my experience that is not too bad.
Ravi Thothadri
In total, we have around 100 days of data to compress for our cubes. We will have to break it down into chunks and compress around 5 or 10 days of requests at a time. This will take time.
1 day of requests takes between 30 minutes and 1 hour to compress. We have 6 loads that run nightly. It is almost impossible to add this amount of time to each transactional load, because the data needs to be available by a certain time in the morning.
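The arithmetic behind that constraint, assuming 30-60 minutes per load as above:

```python
# Extra nightly runtime if each load compresses at the end.
loads = 6
low, high = 30, 60                       # minutes of compression per load

extra_low = loads * low / 60             # hours, best case
extra_high = loads * high / 60           # hours, worst case
print(f"{extra_low:.1f} to {extra_high:.1f} extra hours per night")
# 3 to 6 extra hours per night is hard to fit before a morning deadline.
```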
If we cannot do the compression at the end of each transactional load, due to a time constraint, then I don't know when we'll find the time to do it.
Hi, what is the version of your system?
Check OSS note 375132 for this.
Also, compression takes time depending on the data, as it transfers the data from the F table to the E table and deletes the requests.
Well, 30 minutes is normal for 1 day of requests.
But I haven't compressed 50 days of requests, so I'm not sure about that.
Thanks
Certainly, compression time depends on the volume and number of requests to be compressed. It may take a long time if you have active aggregates on the cube.
Ravi Thothadri