
Still "uncompressed" package after Lite Optimize.

Former Member
0 Kudos

Hi,

I'm having problems with duplicate keys when I export data from SAP BPC to a flat file.

I run a "lite optimize", and it completes without problems, but when I look at the planning cube in SAP BW, I can see that the last 3-4 packages are still uncompressed.

When I try running a "full optimize", the system informs me that it is not necessary, but I still have 3-4 uncompressed packages, which leaves me with duplicate records in my flat file export.

Am I missing a parameter setting somewhere, or should I manually compress the planning cube in SAP BW? I'd rather not use standard BW functionality in a BPC setup, since it is my understanding that it is not advised to use standard BW functionality on BPC cubes.

Thank you,

Joergen Dalby

Accepted Solutions (1)


former_member200327
Active Contributor
0 Kudos

Hi Joergen,

What version and SP of BPC are you using? The Lite Optimize process chain was recently changed to leave the last 2 requests uncompressed, because some databases do not work well with empty F tables.

So, you can compress it in BW, but then your first load can have performance issues.

Regards,

Gersh

Former Member
0 Kudos

Hello Gersh,

thank you for your answer. Very helpful.

In my case we do "full" loads, so before we load into BPC, we delete data matching the same criteria. Typically we delete version "0" (actual) in BPC and then do a new full load. So empty F tables should not be a problem for us?

But is it advisable to "manually" compress the BPC Planning cube directly in BW ?

Thank you,

Joergen

former_member190501
Active Contributor
0 Kudos

Hi,

Go to transaction RSPC and search for the process chain /CPMB/LIGHT_OPTIMIZE.

Double-click on the Collapse step; the system will open another window with the details of that process.

Make sure "Number of requests that you do not want to collapse" = 0.

And try again.

Hope it works...

regards,

Raju

former_member200327
Active Contributor
0 Kudos

Hi Joergen,

When you say "full load", it sounds like you reload a full version, not the whole cube. If you reload the whole cube, then full compression is not a problem; but if you reload just one version, then performance can still be an issue. You should check whether it is an issue for the DB you are using. It is known to be an issue for Oracle DB.

What Raju suggested will compress your whole cube, i.e. leave the F table empty. After that, it depends on what kind of loads you have: if you have a large number of small loads, you will not notice any performance degradation, but if it is one big load, it can slow down significantly because the DB will not use the index on the F table.
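Conceptually, compression collapses the request-wise rows of the F table into one aggregated row per characteristic combination in the E table; the duplicate keys in the flat file export are exactly the per-request rows that compression would have merged. A minimal Python sketch of that aggregation (the field layout and sample values are illustrative, not the actual BW schema):

```python
from collections import defaultdict

# Illustrative F-table rows: (request id, characteristic key, key figure).
# In the real F table each load request keeps its own rows, so the same
# characteristic combination can appear once per request.
f_table = [
    ("REQ1", ("2011.TOTAL", "ACTUAL", "REVENUE"), 100.0),
    ("REQ2", ("2011.TOTAL", "ACTUAL", "REVENUE"), -100.0),  # reversal load
    ("REQ3", ("2011.TOTAL", "ACTUAL", "REVENUE"), 120.0),
]

def compress(rows):
    """Aggregate rows by characteristic key, dropping the request id --
    roughly what moving data from the F table to the E table does."""
    e_table = defaultdict(float)
    for _request, key, value in rows:
        e_table[key] += value
    return dict(e_table)

print(compress(f_table))
# One row per key instead of one per request:
# {('2011.TOTAL', 'ACTUAL', 'REVENUE'): 120.0}
```

An uncompressed export would emit all three rows above with the same key; after compression (or an equivalent aggregation step in the export), only the single net row remains.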

Regards,

Gersh

Answers (3)


Former Member
0 Kudos

Setting "Number of requests that you do not want to collapse" to zero is wrong; instead it should be a small minimum, such as 2.


The reason is that when you set it to 0, there will be no data in the F table, as everything is moved to the E table. With no records in the F table, the database statistics have no effect, which can degrade performance drastically.


Check SAP Note 1565292 - "Full/Lite optimize cause performance issue".


This comes from our BPC performance improvement experience. Also remember that this process chain is generic; the system uses the same chain for all environments.


Thanks,

Kali

Former Member
0 Kudos

Hi Joergen Dalby,

Please run a Full Optimization. We had the same issue a few days back, ran a Full Optimization, and that solved the problem.

Regards,

Srinivasan.

Former Member
0 Kudos

Hello srinivasan singari,

unfortunately a "full optimization" does not work. BPC overrules my optimization and says it is not needed.

/Joergen

raghu_ram
Active Contributor
0 Kudos

Hi Joergen,

You can handle the duplicate records in BPC using the Append package.

Regards,

Raghu