
ODS Partitioning?

Former Member
0 Kudos

Hi -

I am looking for guidelines regarding ODS/DSO Partitioning.

We are dealing with the LO extractor which brings billing conditions into BW/BI (We are running BI 7). We expect very large volumes of data from this extractor.

We plan to implement a design that follows EDW standards: first-level write-optimized DSO, second-level standard ODS, and finally a cube. Most of our transformation will occur between the first- and second-level DSOs, including any filtering.

As we work out the details of our design, we are trying to determine whether we should use logical partitioning for the first- and second-level DSO objects.
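To be clear about terminology: by logical partitioning I mean splitting one object into several structurally identical DSOs with a routing rule (e.g. by year), as opposed to DB-level range partitioning of a single table. A minimal sketch of the idea, with plain dicts standing in for DSOs and all names hypothetical:

```python
# Illustrative sketch of logical partitioning: several structurally
# identical "DSOs" (plain dicts here), split by calendar year, with a
# routing rule deciding which target each record lands in.
# All object names (ZDSO_*) are hypothetical, not real BW objects.
from collections import defaultdict

def route_by_year(records):
    """Distribute records across per-year targets keyed by calendar year."""
    partitions = defaultdict(list)
    for rec in records:
        year = rec["calday"][:4]          # 0CALDAY-style format YYYYMMDD
        partitions[f"ZDSO_{year}"].append(rec)
    return partitions

records = [
    {"calday": "20070115", "doc": "90000001", "amount": 100.0},
    {"calday": "20080302", "doc": "90000002", "amount": 250.0},
    {"calday": "20080419", "doc": "90000003", "amount": 75.0},
]

parts = route_by_year(records)
# Each target now holds only one year's data, so any load or lookup
# that is restricted by year touches a fraction of the total volume.
```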

Are there any guidelines around maximum ODS size? Does anyone have recommendations regarding ODS partitioning?

I have searched the forum and found plenty of information on logical partitioning of Cubes but have found nothing regarding ODS objects.

Points will be awarded.... Thanks for your help.

Accepted Solutions (0)

Answers (4)


Former Member
0 Kudos

Thanks for the replies. I know how to partition an ODS (physical partitioning, as you suggested). I also know that logical partitioning, as I am discussing it, is commonly done on a cube to improve reporting performance.

My question: should I logically partition an ODS if I expect very large data volume? If I do not partition, will performance of loading to or from the ODS suffer? If I only have one ODS and it grows to, say, 1 TB, will it be unmanageable? What is the largest size of ODS I should have? Does SAP have any guidelines for this?

Points still to be awarded.

Thanks.

Former Member
0 Kudos

An ODS is just a relational table, and as long as your DB supports it, there is no size limit. Loading to or from the ODS will be faster only if the load has a filter on it. Say you have 3 million records and you only want to load specific number ranges: that load will be improved by partitioning. A complete load will not be improved, since the data will be read sequentially anyway. Activation of the data will also improve significantly with partitioning, since the system needs to find the existing data to overwrite with the new data.
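To picture the filtered-load case above: with range partitions on a time key, a selective read only touches the partitions that overlap the filter, while the rest are never scanned. A toy sketch (just dicts, not a real database engine; keys follow the 0CALMONTH YYYYMM format):

```python
# Sketch of partition pruning: a range filter on the partition key
# (0CALMONTH-style YYYYMM) reads only the overlapping partitions
# instead of scanning the whole table. Hypothetical data.
partitions = {
    "200801": [("doc1", 100.0)],
    "200802": [("doc2", 250.0)],
    "200803": [("doc3", 75.0)],
}

def read_range(parts, lo, hi):
    """Read only partitions whose key falls in [lo, hi] -- pruning."""
    scanned = [k for k in parts if lo <= k <= hi]
    rows = [row for k in scanned for row in parts[k]]
    return scanned, rows

scanned, rows = read_range(partitions, "200802", "200803")
# Only two of the three partitions were touched; a full (unfiltered)
# load would still have to read every partition, which is why a
# complete load does not benefit.
```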

thanks.

Wond

Former Member
0 Kudos

Wond -

Thanks, your answer is helpful. In terms of loading from the ODS: I will be loading a delta to the cube, so the actual load will come from the change log; it will not be by selection. The change log won't be large, so, except for the initial load, it should be manageable.

Regarding activation: when activation occurs, determining whether a record already exists would be done via the semantic key, correct? There would be an index on the key of the ODS, so the search would not be a sequential read through the entire table; it would be an index search. So, would activation really suffer as you suggested?
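To make the activation question concrete: the lookup I am describing is a keyed lookup against the active table's index, not a scan. A minimal sketch, using a dict as a stand-in for the key index (record keys and fields are hypothetical):

```python
# Sketch of the activation lookup discussed above: the active table
# has an index on the semantic key, so finding an existing record to
# overwrite is a keyed lookup, not a sequential read of the table.
# A dict stands in for the index; all keys/fields are hypothetical.
active = {}                      # semantic key -> record

def activate(active_table, new_records):
    """Merge new records into the active table, overwriting by key."""
    overwrites = 0
    for key, rec in new_records:
        if key in active_table:  # index lookup, O(1) here
            overwrites += 1
        active_table[key] = rec
    return overwrites

activate(active, [(("0090000001", "10"), {"amount": 100.0})])
n = activate(active, [(("0090000001", "10"), {"amount": 120.0}),
                      (("0090000002", "10"), {"amount": 50.0})])
# Only the first record in the second batch already existed, so it is
# overwritten in place and the second is inserted as new.
```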

Let's say I decide to partition it: what number of records or size per ODS would I want to achieve? Again, I would look for a guideline from SAP, or from others' experience, that says: if you are going to go through the work of partitioning your ODS, keep each one under X records or X size.

Any ideas?

Some points awarded.... some points remain!! Thanks.

Former Member
0 Kudos

Friends -

This is still unresolved. Looking for advice.

Thanks.

Former Member
0 Kudos

The limit is more in the number of requests loaded than the size of the table.

In case of a large number of requests in an ODS, refer to Note 620361, "Data loading performance/Admin. data target".

This system limitation on the maximum number of requests for one ODS/cube is also described in Notes 543212 and 892513.

To summarize, don't let your ODS get more than 10K requests.

Former Member
0 Kudos

Which BW version are you on?

Cheers!,

Suyash

Former Member
0 Kudos

hi,

partitioning is mostly done on a cube or ODS once a large volume of data has accumulated. So it is better to first load the data from your DataSource and, after some time,

partition the ODS by time period.

Hope this helps; if helpful, provide points.

regards

harikrishna.N

Former Member
0 Kudos

Normally, a cube is partitioned for reporting performance. But here you are using a write-optimized DSO and pushing the data to the cube, and I am assuming you are generating reports from the cube. Then what is the need for ODS partitioning?

thanks.

Wond

Former Member
0 Kudos

Wond -

I do not know that there is a need for partitioning the ODS; that is what I am asking. The ODS could grow to 1 TB in a short amount of time, and I am not sure that a write-optimized DSO or standard ODS can handle this data volume. So I am looking for people's experience with large objects, or any SAP guidelines regarding the same.

In addition, although the main reporting will be done on the cube (which will be summarized), there is a chance that we will have queries that jump to the standard ODS for detail. We will obviously not have any reporting on the write-optimized DSO.

Thanks for your reply.

Former Member
0 Kudos

Double-click on the specific ODS.

Extras > Partitioning

You need to have 0CALMONTH/0FISCPER in your ODS.

This is DB partitioning.

Do this prior to loading data into the InfoProvider.

You can do it after loading also, but then you need to move the data.
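Roughly, that setting gives the database a 0CALMONTH interval and the system lays out one range partition per month within it. A simplified sketch of how such an interval splits into monthly partition keys (this is an illustration only, not the actual DDL the system generates):

```python
# Sketch of splitting a 0CALMONTH interval into monthly partition
# keys, as DB partitioning by 0CALMONTH conceptually does. A
# simplified illustration; the real system also adds catch-all
# partitions outside the interval.
def month_partitions(start, end):
    """List YYYYMM partition keys from start to end, inclusive."""
    y, m = int(start[:4]), int(start[4:])
    out = []
    while (y, m) <= (int(end[:4]), int(end[4:])):
        out.append(f"{y:04d}{m:02d}")
        m += 1
        if m > 12:
            y, m = y + 1, 1
    return out

parts = month_partitions("200711", "200802")
# A 4-month interval spanning a year boundary yields 4 partition keys.
```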

Hope this helps.

Thanks

Mona