
Multi-Temperature Data Management = a bad joke for BW!

Former Member
0 Kudos

Dear SCN readers,
I tried to find the best community for my topic (no "NLS / Archive" community exists) and finally settled on this one, for the reason of "HANA readiness"...

First of all, please forgive me for not having read previous topics, but since the search function did not turn up any topics related to note 1893890 at all, I assume it has never been discussed?!

Probably all of you know this picture of hot, warm and cold data:
http://help.sap.com/saphelp_nw74/helpdata/en/f6/99e81daef24dfa8414b4e104fd76b7/content.htm

I heard that for the migration of SAP BW 7.3x to HANA there are several options, but the best prerequisite is obviously to reduce the database size as much as possible. Independent of that, every BIA has a storage limit, and therefore sooner or later NLS archiving comes into the picture.

Having already gathered quite a bit of bad experience archiving data from DSOs and InfoCubes to near-line storage (NLS), I also found many topics there that do not seem to have been designed very thoughtfully. I will not go into details here (if there is already a thread about "which basic functionality a BW developer would expect from a data store", feel free to share the link; I'd love to contribute there).

OK, coming to the main reason for this posting... it is this note, and the experiences recently made with it:
http://service.sap.com/sap/support/notes/1893890

Quote: "Solution: ... If data is read from an NLS, navigation attribute selections must be avoided as much as possible. This prevents master data from being read with some very complex SQL statement. Even after you implement the correction, avoiding navigation attribute selections may bring about a performance improvement."

@SAP: Honestly? This is your recommended solution for the "cold" data?
Do you mean we should re-model the whole BW data model to avoid using navigational attributes? Are you kidding?

This note is from February 2014, by the way. Is there any update in the meantime that I did not find?

In my case the data is not just "cold", it is literally "dead" data!

Yes, my main query selections are based on navigational attributes (90% of authorization variable selections are on attributes).
And even if SAP can't imagine it, there are scenarios where 5 to 25 million master data records exist for different InfoObjects.
And InfoCubes may contain a volume of 400 million records for just one year of data.

I archived about 100 million records and the report no longer returns any result - EVEN if the selected data is not in the archive!
From my point of view it is a MUST that the data selection recognizes quickly whether NLS needs to be read at all!
That means: regardless of whether the time characteristic used in the archive also appears in the query, I expect a column-optimized NLS to determine quickly that no data matches, rather than causing the whole report to fail.
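To illustrate what I mean by "fast": a column-optimized store typically keeps min/max statistics per partition, so a request outside the archived time range can be ruled out without scanning anything. A toy sketch of that pruning idea in Python (hypothetical partition names and ranges, not how SAP IQ actually implements it):

```python
# Each archived partition stores the value range of its time
# characteristic (e.g. 0CALMONTH) as min/max metadata.
partitions = {
    "ARCH_2010": ("201001", "201012"),
    "ARCH_2011": ("201101", "201112"),
}

def partitions_to_scan(query_from, query_to):
    """Return only the partitions whose range overlaps the query range."""
    return [name for name, (lo, hi) in partitions.items()
            if not (query_to < lo or query_from > hi)]

# A query on 2015 data overlaps no archive partition, so the
# whole NLS read could be skipped instead of failing the report.
print(partitions_to_scan("201501", "201512"))  # []
print(partitions_to_scan("201106", "201108"))  # ['ARCH_2011']
```

With metadata like this, deciding "archive not needed" is a lookup, not a scan.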

This way, with the note still unresolved, it is effectively not possible to use NLS archiving to decrease the database size!

This affects the HANA migration tremendously, I assume - right?
The only remaining option would be selective deletion... and I guess you can imagine how happy the functional departments are with that.

Would the migration to HANA revive my reports when reading archived data?
Or what is the upper limit in millions of records or GB before the same report fails on HANA, too?

Besides that, I would like to know the following from the participants here:

- What is your average data volume archived in "big" reporting InfoProviders (records / GB)?
- Do you have (different) performance experiences with literally "big data" volumes in NLS and reporting (on attributes)?
- Is there any solution for the issues described above? It seems SAP will never provide one. 😞

FYI: The variable to read "with archive" on demand in certain queries is no solution, as the report then never finishes at all anyway.

Thanks for reading and looking forward to your comments,
Martin

Accepted Solutions (0)

Answers (3)


RolandKramer
Active Contributor
0 Kudos

Why not read the blog -

then you don't have to search any further for bugs ...

Best Regards Roland

RolandKramer
Active Contributor
0 Kudos

Hi,

the answer is simple: SAP-NLS -

Only with SAP-NLS will you be able to shrink the main DB (regardless of whether it is SAP HANA or any other DB) and also support all SAP-supported OS versions on which you can run SAP IQ.

Of course, the document SAP First Guidance - SAP-NLS Solution with SAP IQ | SCN covers first of all the SAP IQ implementation, plus everything you need for SAP IQ software lifecycle management. The application-specific part can be found, for example, here: Configuring Sybase IQ as a Near-Line Storage Solution - Configuration - SAP Library

We are in the process of automating the standalone SAP IQ DB installation. The first prototype was used in the TechEd 2015 session DMM267.

I agree with your complaints about the multi-temperature approach. It has been obsolete since Sybase IQ came on board the Real-Time Data Platform. The data in SAP IQ is not cold; it is hot/frozen, and with SAP BW 7.50 it is also changeable (straggler management).

So where, then, is the benefit of dynamic tiering (DT)? Obviously you have already come to a conclusion ...

As for getting rid of the mentioned problem: it is, by the way, still a challenge even with the latest SAP BW 7.40 on HANA together with SAP IQ connected via ODBC (SDA).

PBS, our add-on partner for the SAP-NLS solution, already came up with a solution a few years ago, which is called the "snapshot approach". Regardless of whether you are using SAP HANA or not, it will solve your business problem.

I assume the writers of SAP Note 1893890 are not aware of this. Furthermore, the note has not been changed since 2014 anyway ("paper is patient"). You should follow the recommendations of SAP Note 2165650.

Best Regards

Roland Kramer, PM BW/In-Memory and SAP-NLS

Former Member
0 Kudos

Hi Roland,
first of all: Thank you very much for the open and honest words...

As much as I like reading "... the Multi-Temperature Approach. It is obsolete ..." and that the "mentioned problem" is still a challenge in "SAP BW 7.40 on HANA", it won't help me right now.

Let me ask the following:
Have you or your colleagues tested the "SAP NLS based on SAP IQ" solution with literally "big data", or is your recommendation just a statement from some sales slides?

From my experience (Training & OSS notes), SAP is not really aware of the daily/monthly/yearly volumes of transactional data their customers have!

In my case one InfoCube contains ~2.4 billion records (estimated), and we have master data InfoObjects with 5 to 25 million records.
That means: if I need to select e.g. one year of data (480 million records) and retrieve several attributes of it due to variable selection (authorization), what is the average response time with SAP IQ?
Can you provide any use cases for that? Any performance results with which a scaling is possible for my data volumes?

So, coming back to SAP Note 1893890: is there a way to somehow get an update here, after ~2 years?
I mean, it is valid for 7.30 to 7.40, and since I'm on 7.31, the one you mentioned (SAP Note 2165650) is not applicable to me.
Moreover, Note 2165650 (or, therein, 2063449) does not contain any statement at all about query usage on NLS with navigational attributes.

As of now I have to assume that the same query will not revive in BW 7.40 on HANA, either!

Do you have any more detailed overview of this "snapshot approach"?
The link provides no real technical details... is the solution just creating indices on master data tables?
I'd like to understand how it would be able to solve my problem of selecting on navigational attributes in huge data volumes.

Thanks a lot,
Martin

former_member93896
Active Contributor
0 Kudos

Hi Martin,

please have a look at SAP BW on SAP HANA & SAP HANA Smart Data Access and especially SAP BW on HANA & HANA Smart Data Access - BEx Query Execution. This document describes the cases in which joins can be pushed down to the remote source, which in this scenario would be SAP IQ NLS (= semi-join).

Using navigation attributes works fine if they are used to filter the data. The remaining master data SIDs would be transferred to SAP IQ to process the join directly in IQ.

A different scenario, which will always be challenging in federated landscapes, is using navigational attributes to group/aggregate data (without much filtering). This is shown in the last chapter of the document, and this is what the SAP Notes you mentioned are referring to. In such a case one can end up with very large tables on both sides of the federation, and performance will take a hit (compared to having all data locally). Since it depends mostly on customer data models and volumes, and there are also different runtime requirements/expectations, we can't give more detailed recommendations. It might work for some, but not for others. If such a scenario is critical for your implementation, then I recommend doing a proof of concept with your particular data, model, and queries.
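The filter case can be sketched in a few lines of Python (a toy model of the federation with made-up SIDs and values, not actual BW/IQ code): the navigation-attribute filter is resolved locally against master data, and only the resulting small SID set is shipped to the remote side, instead of a full table.

```python
# Local BW side: master data mapping SIDs to a navigation attribute
# (e.g. country of a customer).
master_data = {1: "DE", 2: "FR", 3: "US", 4: "DE"}

# Remote NLS side: archived fact rows (SID, amount) keyed by the same SID.
nls_facts = [(1, 100), (2, 200), (3, 300), (4, 400), (1, 50)]

def query_with_semi_join(attr_value):
    # Step 1 (local): resolve the navigation-attribute filter to a SID set.
    sids = {sid for sid, attr in master_data.items() if attr == attr_value}
    # Step 2 (remote): ship only the small SID set and let the remote
    # side filter and aggregate its facts against it.
    return sum(amount for sid, amount in nls_facts if sid in sids)

print(query_with_semi_join("DE"))  # 550  (SIDs 1 and 4)
```

In the group/aggregate case without a selective filter, step 1 yields no small SID set, so either the master data or the fact rows must cross the federation boundary wholesale, which is where the performance hit comes from.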

Best,
Marc Bernard
SAP Product Management EDW (BW/HANA)

RolandKramer
Active Contributor
0 Kudos

Hi,

In the meantime I have collected some information about SAP-NLS together with the SDA usage:

Increasing the SAP-NLS Performance

Best Regards Roland

Former Member
0 Kudos

Thanks to all for the document links.

As long as BW 7.40/HANA is not in place it won't help me, but I will keep it in mind for later....

Best regards,

Martin

lbreddemann
Active Contributor
0 Kudos

Moving this thread, as NLS relates only to SAP BW.