Intro: I will try to keep this post updated if your comments show I have missed something important or if your approach differs significantly from mine. I would appreciate your input.
In this “how-to” post I aim to cover the additional steps that arise when using HANA Live models in a sidecar scenario next to your main (mature) SAP installation running on a different DB. This blog post is of lesser interest where an existing ECC / CRM system has been migrated to HANA DB, because the required tables are then readily available under the SAP_ECC / SAP_CRM schema. All you need to do is install the HANA Live content and start using / modifying it.
Just a short recap: the scenario I am trying to cover here is the least-risk (and, possibly, least-cost) approach to leveraging the advantages of HANA with minor changes to an existing ECC/CRM landscape, where an additional HANA instance is connected to the “main” system running on a traditional relational database. It may be your own HANA box, or any of the cloud solutions on the market.
I do not try to cover HANA Live as such; there is the course HA900 for that, and a wonderful post by mohammad.safiullah, too.
It is worth mentioning that HANA Live is a set of HANA Virtual Data Models (read: HANA views) that cover Reporting / Analytics needs directly on the HANA system, without the need to batch-load the data out of the HANA DB. In my personal opinion, HANA Live is a great delivery by SAP, which already contains quite a few pre-built models (at the time of writing this blog, about 1000 of them for ECC and the same for CRM). So, as with Business Content for BW, you are not starting from scratch, but already have something to impress your stakeholders with :smile:
Unlike Business Content, it is an all-or-nothing installation, and new versions overwrite any modifications you have made without asking you upfront; therefore it is always advised to follow the Copy --> Modify approach.
The goal of this step is to understand which tables you really need (remember, we will need to set up their replication from the current DB of the ECC/CRM system to the HANA box) and which data out of those tables we need, both field- and content-wise.
I ended up with a combined list of about 230 tables. If you think the full-blown approach is better, you may use the list of all 550 tables required for ECC; use Note 1781992 to obtain it. A similar note exists for CRM, too.
The system that will “shadow” the selected ECC / CRM tables is SLT (aka SAP Landscape Transformation, aka Replication Server). It is often installed on the same box for DEV and QAS environments, but a dedicated SLT system is used for PROD.
Under "Table space assignment", an own tablespace is recommended for easier monitoring of the sizes of the logging tables (see Section 5.6 of the Installation Guide).
Do not underestimate the:
These are the key things not to miss, but we trust that BASIS have done their job right :wink:
You may want to start with a small table to find out whether your ideas work, and then move on to the bigger ones. For monitoring, use Transaction LTRC.
Most of the “Advanced replication settings” addressed below are found in Transaction LTRS.
Note that loading via SLT happens like this:
If you want to restrict the fields for transfer (e.g. for those huge tables where only 20 fields out of the 250 available in SAP are used in HANA models), right-click on “Table Settings” and start from there.
* A tip: you can cross-check the “Original” structure of a particular table in the metadata table DD03L against the “Desired” structure from the HANA Live model perspective (e.g. by parsing the XML of a model that uses the given table). A simple VLOOKUP in Excel will get you to the fields you want to exclude, and you can do that en masse.
Then you can click on “Mass change” and add them all.
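The same cross-check can be scripted instead of done with VLOOKUP. The sketch below is a minimal illustration only: the XML snippet and the DD03L field list are hypothetical, and real HANA Live model files use a richer structure, so adapt the parsing to the actual XML you export.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of a model definition; real HANA Live view XML
# has a different schema, so adjust the element/attribute names.
model_xml = """
<view>
  <entry columnName="MANDT"/>
  <entry columnName="BELNR"/>
  <entry columnName="GJAHR"/>
</view>
"""

# The "Original" field list of the table as found in DD03L
# (in practice: export the FIELDNAME column for your table).
dd03l_fields = ["MANDT", "BELNR", "GJAHR", "BLART", "BUKRS", "XBLNR"]

root = ET.fromstring(model_xml)
# Fields the model actually consumes
needed = {e.attrib["columnName"] for e in root.iter("entry")}

# Fields present in the table but unused by the model: candidates
# for exclusion in the SLT "Table Settings"
exclude = [f for f in dd03l_fields if f not in needed]
print(exclude)  # ['BLART', 'BUKRS', 'XBLNR']
```

The resulting `exclude` list is what you would then apply en masse via “Mass change”.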
If you need to restrict the data (filter), use transaction LTRC, right-click “Rule Assignment”, choose “Add a Table” and create a field- or event-based rule.
For more on those, consult Section 5 of the guide attached to Note 1733714 - Guide for Advanced Replication Settings. If you have taken training HA300, it is all there, too :wink:
While loading, you will notice that SLT first reads all the data records from SAP into SLT, then does the filtering in SLT and transfers the result. For a better solution, check the next chapter.
If you want to avoid the extensive data transfer between the systems (trust me, you probably do), read this blog post by tobias.koebler, which explains how to use the table DMC_ACSPL_SELECT so that the filtering is executed in the source system.
To illustrate the difference, here we go: both source tables in the picture below hold 24 million records each, yet their processing times differ by a factor of 10. Note that I used an SLT filter on MSEG (87% of the time spent on reading, because it reads everything) and an SQL filter on BKPF (17% of the time spent on reading).
Now try to imagine the difference on a table with a billion records :wink:
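To make the reasoning explicit, here is a toy model of the data volumes in each mode. The 24 million rows mirror the example above; the 10% match ratio is purely hypothetical, and the model deliberately ignores everything except row counts.

```python
TOTAL_ROWS = 24_000_000   # source table size, as in the example above
MATCH_RATIO = 0.10        # hypothetical share of rows passing the filter

def filter_in_slt(total, ratio):
    # SLT-side filter: every record is read from the source system and
    # shipped to SLT; only there is the filter applied, and the
    # remainder is written to HANA.
    return {"read_from_source": total,
            "written_to_hana": int(total * ratio)}

def filter_in_source(total, ratio):
    # Source-side filter (WHERE clause pushed down via DMC_ACSPL_SELECT):
    # only matching records ever leave the source system.
    matching = int(total * ratio)
    return {"read_from_source": matching,
            "written_to_hana": matching}

print(filter_in_slt(TOTAL_ROWS, MATCH_RATIO))
print(filter_in_source(TOTAL_ROWS, MATCH_RATIO))
```

With these numbers, the SLT-side filter moves ten times more data out of the source than the pushed-down filter, which matches the 87% vs 17% read-time split observed above.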
Now you have your tables in place, populated with data, in the desired HANA schema. That means you can redeploy the inactive HANA Live content; the models of interest should turn active, and running them displays data that can be consumed by Excel, Analysis, Lumira or any other tool that supports HANA views. If some of the views are not active, you can go back to the Explorer and do another loop of analysis to see which tables are needed, cross-checking them against the actual content of the SAP_ECC / SAP_CRM schema.
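That second analysis loop boils down to a set difference: tables a view depends on versus tables already replicated into the schema. A minimal sketch, with entirely hypothetical table lists:

```python
# Tables a given (still inactive) HANA Live view depends on
# -- hypothetical example
view_tables = {"BKPF", "BSEG", "T001"}

# Tables actually replicated into the SAP_ECC schema so far
# -- hypothetical example
replicated = {"BKPF", "T001", "MSEG", "MKPF"}

# Tables still missing: add these to the SLT replication list
missing = sorted(view_tables - replicated)
print(missing)  # ['BSEG']
```

In practice you would feed the first set from the view's table list and the second from a query on the schema's table catalog.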
If your project is long-running and there is a chance of a QAS system refresh during the course of your project, bear in mind that you might run into the need to reload the data (re-initialize). To avoid such a situation, this blog post by sharanchama comes in handy.
For more on SLT-related Data Provisioning, especially if N:1 scenarios are used, a very useful reference is this blog post by the SLT guru tobias.koebler. Check out his other posts as well; they are very handy.
-------------------------------------------------
Dmitry has covered a variety of hands-on development and architecture roles on HANA, BW, BO and ABAP.
He has contributed to the success of some major international organizations, advising on architecture and on various mixes of SAP-driven Business Intelligence technologies to best suit the client's needs in the long run.
You can follow Dmitry's posts on SCN and on www.bi-consulting.eu.