This blog gives a short overview of the LSA (Layered, Scalable Architecture). A basic understanding of the LSA is necessary in order to understand the LSA Data Flow Templates.
The LSA proposes a standard set of layers. The services offered by the various layers are described below.
Depending on customer preferences, operational conditions, and the like, there may be additional layers. Not all layers necessarily host persistent InfoProviders (InfoProviders that store data in database tables).
Note that the LSA is very strict in terms of where the necessary transformations should be performed during the staging process:
This applies to all LSA data flow templates offered.
The graphic below provides a rough overview of the data flow activities (from left to right):
The Acquisition Layer is the Inbox of the Data Warehouse:
The data is passed from the Acquisition Layer to the Harmonization and Quality Layer, which is responsible for transforming the extracted data in accordance with common semantics and values. The final result is stored in Propagation Layer DataStore objects. What happens in this layer depends largely on the quality and integration level of the extracted data. There is often no explicit data storage in this layer. This is especially true with data flows that receive data from SAP DataSources, as the data derived from the SAP sources is frequently already in good shape.
Please note: No business scenario-driven transformations are allowed here, as this would reduce or prevent the reusability of the data in the Propagation Layer.
The Data Propagation Layer offers persistent data (stored in DataStore objects), which is compliant with the defined company quality, integration and unification standards. The Data Propagation Layer meets the ‘extract once deploy many’ and ‘single version of truth’ requirements (reusability).
The data mart-related layers get their data from the Data Propagation Layer. All business-related requirements are modeled in these layers.
In the Business Transformation Layer, data transformations take place that serve to fulfill the Data Mart scope. Dedicated DataStore objects in the Business Transformation Layer might be necessary if data from various Propagation Layer DataStore objects has to be joined or merged.
Please note: Apply business transformation rules only on top of the reusable Propagation Layer DataStore objects.
As the name implies, the Reporting Layer contains the reporting-related InfoProviders (Architected Data Marts). The Reporting Layer objects can be implemented as InfoCubes (with or without BW Accelerator) or sometimes as DataStore objects.
To ensure greater flexibility, the queries should always be defined on a MultiProvider.
For large BWs, especially global ones, the LSA suggests introducing standardized semantic partitioning of transactional data on the Data Acquisition Layer, applied throughout your BW Data Warehouse or large parts of it. Standardized means that all InfoProviders (or areas thereof) are semantically partitioned using the same partitioning strategy, and the semantic partitioning of data has to be stable over time. Splitting transactional data in BW by markets or groups of markets is an example from the consumer products industry of a strategic semantic partitioning of a BW EDW. The LSA calls the resulting parts of the BW "Domains". Organizational characteristics, such as 0COMP_CODE and 0CO_AREA, serve as semantic partitioning criteria that implement these market-related Domains.
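The idea of Domain partitioning by an organizational characteristic can be sketched as follows. This is a minimal, hypothetical illustration in Python: the company-code prefixes, Domain letters, and record layout are invented for the example and are not part of the LSA itself; in a real BW the criteria would be maintained on the SPO.

```python
# Hypothetical sketch: routing records to LSA Domains based on an
# organizational characteristic (here 0COMP_CODE). The company-code
# prefixes and Domain letters below are invented for illustration.

# Assumed, stable mapping: company-code prefix -> Domain letter.
DOMAIN_BY_COMP_CODE = {
    "US": "U",  # Domain US
    "DE": "E",  # Domain Europe (assumed)
    "CN": "A",  # Domain Asia (assumed)
}

def domain_for(comp_code: str) -> str:
    """Return the Domain letter for a 0COMP_CODE value.

    The mapping must be exhaustive and stable over time -- changing
    the partitioning strategy later would invalidate the Domains.
    """
    try:
        return DOMAIN_BY_COMP_CODE[comp_code[:2]]
    except KeyError:
        raise ValueError(f"no Domain defined for company code {comp_code!r}")

records = [{"0COMP_CODE": "US01"}, {"0COMP_CODE": "DE20"}]
for rec in records:
    print(rec["0COMP_CODE"], "->", domain_for(rec["0COMP_CODE"]))
```

The point of the explicit `ValueError` is that an unmapped company code should fail loudly rather than silently land in the wrong Domain.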
The semantic partitioning modeling pattern is supported in BW 7.3 by the Semantically Partitioned Object (SPO). SPOs allow automated definition and (to a large degree) automated maintenance of partitions and partition criteria values. More information: http://help.sap.com/saphelp_nw73/helpdata/en/d1/468956248e4d9ca351896d54ab3a78/frameset.htm
Please note: In addition to the described SPOs derived from Domain partitioning strategy, there might be additional partitions for certain InfoProviders (by time for example).
The picture shows a global BW, partitioning the transactional data into three Domains.
Depending on the source system landscape, this results in splitting and/or collecting data flows on the Data Acquisition Layer.
Split of data flows according to LSA Domains:
Collect data flows according to LSA Domains:
Please note that there are no domains in the Corporate Memory Layer.
InfoSources play a decisive role in the LSA. The larger a BW becomes, the more important it is to use InfoSources to ensure maintainability of the data flows and overall flexibility. As of BW 7.0, InfoSources are no longer required for modeling a data flow: transformation rules can be implemented directly between InfoProviders. With larger BWs, however, and with transactional data flows semantically partitioned into LSA Domains as shown above, this results in a large number of transformation rules, which can be confusing. Any change to the semantically partitioned InfoProviders then requires a change to all of these transformation rules. Using an inbound InfoSource in front of an InfoProvider, and an outbound InfoSource after an InfoProvider, makes it possible to bundle the flow logic: the transformation logic is always located between the outbound InfoSource of the source InfoProvider and the inbound InfoSource of the target InfoProvider. This is illustrated in the graphic below:
Note that there are never transformations between the inbound InfoSource and the InfoProvider, or between the InfoProvider and the outbound InfoSource. This flow logic is guaranteed and implemented automatically when using SPOs.
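A back-of-the-envelope calculation shows why this matters at scale. Wiring every source partition directly to every target partition requires one transformation per pair, whereas routing through an outbound and an inbound InfoSource leaves a single transformation between the two InfoSources (the partition hook-ups carry no logic). The numbers below are illustrative:

```python
# Illustrative count of transformation rules to maintain, comparing
# direct partition-to-partition wiring with InfoSource bundling.

def direct_transformations(source_partitions: int, target_partitions: int) -> int:
    # One transformation rule per (source partition, target partition) pair.
    return source_partitions * target_partitions

def infosource_transformations(source_partitions: int, target_partitions: int) -> int:
    # A single transformation, located between the outbound InfoSource
    # of the source and the inbound InfoSource of the target.
    return 1

for n, m in [(3, 3), (10, 10)]:
    print(f"{n}x{m} partitions: direct={direct_transformations(n, m)}, "
          f"via InfoSources={infosource_transformations(n, m)}")
```

With three Domains on each side that is already 9 rules versus 1, and any change to the transformation logic has to be made in each of them when wiring directly.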
The LSA framework, defined by layers and Domains, is reflected in the naming of the BW metadata, or, to be exact, in the technical names of the InfoProviders.
The naming of InfoProviders is limited by the DataStore object naming restriction (8 characters) and by the restrictions on SPO naming.
As far as the actual naming is concerned, customers have a significant level of freedom. The InfoProviders of LSA data flow templates are named as follows:
Note: The naming used in the LSA data flow templates should serve as an example. Naming should always be checked in accordance with customer requirements before starting implementation.
Example:
P | L | S | H | D | 0 | U | 0 |
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
1 P Propagation Layer InfoProvider
2-5 LSHD Area filled from Sales Order Header DataSource
6 0 1st InfoProvider with respect to area in this layer
7 U Domain US
8 0 No further logical/semantic partitioning
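The example naming scheme above can be decoded mechanically. The sketch below is a hypothetical Python helper; the field positions and meanings follow the PLSHD0U0 example only, since the actual scheme is customer-specific:

```python
# Sketch of decoding the example 8-character LSA naming convention
# shown above (PLSHD0U0). The field layout is taken from that example
# and is customer-specific, not a fixed SAP standard.

def decode_lsa_name(name: str) -> dict:
    if len(name) != 8:
        raise ValueError("technical names in this scheme have 8 characters")
    return {
        "layer": name[0],         # e.g. 'P' = Propagation Layer InfoProvider
        "area": name[1:5],        # e.g. 'LSHD' = Sales Order Header DataSource area
        "sequence": name[5],      # e.g. '0' = 1st InfoProvider of the area in this layer
        "domain": name[6],        # e.g. 'U' = Domain US
        "subpartition": name[7],  # e.g. '0' = no further logical/semantic partitioning
    }

print(decode_lsa_name("PLSHD0U0"))
```

Encoding the layer, area, and Domain in fixed positions like this is what makes large InfoProvider lists searchable by pattern.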
Note: Letters qualifying the partitions of an SPO are only enabled using the SPO BAdI. For more information: Semantically Partitioned Objects (SPOs) built from BAdI
Two InfoSources are always generated automatically when you activate an SPO: one InfoSource in front of the SPO partitions (the inbound InfoSource) and another to manage the data flow to the targets (the outbound InfoSource).
The InfoSource name is derived from the name of the SPO:
Example: SPO name is PLSHD0
Data Transfer Processes (DTPs) can be generated for SPOs if the transformations between source InfoProvider and SPO are active. The DTP description comprises the technical name of the source InfoProvider and the technical name of the target SPO partition, PLSHD0U0 -> DSISY002 for example.