Background

This blog gives a short overview on the LSA. A basic understanding of the LSA is necessary in order to understand the LSA Data Flow Templates.

LSA Data Layer - Structuring the Data Flows

The LSA proposes a standard set of layers. The services offered by the various layers are described below. 

  • There are four EDW layers: the Data Acquisition layer, the Corporate Memory layer, the Harmonization & Quality layer and the Data Propagation layer
  • There are three Architected Data Mart related layers: the Business Transformation layer, the Reporting (Architected Data Marts) layer and the Virtualization layer

 

 

Depending on customer preferences and on operational conditions and the like, there might be additional layers.  Not all layers necessarily host persistent InfoProviders (InfoProviders that store data in database tables).

Note that the LSA is very strict in terms of where the necessary transformations should be performed during the staging process:

  • Between the Acquisition Layer and the Propagation Layer (that is, in the Harmonization & Quality Layer), only transformations that are not specific to a business purpose are used. These transformations convert the data into the data warehouse data model. As a result, the (EDW) Propagation Layer offers data with corporate-compliant semantics and values, which can be reused by different data marts.
  • Business-specific transformations, which serve only a certain data mart scope, take place only on top of the (EDW) Propagation Layer (in the Business Transformation Layer).

This applies to all LSA data flow templates offered.

The graphic below provides a rough overview of the data flow activities (from left to right):

Data Acquisition Layer

The Acquisition Layer is the Inbox of the Data Warehouse:

  • Fast inbound & outbound flow of data to targets
  • Accepts data from extraction with as little overhead as possible – no early checks, merges, or transformations (1:1)
  • Adds information to each record, such as origin (source system), load time, and origin organizational data (such as company code). This makes standardized administration and addressing of all records possible (see the sketch after this list)
  • Provides abstraction of Data Warehouse from sources of data
  • Provides short term history of extracted data for immediate/short term data inspection
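
The following minimal Python sketch illustrates the inbound idea described in this list: a record is taken over 1:1 and only enriched with administrative attributes. The field names (SOURCE_SYSTEM, LOAD_TIMESTAMP, ORIGIN_COMP_CODE) and the sample values are assumptions for illustration only, not BW metadata.

  from datetime import datetime, timezone

  def acquire(record: dict, source_system: str) -> dict:
      """Take over an extracted record 1:1 and add administrative attributes only."""
      enriched = dict(record)                    # no early checks, merges or transformations
      enriched["SOURCE_SYSTEM"] = source_system  # origin (source system)
      enriched["LOAD_TIMESTAMP"] = datetime.now(timezone.utc).isoformat()   # load time
      enriched["ORIGIN_COMP_CODE"] = record.get("COMP_CODE")                # origin organizational data
      return enriched

  print(acquire({"COMP_CODE": "1000", "NET_VALUE": 42.0}, source_system="ERP_US"))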

Harmonization and Quality Layer

The data is passed from the Acquisition Layer to the Harmonization and Quality Layer, which is responsible for transforming the extracted data in accordance with common semantics and values. The final result is stored in Propagation Layer DataStore objects. What happens in this layer depends largely on the quality and integration level of the extracted data. There is often no explicit data storage in this layer. This is especially true with data flows that receive data from SAP DataSources, as the data derived from the SAP sources is frequently already in good shape.

Please note: No business scenario-driven transformations are allowed here, as this would reduce or prevent the reusability of the data in the Propagation Layer.
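
As an illustration of such a purely technical alignment, here is a hypothetical Python sketch that maps source-specific units and country codes to corporate-compliant values; the mapping tables and field names are invented for this example and carry no business logic.

  UNIT_MAP = {"KGM": "KG", "PCE": "PC"}        # source unit -> corporate unit
  COUNTRY_MAP = {"USA": "US", "GER": "DE"}     # source code -> corporate code

  def harmonize(record: dict) -> dict:
      """Align semantics and values with the data warehouse data model."""
      out = dict(record)
      out["UNIT"] = UNIT_MAP.get(record["UNIT"], record["UNIT"])
      out["COUNTRY"] = COUNTRY_MAP.get(record["COUNTRY"], record["COUNTRY"])
      return out

  print(harmonize({"UNIT": "KGM", "COUNTRY": "USA", "NET_VALUE": 42.0}))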

Data Propagation Layer

The Data Propagation Layer offers persistent data (stored in DataStore objects), which is compliant with the defined company quality, integration and unification standards. The Data Propagation Layer meets the ‘extract once deploy many’ and ‘single version of truth’ requirements (reusability).

Business Transformation Layer

The data mart related layers get their data from the Data Propagation Layer. All business related requirements are modeled in these layers.

In the Business Transformation Layer, data transformations take place that serve to fulfill the Data Mart scope. Dedicated DataStore objects in the Business Transformation Layer might be necessary if data from various Propagation Layer DataStore objects has to be joined or merged.

Please note:  Only apply business transformation rules on reusable Propagation Layer DataStore Objects.
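
To make the difference to the Harmonization & Quality Layer concrete, the following hypothetical Python sketch merges data from two Propagation Layer DataStore objects and derives a business-specific key figure; the field names and the margin rule are invented and would only be valid for one particular Data Mart scope.

  def business_transform(order_header: dict, order_item: dict) -> dict:
      """Join/merge Propagation Layer data and apply a business-specific derivation."""
      merged = {**order_header, **order_item}        # merge of two Propagation Layer records
      merged["MARGIN"] = merged["NET_VALUE"] - merged["COST"]  # business-scenario-specific rule
      return merged

  header = {"DOC_NUMBER": "4711", "NET_VALUE": 100.0}
  item = {"DOC_NUMBER": "4711", "COST": 80.0}
  print(business_transform(header, item))            # adds MARGIN = 20.0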

Reporting layer

As the name implies, the Reporting layer contains the reporting related InfoProviders (Architected Data Marts). The Reporting Layer Objects can be implemented as InfoCubes with or without BW Accelerator or sometimes as DataStore Objects.

Virtualization Layer

To ensure greater flexibility, the queries should always be defined on a MultiProvider.

LSA Data Domains – Collect & Split Data Flows

For large BWs, especially global ones, the LSA suggests introducing standardized semantic partitioning of transactional data on the Data Acquisition Layer throughout your BW Data Warehouse or large parts of it. Standardized means that all InfoProviders (or areas thereof) are semantically partitioned using the same partitioning strategy, and the semantic partitioning has to be stable over time. Splitting transactional data in BW by markets or groups of markets is an example from the consumer products industry of a strategic semantic partitioning of a BW EDW. The LSA calls the resulting parts of the BW 'Domains'. Organizational characteristics, like 0COMP_CODE and 0CO_AREA, serve as semantic partitioning criteria that implement these market-related Domains.

The semantic partitioning modeling pattern is supported in BW 7.3 by the Semantically Partitioned Object (SPO). SPOs allow automated definition and (to a large degree) automated maintenance of partitions and partition criteria values. More information: http://help.sap.com/saphelp_nw73/helpdata/en/d1/468956248e4d9ca351896d54ab3a78/frameset.htm 

Please note: In addition to the described SPOs derived from Domain partitioning strategy, there might be additional partitions for certain InfoProviders (by time for example).
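
The following Python sketch only illustrates the idea of a stable Domain assignment based on an organizational characteristic such as 0COMP_CODE; the company codes, Domain letters and the lookup itself are invented for illustration. In BW 7.3 this assignment is maintained declaratively as partition criteria of an SPO rather than coded.

  # Hypothetical mapping: company code -> Domain letter (e.g. 'U' = US Domain)
  DOMAIN_BY_COMP_CODE = {
      "1000": "U",   # US
      "2000": "E",   # Europe
      "3000": "A",   # Asia
  }

  def domain_for(record: dict) -> str:
      """Return the Domain a transactional record belongs to (stable over time)."""
      return DOMAIN_BY_COMP_CODE[record["COMP_CODE"]]

  print(domain_for({"COMP_CODE": "1000", "NET_VALUE": 42.0}))   # -> 'U'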

 

The picture shows a global BW, partitioning the transactional data into three Domains.

Depending on the source system landscape, this results in splitting and/or collecting data flows on the Data Acquisition Layer.

Splitting data flows according to LSA Domains:

Collecting data flows according to LSA Domains:

 

Please note that there are no Domains in the Corporate Memory Layer.

The Role of InfoSources in the LSA

InfoSources play a decisive role in the LSA. The larger a BW becomes, the more important it is to use InfoSources to ensure maintainability of the data flows and overall flexibility. Starting with BW 7.0, InfoSources are no longer required for modeling a data flow; the transformation rules can be implemented directly between InfoProviders. With larger BWs and semantic partitioning of transactional data flows into LSA Domains as shown above, however, this results in a large number of transformation rules, which can be confusing, and any change to the semantically partitioned InfoProviders results in a change to all transformation rules. Using an inbound InfoSource in front of an InfoProvider, and an outbound InfoSource after an InfoProvider, makes it possible to bundle the flow logic: the transformation logic is always located between the outbound InfoSource of the source InfoProvider and the inbound InfoSource of the target InfoProvider. This is illustrated in the graphic below:

 

Note that there are never transformations between the inbound InfoSource and the InfoProvider, or between the InfoProvider and the outbound InfoSource. This flow logic is guaranteed and implemented automatically when SPOs are used.
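
A simple way to see the benefit is to count the transformations that have to be maintained when both source and target are semantically partitioned; the numbers in this Python sketch are hypothetical and only illustrate the bundling effect of the outbound/inbound InfoSource pair.

  def transformations_without_infosources(source_partitions: int, target_partitions: int) -> int:
      # Every source partition needs its own transformation into every target partition.
      return source_partitions * target_partitions

  def transformations_with_infosources(source_partitions: int, target_partitions: int) -> int:
      # The flow logic is bundled: exactly one transformation between the outbound
      # InfoSource of the source and the inbound InfoSource of the target; the
      # partition-to-InfoSource connections are generated automatically (1:1).
      return 1

  print(transformations_without_infosources(3, 3))   # 9 transformations to maintain
  print(transformations_with_infosources(3, 3))      # 1 transformation to maintain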

 

LSA Template Naming Conventions

The LSA framework, defined by layers and Domains, is reflected in the naming of the BW metadata, or, to be exact, in the technical names of the InfoProviders.

LSA Template Naming Conventions for InfoProviders

The naming of InfoProviders is limited by the DataStore object naming restriction (8 characters) and the restrictions on SPO naming.

As far as the actual naming is concerned, customers have a significant level of freedom. The InfoProviders of LSA data flow templates are named as follows:

  • Layer (byte 1): 1-character abbreviation to qualify the layer the InfoProvider is located in
  • Area (bytes 2 to 5): 4-character abbreviation to qualify the data content of the InfoProvider
    • A DataSource-related identifier for all EDW layer InfoProviders, or
    • A business scenario-related identifier for all Data Mart layer InfoProviders
  • Sequence number (byte 6): there might be more than one InfoProvider for a specific area in a specific layer
  • Domain (byte 7): 1-character abbreviation to qualify the Domain the InfoProvider is located in (U for US, for example)
  • Sub-partition (byte 8): 1-character abbreviation to qualify further partitioning of an InfoProvider (1 for 2011, for example)

Note: The naming used in the LSA data flow templates should serve as an example. Naming should always be checked in accordance with customer requirements before starting implementation.

Example: PLSHD0U0

Byte:       1  2  3  4  5  6  7  8
Character:  P  L  S  H  D  0  U  0

  • Byte 1 (P): Propagation Layer InfoProvider
  • Bytes 2-5 (LSHD): area filled from the Sales Order Header DataSource
  • Byte 6 (0): 1st InfoProvider with respect to this area in this layer
  • Byte 7 (U): Domain US
  • Byte 8 (0): no further logical/semantic partitioning
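
The decomposition of the example name can also be expressed as a small Python sketch; the function itself is not part of the LSA templates and only mirrors the byte positions listed above.

  def parse_lsa_name(name: str) -> dict:
      """Decompose an 8-character LSA InfoProvider name into its parts."""
      assert len(name) == 8, "the LSA template naming uses all 8 characters"
      return {
          "layer": name[0],          # 'P' = Propagation Layer
          "area": name[1:5],         # 'LSHD' = Sales Order Header
          "sequence": name[5],       # '0' = 1st InfoProvider for this area in this layer
          "domain": name[6],         # 'U' = Domain US
          "sub_partition": name[7],  # '0' = no further partitioning
      }

  print(parse_lsa_name("PLSHD0U0"))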

Note: Letters qualifying the partitions of an SPO are only enabled using the SPO BAdI. For more information: Semantically Partitioned Objects (SPOs) built from BAdI

 

LSA Template Naming Conventions for InfoSources

Two InfoSources are always generated automatically when you activate an SPO: one InfoSource in front of the SPO partitions – the inbound InfoSource – and another InfoSource to manage the data flow to the targets – the outbound InfoSource.

The InfoSource name is derived from the name of the SPO:

Example:   SPO name is PLSHD0

  • PLSHD0_I – name of the InfoSource before the SPO (I = Inbound)
  • PLSHD0_O – name of the InfoSource after the SPO (O = Outbound)

LSA Template Naming Conventions for DTPs

Data Transfer Processes (DTPs) can be generated for SPOs if the transformations between the source InfoProvider and the SPO are active. The DTP description comprises the technical name of the source InfoProvider and the technical name of the target SPO partition, for example PLSHD0U0 -> DSISY002.
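
The naming rules for the generated InfoSources and the DTP description can be summarized in a short Python sketch; the helper functions are purely illustrative and do not correspond to any SAP API.

  def infosource_names(spo_name: str) -> tuple:
      """Derive the inbound (_I) and outbound (_O) InfoSource names from the SPO name."""
      return spo_name + "_I", spo_name + "_O"

  def dtp_description(source: str, target: str) -> str:
      """Build the DTP description from the two technical names involved."""
      return source + " -> " + target

  print(infosource_names("PLSHD0"))              # ('PLSHD0_I', 'PLSHD0_O')
  print(dtp_description("PLSHD0U0", "DSISY002")) # reproduces the example description above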
