
SAP NetWeaver Business Warehouse 7.4 - End to End Use Case Data Warehousing

This document gives an overview of all Business Process Groups, Business Processes, and Process Steps that are necessary to implement the Data Warehouse End to End Use Case within the SAP NetWeaver Business Warehouse component. Special focus is placed on all relevant aspects of a DWH implementation where the SAP Business Warehouse application runs powered by the SAP HANA DB.


1. Installation of the BW system and the underlying database

 

  • Install the BW application server. Choose the database type for the BW system (a standard RDBMS for which BW provides a ported DBSL, or the MaxDB-based foundation for MPP databases such as Teradata).
    Follow the installation steps described in the installation guide for the application and database server.
  • For SAP NW BW 7.4 it is strongly recommended to run the solution on the SAP HANA database. SAP NW BW powered by SAP HANA delivers significant performance gains and simplifies the implementation process.
    Find all information regarding upgrade and migration under: http://www.saphana.com/docs/DOC-3294

 

2. Data Modeling

 

  • Business Content Analyzer
    SAP NW BW provides a large predefined data model that reflects many SAP ERP application models across all industries. The Business Content Analyzer gives you an overview of this content and helps you decide which Business Content/Technical Content objects need to be activated.
    Before you can work with BI Content objects, you must activate the BI Content by copying the objects from the delivery version (D version) to the active version (A version). You can choose whether to copy, match, or not install the BI Content objects.
  • Create Data Model According to the new BW powered by SAP HANA adapted LSA++ (Layered Scalable Architecture for BW on SAP HANA)
    Create the necessary LSA Layers (e.g. Data Acquisition Layer, Propagation Layer, Reporting Layer) consisting of InfoCubes and DataStore objects for the persistency and the according objects for the dataflow model in between.
    Documentation on LSA++: http://help.sap.com/saphelp_nw73/helpdata/en/22/ad6fad96b547c199adb588107e6412/content.htm?frameset=/en/ad/6b023b6069d22ee10000000a11402f/frameset.htm
    If your BW data model requirements are not met by the BW Business Content, create your own InfoCubes and DataStore objects using delivered InfoObjects or your own.

3. Set up the source system

 

  • Set-Up System Environment for Partner Solutions
    • Set up NLS Connection
      In order to realize a solid data aging strategy, you can decide to use the BW Near-Line Storage solution, with which you can archive data while keeping it available for query access and ETL purposes.
    • Set up BWA Connection
      In an SAP HANA environment the functionality of the BW Accelerator is no longer necessary and is therefore no longer supported.

 

  • Set-up Connection to SAP Applications Using SAP DataSources and Standard Extractors
    • Activate Business Content extractors in SAP BW. After your InfoProviders are ready, create the extractors that will be used to load data into the BW system. Activate the DataSources for extracting data from the SAP applications.

 

  • Set-up Connection to Process Integration (SAP NetWeaver PI)
    • Activate Business Content for PI services/proxies
      To be able to extract data from third-party systems, activate the content required for setting up the PI proxies.

 

  • Set-up Connection Using SAP Business Objects Data Services (Data Integrator)
    • Create Data Services datastores to initiate ETL processes
      Another way to extract data from third-party systems is to use SAP BusinessObjects Data Services. Set up the connection between BW and Data Services, and then create the datastores (sources of data) in it.

 

4. Metadata Management

 

  • Analyze metadata from non-SAP source systems
    Load the metadata information into SAP BusinessObjects Metadata Management. To be able to analyze and manage the metadata coming from various source systems, use SAP BusinessObjects Metadata Management. You can harmonize and maintain the metadata of the various systems using this product.

5. Define Extraction

 

  • Enhance SAP source system Business Content extractors according to your needs (appends for extract structures and BAdIs for extractors)
    Before creating a new DataSource, consider whether your needs can be met by enhancing a standard Business Content DataSource. You can enrich the delivered DataSources to meet your requirements, for example by adding additional fields to the extract structure via an append and filling them in the extractor enhancement.
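    As an illustration of the append-and-exit pattern, a field added to an extract structure is typically filled in the customer exit for transactional DataSources (function exit EXIT_SAPLRSAP_001, implemented in include ZXRSAU01) or in BAdI RSU5_SAPI_BADI. The sketch below is a hedged example: the DataSource 2LIS_11_VAITM is standard, but the appended field ZZREGION and the lookup logic are assumptions.

```abap
* Include ZXRSAU01 - customer exit EXIT_SAPLRSAP_001 for transactional
* DataSources. Sketch only: the appended field ZZREGION and the lookup
* table/field names are assumptions; adapt them to your own append.
DATA: ls_data TYPE mc11va0itm.

CASE i_datasource.
  WHEN '2LIS_11_VAITM'.
    LOOP AT c_t_data INTO ls_data.
*     Fill the appended field, e.g. from a customer master attribute
      SELECT SINGLE regio FROM kna1
             INTO ls_data-zzregion
             WHERE kunnr = ls_data-kunnr.
      MODIFY c_t_data FROM ls_data.
    ENDLOOP.
ENDCASE.
```

    Note that per-row SELECTs as above are only acceptable for small volumes; in practice you would buffer the lookup table in an internal table.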

 

  • Create ETL and data quality strategy for the source data.
    Create Data Services workflows to establish ETL processes. You can use SAP BusinessObjects Data Quality to cleanse, harmonize, and enhance the quality of your data. The various Data Quality transforms, such as address cleansing, can be used to enhance your BW data.

6. Define Transformations

 

  • Define KPIs for analytic applications
    Create BW transformations. Identify the various KPIs that need to appear in your reports and dashboards. Based on these, define which fields from the source systems should be mapped to your InfoProviders. Once the extraction and the InfoProviders have been set up, define the transformation, i.e. the mapping between the source fields and the target fields (InfoProviders), in the system. You can use various types of transformations to meet your needs.
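    For KPI derivations that go beyond simple field mappings, BW transformations offer start, end, field, and expert routines implemented in a generated ABAP class. A minimal sketch of an end routine follows; the result-package type is generated per transformation, and the InfoObjects ZREVENUE, ZCOST, and ZMARGIN are hypothetical examples.

```abap
* End routine in the generated transformation class (sketch).
* The type _ty_s_TG_1 is generated per transformation; the
* InfoObjects ZREVENUE, ZCOST and ZMARGIN are assumptions.
METHOD end_routine.
  FIELD-SYMBOLS <result_fields> TYPE _ty_s_tg_1.

  LOOP AT result_package ASSIGNING <result_fields>.
*   Derive a KPI during the load: margin = revenue - cost
    <result_fields>-/bic/zmargin =
      <result_fields>-/bic/zrevenue - <result_fields>-/bic/zcost.
  ENDLOOP.
ENDMETHOD.
```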

 

  • Define the necessary technical and quality-related transformations in Data Services. Create transforms/queries in Data Services (address cleansing, parsing, matching, geocoding, etc.)
    To use the data quality features of SAP BusinessObjects Data Quality, create the data flows in the DQ system using DQ transforms. For example, you can add an address cleansing transform to your data flow to cleanse address data against a standard directory.

 

7. Define Data Flow

 

  • Standard Data Flow
    Create InfoPackages, Data Transfer Processes and the Process Chains.
    After the target InfoProviders, transformations, and extractors have been set up, define the ETL for the InfoProviders. For this, use the delivered objects or create your own. Define the layers in the LSA based on your needs for operational, strategic, and transactional data. For these layers, define the data flows using Data Transfer Processes and InfoPackages.

 

  • Enabling direct access of data
    Define transient/remote InfoProviders. Using VirtualProviders, data can be accessed without being stored in the BI system.
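    A VirtualProvider based on a function module reads its data at query runtime through a service function module; SAP delivers the template RSAX_BIW_GET_DATA_SIMPLE for this purpose. The following is a heavily simplified sketch of the pattern: the source table ZSOURCE_TABLE and the one-shot paging are assumptions, and the real template handles selections and data packaging in more detail.

```abap
FUNCTION z_virtprov_get_data.
* Service function module for a function-module-based VirtualProvider,
* modeled on the SAP template RSAX_BIW_GET_DATA_SIMPLE. The source
* table ZSOURCE_TABLE and the simplified paging are assumptions.
  STATICS sv_done TYPE abap_bool.

  IF sv_done = abap_true.
*   Signal to the OLAP engine that all data has been delivered
    RAISE no_more_data.
  ENDIF.

* Read the requested data directly from the source at query runtime
  SELECT * FROM zsource_table
           INTO TABLE e_t_data
           UP TO i_maxsize ROWS.
  sv_done = abap_true.
ENDFUNCTION.
```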

 

  • Real time Data acquisition
    Create daemons for RDA. Data can be made available in real time in the operational data store using real-time data acquisition technology. It is now also possible to load master data with RDA.

 

  • Define HybridProviders for BWA
    Define HybridProviders. Create HybridProviders, which combine real-time information with historic data from your newly created InfoCubes and DSOs. The data flow within a HybridProvider can be generated automatically, depending on which underlying objects have been chosen.
    Option1 (with InfoCube and VirtualProvider): Direct access to source system leveraging a Virtual InfoProvider with a function module.
    Option 2 (with InfoCube and DataStore object): DataStore object can be fed by a Real-Time Data Acquisition (RDA) data flow

 

  • Copy Data Flow
    Copy the data flow to the InfoProvider. You copy an existing SAP NetWeaver BW 7.x data flow, i.e. DataSources, data targets, and transformations, using a wizard-like user interface. You can copy a data flow to a source system that is only available in the productive system landscape by assigning a dummy source system in the development environment.

 

  • Migrate Data Flow
    Migrate the data flow to the InfoProvider. Migrate data flows with 3.x objects (3.x InfoSources, 3.x DataSources, transfer rules, update rules), including the adaptation of InfoPackages, process variants, process chains, and VirtualProviders ('RemoteCubes'). Restoring a 3.x data flow (from snapshots of the objects made during migration) is also supported.

 

  • Generate Data Flow
    Automatically generate a data flow. You can generate a data flow automatically, including all related objects. The type of data target (InfoProvider) can be configured individually, and InfoObjects can be derived and generated automatically from the DataSource definition, or existing objects can be reused.

8. Define Data Distribution

 

  • Define Data Distribution to other systems
    Create an Open Hub Destination (files, third-party tools, database tables).
    This allows you to distribute data from a BI system to non-SAP data marts, analytical applications, and other applications. It ensures controlled distribution across multiple systems. Database tables (in the database of the BI system) and flat files can act as open hub destinations. You can also extract the data to non-SAP systems using the open hub APIs and a third-party tool.

 

9. Scheduling and Monitoring

 

  • Schedule Process Chains
    Schedule the execution of the process chains, either immediately or in set time intervals, possibly embedded in a meta chain. Use process chains to schedule the data processing and observe it using a monitor. Process chains automate complex schedules in BW with event-controlled processing, visualize the processes using network graphics, and allow the processes to be controlled and monitored centrally.
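    Besides scheduling in the GUI, process chains can also be triggered programmatically via the process chain API. A minimal sketch using the standard function module RSPC_API_CHAIN_START follows; the chain name ZPC_DAILY_LOAD is hypothetical.

```abap
REPORT z_start_chain.
* Trigger a process chain via the process chain API.
* ZPC_DAILY_LOAD is a hypothetical chain name; replace it with yours.
DATA lv_logid TYPE rspc_logid.

CALL FUNCTION 'RSPC_API_CHAIN_START'
  EXPORTING
    i_chain = 'ZPC_DAILY_LOAD'
  IMPORTING
    e_logid = lv_logid.

WRITE: / 'Process chain started with log ID', lv_logid.
```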
  • Control operations and processes of the BW system using the BW administration cockpit
    Install and configure the BI Administration Cockpit. The BI Administration Cockpit allows you to navigate to the relevant BI systems, transactions, and queries to analyze system performance and resolve issues without explicitly logging on to any system. You configure it by installing the portal and proceeding from there. You should activate the required technical content first.

 

10. Performance Optimization

 

  • Create aggregates and caching strategies
    Create the aggregates for the InfoCubes, either based on system proposals or manually. You can store a dataset of an InfoCube redundantly and persistently in consolidated form on the database as aggregates, which allows quick access to InfoCube data during reporting. The OLAP processor accesses these aggregates automatically during query execution.

 

  • Business Warehouse Accelerator for system performance
    Create the BWA index for each InfoCube that you want to speed up, i.e. the InfoCubes you want to enable for fast TREX-based access through the BWA.

 

  • Partition the InfoProviders (DSO and InfoCubes)
    Partition the InfoCubes/DSOs based on a criterion. Use semantic partitioning of InfoCubes and DataStore objects along criteria such as calendar year or region to manage high data volumes and data logistics, thus improving performance. The corresponding modeling objects, such as data transfer processes and transformations, as well as a process chain, are generated automatically.

 

  • If your BW system is powered by SAP HANA, the aforementioned steps are obsolete. Instead, migrated or newly created InfoCubes and DataStore objects are converted to, or created in, a so-called in-memory-optimized version (SAP HANA-optimized DataStore objects and SAP HANA-optimized InfoCubes) that optimizes the persistence of these objects with regard to load, query, and volume performance.

Documentation on the SAP HANA-optimized InfoCube: http://help.sap.com/saphelp_nw73/helpdata/en/e1/5282890fb846899182bc1136918459/frameset.htm 
Documentation on the SAP HANA-optimized DataStore object: http://help.sap.com/saphelp_nw73/helpdata/en/32/5e81c7f25e44abb6053251c0c763f7/frameset.htm

 

11. Data Management regarding Near-line Storage and data aging

 

  • Since BW 7.3 SP09, BW supports an SAP-owned NLS implementation for Sybase IQ 15.4. This allows read-only data from BW InfoProviders to be moved, time slice by time slice, into Sybase IQ with fully transparent support for reporting and ETL capabilities.
    This approach completes a multi-temperature strategy for BW data: hot (i.e. heavily used) data is stored in SAP HANA memory, warm (frequently used) data stays in the SAP HANA file system under the control of the non-active concept for BW tables, and cold (infrequently read) data resides in Sybase IQ (details under http://scn.sap.com/docs/DOC-39944).

 

  • To use SAP partner solutions for near-line storage and archiving of aging data, implement the following steps.
    • Setting up connection with the NLS product of your partner
      After the NLS connection has been set up, decide which data needs to be archived and define a Data Archiving Process for the corresponding InfoProvider.
    • Define Data Archiving Processes for the particular InfoProviders
      You also have the option of using the classic ADK archiving concept for archiving your data.

12. User Management

 

  • Define the authorizations for the BW objects.
    • Set up Standard Authorizations
      An authorization refers to an authorization object and defines one or more values for each field contained in the authorization object. Authorization objects for the various areas are delivered for this purpose.
    • Set up Analysis Authorizations
      This type of authorization is intended for all users who want to display transaction data from authorization-relevant characteristics or navigation attributes. It takes the reporting and analysis features of BI into consideration and thus helps to protect especially critical data much more effectively.
