
SAP Business Warehouse


So I made an algorithm that you can use to check whether the members selected from characteristics will return no data:

Algorithm to determine if members selected from SAP BW characteristics will result in no data being extracted - IP.com

1      HANA based BW Transformation


 

This blog provides information on the push-down feature for transformations in SAP BW powered by SAP HANA. The content is based on experiences with real customer issues. The material is partly taken from the upcoming version of the SAP education course PDEBWP - BW Backend and Programming.


This blog is planned as part of a blog series which shares experiences collected while working on customer issues. The listed explanations are primarily based on releases between BW 7.40 SP09 and BW 7.5 SP00.

 

The following additional blogs are planned / available:

  • HANA based Transformation (deep dive)
  • DTP Source - Target Dependencies
  • Analyzing and debugging HANA based BW Transformations
  • SAP HANA Analysis Process
  • General recommendation
  • New features delivered by 7.50 SP04
    • Routines
    • Error Handling

 

A HANA based BW transformation is a “normal” BW transformation. The new feature is that the transformation logic is executed inside the SAP HANA database. From a design time perspective, in the Administrator Workbench, there is no difference between a HANA based BW transformation and a BW transformation that is executed in the ABAP stack. By default, the BW runtime tries to push down all transformations to SAP HANA. Be aware that there are some restrictions which prevent a push down. For example, a push-down to the database (SAP HANA) is not possible if a BW transformation contains one or more ABAP routines (Start-, End-, Expert- or Field-Routine). For more information see Transformations in SAP HANA Database.

 

Restrictions for HANA Push-Down

Further restrictions are listed in the Help Portal. However, the documentation is not all-inclusive. Some restrictions related to complex and "hidden" features in a BW transformation are not listed in the documentation. In this context “hidden” means that the real reason is not directly visible inside the BW transformation.

The BAdI RSAR_CONNECTOR is a good example of such a “hidden” feature. A transformation that uses a customer specific formula implementation based on this BAdI cannot be pushed down. In this case the processing mode is switched to ABAP automatically.

The BW workbench offers a check button in the BW transformation UI to check if the BW transformation is “SAP HANA executable” or not. The check will provide a list of the features used in the BW transformation which prevent a push down.

 

SAP is constantly improving the push down capability by eliminating more and more restrictions. In order to implement complex customer specific logic inside a BW transformation it is possible to create SAP HANA Expert Script based BW transformations. This feature is similar to the ABAP based Expert-Routine and allows customers to implement their own transformation logic in SQLScript. A detailed description of this feature is included later on.

 

SAP Note 2057542 - Recommendation: Usage of HANA-based Transformations provides some basic information and recommendations regarding the usage of SQL Script inside BW transformations.

 

1.1      HANA Push-Down

What is a SAP HANA push down in the context of BW transformations? When does a push down occur? What are the prerequisites for forcing a SAP HANA push down?

Before I start to explain how a SAP HANA based BW transformation could be created and what prerequisites are necessary to force a push down I will provide some background information on the differences between an ABAP and SAP HANA executed BW transformation.

A HANA based BW transformation executes the data transformation logic inside the SAP HANA database. Figure 1.1 shows on the left-hand side the processing steps for an ABAP based transformation and on the right-hand side for a SAP HANA based transformation.

 

Figure_1_1.png

Figure 1.1: Execution of SAP BW Transformations


An ABAP based BW transformation loads the data package by package from the source database objects into the memory of the Application Server (ABAP) for further processing. The BW transformation logic is executed inside the Application Server (ABAP) and the transformed data packages are shipped back to the Database Server. The Database Server writes the resulting data packages into the target database object. Therefore, the data is transmitted twice between database and application server.

 

During processing of an ABAP based BW transformation, the source data package is processed row by row (row-based). The ABAP based processing allows you to define field-based rules, which are processed as sequential processing steps.

 

For the HANA based BW transformation the entire transformation logic is transformed into a CalculationScenario (CalcScenario). From a technical perspective the metadata for the CalcScenario is stored as a SAP HANA Transformation in BW (see transaction RSDHATR).

 

This CalcScenario is embedded into a ColumnView. To select data from the source object, the DTP creates a SQL SELECT statement based on this ColumnView (see blog »Analyzing HANA based BW transformation«) and the processing logic of the CalcScenario applies all transformation rules (defined in the BW transformation) to the selected source data. By shifting the transformation logic into the CalcScenario, the data can be transferred directly from the source object to the target object within a single processing step. Technically this is implemented as an INSERT AS SELECT statement that reads from the ColumnView and inserts into the target database object of the BW transformation. This eliminates the data transfer between Database Server and Application Server (ABAP). The complete processing takes place in SAP HANA.
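Schematically, the generated statement follows this pattern. This is only a sketch: the ColumnView name, target table and field list below are made-up placeholders, since the real object names are generated by the BW framework.

```sql
-- All object names here are illustrative placeholders, not real generated names.
INSERT INTO "/BIC/AZSALES2"                        -- target: active table of the DataStore Object
SELECT "MATERIAL", "PLANT", "QUANTITY", "RECORDMODE"
  FROM "_SYS_BIC"."TR_XYZ123/COLUMNVIEW"           -- ColumnView embedding the CalcScenario
 WHERE "CALDAY" BETWEEN '20160301' AND '20160331'; -- DTP filter values, added in a pre-step
```

Because the SELECT and the INSERT both run inside SAP HANA, no data package ever leaves the database.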


1.2      Create a HANA based BW Transformation

The following steps are necessary to push down a BW transformation:

  • Create a SAP HANA executable BW transformation
  • Create a Data Transfer Process (DTP) to execute the BW transformation in SAP HANA


1.2.1       Create a standard SAP HANA executable BW transformation

A standard SAP HANA executable BW transformation is a BW transformation without a SAP HANA specific implementation that would force a SAP HANA execution.

The BW Workbench tries to push down new BW transformations by default.

The activation process checks a BW transformation for unsupported push down features such as ABAP routines. For a detailed list of restrictions see SAP Help: Transformations in SAP HANA Database. If none of these features are used in a BW transformation, the activation process will mark the BW transformation as SAP HANA Execution Possible, see (1) in Figure 1.2.

 

Figure_1_2.png

Figure 1.2: First simple SAP HANA based Transformation

 

When a BW transformation can be pushed down, the activation process generates all necessary SAP HANA runtime objects. The required metadata is also assembled in a SAP HANA Transformation (see Transaction RSDHATR). The related SAP HANA Transformation for a BW transformation can be found in menu Extras => Display Generated HANA Transformation, see (2) in Figure 1.2.

 

From a technical perspective a SAP HANA Transformation is a SAP HANA Analysis Process (see Transaction RSDHAAP) with a strict naming convention. The naming convention for a SAP HANA Transformation is TR_<<Program ID for Transformation (Generated)>>, see (3) in Figure 1.2. A SAP HANA Transformation is only a runtime object which cannot be explicitly created or modified.

 

The tab CalculationScenario is only visible if the Export Mode (Extras => Export Mode On/Off) is switched on. The tab shows the technical definition of the corresponding CalculationScenario which includes the transformation logic and the SQLScript procedure (if the BW transformation is based on a SAP HANA Expert Script).

 

If the transformation is marked as SAP HANA Execution Possible, see (1) in Figure 1.2, the first precondition to push down and execute the BW transformation inside the database (SAP HANA) is fulfilled. If the flag SAP HANA Execution Possible is set, the BW transformation can be executed in both modes (ABAP and HANA); the processing mode actually used is set in the DTP. To be prepared for both processing modes the BW transformation framework generates the runtime objects for both modes. Therefore the Generated Program (see Extras => Display Generated Program) for the ABAP processing is also visible.

 

The next step is to create the corresponding DTP, see paragraph 1.2.4 »Create a Data Transfer Process (DTP) to execute the BW transformation in SAP HANA«.

 

1.2.2       Create a SAP HANA transformation with SAP HANA Expert Script

 

If the business requirement is more complex and it is not possible to implement these requirements with the standard BW transformation feature, it is possible to create a SQLScript procedure (SAP HANA Expert Script). When using a SAP HANA Expert Script to implement the business requirements the BW framework pushes the transformation logic down to the database. Be aware that there is no option to execute a BW transformation with a SAP HANA Expert Script in the processing mode ABAP, only processing mode HANA applies.

 

From the BW modelling perspective a SAP HANA Expert Script is very similar to an ABAP Expert Routine. The SAP HANA Expert Script replaces the entire BW transformation logic. The SAP HANA Expert Script has two parameters, one importing (inTab) and one exporting (outTab) parameter. The importing parameter provides the source data package and the exporting parameter is used to return the result data package.

 

However, there are differences between ABAP and SQLScript from an implementation perspective. An ABAP processed transformation loops over the source data and processes it row by row. A SAP HANA Expert Script based transformation tries to process the data in one block (INSERT AS SELECT). To get the best performance benefit from the push down it is recommended to use declarative SQLScript logic to implement your business logic within the SAP HANA Expert Script, see blog »General recommendations«.
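As a sketch, a declarative SAP HANA Expert Script body transforms the whole inTab in one set-based statement instead of looping row by row. The field names and the filter rule below are illustrative, not taken from a real transformation:

```sql
-- Declarative SQLScript: one set-based statement instead of a row-by-row loop.
-- Field names are illustrative placeholders.
outTab = SELECT "MATERIAL",
                "PLANT",
                "QUANTITY",
                "RECORDMODE",
                "RECORD"
           FROM :inTab
          WHERE "QUANTITY" > 0;   -- example of a simple transformation rule
```

A single assignment like this lets the SQLScript optimizer merge the logic into the surrounding CalcScenario, which imperative loops and cursors would prevent.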

 

The following points should be considered before the business requirements are implemented with SAP HANA Expert Script:

  • From today's perspective, ABAP is the more powerful language compared to SQLScript
  • Development support features such as syntax highlighting, forward navigation based on error messages and debugging support are better in the ABAP development environment
  • SQLScript development experience is currently not as widespread as ABAP development experience
  • A HANA executed transformation is not always faster

 

From the technical perspective the SAP HANA Expert Script is a SAP HANA database procedure. From the BW developer perspective the SAP HANA Expert Script is a SAP HANA database procedure implemented as a method in an AMDP (ABAP Managed Database Procedure) class.

 

The AMDP class is generated by the BW framework and can only be modified within the ABAP Development Tools for SAP NetWeaver (ADT), see https://tools.hana.ondemand.com/#abap. The generated AMDP class cannot be modified in SAP GUI based tools such as the Class Builder (SE24) or the ABAP Workbench (SE80). Therefore it is recommended to implement the entire dataflow in the Modeling Tools for SAP BW powered by SAP HANA, see https://tools.hana.ondemand.com/#bw. The BW transformation itself must still be implemented in the Data Warehousing Workbench (RSA1).

 

Next I’ll give a step by step introduction to creating a BW transformation with a SAP HANA Expert Script.

 

Step 1: Start SAP HANA Studio with both tools installed:

  • ABAP Development Tools for SAP NetWeaver (ADT) and
  • Modeling Tools for SAP BW powered by SAP HANA

 

Now we must switch to the BW Modeling Perspective. To open it, go to Window => Other… and select the BW Modeling Perspective in the dialog that appears, see Figure 1.3.

 

Figure_1_3.png

Figure 1.3: Open the BW Modeling Perspective

 

To open the embedded SAP GUI a BW Project is needed, so the BW Project must be created before calling the SAP GUI. To create a new BW Project open File => New => BW Project. A SAP Logon Connection is required: choose the SAP Logon connection and use the Next button to enter your user logon data.

 

Recommendation: After entering your logon data it is possible to finalize the wizard and create the BW Project. However, I recommend using the Next wizard page to change the project name. The default project name is:

 

     <System ID>_<Client>_<User name>_<Language>

 

I normally add a postfix for the project type at the end, such as _BW for the BW Project. For an ABAP project I will later use the postfix _ABAP. The reason is that both project types use the same symbol in the project viewer, and the postfix makes it easier to identify the right project.

 

Once the BW Project is created we can open the embedded SAP GUI. The BW Modeling perspective toolbar provides a button to open the embedded SAP GUI, see Figure 1.4.

 

Figure_1_4.png

Figure 1.4: Open the embedded SAP GUI in Eclipse

 

Choose the created BW Project in the upcoming dialog. Next start the BW Workbench (RSA1) within the embedded SAP GUI and create the BW transformation or switch into the edit mode for an existing one.

 

To create a SAP HANA Expert Script open Edit => Routines => SAP HANA Expert Script Create in the menu of the BW transformation. Confirm the request to delete the existing transformation logic. Keep in mind that all existing implementations such as Start-, End- or Field-Routines and formulas will be deleted if you confirm the creation of a SAP HANA Expert Script.


In the next step the BW framework opens the AMDP class by calling the ABAP Development Tools for SAP NetWeaver (ADT). For this an ABAP project is needed. Select an existing ABAP Project or create a new one in the dialog.

 

A new window with the AMDP class will appear. Sometimes it is necessary to reload the AMDP class by pressing F5. Enter your credentials if prompted.


The newly generated AMDP class, see Figure 1.5, cannot be activated directly.


Figure_1_5.png

Figure 1.5: New generated AMDP Class


Before I explain the elements of the AMDP class and the method I will finalize the transformation with a simple valid SQL statement. The used SQL statement, as shown in Figure 1.6, is a simple 1:1 transformation and is only used as an example to explain the technical behavior.


Figure_1_6.png

Figure 1.6: Simple valid AMDP Method

 

Now we can activate the AMDP class and go back to the BW transformation by closing the AMDP class window. It is then necessary to also activate the BW transformation. For a BW transformation with a SAP HANA Expert Script the flag SAP HANA Execution possible is set, see Figure 1.7.

 

Figure_1_7.png

Figure 1.7: BW Transformation with SAP HANA Script Processing


As explained before, if you use a SAP HANA Expert Script the BW transformation can only be processed in SAP HANA. It is not possible to execute the transformation on the ABAP stack. Therefore the generated ABAP program (Extras => Display Generated Program) is not available for a BW transformation with the processing type SAP HANA Expert Script.


1.2.2.1       Sorting after call of expert script


Within the BW transformation the flag Sorting after call of expert script, see Figure 1.8, (Edit => Sorting after call of expert script) can be used to ensure that the data is written in the correct order to the target.


Figure_1_8.png

Figure 1.8: Sorting after call of expert script


If the data is extracted by delta processing the sort order of the data could be important (depending on the type of the used delta process).

 

By default, the flag is always set for all new transformations and it’s recommended to leave it unchanged.

 

For older transformations, created with a release before 7.40 SP12, the flag is not set by default. So the customer can set the flag if they need the data in a specific sort order.

 

Keep in mind that the flag has impact at two points:

  • The input/output structure of the SAP HANA Expert Script is enhanced / reduced by the field RECORD
  • The result data from the SAP HANA Expert Script will be sorted by the new field RECORD, if the flag is set, after calling the SAP HANA Expert Script


The inTab and the outTab structure of a SAP HANA Expert Script will be enhanced by the field RECORD if the flag is set. The added field RECORD is a combination of the fields REQUESTSID, DATAPAKID and RECORD from the source object of the transformation, see Figure 1.9.


Figure_1_9.png

Figure 1.9: Concatenated field RECORD
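Sketched in SQLScript, the framework's handling corresponds to something like the following. This is purely illustrative: the concatenation shown is an assumption for clarity, and the actual generated expression and field list may differ.

```sql
-- Illustrative only: the generated logic combines the three source fields
-- into one sortable RECORD key, roughly like this.
outTab = SELECT "MATERIAL",
                "QUANTITY",
                "REQUESTSID" || "DATAPAKID" || "RECORD" AS "RECORD"
           FROM :inTab;
```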


The RECORD field from the outTab structure is mapped to the internal field #SOURCE#.1.RECORD. Later on in a rownum node of the CalculationScenario the result data will be sorted by the new internal field #SOURCE#.1.RECORD, see Figure 1.10.


Figure_1_10.png

Figure 1.10: CalculationScenario node rownum


1.2.2.2       The AMDP Class


The BW transformation framework generates an ABAP class with a method called PROCEDURE. The class implements the ABAP Managed Database Procedure (AMDP) marker interface IF_AMDP_MARKER_HDB. The interface marks the ABAP class as an AMDP class. A method of an AMDP class can be written as a database procedure. Therefore the BW transformation framework creates a HANA specific database procedure declaration for the method PROCEDURE, see Figure 1.11:


Figure_1_11.png

Figure 1.11: Method PROCEDURE declaration


This declaration binds the method to the SAP HANA database (HDB), sets the language to SQLSCRIPT and defines the database procedure as READ ONLY. The read only option means that the method / procedure must be side-effect free. Side-effect free means that only SQL elements (DML) that read data can be used. Statements like DELETE, UPDATE or INSERT on persistent database objects are not allowed. These data modification statements can also not be encapsulated in a further procedure called from the script.


You cannot directly read data from a database object managed by ABAP, like a table, view or procedure, inside an AMDP procedure, see (1) in Figure 1.12. A database object managed by ABAP has to be declared before it can be used inside an AMDP procedure, see (2). For more information about the USING option see AMDP - Methods in the ABAP documentation.


Figure_1_12.png

Figure 1.12: Declaration of DDIC objects
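The pieces described above fit together roughly as follows. This is a structural sketch, not the generated code: the class name, the type definitions and the way the table /BIC/ATK_RAWMAT2 is joined are illustrative assumptions.

```abap
" Sketch of the structure the BW framework generates; class name, types and
" the join on /BIC/ATK_RAWMAT2 are illustrative assumptions.
CLASS zcl_expert_script_sketch DEFINITION PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_amdp_marker_hdb.               " marks the class as an AMDP class
    TYPES: BEGIN OF ty_s_data,
             material TYPE c LENGTH 18,
             quantity TYPE p LENGTH 17 DECIMALS 3,
             record   TYPE c LENGTH 56,
           END OF ty_s_data,
           ty_t_data TYPE STANDARD TABLE OF ty_s_data WITH EMPTY KEY.
    CLASS-METHODS procedure
      IMPORTING VALUE(intab)  TYPE ty_t_data
      EXPORTING VALUE(outtab) TYPE ty_t_data.
ENDCLASS.

CLASS zcl_expert_script_sketch IMPLEMENTATION.
  METHOD procedure BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT
                   OPTIONS READ-ONLY              " procedure must be side-effect free
                   USING /bic/atk_rawmat2.        " declare the DDIC object before use
    -- declarative, read-only SQLScript; the declared table can now be joined
    outtab = SELECT i.material, i.quantity, i.record
               FROM :intab AS i
              INNER JOIN "/BIC/ATK_RAWMAT2" AS m
                 ON m.material = i.material;
  ENDMETHOD.
ENDCLASS.
```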


The AMDP framework generates wrapper objects for the declared database objects managed by ABAP. The view /BIC/5MDEH7I6TAI98T0GHIE3P69D1=>/BIC/ATK_RAWMAT2#covw in (3) was generated for the declared table /BIC/ATK_RAWMAT2 in (2). The blog Under the HANA hood of an ABAP Managed Database Procedure provides some further background information about AMDP processing and which objects are generated.


AMDP Class modification

Only the method implementation belongs to the BW transformation metadata and only this part of the AMDP class is stored, see table RSTRANSCRIPT.


Currently the ABAP Development Tools for SAP NetWeaver (ADT) do not protect the source code that should not be modified, as is the case in an ABAP routine. That means all modifications in the AMDP class outside the method implementation will not be transported to the next system and will be overwritten by the next activation process. The BW transformation framework regenerates the AMDP class during the activation process.


Later on I’ll provide some general recommendations in a separate blog which are based on experiences we collected in customer implementations and customer incidents. The general recommendation will cover the following topics:

  • Avoid preventing filter push down
  • Keep internal tables small
  • Initial values
  • Column type definition
  • Avoid implicit casting
  • Use of DISTINCT
  • Potential pitfall at UNION / UNION ALL
  • Input Parameter inside underlying HANA objects
  • Internal vs. external format
  • ...


1.2.3       Dataflow with more than one BW transformation


The push down option is not restricted to data flows with one BW transformation. It is also possible to push down a complete data flow with several included BW transformations (called a stacked data flow). To get the best performance benefits from the push down it is recommended to limit a stacked data flow to a maximum of three BW transformations. More are possible but not recommended.

 

InfoSources (see SAP Help: InfoSource and Recommendations for Using InfoSources) in a stacked data flow can be used to aggregate data within the data flow if the processing mode is set to ABAP. If the processing mode is set to SAP HANA the data will not be aggregated as defined in the InfoSource settings. The transformation itself does not know the processing mode, therefore you will not get a message about the InfoSource aggregation behavior. The processing mode is set in the DTP.

 

That means, the BW transformation framework prepares the BW transformation for both processing modes (ABAP and HANA). During the preparation the framework will not throw a warning regarding the lack of aggregation in the processing mode HANA.


By using the check button for the HANA processing mode within the BW transformation, you will get the corresponding message (warning) regarding the InfoSource aggregation, see Figure 1.13.

 

Figure_1_13.png

Figure 1.13: HANA processing and InfoSources


CalculationScenario in a stacked data flow

The corresponding CalculationScenario for a BW transformation is not available if the source object is an InfoSource. That means the tab CalculationScenario is not available in the export mode of the SAP HANA Transformation, see Extras => Display Generated HANA Transformation. The source object for this CalculationScenario would be an InfoSource, and an InfoSource cannot be used as a data source object in a CalculationScenario. The related CalculationScenario can only be obtained via the SAP HANA Transformation of the corresponding DTP. I’ll explain this behavior later on in the blog »HANA based Transformation (deep dive)«.

 

1.2.4       Create a Data Transfer Process (DTP) to execute the BW transformation in SAP HANA


The Data Transfer Process (DTP) to execute a BW transformation provides a flag to control the HANA push-down of the transformation. The DTP flag SAP HANA Execution, see (1) in Figure 1.14, can be checked or unchecked by the user. However, the flag in the DTP can only be checked if the transformation is marked as SAP HANA Execution Possible, see (1) in Figure 1.2. By default the flag SAP HANA Execution will be set for each new DTP if

  • the BW transformation is marked as SAP HANA execution possible and
  • the DTP does not use any options which prevent a push down.

 

Up to BW 7.50 SP04 the following DTP options prevent a push down:

  • Semantic Groups
  • Error Handling - Track Records after Failed Request


The DTP UI provides a check button, like the BW transformation UI, to validate a DTP for HANA push down. In case a DTP is not able to push down the data flow (all involved BW transformations) logic, the check button will provide the reason.

 

Figure_1_14.png

Figure 1.14: DTP for the first simple SAP HANA based Transformation

 

In the simple transformation sample above I’m using one BW transformation to connect a persistent source object (DataSource (RSDS)) with a persistent target object (Standard DataStore Object (ODSO)). We also call this type a non-stacked dataflow - I’ll provide more information about non-stacked and stacked data flows later. The related SAP HANA Transformation for a DTP can be found in menu Extras => Display Generated HANA Transformation, see (2) in Figure 1.14. In case of a non-stacked data flow the DTP uses the SAP HANA Transformation of the BW transformation, see (3) in Figure 1.14.

 

The usage of a filter in the DTP does not prevent the HANA push down. ABAP Routines or BEx Variables can be used as well. The filter value(s) is calculated in a pre-step and added to the SQL SELECT statement which reads the data from the source object. We will look into this later in more detail.

 

1.2.5       Execute a SAP HANA based transformation

 

From the execution perspective a HANA based transformation behaves comparably to an ABAP based transformation: simply press the 'Execute' button or execute the DTP from a process chain.

 

Later on I will provide more information about packaging and parallel processing.

 

1.2.6       Limitations

 

There is no option to execute a transformation with a SAP HANA Expert Script on the ABAP application server. With BW 7.50 SP04 (the next feature pack) it is planned to deliver further options to use SAP HANA scripts (Start-, End- and Field-Routines are planned) within a BW transformation.

One of the most common issues with BW data loads is incorrect data from the source system. For occasional failures we edit the PSA records instead of using a routine, since it doesn't need development work and transports. If we need to correct multiple records, it is painful to correct them one by one. In this blog, I will show how to correct multiple records at once.

Example:

  1. You have loaded the data and it failed with incorrect data. You have checked the PSA records and noticed there are multiple records with the same issue.

1.png

 

   2. You can filter the records which have incorrect data, select all of them and click the ‘Edit’ button.

2.png

 

    3. A blank record opens up on a pop-up screen.  Enter the correct data and save.

3.png

   4. Now you can check that the data is corrected for all the records you have selected.

4.png

 

Don't forget to notify the owners/analysts to correct the data in the source system.

There are scenarios where a Transformation End Routine is a good fit. In my blog I will demonstrate how to simplify Transfer Rules by means of:

 

Reducing Coding

 

In my case I load PO GR data and look up multiple characteristic values from PO Item level. Instead of repeating similar lookup / mapping code for each characteristic in individual Transfer Rules, I did it once in the End Routine. This saved not only coding effort, but also increased performance by reducing the number of lookups.

 

End Routine 1.jpg

 

End Routine 2.jpg

 

 

Increasing Reusability

 

During the PO GR data load I calculate Delivery Duration based on Due Date, Delivery Duration based on GR date, and the over / under variances of the two durations. I did not like the idea of repeating the duration calculation logic in the variance transformation rules. Instead I used the results of the duration calculations in the end routine to calculate the variances.

 

End Routine 3.jpg

End Routine 4.jpg

Scenario: If I execute DTP in current month, it always picks only "Current month – 1" data.

 

Example:

 

If today's date is 04/22/2016, based on the system date it will calculate the previous month's first and last day, i.e. it will fetch 03/01/2016 to 03/31/2016.

If today's date is 01/22/2016, based on the system date it will calculate the previous month's first and last day, i.e. it will fetch 12/01/2015 to 12/31/2015.

 

 

Occasionally we need to filter a Date Characteristic InfoObject to extract only “Previous Month” data. Here the filter selection is not on a SAP Content InfoObject but on a Custom InfoObject.

 

For a SAP Content InfoObject we may have SAP customer exits/variables to use directly in the DTP, but in this example I'm using a Custom InfoObject created with data type DATS.

 

In the DTP select the InfoObject, choose Create Routine and add the code below in the DTP routine.

 

* Global code used by conversion rules
*$*$ begin of global - insert your declaration only below this line  *-*
* TABLES: ...
DATA: dt_range  TYPE STANDARD TABLE OF rsdatrange,
      btw       TYPE STANDARD TABLE OF rsintrange,
      wdt_range TYPE rsdatrange.
*$*$ end of global - insert your declaration only before this line   *-*

*$*$ begin of routine - insert your code only below this line        *-*
  DATA: l_idx LIKE sy-tabix.

* Check whether a filter row for the InfoObject already exists
  READ TABLE l_t_range WITH KEY fieldname = '/BIC/<Your_InfoObject_Name>'.
  l_idx = sy-tabix.

* Determine the first and last day of the previous month
  CALL FUNCTION 'RS_VARI_V_LAST_MONTH'
*   EXPORTING
*     systime    = ' '
    TABLES
      p_datetab  = dt_range
      p_intrange = btw.

  READ TABLE dt_range INTO wdt_range INDEX 1.

  l_t_range-fieldname = '/BIC/<Your_InfoObject_Name>'.
  l_t_range-option    = 'BT'.
  l_t_range-sign      = 'I'.
  l_t_range-low       = wdt_range-low.
  l_t_range-high      = wdt_range-high.

  IF l_idx <> 0.
    MODIFY l_t_range INDEX l_idx.
  ELSE.
    APPEND l_t_range.
  ENDIF.
*$*$ end of routine - insert your code only before this line         *-*

     Sometimes data in the source system is not checked for quality. For example, input data is not checked for non-printable characters, e.g. tabulation, carriage return, line feed etc. If users copy and paste data into input fields from an email or a web page, non-printable characters can be entered into the system, causing BW data loading issues (not permitted characters). A master data quality issue must be fixed immediately, otherwise the problem will become worse with every transaction where the incorrect master data is used. If the issue affects only informational fields that are stored in a DSO at document level, the data can be fixed in the transfer rules.

     The fix is to correct the data in the transfer rule start routine using a regular expression.

REGEX1.jpg

Prior to executing the REPLACE statement, the HEWORD field of SOURCE_PACKAGE contains a hex 09 (tabulation) character.

REGEX2.jpg

Once the REPLACE statement is executed, the non-printable character is gone.

REGEX3.jpg

REGEX4.jpg
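The screenshots above show the idea; as a hedged sketch, a start routine cleanup along these lines could look as follows. SOURCE_PACKAGE and the field HEWORD follow the blog's example; the POSIX class [^[:print:]] is an assumption chosen to match tab, carriage return and line feed.

```abap
" Sketch of a transfer rule start routine cleanup; field HEWORD is taken
" from the blog's example, the regex choice is an assumption.
FIELD-SYMBOLS <ls_source> LIKE LINE OF SOURCE_PACKAGE.

LOOP AT SOURCE_PACKAGE ASSIGNING <ls_source>.
* remove all non-printable control characters from the field
  REPLACE ALL OCCURRENCES OF REGEX '[^[:print:]]'
          IN <ls_source>-heword WITH ''.
ENDLOOP.
```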

Below is a list of issues faced during flat file generation from an APD.

 

1> Header names for key figures are displayed with technical names in the flat file.

2> The negative sign of key figures like amount and quantity is displayed after the value in the flat file, which results in a wrong total amount, i.e. Amount $1000 –

3> Leading zeros are added to the key figures in the APD.

4> Values are rounded off; no decimal places are displayed in the flat file.

 

Solution:

 

First create Z InfoObjects matching the lengths of your header field names, e.g. ZCHAR20 for a field name of length 20.

 

Assign these Z InfoObjects to the target fields of the APD routine as below:

             

Capture.PNG

 

Write the following logic in the routine tab:

 

DATA: ls_source TYPE y_source_fields,
      ls_target TYPE y_target_fields.

* First append a header record that carries the display names for the flat file
ls_target-Char1 = 'ABC'.
ls_target-Char2 = 'XYZ'.
APPEND ls_target TO et_target.

DATA: lv_value TYPE p LENGTH 16 DECIMALS 2. "add decimal places as per your need

LOOP AT it_source INTO ls_source.
* MOVE-CORRESPONDING ls_source TO ls_target.
  ls_target-Char1 = ls_source-Char1.
  ls_target-Char2 = ls_source-Char2.

* ls_target-KYF_0001 = ls_source-KYF_0001.
  CLEAR lv_value.
  IF ls_source-KYF_0001 IS NOT INITIAL.
    lv_value = ls_source-KYF_0001.
    IF lv_value IS NOT INITIAL.
      ls_target-KYF_0001 = lv_value.
      IF lv_value LT 0.
*       move the trailing minus sign in front of the value
        SHIFT ls_target-KYF_0001 RIGHT DELETING TRAILING '-'.
        SHIFT ls_target-KYF_0001 LEFT DELETING LEADING ' '.
        CONCATENATE '-' ls_target-KYF_0001 INTO ls_target-KYF_0001.
      ENDIF.
    ENDIF.
  ELSE.
    ls_target-KYF_0001 = '0.00'.
  ENDIF.

  APPEND ls_target TO et_target.
ENDLOOP.

 

 

Note: Here Char1 and Char2 are your InfoObject technical names.

           ABC and XYZ are the field names which you want to display in the header of the flat file.

    A Virtual Cube Function Module can be implemented very easily using the services of class CL_RSDRV_REMOTE_IPROV_SRV (there is an example in the class documentation). I like its simplicity, but unfortunately it cannot handle complex selections. In this blog, I will explain how to keep the Virtual Cube Function Module implementation simple and at the same time handle complex selections by enhancing the service class.
      Below is the Function Module that implements a Virtual Cube reading from the SFLIGHT table:
*---------------------------------------------------------------------*
*       CLASS lcl_application DEFINITION
*---------------------------------------------------------------------*
CLASS lcl_application DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS:
      get_t_iobj_2_fld
        RETURNING VALUE(rt_iobj_2_fld) TYPE
                  cl_rsdrv_remote_iprov_srv=>tn_th_iobj_fld_mapping.
ENDCLASS.

*---------------------------------------------------------------------*
*       CLASS lcl_application IMPLEMENTATION
*---------------------------------------------------------------------*
CLASS lcl_application IMPLEMENTATION.
*---------------------------------------------------------------------*
* get_t_iobj_2_fld
*---------------------------------------------------------------------*
  METHOD get_t_iobj_2_fld.
    rt_iobj_2_fld = VALUE #( ( iobjnm = 'CARRID'    fldnm = 'CARRID' )
                             ( iobjnm = 'CONNID'    fldnm = 'CONNID' )
                             ( iobjnm = 'FLDATE'    fldnm = 'FLDATE' )
                             ( iobjnm = 'PLANETYPE' fldnm = 'PLANETYPE' )
                             ( iobjnm = 'SEATSOCC'  fldnm = 'SEATSOCC' )
                             ( iobjnm = 'SEATSOCCB' fldnm = 'SEATSOCC_B' )
                             ( iobjnm = 'SEATSOCCF' fldnm = 'SEATSOCC_F' ) ).
  ENDMETHOD.
ENDCLASS.

FUNCTION z_sflight_read_remote_data.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"    VALUE(INFOCUBE) LIKE  BAPI6200-INFOCUBE
*"    VALUE(KEYDATE) LIKE  BAPI6200-KEYDATE OPTIONAL
*"  EXPORTING
*"    VALUE(RETURN) LIKE  BAPIRET2 STRUCTURE  BAPIRET2
*"  TABLES
*"      SELECTION STRUCTURE  BAPI6200SL
*"      CHARACTERISTICS STRUCTURE  BAPI6200FD
*"      KEYFIGURES STRUCTURE  BAPI6200FD
*"      DATA STRUCTURE  BAPI6100DA
*"----------------------------------------------------------------------

  zcl_aab=>break_point( 'Z_SFLIGHT_READ_REMOTE_DATA' ).

  DATA(iprov_srv) = NEW cl_rsdrv_remote_iprov_srv(
      i_th_iobj_fld_mapping = lcl_application=>get_t_iobj_2_fld( )
      i_tablnm              = 'SFLIGHT' ).

  iprov_srv->open_cursor(
      i_t_characteristics = characteristics[]
      i_t_keyfigures      = keyfigures[]
      i_t_selection       = selection[] ).

  iprov_srv->fetch_pack_data( IMPORTING e_t_data = data[] ).

  return-type = 'S'.

ENDFUNCTION.
This is how the BW Query that sends a complex selection to the Virtual Cube Function Module is defined.
Service Class 2.jpg
Service Class 3.jpg
As you can see, the Query reads the number of seats occupied for Airbus airplane types (global restriction) for All Carriers, Lufthansa and American Airlines, in each of the years 2015 and 2016. The following selection is sent to the Virtual Cube Function Module:
Service Class 4.jpg
Expression 0 corresponds to the global restriction and expressions 1 through 6 correspond to the restricted key figures (All Carriers 2015, All Carriers 2016, Lufthansa 2015, Lufthansa 2016, American Airlines 2015 and American Airlines 2016).
The service class is used in our Virtual Cube Function Module in such a way that it generates a wrong SQL Where clause expression. It is not a problem with the service class as such, but with the way it is used.
Service Class 6.jpg
The BW Query results are wrong (the All Carriers data is just the sum of Lufthansa and American Airlines, i.e. the other carriers' data is missing).
Service Class 7.jpg
The problem is that generated SQL Where clause expression does not follow the rule below:
E0  AND (  E1 OR E2 OR E3 ... OR EN ),
where E0 corresponds to the global restrictions and E1, E2, E3 ... EN to other restrictions.
The problem can easily be fixed by enhancing the CL_RSDRV_REMOTE_IPROV_SRV service class. What it takes is:

 

Service Class 8.jpg

Creation of BUILD_WHERE_CONDITIONS_COMPLEX method
Service Class 9.jpg
METHOD build_where_conditions_complex.
  DATA: wt_bw_selection TYPE tn_t_selection.
  DATA: wt_where        TYPE rsdr0_t_abapsource.

* E0 AND ( E1 OR E2 OR E3 ... OR EN )
  LOOP AT i_t_selection INTO DATA(wa_bw_selection)
       GROUP BY ( expression = wa_bw_selection-expression )
       ASCENDING ASSIGNING FIELD-SYMBOL(<bw_selection>).

    CLEAR: wt_bw_selection,
           wt_where.

    LOOP AT GROUP <bw_selection> ASSIGNING FIELD-SYMBOL(<selection>).
      wt_bw_selection = VALUE #( BASE wt_bw_selection ( <selection> ) ).
    ENDLOOP.

    build_where_conditions( EXPORTING i_t_selection = wt_bw_selection
                            IMPORTING e_t_where     = wt_where ).

    CASE <bw_selection>-expression.
      WHEN '0000'.
        IF line_exists( i_t_selection[ expression = '0001' ] ).
          APPEND VALUE #( line = ' ( ' ) TO e_t_where.
        ENDIF.
        APPEND LINES OF wt_where TO e_t_where.
        IF line_exists( i_t_selection[ expression = '0001' ] ).
          APPEND VALUE #( line = ' ) AND ( ' ) TO e_t_where.
        ENDIF.
      WHEN OTHERS.
        IF <bw_selection>-expression > '0001'.
          APPEND VALUE #( line = ' OR ' ) TO e_t_where.
        ENDIF.
        APPEND VALUE #( line = ' ( ' ) TO e_t_where.
        APPEND LINES OF wt_where TO e_t_where.
        APPEND VALUE #( line = ' ) ' ) TO e_t_where.
        IF ( line_exists( i_t_selection[ expression = '0000' ] ) ) AND
           ( NOT line_exists( i_t_selection[ expression = <bw_selection>-expression + 1 ] ) ).
          APPEND VALUE #( line = ' ) ' ) TO e_t_where.
        ENDIF.
    ENDCASE.
  ENDLOOP.
ENDMETHOD.
The BUILD_WHERE_CONDITIONS_COMPLEX method contains the logic to build the selection according to the rule. It calls the original BUILD_WHERE_CONDITIONS method, using it as a building block. The new LOOP AT ... GROUP BY ABAP syntax is used to split the selection table into individual selections, converting them into SQL Where clause expressions and combining them into the final expression as per the rule.

 

 

 

Implementation of an Overwrite-exit for the OPEN_CURSOR method
CLASS lcl_z_iprov_srv DEFINITION DEFERRED.
CLASS cl_rsdrv_remote_iprov_srv DEFINITION LOCAL FRIENDS lcl_z_iprov_srv.
CLASS lcl_z_iprov_srv DEFINITION.
  PUBLIC SECTION.
    CLASS-DATA obj TYPE REF TO lcl_z_iprov_srv.              "#EC NEEDED
    DATA core_object TYPE REF TO cl_rsdrv_remote_iprov_srv.  "#EC NEEDED
    INTERFACES iow_z_iprov_srv.
    METHODS:
      constructor
        IMPORTING core_object
                    TYPE REF TO cl_rsdrv_remote_iprov_srv OPTIONAL.
ENDCLASS.

CLASS lcl_z_iprov_srv IMPLEMENTATION.
  METHOD constructor.
    me->core_object = core_object.
  ENDMETHOD.

  METHOD iow_z_iprov_srv~open_cursor.
*"------------------------------------------------------------------------*
*" Declaration of Overwrite-method, do not insert any comments here please!
*"
*"methods OPEN_CURSOR
*"  importing
*"    !I_T_CHARACTERISTICS type CL_RSDRV_REMOTE_IPROV_SRV=>TN_T_IOBJ
*"    !I_T_KEYFIGURES type CL_RSDRV_REMOTE_IPROV_SRV=>TN_T_IOBJ
*"    !I_T_SELECTION type CL_RSDRV_REMOTE_IPROV_SRV=>TN_T_SELECTION .
*"------------------------------------------------------------------------*
    DATA:
      l_t_groupby  TYPE rsdr0_t_abapsource,
      l_t_sel_list TYPE rsdr0_t_abapsource,
      l_t_where    TYPE rsdr0_t_abapsource.

    core_object->build_select_list(
      EXPORTING
        i_t_characteristics = i_t_characteristics
        i_t_keyfigures      = i_t_keyfigures
      IMPORTING
        e_t_sel_list        = l_t_sel_list
        e_t_groupby         = l_t_groupby ).

    core_object->build_where_conditions_complex(
      EXPORTING
        i_t_selection = i_t_selection
      IMPORTING
        e_t_where     = l_t_where ).

* #CP-SUPPRESS: FP secure statement, no user input possible
    OPEN CURSOR WITH HOLD core_object->p_cursor FOR
      SELECT (l_t_sel_list) FROM (core_object->p_tablnm)
        WHERE (l_t_where)
        GROUP BY (l_t_groupby).

  ENDMETHOD.
ENDCLASS.
The OPEN_CURSOR Overwrite-exit method has the same logic as the original method, except that the BUILD_WHERE_CONDITIONS_COMPLEX method is called instead of BUILD_WHERE_CONDITIONS.
Now that the changes are in place, let's run the report again and see what SQL Where clause expression is generated.
Service Class 10.jpg
Finally, let's run the report again and see if it shows correct data.
Service Class 11.jpg
Now the data is correct. All Carriers includes all data, not only Lufthansa and American Airlines.

Introduction to Roles and Authorizations in BW 7.4

The roles and authorizations maintained in BW 7.4 restrict access to reports at the InfoCube level, characteristic level, characteristic value level, key figure level, and hierarchy node level. These restrictions are maintained using the approach described below:

 

Authorizations are maintained in authorization objects.

Roles contain the Authorizations.

Users are assigned to roles.

 

Capture 21.PNG

 

Transactions Used

InfoObject Maintenance - RSD1

Role Maintenance - PFCG

Roles and Authorization Maintenance - RSECADMIN

User Creation - SU01

 

Note: A characteristic InfoObject must be marked as Authorization Relevant to make it available for restrictions. To do this, go to the "Business Explorer" tab in the InfoObject details. Without the Authorization Relevant flag checked, we cannot use the object or include it in an authorization object.

 

Enter T code RSD1

Capture.PNG

Enter the InfoObject and click on Maintain.

Capture 1.PNG

Click on the Business Explorer tab, then select the Authorization Relevant check box. Now we can use this object in roles and authorizations.

 

SCENARIO:

In my scenario, we want to create an authorization on the InfoObject 0FUNCT_LOC with a hierarchy. Suppose the hierarchy has three levels and there are three users: User1, User2 and User3. User1 needs to access hierarchy level 1 data, User2 hierarchy level 2, and User3 hierarchy level 3. So we need to follow the steps below.

 

Creating Roles and Authorization objects

Creating Authorization objects

Enter T code RSECADMIN

Capture 2.PNG

Then click on Ind. Maint.

 

 

cap2.png

Enter the authorization name and click on Create.

cap1.png

 

 

 

Maintain the short, medium and long descriptions, click on Insert Row, and enter the objects:

0TCAACTVT (Activity in Analysis Authorizations) - grants authorization for different activities such as Change and Display; the default value is 03 (Display).

0TCAIPROV (Authorizations for InfoProvider) - grants authorization for particular InfoProviders; the default value is *.

0TCAVALID (Validity of an Authorization) - defines when authorizations are valid or not valid; the default value is *.

Then click on Insert Special Characteristics.

 

cap3.png

cap4.png

 

 

cap5.png

 

 

 

Now enter the InfoObject 0FUNCT_LOC, double-click on it, and go to the Hierarchy Authorizations tab.

Click on the Create option.

cap6.pngcap7.png

 

  

 

Select the hierarchy and click on Browse.

cap8.png

Select the node details and click on Browse.

 

 

Select the required node on the left side and move it to the right side, as needed for the particular user.

Select the particular type of authorization:

Capture 12.PNG

then click on continue.

Now click on User Tab.

Capture 13.PNG

 

 

 

Click on Indvl Assignment; the screen below will appear.

cap10.png

 

Enter the User and click on Role Maintenance.

cap11.png

 

Click on Create Single Role.

cap12.png

 

Enter the description and click on the Change Authorization Data icon.

 

 

cap13.png

 

Add the objects marked above and click on the Generate icon.

Now go to the User tab and enter the required users.

 

cap14.png

 

Click on User Comparison; we get the screen below.

cap15.png

If we want to give access to a particular T code, go to the Menu tab and add that T code; the screen will then appear like this.

 

Capture 20.PNG

Enter the T code, click on Assign Transactions, and save it.

Now log in to the Analyzer or SAP BW with User1 to verify the restrictions.

For User2 and User3 we need to follow the same steps.

Overview of Remodeling

 

If we want to modify a DSO into which data has already been loaded, we can use remodeling to change the structure of the object without losing data.

If we want to change a DSO into which no data has been loaded yet, we can change it in DSO maintenance.

 

We may want to change an InfoProvider that has already been filled with data for the following reasons:

 

We want to replace an InfoObject in an InfoProvider with another, similar InfoObject; for example, we created an InfoObject ourselves but want to replace it with a BI Content InfoObject.

 

Prerequisites

 

As a precaution, make a backup of your data before you start remodeling. In addition, ensure that:

We have stopped any process chains that run periodically and affect the corresponding InfoProvider. Do not restart these process chains until remodeling is finished.

There is enough tablespace available in the database.

After remodeling, we have to check which BI objects connected to the InfoProvider (for example, transformation rules and MultiProviders) have been deactivated, and reactivate these objects manually. Remodeling invalidates existing queries that are based on the InfoProvider; we have to adjust these queries manually according to the remodeled InfoProvider. If, for example, we have deleted an InfoObject, we also have to delete it from the query.

 

Features

A remodeling rule is a collection of changes to your DSO that are executed simultaneously.

For DSO, you have the following remodeling options:

For characteristics:

Insert or replace characteristics with:

Constants

An attribute of an InfoObject within the same dimension

A value of another InfoObject within the same dimension

A customer exit (for user-specific code)

Delete

For key figures:

Insert:

Constant

A customer exit (for user-specific code)

Replace with:

A customer exit (for user-specific code)

Delete

Note: You cannot replace or delete units. This avoids having key figures in the DSO without the corresponding unit.


Implementation of the Remodeling Procedure: To carry out the remodeling procedure, right-click on your DSO and in the context menu navigate to Additional Functions -> Remodeling.


Capture.PNG                                                                                                                       we will get the following window after clicking on Remodeling. Enter a remodeling rule name and press Create to create a new rule.

Capture1.PNG                                                                             After clicking on Create we will get the following pop-up window where in we have to enter a description for the rule we wish to create (as shown below).

Capture2.PNG                                                                               After entering the description, press the Create button. we will see the following screen.


           Capture4.PNG                                                                              

As we can see, the left pane shows the structure of the DSO in consideration.

To add a new remodeling rule, Click on the Green Plus sign on the Top-Left corner of your screen (Also circled in Red below). It is called the Add Operation to List button.

 

Capture4.PNG                                                                                   You will get the following pop-up where you can add the remodeling rules.

Capture5.PNG

Business Requirement

The requirement is as follows:

To delete the Time Characteristic 0CALDAY from the data fields.

To add 0COMP_CODE to the key fields with constant value 1100.

To delete the key figure Revenue(ZREVU8) as it is no longer relevant for reporting in this DSO.

We will implement these requirements one by one.

 

In the pop-up that opened in the last step, select the Delete Characteristic radio button and enter the technical name of the characteristic you wish to delete (0CALDAY in this case).

Capture 6.PNG                                                                                               Confirm by pressing the CREATE button.

capture 9.PNG


Adding characteristic 0COMP_CODE with constant value 1100 to the key fields of the DSO.

Capture 7.PNG           

 

 

We need to check the As Key Field check box if we want it in a particular position.

Click on the Create button.


capture 10.PNG

To delete key figure we need to follow these steps.

Capture 8.PNG                                                                                                 Then click on create button.

                                                                                                 

capture 11.PNG



After that, click on Activate and Simulate, then go for the Schedule option.

Capture12.PNG                                                                                    Simulation is done; click on Continue and the schedule screen will appear.

Capture 13.PNG                                                                                                  Select the Immediate option; the screen below will appear, where we need to select the Save option.

capture 14.PNG                                                                                                Now we get a message like this.

Capture 15.PNG                                                                                                                                                  If we want to see the job, click on Jobs to check it. After that the DSO will be inactive and we need to activate it.

Capture 16.PNG                                                                                    Remodeling is now successfully done on the DSO.

 



Hi,

 

Anyone who has ever tried to create a pivot table on top of the BEx Analyzer output will have experienced this issue.

When displaying key and text for an info object, the column header for the key is filled, but the text column remains empty without a header.

This makes it impossible to create a pivot table on top of it.

 

Using the Callback macro in Bex 7.x it is possible to scan the column headers in the result area and put in a custom text.

In this blog I describe how to do this.

 

First of all, run the query in Bex analyzer.

 

After running the query, go to View --> Macros --> View Macros.

Select CallBack and press Edit.

macro screen.jpg

 

Scroll down to the following piece of code:

callback macro before.JPG

After the End With and before End If, insert the following lines:

 

    'set column headers for key + text
    Dim nrCol As Long
    Dim resultArea As Range
    Set resultArea = varname(1)
    nrCol = resultArea.Columns.Count
    For i = 1 To nrCol - 1
        If resultArea.Cells(1, i + 1) = "" Then
            resultArea.Cells(1, i + 1) = resultArea.Cells(1, i) & "_text"
        End If
    Next i

 

This code puts the suffix _text in each empty column header, based on the preceding column header.

 

The end result in the macro then looks like this:

callback macro after.JPG

After refreshing the query, you will now see the missing column headers being filled in, based on the previous column header with _text appended.

 

Hope this will help a lot of people.

 

Best regards,

Arno


Hi All,

 

 

 

Requirement – There are a lot of BEx queries which use hardcoded hierarchy nodes. Currently the hierarchy is maintained in BI, and in the future it will be automated by maintaining a set hierarchy on the ECC side. Your query is filtered on a node, for example "REG IND", but on the ECC side you cannot name a node with a space, so if you maintain it as "REG_IND" the query will not show proper results.

 

If you do not want to change the BEx queries, because modifying them is a huge effort, you can go for the following workaround.

 

Note – It is always better to make the correct changes on the ECC side, or to modify the BEx queries with the correct node coming from ECC.

 

This blog gives you an idea of how to change a hierarchy node using an ABAP program.

 

If you search for "how to change hierarchy node" you will find a lot of threads saying that you cannot change a node name in BI, only its description.

 

h1.PNG

 

 

That is correct for manual changes, but using an ABAP program you can change it.

 

Step 1 – Go to T code SE38.

Provide a program name and copy the following code.

Here our hierarchy InfoObject is ZGKB.

 

 

REPORT zhirachy_nodechange.

* Rename hierarchy node REG_IND (as loaded from ECC) back to REG IND,
* directly in the hierarchy table /BIC/HZGKB of InfoObject ZGKB
UPDATE /bic/hzgkb
   SET nodename = 'REG IND'
 WHERE iobjnm   = '0HIER_NODE'
   AND objvers  = 'A'
   AND nodeid   = 1
   AND nodename = 'REG_IND'.

 

  

 

Execute this program as a step in the process chain after the ECC hierarchy load completes. In this way you can change a hierarchy node using an ABAP program.

 

Thanks for reading. Hope it is useful information..

   

 

Regards,

Ganesh Bothe

 

 

 

 

     When reporting with BOBJ clients on BW data, users might request overly detailed information, pushing the BW system over its limit and causing performance / system stability issues. There is a safety belt functionality which allows setting a maximum number of cells retrieved from BW. In this blog I will explain how to set the safety belt for different BOBJ clients. If you do not have the authorization or a system to play with, you can create a trial BW / BOBJ landscape in the cloud as explained here.

 

     Setting the BW Safety Belt for Analysis for OLAP

     It is set in the Central Management Console by updating the properties of the Adaptive Processing Server.

   BW Safety Belt 1.jpg

Here are the settings and their default values:

Setting                                             Default Value
Maximum Client Sessions                             15
Maximum number of cells returned by a query         100,000
Maximum number of members returned when filtering   100,000


To demonstrate how the safety belt works, let's change "Maximum number of cells returned by a query" to something small, for example, 5.

 

BW Safety Belt 2.jpg

and restart the Server

BW Safety Belt 3.jpg

Now if we run Analysis for OLAP without a drill down, no error occurs.

BW Safety Belt 4.jpg

But if we drill down by Product or Sold-to, the number of cells will exceed the limit.

BW Safety Belt 5.jpg

 

     Setting the BW Safety Belt for Web Intelligence and Crystal Reports

     It is set by maintaining the BICS_DA_RESULT_SET_LIMIT_DEF and BICS_DA_RESULT_SET_LIMIT_MAX parameters in the RSADMIN table. To demonstrate how the safety belt works, let's set the limits to some small value, for example 5, by running the SAP_RSADMIN_MAINTAIN program.
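For reference, the two RSADMIN entries maintained via SAP_RSADMIN_MAINTAIN would look like this (the value 5 is only for this demonstration; choose a realistic limit in a real system):

```text
OBJECT = BICS_DA_RESULT_SET_LIMIT_DEF    VALUE = 5
OBJECT = BICS_DA_RESULT_SET_LIMIT_MAX    VALUE = 5
```

DEF sets the default result set limit, while MAX caps what a client may request; both are read by the BICS layer used by Web Intelligence and Crystal Reports.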

BW Safety Belt 6.jpg

BW Safety Belt 7.jpg

Now if we run a Web Intelligence report without a drill down, no error occurs.

BW Safety Belt 8.jpg

But if we drill down by Product or Sold-to, the number of cells will exceed the limit.

BW Safety Belt 9.jpg

The safety belt for Crystal Reports works the same way as for Web Intelligence.


Extraction in SAP BI

Posted by Suraj Yadav Mar 15, 2016

What is Data Extraction?


Data extraction in BW is extracting data from various tables in the R/3 systems or BW systems. There are standard delta extraction methods available for master data and transaction data. You can also build them with the help of transaction codes provided by SAP. The standard delta extraction for master data is using change pointer tables in R/3. For transaction data, delta extraction can be using LIS structures or LO cockpit etc.


Types of Extraction:


  1. Application Specific:
    • BW Content Extractors
    • Customer Generated Extractors
  2. Cross Application Extractors
    • Generic Extractors.

 

extractors.gif



BW Content Extractors


SAP provides predefined extractors, e.g. for FI, CO, and the LO Cockpit, in the OLTP system (R/3). All you have to do is install the Business Content.

 

Let's take an example of an FI extractor. Below are the steps you need to follow:

  • Go to RSA6 >> select the desired datasource >> In the top there is a tab Enhance Extract Structure >> Click on it


Untitled.jpg

  • It will take you to DataSource: Customer Version Display. Double click on the ExtractStruct.

Untitled.png

 

  • Click on Append Structure button as shown:

Untitled.png

  • Add the field Document Header Text (eg: ZZBKTXT) in the Append Structure with ComponentType: BKTXT. Before you exit, make sure that you activate the structure by clicking on the activate button.

Untitled.png

  • Required field has been successfully added in the structure of the data source.

Untitled.png

Populate the Extract Structure with Data

       SAP provides the enhancement RSAP0001 that you use to populate the extract structure. This enhancement has four components, one specific to each of the four types of R/3 DataSources:


  • Transaction data EXIT_SAPLRSAP_001
  • Master data attributes EXIT_SAPLRSAP_002
  • Master data texts EXIT_SAPLRSAP_003
  • Master data hierarchies EXIT_SAPLRSAP_004

 

With these four components (they're actually four different function modules), any R/3 DataSource can be enhanced. In this case, you are enhancing a transaction data DataSource, so you only need one of the four function modules. Since this step requires ABAP development, it is best handled by someone on your technical team. You might need to provide your ABAP colleague with this information:

  • The name of the DataSource (0FI_GL_4)
  • The name of the extract structure (DTFIGL_4)
  • The name of the field that was added to the structure (ZZBKTXT)
  • The name of the BW InfoSource (0FI_GL_4)
  • The name of the R/3 table and field that contains the data you need (BKPF-BKTXT)

With this information, an experienced ABAP developer should be able to properly code the enhancement so that the extract structure is populated correctly. The ABAP code itself would look similar to the one shown below:

 

Untitled.png
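In case the screenshot does not render, here is a sketch of what the exit include (ZXRSAU01 for EXIT_SAPLRSAP_001) typically looks like for this example. The structure name DTFIGL_4 and field ZZBKTXT come from the text above; the key fields used in the lookup are assumptions to be verified in your system:

```abap
* Sketch only: fills the appended field ZZBKTXT of extract structure
* DTFIGL_4 with the document header text from BKPF. The key fields
* (BUKRS, BELNR, GJAHR) are assumed; verify them in your system.
DATA: l_s_dtfigl_4 TYPE dtfigl_4.

CASE i_datasource.
  WHEN '0FI_GL_4'.
    LOOP AT c_t_data INTO l_s_dtfigl_4.
*     Read the header text of the current accounting document
      SELECT SINGLE bktxt FROM bkpf
        INTO l_s_dtfigl_4-zzbktxt
        WHERE bukrs = l_s_dtfigl_4-bukrs
          AND belnr = l_s_dtfigl_4-belnr
          AND gjahr = l_s_dtfigl_4-gjahr.
      IF sy-subrc = 0.
*       Write the enriched record back into the data package
        MODIFY c_t_data FROM l_s_dtfigl_4.
      ENDIF.
    ENDLOOP.
ENDCASE.
```

Note that a SELECT SINGLE per record is fine for a sketch; for large data packages a buffered read (e.g. an internal table keyed by document) would perform better.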


  • Now check the data via tcode RSA3.

 

(You can open the four function modules given above in Tcode SE37; each contains an include statement. Double-click on the include program to see ABAP code like the above for all standard DataSources, which can be modified.)

 

 

Note: Similarly you can enhance all other SAP delivered extractors. ( For LO Cockpit use tcode LBWE)

 

 

Customer Generated Extractors

 

For some applications which vary from company to company, like LIS, CO-PA, and FI-SL, SAP was not able to provide a standard DataSource because of their dependency on the organization structure. So customers have to generate their own DataSources. These are called customer generated extractors.

 

Let's take an example of CO-PA extraction:

  • Go to Tcode KEB0 which you find in the SAP BW Customizing for CO-PA in the OLTP system.

Untitled.jpg

 

 

  • Define the DataSource for the current client of your SAP R/3 System on the basis of one of the operating concerns available there.
  • In the case of costing-based profitability analysis, you can include the following in the DataSource: Characteristics from the segment level, characteristics from the segment table, fields for units of measure, characteristics from the line item, value fields, and calculated key figures from the key figure scheme.
  • In the case of account-based profitability analysis, on the other hand, you can only include the following in the DataSource: characteristics from the segment level, characteristics from the segment table, one unit of measure, the record currency from the line item, and the key figures.
  • You can then specify which fields are to be applied as the selection for the CO-PA extraction.

Untitled.jpg

 

 

Generic Extractors


When your company's requirement cannot be met by an SAP delivered Business Content DataSource, you have to create your own DataSource, purely based on your company's requirement. This is called a generic extractor.

 

Based on the complexity, you can create the DataSource in three ways:

 

1. Based on Tables/Views ( Simple Applications )

2. Based on Infoset

3. Based on Function Module ( Used in complex extraction)


Steps to create generic extractor:


1. Based on Tables/Views ( Simple Applications )


  • Go to Tcode RSO2 and choose the type of data you want to extract (transaction, Masterdata Attribute or Masterdata Text)

Untitled.png

  • Give the name to the data source to be created and click on create.

Untitled.png

  • On the Create data source screen, enter the parameters as required:

Untitled.jpg

Application Component: Component name where you wish to place the data source in the App. Component hierarchy.

Text: Descriptions (Short, Medium and Long) for the data source.

View/Table: Name of the Table/View on which you wish to create the Generic data source. In our case it is ZMMPUR_INFOREC.

 

  • The generic DataSource is now displayed, allowing you to select as well as hide fields. The 'hidden' fields will not be available for extraction. Fields in the 'Selection' tab will be available for selection in the InfoPackage during data extraction from the source system to the PSA.

Untitled.jpg


  • Select the relevant fields and Save the data source.

Untitled.png

  • Now save the DataSource.

Untitled.jpg

 

 

2. Based on Infoset


https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwiCruWWusPLAhWBPZoKHa7KArgQFg…


3. Based on Function Module


https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwjOk46_usPLAhVqIJoKHej3A8wQFg…


 

Note: Data for all types of extractors can be viewed via Tcode RSA3, where you have to give the DataSource name, Data Records/call, No. of Extr calls and the selections:

 

Untitled.png

 


The detailed information on LO-Cockpit, Update modules, generic extractor using FM and Infoset, delta pointer, safety interval will be shared in upcoming blogs.

 

Thanks,

Suraj Yadav

    If you need a Business Objects / Business Warehouse trial, then this blog is for you. Right now SAP offers BW 7.4 SP08 and BOBJ 4.1 as free Cloud Appliance Library trials (you pay only for Amazon Web Services). The fact that these are two separate trials has its pros and cons. The pro is that you have more flexibility to control AWS costs by starting / stopping BW and BOBJ separately. The con is that you have to connect BW to BOBJ yourself. In this blog I will explain how to connect BW to BOBJ and demonstrate a BW / BOBJ end-to-end scenario.

    These are CAL free trials:

BW BOBJ Sandbox 1.jpg

BW BOBJ Sandbox 2.jpg

These are the costs of running BW and BOBJ instances:

BW BOBJ Sandbox 3.jpg

BW BOBJ Sandbox 4.jpg

Note: it is important to check the Public Static IP Address check box for the BW instance, to save you the trouble of updating the BOBJ OLAP Connection every time BW is started.

    Once the BW and BOBJ instances are created, check and make a note of the BW IP address in the AWS EC2 Management Console (you will need it to connect BW to BOBJ). As you can see, BW also comes with a frontend, i.e. a remote desktop with SAP GUI and the BW Modeling Tools in Eclipse.

BW BOBJ Sandbox 5.jpg

 

Create BW OLAP Connection in BOBJ CMC

BW BOBJ Sandbox 6.jpg

It is important to set the Authentication mode to Pre-defined; otherwise, in Prompt mode, Webi will not see our OLAP Connection.

Note that the server name is the Public Static IP Address of the BW server from the AWS EC2 Management Console.

 

Make the TCP Ports of the BW Server Accessible from Anywhere

 

Without this BOBJ will not be able to connect to BW. Open BW Server security group in AWS EC2 Management Console

BW BOBJ Sandbox 7.jpg

Edit Inbound Rules

BW BOBJ Sandbox 8.jpg

Modify first entry

BW BOBJ Sandbox 9.jpg

And delete second entry

BW BOBJ Sandbox 10.jpg

Save Inbound Rules

BW BOBJ Sandbox 11.jpg

 

 

Install SAP GUI Business Explorer

 

What I noticed is that the Eclipse BW Modeling Tools are not working, because the BW project cannot be expanded (it dumps on the BW side, see trx. ST22). I suggest installing the SAP GUI Business Explorer, creating BW Queries there, and doing all the modeling in SAP GUI trx. RSA1. Alternatively you can use other BW trials (BW 7.4 SP10 on HANA or BW 7.4 SP01 on HANA), but the latter have higher AWS costs.

 

 

Create End to End Scenario

 

Create BW Query and allow external access for the Query

BW BOBJ Sandbox 12.jpg

In Web Intelligence create a new report selecting D_NW_C01 BW Query as a source from BW OLAP Connection

BW BOBJ Sandbox 13.jpg

BW BOBJ Sandbox 15.jpg

BW BOBJ Sandbox 16.jpg
