
Data Warehousing


Hi,

 

I decided to write this blog after I realized that it's not so easy to check which queries use a given aggregate. There was a need to optimize aggregates that had been created on different occasions, but to do this I had to know which queries actually work with them.

 

Checking whether an aggregate was used by any query at all is possible in RSA1 via the "Maintain Aggregates" option for a given cube. There we can see how many times an aggregate has been used since its last activation.

1. RSA maintain aggregates.png

 

But we don’t see which exact query used an aggregate.

 

To check that, two tables should be used:

1. RSDDSTAT_DM,

2. RSDDSTAT_OLAP.

  

We have to use both tables because neither of them contains all the fields that we need, i.e. the aggregate number and the query's technical name.

In RSDDSTAT_DM we have the aggregate number and the STEPUID field, which we use to join that table with RSDDSTAT_OLAP.

2. RSDDSTAT_DM.png

 

This table, however, does not contain the query's technical name.

The query's technical name can be found in RSDDSTAT_OLAP, which also contains STEPUID.

 

3. RSDDSTAT_OLAP.png
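
If you prefer to do the check directly in ABAP instead of SQVI, a minimal sketch of the same join is shown below. STEPUID is the join field described above; the field names AGGREGATE and OBJNAME are assumptions based on the standard statistics tables, so please verify them in SE11 on your release.

REPORT zaggr_query_usage.

* Minimal sketch only: list which queries hit a given aggregate by
* joining the two statistics tables on STEPUID, as described above.
* The field names AGGREGATE (aggregate number) and OBJNAME (query
* technical name) are assumptions; check them in SE11 before use.

PARAMETERS: p_aggr(30) TYPE c.

TYPES: BEGIN OF ty_usage,
         aggregate TYPE c LENGTH 30,   " aggregate number / technical name
         objname   TYPE c LENGTH 30,   " query technical name
       END OF ty_usage.

DATA: lt_usage TYPE STANDARD TABLE OF ty_usage,
      wa_usage TYPE ty_usage.

SELECT dm~aggregate olap~objname
  FROM rsddstat_dm AS dm
  INNER JOIN rsddstat_olap AS olap
    ON dm~stepuid = olap~stepuid
  INTO TABLE lt_usage
  WHERE dm~aggregate = p_aggr.

* One query may appear many times in the statistics; keep one row each.
SORT lt_usage BY objname.
DELETE ADJACENT DUPLICATES FROM lt_usage COMPARING objname.

LOOP AT lt_usage INTO wa_usage.
  WRITE: / wa_usage-aggregate, wa_usage-objname.
ENDLOOP.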

The simplest possible way (no ABAP, no need to define BW data source) to combine the data mentioned above is to use SAP QuickViewer functionality (TCode:SQVI).

 

All we need to do now is to define and run a query joining the data.

1. Give the query a code name

2. Start creating the query by pressing the "Create a query" button

4.SQVI create query.png

 

3. Enter a query title

4. Define the data source as "Table join"

5.SQVI table join.png

 

5. Add both tables to the join

 

6.SQVI add table.png

 

6. Join the tables on STEPUID only.

 

7.SQVI connecting tables.png

Press F3 to exit the window after all is done.

 

7. Define which fields should be displayed in the query and which should be used as filters. Use the appropriate checkboxes to do it.

7.SQVI list of fields.png

What was chosen can be seen here

8.SQVI list of fields ddd.png

 

And here:

9.SQVI list of fields ddd.png

 

8. Set the export format to MS Excel

10.SQVI to excel.png

9. Save changes

11.SQVI to excel save.png

 

10. Run query

 

It’s important to restrict “Internal type of query …” to OLAP.

 

11.SQVI ograniczenie.png

 

11. After the data is displayed in MS Excel, the Remove Duplicates functionality may be used.

11.SQVI wyswietlenie.png

 

Thank you for reading this blog.

I would be very grateful for any comment on it.

 

I'd like to give special thanks to Tomasz Piwowarski for helping me with creating the QuickViewer query.

 

Regards, Leszek

Hi,

 

I tried to make use of the function module RSD_IOBJ_USAGE to create a where-used list for InfoObjects that are relevant for analysis authorizations.

 

Could be useful for some of you.

 

I'm thankful for any hint concerning my ABAP.

 

 

Cheers




REPORT ZIOBJ_MATRIX.



**********************************************************************

TYPES:



       BEGIN OF ls_infoobj,

          RSDIOBJNM TYPE RSDIOBJNM,

       END OF ls_infoobj,



       tt_infoobj TYPE STANDARD TABLE OF ls_infoobj,



       BEGIN OF ls_report,

        infoobj   TYPE RSDIOBJNM,

        name      TYPE RSDIOBJNM,

        typ       TYPE STRING,

        counter   TYPE n,

       END OF ls_report.



DATA: lt_infoobj TYPE tt_infoobj,

      wa_infoobj TYPE ls_infoobj,



      lt_cube TYPE TABLE OF rs_s_used_by,

      lt_multi TYPE TABLE OF rs_s_used_by,

      lt_iobj TYPE TABLE OF rs_s_used_by,

      wa_used_by TYPE rs_s_used_by,



      lt_odso TYPE RSO_T_TLOGO_ASC,

      wa_odso TYPE RSO_S_TLOGO_ASC,



      lt_iset TYPE RSO_T_TLOGO_ASC,

      wa_asc TYPE RSO_S_TLOGO_ASC,



      lt_viobj TYPE RSD_T_VIOBJ,

      wa_viobj TYPE RSD_S_VIOBJ,





      lt_report TYPE TABLE OF ls_report,

      wa_report TYPE ls_report.





**********************************************************************





START-OF-SELECTION.



* 1. Part: Collection of relevant elements



CALL FUNCTION 'RSEC_GET_AUTHREL_INFOOBJECTS'

* EXPORTING

*   I_INFOPROV                      =

*   I_IOBJNM                        =

*   I_CONVERT_ISET_NAMES            = RS_C_TRUE

IMPORTING

    E_T_IOBJ                        = lt_viobj

* EXCEPTIONS

*   COULD_NOT_GET_INFOOBJECTS       = 1

*   OTHERS                          = 2

          .

IF SY-SUBRC <> 0.

* Implement suitable error handling here

ENDIF.





LOOP AT lt_viobj INTO wa_viobj WHERE IOBJNM NS '0TC'.

    wa_infoobj-RSDIOBJNM = wa_viobj-IOBJNM.

    APPEND wa_infoobj TO lt_infoobj.

ENDLOOP.





* 2. Usage of relevant elements



LOOP AT lt_infoobj INTO wa_infoobj.



CLEAR: lt_cube, lt_multi, lt_iobj, lt_odso, lt_iset.



CALL FUNCTION 'RSD_IOBJ_USAGE'

  EXPORTING

    I_IOBJNM                    = wa_infoobj-RSDIOBJNM

*   p_IOBJNM

*   I_IOBJTP                    = RS_C_SPACE3

*   I_TH_TLOGO                  =

*   I_BYPASS_BUFFER             = RS_C_FALSE

    I_OBJVERS                   = 'A'

*   I_INCLUDE_ATR_IN_ISET       = RS_C_FALSE

*   I_INCLUDE_ATR_IN_REF        = RS_C_FALSE

IMPORTING

    E_T_CUBE                    = lt_cube

*   E_T_IOBC                    =

*   E_T_ISCS                    =

*   E_T_ISNEW                   =

*   E_T_TABL                    =

*   E_T_CMP_IOBJ                =

*   E_T_ATR_IOBJ                =

*   E_T_ATR_NAV_IOBJ            =

    E_T_MPRO_IOBJ               = lt_multi

*   E_T_NAIP_IOBJ               =

*   E_T_HIECHA_IOBJ             =

*   E_T_ICE_KYF                 =

*   E_T_AGGRCHA_IOBJ            =

*   E_T_CHABAS_IOBJ             =

*   E_T_UNI_IOBJ                =

    E_T_IOBJ                    = lt_iobj

*   E_T_CMP_KYF                 =

*   E_T_ISMP                    =

*   E_T_ISMP_INT                =

    E_T_ODSO                    = lt_odso

    E_T_ISET                    = lt_iset.

*   E_T_MPRO                    =

*   E_T_UPDR                    =

*   E_T_ANMO                    =

*   E_T_AQSG                    =

*   E_T_QUERY                   =

*   E_T_DAS                     =

*   E_T_KPI                     =

*   E_T_TRFN                    =

*   E_T_DTP                     =

*   E_T_HYBR                    =

*   E_T_DAP                     =

*   E_T_DMOD                    =

*   E_T_COPR                    =

*   E_T_BPF                     =

*   E_T_APPL                    =

*   E_T_FBP                     =

*   E_T_HCPR                    =

*   E_T_QPROV                   =

* EXCEPTIONS

*   ILLEGAL_INPUT               = 1

*   OTHERS                      = 2

IF SY-SUBRC <> 0.

* Implement suitable error handling here

ENDIF.







* Used in InfoCube

LOOP AT lt_cube INTO wa_used_by.

  wa_report-name = wa_used_by-tobjnm.

  wa_report-typ = 'Info Cube'.

  wa_report-infoobj = wa_infoobj-rsdiobjnm.

    APPEND wa_report TO lt_report.

ENDLOOP.



* Used in MultiProvider

LOOP AT lt_multi INTO wa_used_by.

  wa_report-name = wa_used_by-tobjnm.

  wa_report-typ = 'Multi Provider'.

  wa_report-infoobj = wa_infoobj-rsdiobjnm.

    APPEND wa_report TO lt_report.

ENDLOOP.



* Used in InfoObject

LOOP AT lt_iobj INTO wa_used_by.

  wa_report-name = wa_used_by-tobjnm.

  wa_report-typ = 'Info Object'.

  wa_report-infoobj = wa_infoobj-rsdiobjnm.

    APPEND wa_report TO lt_report.

ENDLOOP.



* Used in DSO

LOOP AT lt_odso INTO wa_asc.

  wa_report-typ = 'DSO'.

  wa_report-name = wa_asc-objnm.

  wa_report-infoobj = wa_infoobj-rsdiobjnm.

    APPEND wa_report TO lt_report.

ENDLOOP.



* Used in InfoSet

LOOP AT lt_iset INTO wa_asc.

  wa_report-typ = 'InfoSet'.

  wa_report-name = wa_asc-objnm.

  wa_report-infoobj = wa_infoobj-rsdiobjnm.

    APPEND wa_report TO lt_report.

ENDLOOP.





ENDLOOP.







* 3. Part: Report



WRITE: 'Where-used list:'.



LOOP AT lt_infoobj INTO wa_infoobj.

  WRITE: / wa_infoobj-rsdiobjnm.

ENDLOOP.



SKIP 1.



WRITE: / '',

       /(30) 'Info Objects', (30) 'Info Provider', (30) 'Typ'.



LOOP AT lt_report INTO wa_report.

  WRITE: / wa_report-infoobj, wa_report-name, wa_report-typ.

ENDLOOP.

Going around various organizations, I often hear complaints about how SAP BI/BW is "slow", "expensive", "rigid", and "carries long development cycles".  In the era of HANA, the prediction of BW's immediate demise resonates seemingly everywhere.

 

Yet, for all the years I have been working with SAP BW, I find it is a tremendous tool for both developers and self-servicing business users.  First coming to mind: It has a model driven GUI, and is database agnostic.  You don't need a DBA every time you want to touch a data model.  Even data staging, governance, and DB statistics come packaged within the graphical system administration tools and transactions.

 

Do you want to create a master data object with attributes and texts? Go ahead, open transaction RSD1 and fill in a few wizard-like "forms". The underlying tables and joins are created for you in the database. You don't need specific database and SQL knowledge. Even in the wonderful world of HANA today, you would otherwise need to bring a couple of tables into a join, define the join type as Text Join, and remember to add in the language column.

 

What about the "Long Development cycles" ? BW data model, even under the best practice, is a little extra compared to working directly with tables.  We see SAP trying hard to bring the "agile" and "fast" into BW:  Open ODS, ODP, Transient providers are a few.....


That still doesn't sound fast enough ? Well, have you visited a brick-and-mortar warehouse lately ? If you haven't, I encourage you to do so whenever you get a chance.  Large warehouses run by the likes of Fedex or UPS are a thing of beauty, if you enjoy watching well-run processes.

 

When I stopped by a warehouse for the first time, I noticed one significant overhead, or "waste of time": incoming well-packed pallets were unpacked and their items individually scanned, and new pallets were built from scratch before being put onto the shelves. It would obviously have been a lot quicker to put the inbound pallets directly onto the warehouse shelves, right? For that matter, shipping directly out of the manufacturing plant and skipping the warehouse step would be even faster? Yet, unless you are a mom-and-pop store-front operation, warehouses and warehouse processes are the necessary evil that brings out the efficiency, speed, and savings in supply chain management.

 

In many regards, a data warehouse such as SAP BW is the equivalent of a physical distribution center. When organizations accumulate large amounts of information, a well-designed and well-run BW enables consolidation, self-service, and a 360-degree view. The "overhead" of data warehouse modeling, in turn, provides streamlined and consolidated information delivery and discovery.

Digging a little deeper into organizations where SAP BW's very existence is challenged, I realized that their BW system is often run in the shadow of SAP ERP. For example, despite BW's elaborate data-level security and task access, which can distinguish owners, BW authorizations are set up based on the transaction-code model of ECC. BW developers are starved of many useful BW transactions and capabilities, let alone the business community.

On the opposite end of the trajectory, the organizations that fell in love with BW are those where the BW system is "open" and "inviting". There are business users who write and navigate almost all of their own queries, which grow into extensive analytic workbooks and Web reports.

 

Not to drag the story out too long: these bits and pieces came to mind during a recent conversation about how to start on the right track towards BI success. Let me share them here, perhaps provoking some open discussion.

 

Let BI thrive under its own merits.

 

BI has many unique aspects that make it unfit with some traditional IT principles:

 

1) BI is access to ever-changing information and uncovering the dynamic truth underneath, beyond long-lasting static formatted reports.

 

2) BI is a decision support system.  BI is not necessarily a "must-have" to run a business.  The tagline is: ERP runs the business, BI manages the business.  Leadership and senior management sponsorship and adoption can go a long way to jump-start BI success.

 

3) Adoption: The ONLY success criterion for BI is business adoption.

ERP projects are measured by delivering on target, per functional requirements and process definitions. While BI projects can be benchmarked against the initial requirements, time, and budget, the true measure of BI success is whether the user communities adopt the BI deliverables as their own.

 

A few years ago, there were studies showing half of the dashboards delivered to users went ignored after a few months.

 

Two conclusions we can derive from the study:

 

      a) Dashboard lifetimes may not necessarily be long... The initial design can have some short-term decisions built in, as the outputs could well need to be updated soon after use.

       b) Even when those BI projects came in on time and on budget, unless the outputs are truly adopted and truly reflect users' REAL needs, we cannot claim success.

 

Coming to the execution, a couple of quick sparks:

 

1) BI, more than ERP, is about the relationship between IT and the business. We should encourage the business to have a sense of ownership in the BI process, not make them feel "us vs. them". Consequently, the BI project methodology often works better when we allow joint decisions, flexibility, and quick turn-around: Agile.

Many modern BI tools, BPC for example, are designed to allow the business to take center stage and ownership. That is also why we have seen the mushrooming of tools like SAP Lumira, Tableau, and QlikView. One of the three pillars of the SAP BI tool-set is agility.

The BI team needs to connect directly to the business we are supporting. We need to understand the personalities of the business processes and the people.

 

Open dialogues, direct contacts, and regular workshops are key to putting BI into the users' hands. Many significant BI requirements come from informal or private inquiries and conversations, and from observing business operations.

If we only allow a formal SDLC requirements process, we may discourage the business from coming forward with "immature" thoughts or ideas. The maturity of decisions often comes after uncovering the truth through business intelligence, not before.

In practice, the emphasis on formal SDLC and IT ownership has often forced the business to grab raw data and open their own departmental data marts or BI shops...

 

Open is key.....

 

2) Following the Open theme: BI access and security authorizations should attract people in, not keep people out.

BW's analytic authorization model is a different concept from ECC's transaction-code model.

BW, BOBJ, and BPC have elaborate and extensive data- and task-level security, which can be utilized effectively to protect the information.

 

In the meantime, BW transactions and BI tools should be open at the "maximum" level to BI developers.  Power users should be encouraged to report on their own.

 

Many of the finest improvements in BW came in the form of new transaction codes. BW developers should be encouraged to explore that functionality to take advantage of the investments.

In principle, BW developers should be treated as DBAs and ABAP developers in the BW clients, not simply as report developers. This is especially true when we want to emphasize leanness and creativity in the BI groups.

 

 

Rounding back to the point at the top: BI's success is directly tied to the BI group's creativity, the user experience, and ultimately user adoption. The more organizations open up and foster those, the better off we will be in the BI journey.

 

More to come.....

When you set up the customizing of the Aggregation task in the /POSDW/IMG transaction, you have to take into account the "Processing and Performance" configuration of this task. For that, you should know how this parameter works, so I am going to explain it.

 

The aggregation task has two steps: one to generate the aggregation itself and another one to send or work with the aggregated data. In both of these steps you can configure the parameter.

 

 

 

  • Aggregation Task (STEP 1)

 

In this first step, the parameter represents the number of transactions that are taken into account for each LUW.

 

 

 

 

 

 

If you don't know what a LUW is, you can use the F1 help of this field:

 

 

 

 

The BAdI /POSDW/BADI_AGGR_TRANSACTION (the BAdI where the aggregation is done) works with only one transaction per iteration; that means the BAdI receives one transaction and aggregates its information (retail information, for example). When the BAdI finishes the treatment of all the relevant transactions, the result is an internal table which contains the aggregated data of all the processed transactions, and this information has to be updated in the aggregation database.


At this point you might think that the parameter is not going to have any effect on this task, but when the aggregated data is updated in the database the parameter becomes very important, because the method COPY_AGGREGATED_DATA_REC (of the /POSDW/CL_AGGREGATED_DATA class) receives the entries of the aggregation internal table, and depending on the value of the "Number of items" parameter this method is executed one or more times.






The entries of the internal table are processed for updating the aggregation database in packages. Each package contains a number of entries which are the result of aggregating the data of some transactions. How many entries are there in each package? It depends on the value of "Number of items" that we chose in the customizing, but this parameter does not represent the number of aggregation internal table entries to take. Instead, it represents the number of transactions taken into account to produce the aggregation internal table entries, so each package contains the entries produced by the treatment of that number of transactions. A transaction might produce one or more aggregation internal table entries.



An example of this:

 

  1. We configure the parameter in the customizing as 5 ( 5 transactions). 

  2. We execute the aggregation task for 10 transactions (They produce an aggregation internal table with 50 entries).

 

 

  3. When the information is updated in the database, it is split into packages. The first package has 30 entries that have been produced by the first block of 5 transactions. These entries are treated and their information is updated in the database; the task status of these 5 transactions becomes "PROCESSED". We still have 20 more entries in another package; these come from the last 5 transactions, and they are processed and updated in the database, after which their task status also changes to "PROCESSED".



If the "Number of items" parameter is configured as '0' or INITIAL, it produces that all transactions have to be processed at the same LUW and the package will contain all the entries of the aggregation internal table.








 

  • Aggregation Task (STEP 2)

The second step is responsible for processing the information in the aggregation database (the OUTBOUND task), so, for example, we can send it through an ABAP proxy or do whatever we want with it. In this second step we have to configure the parameter just like in the first step:




As you can notice, the value of this parameter can be different from the value in the first step, because in this second step we are not working with transactions but with entries of the aggregation database. The aggregation database contains entries produced by the aggregation task, so when we execute the OUTBOUND task to process these entries, the package size is very important because in each LUW we treat a specific number of database entries. For example, if we configure the parameter as 20 and we have 50 entries in the database, 3 packages will be created: the first package will have 20 entries, the second will have 20 entries and the last package will have 10 entries.



The BAdI for the OUTBOUND task is /POSDW/BADI_PROCESS_PACKAGE, and we select the database entries using the method GET_PACKAGE_DATA. For the example of 50 entries, the method would be called 3 times.
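
As a quick illustration of the package arithmetic described above (a sketch only, not /POSDW/ code), the number of packages, and therefore of GET_PACKAGE_DATA calls, for the 50-entry example can be derived like this:

REPORT zpkg_count_demo.

* Illustration only: how many packages result from 50 aggregation
* database entries and a customizing value "Number of items" = 20,
* as in the example above.
DATA: lv_entries  TYPE i VALUE 50,
      lv_pkg_size TYPE i VALUE 20,
      lv_packages TYPE i.

lv_packages = ( lv_entries + lv_pkg_size - 1 ) DIV lv_pkg_size.

WRITE: / 'Number of packages:', lv_packages.   " 3 packages: 20 + 20 + 10 entries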



 

 

As in the first step, if we configure the parameter as '0' or leave it INITIAL, all entries are processed in the same LUW.

 

 

 

 

 

 

 

In summary, the package size is very important for processing and performance, so you should choose the value that best fits your purpose.

EasyQuery, What is it?

 

To put it simply, it is an automated process by which the SAP system generates all the necessary backend objects so that the data output of a normal BW query can be consumed programmatically, from a local or remote, SAP or non-SAP system, by calling an RFC-enabled autogenerated function module.

 

How to?

 

Just tick the EasyQuery checkbox in the properties of a normal query in Query Designer and execute the query; all backend function modules and structures are autogenerated. All you need to do is declare a variable of the autogenerated structure type and pass it to the autogenerated FM.

 

Transport issues

 

Here we reach the crux of this post. The standard method of transporting an EasyQuery is to simply collect the original query, transport it, then tick the EasyQuery checkbox again in the target client and execute it again. The issue is that there is no guarantee the autogenerated objects will have the same names in the target client as they had in the source client. This creates problems in consumption code, especially when it is consumed from SAP systems: you will have a piece of code which addressed the autogenerated FM correctly in one client, and after transport it will obviously fail in the new client if the autogenerated FM name is different.

 

The fixes below are possible.

 

1. One option is to open each client and correct the code so that it matches the autogenerated objects. This is not recommended at all, as it would mean opening the production client too.

 

DATA: t_grid_data TYPE /bic/ne_4.  " the number 4 in this type will have to be
                                   " edited according to what number got
                                   " generated after transport in the target

CALL FUNCTION '/BIC/NF_4'          " the same number will have to be
                                   " updated in the function name too
  EXPORTING
    i_s_var_02ix_asondat   = wa_l_r_asondate
  TABLES
    e_t_grid_data          = t_grid_data
    e_t_column_description = t_col_desc
    e_t_row_description    = t_row_desc
    e_t_message_log        = t_message_log.

 

2. The second option is to use the FM RSEQ_GET_EQRFC_NAME to fetch the autogenerated FM name for the EasyQuery name passed in. You will still have to provide the custom structure as input to the autogenerated FM, which can be accomplished by using a CASE statement to check the number at the end of the returned FM name and declaring the custom structure accordingly. This works because the autogenerated FM and structure use the same number as a differentiator.

 

lv_eq_name = 'IHRPA_C01_Q0013'.

CALL FUNCTION 'RSEQ_GET_EQRFC_NAME'
  EXPORTING
    i_query   = lv_eq_name
  IMPORTING
    e_rfcname = lv_fm_name.  " get the substring from this variable after the
                             " first 3 characters and use it in a CASE

* Example: CASE lv_substring.
*            WHEN '4'.
*              t_grid_data TYPE /bic/ne_4
*            WHEN '5'.
*              t_grid_data TYPE /bic/ne_5

CALL FUNCTION lv_fm_name
  EXPORTING
    i_s_var_02ix_asondat   = wa_l_r_asondate
  TABLES
    e_t_grid_data          = t_grid_data
    e_t_column_description = t_col_desc
    e_t_row_description    = t_row_desc
    e_t_message_log        = t_message_log.

 

 

3. The third, and seemingly both logical and clean, method would be to take these autogenerated objects as a template, create custom Z objects, and use those in the consumption code. We can have a single custom FM for all EasyQueries created; this custom FM replicates the functionality inside the autogenerated FM minus some security checks (for example, a date check which is there to make sure the autogenerated FM matches the latest changes to the query). This custom FM can be made to accept the EasyQuery name, the custom structure (which has to be created separately for each EasyQuery, taking the autogenerated structure as a sample) and the as-on date.

 

DATA: t_grid_data TYPE z_query_name_struct.  " created using /bic/ne_4 as template;
                                             " will have to be repeated for each separate
                                             " EasyQuery and also for a regenerated EasyQuery

CALL FUNCTION 'Z_QUERY_NAME_FUNCTION'  " created using /BIC/NF_4 as template,
                                       " but can be used for any EasyQuery going forward
  EXPORTING
    i_s_var_02ix_asondat   = wa_l_r_asondate
    i_s_var_eq_name        = 'IHRPA_C01_Q0013'
  TABLES
    e_t_grid_data          = t_grid_data
    e_t_column_description = t_col_desc
    e_t_row_description    = t_row_desc
    e_t_message_log        = t_message_log.

 

I will be back with the implementation code for Z_QUERY_NAME_FUNCTION and the steps for Z_QUERY_NAME_STRUCT.

Hope the information provided above is useful. Please suggest further as you see fit, on the subject.

 

Regards

Darshan Suku

Ramya huleppa

Update from PSA Issue.

Posted by Ramya huleppa Dec 19, 2013

Hi,

 

A few days back I searched SDN for the same issue (mentioned below) and found many solutions related to it, but none of the solutions I found could help me in resolving the issue.

 

Hope it helps someone.

 

 

The process chain failed due to an "Update from PSA" issue.

 

When checked for error message

 

Details tab: "Last delta upload not yet completed. Cancel"

 

In the monitor screen I could see the load staying in yellow status for longer than expected and ending with a short dump; the extracted records were, for example, 0 from 52 records.

In the source system I could see that the extraction job had also finished.

 

Checks done:

  1. Set the status to red and tried reloading, but it failed again with the same error.
  2. Checked for IDocs in BD87, but no IDocs were stuck.
  3. Checked in SM58 whether any tRFCs were stuck; I found the error status below:

 

SQL error 14400 when accessing table /BIC/B00000XX000

 

TRFCs.png

 

When checking in ST22 I found the error below:

 

dump.png

 

What I tried:

 

I deleted the PSA table data and tried loading again, but the process chain failed with the same error. However, when I changed the InfoPackage settings from "Only PSA" to "Data Targets", the load was successful. This shows that there is some issue with the PSA table, as mentioned in the short dump.

 

Exact solution: As the short dump said "Inserted partition key does not map to any partition", as a trial-and-error approach I tried activating the transfer structure using the program RS_TRANSTRU_ACTIVATE_ALL, which worked, and the loads ran fine as normal.
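
If you prefer to trigger the repair from your own code rather than via SE38, a trivial sketch is shown below (the program name is the one mentioned above; running it via its own selection screen lets you enter the selections there):

* Sketch only: start the standard repair program mentioned above from
* another program. VIA SELECTION-SCREEN shows its own selection screen
* so you can restrict which transfer structures get activated.
SUBMIT rs_transtru_activate_all VIA SELECTION-SCREEN AND RETURN.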

On 04 Dec 2013 I received my invoice from Amazon. (Please note this can vary a lot depending on your usage, country, etc.)

 

This is the bill for 2 hours of usage: (currency USD)

 

 

 

 

I was looking for a trial version of SAP Lumira to play around with on my iPad, and I found this page:

 

http://global.sap.com/software-free-trials/index.epx

 

Here's a list of a lot of products SAP offers as free or time trials.

 

Well I found the Lumira free version here, but also I found:

 

SAP NetWeaver Business Warehouse, powered by HANA

 

The temptation was big. I've used 7.3 but never on top of HANA; I've seen posts and documents, but I wanted to feel it.

 

So why not? Immediately after signing up, you will be prompted for your Amazon EC2 account.

 

NOTE: You (yes, you, and from your own wallet) will have to pay for Amazon EC2 hosting. Basically, EC2 is a service for renting virtual machines, so you will need to rent virtual machines to run your BW on top of HANA. Amazon's invoicing model is pretty complex, but the bottom line is: while your virtual machine is turned on (running), you are spending money. I used my BW 7.3 on top of HANA for 2 hours, and was loading data and playing around the whole time.

 

So far in my Amazon EC2 account I don't see any charges, but they will come; I will update this blog when I receive them.

 

Now I will just add some images and links from my experience and ask you to comment about yours.

 

Cheers!

 

The longest wait is the 10 to 20 minutes while your Amazon virtual machines are being created / activated:

 

Activating.png

 

And after some time (I was refreshing the web page every now and then) you see that's done:

 

Activated.png

 

Then you can click the Connect button and an RDP file will download with this name: "Instance of SAP NetWeaver BW 7.3 on SAP HANA SP06 with SAP BusinessObjects BI 4.0 SP07.rdp"

 

That's the great part: your Remote Desktop client opens and you are connected to your virtual machine server.

(Default passwords here)

 

 

On your remote desktop all your tools will be there installed and ready to use.

 

 

This is an actual screen shot:

Desktop.png

Then just double-click the SAP GUI and voilà, you are logged into your own BW 7.3 powered by HANA:

 

BW73.png

 

Now just have fun, experiment and remember: if you log out and leave your server running, you will be spending money, so please follow the instructions for shutting down your instance whenever you are not using it.

 

Also share your experiences running the BW 7.3 HANA trial on Amazon!

Below is an example for a better understanding of the upper and lower limits for generic delta in a generic DataSource:

 

1.jpg

 

2.jpg

 

3.jpg

Anil Kumar Thalada

Performance Tuning

Posted by Anil Kumar Thalada Aug 26, 2013

Performance Tuning

Ø   When a query runs slowly, how do we improve it? → Query performance

Ø   When we extract data from the source system to BI, the load might be slow → Loading performance

Ø   Query performance – query execution process: Whenever we execute a query, it triggers the OLAP processor. It first checks the availability of data in the OLAP cache; if no cache is available, it identifies the InfoProvider on which the BEx report is executed, selects the records, aggregates them based on characteristic values in the OLAP processor, and transfers them to the front end (BEx), where the records are formatted.

Ø   Frontend time: the time spent in BEx to execute the query is called frontend time

Ø   OLAP time: the time spent in the OLAP processor is called OLAP time

Ø   DB time: the time spent in the database to retrieve the data for the processor is called DB time

Ø   Total time taken to execute a query = frontend time + OLAP time + DB time

Ø   Aggregation ratio: number of records selected from the database into the OLAP processor / number of records transferred to BEx

Ø   1. How to collect the statistics: RSA1 → Tools → Settings for BI Statistics (TCode RSDDSTAT) → the tables RSDDSTAT_DM and RSDDSTAT_OLAP collect the statistics → if the tables already contain data, delete the contents using the "Delete statistical data" button (it asks for the period) → Delete → observe the job in SM37 → now select the InfoProvider and query for which you want to maintain statistics → make the necessary settings

Ø   Save → now whenever anyone executes the query, the statistics are written to the statistics tables

Ø   How to analyse the collected statistics: 1) By looking at the contents of the tables RSDDSTAT_DM and RSDDSTAT_OLAP

Ø   Another Way: By using the Transaction code ST03N

Ø   Another Way: By Implementing BI statistics

Ø   Go to the statistics tables → Contents → Settings → List Format → Choose Fields → deselect all → select INFOCUBE and QUERY ID (name of the query), QDBSEL (number of records selected from the database), QDBTRANS (number of records transferred to BEx), QTIMEOLAP (time spent in OLAP), QTIMEDB (DB time) and QTIMECLIENT (frontend time) → Transfer → observe the statistics
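
As an alternative to browsing the table contents manually, a minimal ABAP sketch using the fields listed above could compute the aggregation ratio per statistics record. It assumes QDBSEL and QDBTRANS live in RSDDSTAT_DM as implied above; if they are in RSDDSTAT_OLAP in your release, adjust the table name accordingly.

REPORT zquery_stat_ratio.

* Minimal sketch: compute the aggregation ratio (records selected from
* the database / records transferred to BEx) from the statistics table,
* using the QDBSEL and QDBTRANS fields listed above (assumed fields).
DATA: lt_stat  TYPE STANDARD TABLE OF rsddstat_dm,
      wa_stat  TYPE rsddstat_dm,
      lv_ratio TYPE p DECIMALS 2.

SELECT * FROM rsddstat_dm INTO TABLE lt_stat UP TO 100 ROWS.

LOOP AT lt_stat INTO wa_stat.
  IF wa_stat-qdbtrans > 0.
    lv_ratio = wa_stat-qdbsel / wa_stat-qdbtrans.
    WRITE: / wa_stat-qdbsel, wa_stat-qdbtrans, lv_ratio.
  ENDIF.
ENDLOOP.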

Ø   Another way: ST03 à Expert Mode à Double click on BI  Work Load à Select drop down for aggregation à Select Query à Filter your query à Go to all data tab à Observe the statistical information

Ø   Another Flexibility by implementing BI statistics (RSTCC_INST_BIAC) àInstead of looking the data in the table what SAP has done is they have given some readymade queries, info cubes, transformations, readymade multi providers, install them & load the data to these cube à There are some readymade BEx queries which will give the analysis of the reports

Ø   0TCT_C01 (Front-End and OLAP Statistics (Aggregated))

Ø   0TCT_C02(BI Front-End and OLAP Statistics (Details))

Ø   0TCT_C03(Data Manager Statistics (Details))

Ø   0BWTC_C04(BW Statistics - Aggregates)

Ø   0BWTC_C05(BW Statistics - WHM)

Ø   0BWTC_C09(Condensing Info Cubes),

Ø   0BWTC_C11(Data deletion from info cube),

Ø   0TCT_MC02 (MULTIP PROVIDER - Front-End and OLAP Statistics (Details))

Ø   0TCT_MC01 (Multi Provider - Front-End and OLAP Statistics (Aggregated))

Ø   0BWTC_C10 (Multi Provider - BW Statistics)

Ø   Most of the system maintenance reports come from this content, such as how many users used which reports, and administration reports

Ø   STEPS: Install the Business Content DataSources → RSA5 → expand the application component Business Information Warehouse → expand the application component TCT → install the DataSources (6 in total) → replicate the DataSources using the Myself connection → RSA13 → select the Myself connection → DataSource overview → expand BW DataSources → expand Business Information Warehouse → Technical Content → context menu → Replicate

Ø   Install all the other content such as InfoCubes, reports, MultiProviders, InfoPackages, transformations and DTPs

Ø   RSOR → expand MultiProvider → double-click Select Objects → find 0BWTC_C10 → select the grouping "In Data Flow Before and Afterwards" → install in the background → wait until the installation is done

Ø   Load the data into all the cubes by scheduling the InfoPackages and DTPs

Ø   2 reports are mainly used for report analysis → "Utilizing OLAP per query" (0BWTC_C10_Q012) and "Utilizing OLAP per InfoCube" (0BWTC_C10_Q013)

Ø   Open query Q012 in the Analyzer → execute → specify the cube name and query name → execute → observe the statistics

Ø   Different aspects we can work on to improve query performance:

Ø   If DB time is high: 1. Modelling aspects 2. Query design 3. Compression 4. Aggregates 5. Partitioning 6. Read mode of the query 7. Pre-calculated web templates 8. Line-item dimensions 9. Indexes

Attributes

Types of Attributes:

  1. Display attributes
  2. Exclusive attributes
  3. Navigational attributes
  4. Time-dependent attributes
  5. Time-dependent navigational attributes
  6. Compounding attributes
  7. Transitive attributes

1. Display Attribute: If only the "Attribute" checkbox is selected for an InfoObject, it becomes a display attribute

Ø  It is stored in Attribute table - /P

Ø  It gives the present truth in the reporting

Ø  It will completely depend on the Main Characteristic

2. Navigational Attribute: Whenever an attribute should act as a characteristic at query level, we use a navigational attribute

Ø  It gives present truth

Ø  It is stored in - /X

Ø  Navigation attribute should be ON with description

Ø  Whatever we can do with a normal characteristic in a query, we can do that with navigational attribute in the reporting

Ø  Naming convention of a navigational attribute → main characteristic name _ attribute name

Ø  The navigational attribute has to be included in the InfoProvider (e.g. DSO, cube, MultiProvider) and switched ON there so that it is enabled for reporting; otherwise it is not available for reporting

3. Exclusive Attribute: The attribute only check box is deselected

4. Time-Dependent Attribute (display attribute + time-dependent property): (select the "Time Dependent" checkbox) → when the value of a characteristic changes over time, we model the InfoObject as a time-dependent attribute, which makes it possible to maintain different values with respect to two additional fields (Date From and Date To)

Ø  Time dependent attributes will be stored in -/Q

Ø  Key date will define what value it has to bring from the time dependent attribute table

5. Time dependent Navigational attribute

Ø  It is both Time Dependent & navigational

Ø  It is stored in table - /y

 

6. Compounding Attribute:

Ø  Superior level of attribute

Ø  When the value of one InfoObject depends on the value of another InfoObject

Ø  Example: I have two plants, PLANT 1: M1, M2, M3, PLANT 2: M1, M2, M3

Ø  Here Material is compound of PLANT

Ø  The compounding attribute acts as part of the primary key of all the attribute, text and SID tables

Ø  Ex: 0MAT_PLANT, 0REASON_CDE_0COMP_CODE.

Ø  Compounding attribute will degrade loading performance

7. Transitive Attribute:

Ø  A second-level navigational attribute

Ø  When a navigational attribute itself has another navigational attribute

Ø  How do you find the delta process of a DataSource?

Ø  In the ROOSOURCE table

Ø  Detailed delta information: RODELTAM

Ø  Early delta initialization (available for LO extraction): In the InfoPackage Update tab you can see a radio button "Early Delta Initialization". If you execute an early delta initialization, updates in the source system can continue and data can be written to the delta queue while the initialization request is being processed.

 

 

 

Regards

 

Anil Kumar Thalada

 

 

 

How to enable Delta Mechanism for Generic Extractors

 

Ø 3 Types of Delta Mechanism for Generic Extraction

Ø 1. CALDAY (calendar day) 2. Timestamp 3. Numeric pointer

Ø Delta mechanism with CALDAY: I have a table VBAK – VBELN, ERDAT, AUDAT, NETWR

Ø We can set up a delta mechanism based on CALDAY (a DataSource capability) when we have a change-date field in the DataSource

Ø AUDAT gives the latest change date

     VBELN      ERDAT        AUDAT        NETWR
     4965       01.01.2006   01.01.2006    5000
     4966       02.01.2006   02.01.2006    1000
     4967       05.02.2006   05.02.2006   15000
     4968       06.07.2007   06.07.2007   20000
     4965       01.01.2006   09.07.2007    9000

Ø If we run the delta initialization on 03.01.2006, it will pick 4965 and 4966

Ø If we run the delta on 05.02.2006 at 12 o'clock at night, it will pick 4967 (records with a date greater than the last loaded date and up to 05.02.2006)

Ø If we run the delta on 10.07.2007, it will pick 4968 and the changed record 4965

Ø Limitations of the delta mechanism with CALDAY: Multiple deltas cannot be run on the same day because the time factor is not considered. If the delta mechanism is set up with CALDAY, we can run the delta only once per day, at the end of the day.
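
To make the CALDAY behaviour above concrete, here is a simplified sketch of the selection such a delta effectively performs on the VBAK example. In a real generic DataSource the delta pointer is managed by the system; the variable below only imitates it.

REPORT zcalday_delta_demo.

* Simplified illustration of a CALDAY-based delta on the VBAK example.
* The pointer is normally maintained by the delta management, not by
* the application code shown here.
DATA: lt_vbak     TYPE STANDARD TABLE OF vbak,
      lv_last_run TYPE sy-datum VALUE '20060103'.   " date of the previous run

* Pick everything changed after the last run, up to (and including) today.
SELECT * FROM vbak
  INTO TABLE lt_vbak
  WHERE audat > lv_last_run
    AND audat <= sy-datum.

* Afterwards the pointer moves forward to today's date.
lv_last_run = sy-datum.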

Ø 2. Delta mechanism with timestamp: Since the delta field is a timestamp, we can run the delta any number of times per day. To set up a delta with timestamp we need a timestamp field in the source (e.g. UPD_TMSTMP)

Ø 3. Delta mechanism with numeric pointer: If we set up the delta based on a numeric pointer, it only recognizes newly added records, not modified records

Ø Safety interval: Let's assume your delta runs on Sunday, Thursday and Saturday, your last run was on Sunday and the next run is on Thursday. In this case, if you have set the lower limit to 1, the Thursday delta run will fetch data from Saturday until Thursday.

"Delta management can take the lower limit from the last extraction. The lower limit for the safety interval contains the value that needs to be subtracted from the highest value of the previous extraction to obtain the lowest value of the following extraction." In other words: lower limit of the next extraction = highest value of the last extraction - safety interval.
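
A tiny worked sketch of that rule for a CALDAY delta field (the values are illustrative only):

REPORT zsafety_interval_demo.

* Worked example of the rule quoted above, for a CALDAY delta field:
* lower limit of next extraction = highest value of last extraction - safety interval.
DATA: lv_last_high TYPE d VALUE '20131219',   " highest CALDAY value of the previous delta
      lv_safety    TYPE i VALUE 1,            " lower-limit safety interval: 1 day
      lv_next_low  TYPE d.

lv_next_low = lv_last_high - lv_safety.       " next delta re-reads from 18.12.2013 onwards

WRITE: / lv_next_low.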

 

Example: Lower Limit = 2, Upper Limit = 1

 

 

Example: Lower Limit = 0, Upper Limit = 0

 

 

           Tips:

 

Ø If the delta field is a date (record creation date or change date), then use an upper limit of 1 day. This loads the delta into BI as of yesterday. Leave the lower limit blank.

Ø If the delta field is a timestamp, then use an upper limit equal to 1800 seconds (30 minutes). This loads the delta into BI as of 30 minutes ago. Leave the lower limit blank.

 

NOTE: Safety interval should be set so that no document is missed – even if it was not stored in the DB table when the extraction took place.

 

 

     Select Delta type:

Ø New status for changed records (i.e. after-image, AIE); this can be used with a DSO as the data target.

Ø Additive delta (i.e. aggregated data records, ADD); compatible with both InfoCube and DSO.

Anil Kumar Thalada

BW Aggregates

Posted by Anil Kumar Thalada Aug 22, 2013

AGGREGATES:

Ø  Aggregates are smaller cubes built on an InfoCube in order to improve query performance

Ø  Aggregates are built for specific characteristics, but by default they take all the key figures of the cube

Ø  When we execute a query, the OLAP processor searches for a suitable aggregate; if it finds one it fetches the data from the aggregate, otherwise it reads the data from the cube

Ø  Rule of thumb: DB time is > 30% of the total time and the aggregation ratio should be > 10%

Ø  Select the InfoCube → context menu → Maintain Aggregates → Create by Yourself → Create → specify a description → OK → drag and drop the characteristics you want to aggregate on to the right side → Activate → Continue

Ø  Observe the F table and E table of the aggregate

Ø  Select the aggregate → click on "Display Aggregate Data"

Ø  Initial fill: After creation of the aggregate, the first load of data from the cube into the aggregate is called the initial fill

Ø  Roll-up: After the initial fill, new requests loaded into the cube have to be rolled into the aggregates; this is done by the roll-up process. If you don't do the roll-up, those requests are not available for reporting

Ø  Go to the InfoCube Manage screen → Rollup tab → Selection → Immediate → Execute

Ø  Aggregates degrade the database (loading) performance

Ø  If we have many aggregates, the performance of the attribute change run degrades

Ø  Aggregates are compressed by default, and we cannot build a query directly on an aggregate

Ø  Whenever we run the query, how can we know whether the data is fetched from the aggregate or from the cube?

Ø  RSRT → specify the query name → Execute + Debug → select "Display Aggregate Found" → OK

Ø  The technical name of the aggregate is generated by the system after activation

Ø  We can have multiple aggregates built on a single cube

Ø  Different flexibilities:

Ø  Maintain an aggregate selectively for a characteristic → select the characteristic → context menu → Fixed Value → select the value as per the requirement → OK → Activate

Ø  Navigational attributes can be used in building aggregates: create a navigational attribute for the characteristic → aggregate maintenance → drag and drop the navigational attribute → select the aggregate and save → activate and refill

Ø  Based on the customer number, the navigational attribute value is assigned

Ø  The property of an aggregate is additive, while the property of master data is overwrite

Ø  Whenever we change master data, the data in the aggregates needs to be adjusted according to the attribute table; to reflect those changes we run the "Attribute Change Run"

Ø  RSA1 → Tools menu → Apply Hierarchy / Attribute Change Run → selection option (Immediate) → InfoObject list (main characteristic) → Save → Back → Execute

Ø  We can use hierarchies while building aggregates → select the characteristic (CNO) → context menu → Hierarchy Level → node level (03) → select the aggregate and save → activate and refill

Ø  Maintaining an aggregate with multiple selective characteristics is not possible → to overcome this limitation we have to build multiple aggregates

Ø  Different options with aggregates: switch on/off – when we switch off an aggregate, the structure of the aggregate is still present, but the aggregate is not identified by the OLAP processor for reporting

Ø  Whenever required, we can switch it on again

Ø  Deactivate: the structure of the aggregate remains, but the data is deleted

Ø  OLAP TIME: Query Design, Modelling Aspects, OLAP Cache, Compression, Aggregates

Ø  Frontend Time: Query Design, Modelling Aspects, Formatting

 

As beginners we normally have some issues transporting objects across the landscape.

So I thought this would throw some light on collecting objects and transporting them across the landscape in SAP BW.

Firstly we need to know about the transaction codes SE09/SE10/SE01, which are normally used for transport purposes across the landscape.

 

SE09:http://help.sap.com/saphelp_45b/helpdata/en/97/897415dc4ad111950d0060b03c6b76/content.htm

SE01:http://help.sap.com/saphelp_45b/helpdata/en/94/897415dc4ad111950d0060b03c6b76/content.htm

 

 

I want to explain a scenario in which I have the following data flow.

 

Eg 1:7.0 data flow.

 

Multiprovider <--Infocube<--Transformation<--DSO<--Transformation<--Datasource

 

The following objects need to be collected in the transport before transporting:

 

1. Multiprovider

2. Info cube

3. Transformation

4. DSO

5. Transformation

6. Data source.

Collect all the above objects in one transport.

 

The other example is about the collection of process chain variants.

 

Eg 2:For Process Chain.

 

Before transporting a process chain, collect all its variants from the Extras menu and create an object directory entry.

 

Apart from this, we also have the Transport Connection facility:

 

http://help.sap.com/saphelp_nw04/helpdata/en/0b/5ee7377a98c17fe10000009b38f842/content.htm

A prerequisite to mastering these ideas is learning to write ABAP routines such as start routines, expert routines, end routines, characteristic routines, DTP routines for filters and InfoPackage routines for filters.

 

Go through the following help notes to learn about these routines.

 

Routines in Transformations (SAP Library - Business Intelligence)

 

You can also explore the SCN forum to master these routines.

 

As is known, a transformation generates a standard program in which we can see the code we wrote in the start/expert/end routines.

 

1. Go to the generated program, find your respective routine in it, and set a session break-point as shown below.

 

- Select "Display Generated Program" in the Extras menu as shown below.

Capture 1.JPG

- Find your piece of code in the generated program

 

Capture 2.JPG

-Run(Simulate)

 

DTP in Debugging mode

 

Capture 4.JPG

The debugger screen opens and the control stops at the session break-point as shown below.

Capture 5.JPG

 


 

2. Hard-coding a break-point in the routines.

 

Go to your respective routine and hard-code a BREAK-POINT statement as shown below.

Capture 6.JPG
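
As an illustration only (not the exact generated code), the editable part of a 7.x start routine might then look like the sketch below. The METHOD frame is generated by the transformation, and the field used in the DELETE is a hypothetical example field.

  METHOD start_routine.
*   Only the lines between the generated comment markers are written by
*   the developer; everything else comes from the transformation.

    BREAK-POINT.                                    " hard-coded break-point: the debugger
                                                    " stops here when the DTP is simulated
                                                    " or run in dialog

    DELETE SOURCE_PACKAGE WHERE /bic/zamount = 0.   " hypothetical example field, similar
                                                    " to the DELETE used later in this blog
  ENDMETHOD.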

Run/Simulate DTP.


Capture 4.JPG

The control stops at the hard-coded break-point as shown below.

Capture 7.JPG

 


 

3.From DTP

 

Many break-points can be set up while simulating the DTP, as shown below.

I have checked "Before Transformation" in Change break-points of transformation which creates a break point in simulation mode.

Capture 9.JPG

The control stops at a point before the transformation code is executed, as shown below.

Now just click on "Session Break-point" as highlighted.

Capture 11.JPG

A screen appears with different tabs; now select "ABAP Command" and specify where you want the control to stop again, which should be at your code (start/end/expert/characteristic routine).

Capture 12.JPG

In my example, DELETE is the command used in the start routine, so the control stops as shown below.

Capture 13.JPG

 

Now go ahead with your debugging and come out with flying colors.

 

 

Note: Hard-coding break-points in the routines is the least recommended approach, so please give the least preference to option 2.

 

For more information on expert-mode debugging, go through my blog:

http://scn.sap.com/community/data-warehousing/blog/2013/08/06/debugging-using-a-dtp-indicator-expert-mode-request-simulation


We are well aware of creating an open hub destination for a flat file or database table, but we still have some ambiguities about the third-party tool option.

 

Here is the method, briefly described, which may clear up some of those ambiguities.

 

Firstly, the open hub service is used to extract data from an SAP BI/BW system to non-SAP systems.

 

Before a request is sent to the third-party tool, the data is extracted into a table. The third-party tool then receives a message through RFC after the extraction is complete.

 

Creation of an open hub destination with a third-party tool as the destination type.

 

Step-1.

 

Select "Third-Party Tool" as the destination.

 

Capture 1.JPG

 

Step-2.

 

Select the generated RFC destination for the third-party tool.

 

Capture 2.JPG

 

 

 

After defining the open hub destination for the third-party tool, an RFC call sends a message about the completion of the extraction of data from BI into the table, and then the APIs can be used to read the data.

 

Below are a few steps which can be followed from http://help.sap.com/saphelp_nw70/helpdata/en/43/7a69d9f3897103e10000000a1553f7/content.htm

Useful notes: http://msdn.microsoft.com/en-us/library/dd299430.aspx

Hope some of your ambiguities were cleared...
