SAP Business Warehouse

Introduction

 

We face many challenges in our BI projects when it comes to creating analysis authorizations in bulk. Users always expect outstanding security for our BI reports, no matter what we do in the back end (modeling and Query Designer). I am going to share some important tips and techniques that I have learned through my experience, including a few standard rules of thumb.

 

This blog explains the creation of bulk analysis authorizations; I am not going to repeat the standard steps. For example, suppose we need to create profit-center-wise analysis authorizations and the respective roles, and the number of analysis authorizations will be around 200 to 500. For such situations I created a program in which we just need to upload the profit center numbers in CSV format.

 

The program creates the required technical names of the analysis authorizations along with customized descriptions (whatever we want), and it generates all the analysis authorization objects within a fraction of a second.

 

Please go through the screenshots below; they show how one analysis authorization is created in the system.

 

1: Enter the transaction code RSECADMIN and then click on "Ind. Maint.".

1.JPG

2: Specify the technical name of the analysis authorization and click on "Create".

2.JPG

3: Specify the short, medium and long texts, then click on the InfoProvider icon and select your InfoObjects. For example, I want to create profit-center-wise AA objects including cost center and controlling area.

3.JPG

4: Click on the profit center intervals and maintain the profit center value.

4.JPG

5: Maintain the InfoProvider values as shown below.

5.JPG

 

6: Save and activate.

 

The six steps above take at least a minute to create one analysis authorization. So I explored further and found that three tables are updated when an AA object is created: RSECVAL, RSECBIAU and RSECTXT.

 

I want to share this program with all of you, so that it can help us create AA objects in bulk.

 

NOTE: Please take care before running this program. If the file 'text.csv' is not present on C:\, it will do no harm to the existing AA objects; however, I did not add any checks before the table values are updated.
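The program itself is not reproduced here, but the core idea is sketched below. Everything in this sketch is a simplified assumption on my part: the file name, the naming pattern for the technical names, and especially the direct updates to RSECVAL, RSECTXT and RSECBIAU, whose field lists you should verify in SE11 and protect with proper validations before using anything like this.

REPORT zbw_bulk_aa_sketch.

" Minimal sketch only: upload profit centers from C:\text.csv and derive
" one analysis authorization per value. The INSERTs into RSECVAL, RSECTXT
" and RSECBIAU are intentionally left as a comment, because their field
" lists must be checked in SE11 first.
TYPES: BEGIN OF ty_csv,
         prctr TYPE c LENGTH 10,
       END OF ty_csv.

DATA: lt_csv  TYPE STANDARD TABLE OF ty_csv,
      ls_csv  TYPE ty_csv,
      lv_auth TYPE c LENGTH 25.

CALL FUNCTION 'GUI_UPLOAD'
  EXPORTING
    filename = 'C:\text.csv'
    filetype = 'ASC'
  TABLES
    data_tab = lt_csv
  EXCEPTIONS
    OTHERS   = 1.

IF sy-subrc <> 0.
  MESSAGE 'text.csv not found - nothing was changed' TYPE 'I'.
  RETURN.
ENDIF.

LOOP AT lt_csv INTO ls_csv.
  CONCATENATE 'ZAA_PC_' ls_csv-prctr INTO lv_auth.  " naming pattern (assumption)
  " Build and INSERT the value entries (RSECVAL), the texts (RSECTXT) and
  " the authorization header (RSECBIAU) for lv_auth here, then COMMIT WORK.
ENDLOOP.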

 

Conclusion :

 

It is always better to keep the above points in mind while developing AA objects. This approach can replace roughly three man-days of work and produces the output within seconds. I have tried my best to make it clear how to achieve this particular requirement. I am sure it is going to be helpful, as many clients need this kind of analysis authorization setup.


Thank You

Every now and then we see a thread posted in the BW space seeking help with transformation routines.

Start routine, end routine, field routine and expert routine: which one to use, how to use it, when to use it. These questions can only be answered with respect to the logic that needs to be applied in a particular case.

 

I would like to share here how I see, approach and write a start routine…

 

Start Routine:

 

In a data transfer process (DTP), the start routine runs first. It is followed by the transformation rules and the end routine.


In medium and complex transformations we usually have a set of logic to implement. That logic includes exclusions, lookups, conversions, calculations and so on.

 

We should have a plan for what to write in the start routine. This is decided based on the fact that the start routine runs first.

 

 

When to write;

 

Scenarios which are good candidates for the start routine are:

 

1. Deleting unwanted data.


Ex: You want to delete a record if its delivery flag is not set to X; in this case you can use the start routine.

 

2. Populating an internal table by selecting data from a DB table, which will then be used during lookups.

 

Ex: In the schedule line DataSource the currency field is not filled for some records, and you want to fill them with the company code currency. For this you have to look up the company code master data. In the start routine you can fill an internal table with all the company codes and their currency details. The same can be done for transaction data as well.



3. Sorting the records. Further transformation rules can then rely on this sort order.

 

In a goods movement DataSource, if you want to process your inward deliveries against the PO number chronologically, you can sort the source package in the start routine so that the transformation rules process the records serially.

 

How to write;

 

Simple filter


DELETE SOURCE_PACKAGE WHERE /BIC/OIFLAG NE 'X'.

 

It is better to delete unwanted records in the start routine, because they are then not processed unnecessarily in the subsequent steps, which reduces the data loading time.


Populating Internal table


IF SOURCE_PACKAGE IS NOT INITIAL.  " guard: FOR ALL ENTRIES with an empty table would select everything
  SELECT comp_code country currency
    FROM /bi0/pcomp_code
    INTO CORRESPONDING FIELDS OF TABLE it_compcd  " internal table declared in the global section
    FOR ALL ENTRIES IN SOURCE_PACKAGE
    WHERE comp_code = SOURCE_PACKAGE-bukrs.
ENDIF.


When you write a SELECT in a field routine, you are writing a SELECT inside a loop. For every iteration of the loop, the SELECT statement hits the DB table, which results in a performance problem. So it is better to write the SELECT statement in the start routine and fetch all potentially needed records from the DB table into an internal table.

This internal table can then be read with a READ TABLE statement.
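As an illustration, an end routine could then read the buffered table roughly like this. This is only a sketch: it assumes that it_compcd and a matching work area ls_compcd are declared in the global section of the routine and that the target structure has the fields comp_code and currency.

SORT it_compcd BY comp_code.  " do this once, e.g. at the end of the start routine

LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
  READ TABLE it_compcd INTO ls_compcd
       WITH KEY comp_code = <result_fields>-comp_code
       BINARY SEARCH.
  IF sy-subrc = 0.
    <result_fields>-currency = ls_compcd-currency.
  ENDIF.
ENDLOOP.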

 

Sort


SORT SOURCE_PACKAGE BY vendor createdon po_number.

 

This code sorts the source package by vendor, date and PO number. This means your oldest PO is processed first in the transformation rules.



 






Thanks!!!


Note: The content in this blog is applicable to retail systems only.


Hi All,

 

In my project we faced issues with the delta extraction for 0MATERIAL_ATTR and 0MAT_PLANT_ATTR, so I thought of sharing the workaround.

 

For Retail Systems, delta doesn’t work for 0MATERIAL_ATTR and 0MAT_PLANT_ATTR.

Hence, it is suggested to use 0ARTICLE_ATTR and 0ART_PLANT_ATTR instead.

 

Also, please note that the delta extraction for these DataSources is based on the concept of change pointers. It involves several tables, which are listed below:

 

1. Table ROOSGEN shows which message type (format: RSxxxx) has been assigned to a DataSource.

2. Table BDCPV contains the change pointers. Every time a change relevant to the DataSource is performed, a change pointer with message type RSxxxx is added to table BDCPV.

3. Table TBD62 shows the fields of tables that are relevant for a message type. That is, only changes to these fields will generate a change pointer (even if there are more fields in the DataSource).

4. ROMDDELTA stores, for each DataSource, which change document object is used and which table is used for the generation of the change documents. The information stored in table ROMDDELTA is used to generate the entries in table TBD62. The entries in ROMDDELTA are normally created when a DataSource is activated in RSA5.

    If the entry is missing, the DataSource should be re-activated (and replicated in BW to be sure of consistency).

    For a DataSource, the TABNAME field in table ROMDDELTA should be the same as the TABNAME field in TBD62.

    Also, ensure that the 'Object' field in table ROMDDELTA has the value 'MAT_FULL' for both DataSources.

    For 0ART_PLANT_ATTR, if the TABNAME field in table ROMDDELTA has the value 'MARC', you need to run an ABAP report to change these entries to DMARC. Below is the ABAP code that can be used.


REPORT zz_romddelta.

" Switch the change-document table for 0ART_PLANT_ATTR from MARC to DMARC.
UPDATE romddelta
   SET tabname    = 'DMARC'
 WHERE tabname    = 'MARC'
   AND oltpsource = '0ART_PLANT_ATTR'.

COMMIT WORK.

ROMDDELTA.png

Hope this blog post helps you.

Thanks.

Regards,

Jatin

The original blog can be found here: ekessler.de

In this blog I describe the conversion of customer-specific implementations that may become necessary due to changes to SAP standard data types.

SAP Note 1823174 (BW 7.4 conversions and customer-specific programs) already covers this subject and describes analyses and solutions. This blog is intended to supplement the note: it provides additional background information and helps in the search for the best way to carry out the changeover.

1.1     Why and when is the change necessary?

Because the keys of InfoObjects were expanded in BW 7.4 from a maximum of 60 characters to 250 characters, the data type of the domain RSCHAVL (which is used by the data element RSCHAVL) was changed from CHAR 60 to SSTRING.
The data type SSTRING is a dynamic data type with a variable length of up to 1333 characters (see http://help.sap.com/abapdocu_731/en/abenbuilt_in_types_dictionary.htm).
The data element RSCHAVL is used in several BW structures in the context of selection options (range tables). Figure 1.1 shows the two selection structures that are used to process BEx variables in customer exits. The importing parameter I_T_VAR_RANGE is based on the structure RRRANGEEXIT (BW: simplified structure for the variable exit), and the exporting parameter E_T_RANGE is based on the structure RRRANGESID (range extended by SID). Internally, both structures refer to the include structure RRRANGE (range table in BRAIN).
Figure_1_1.jpg
Figure 1.1: Selection Structure in customer exit EXIT_SAPLRRS0_001
The change to these structures caused by the change of the domain RSCHAVL does not necessarily mean that customer-specific implementations have to be adjusted. However, by changing the domain RSCHAVL, all structures in which one or more components are based on the data element RSCHAVL become deep structures. Figure 1.2 shows coding examples that lead to syntax errors after the conversion.
Figure_1_2.jpg
Figure 1.2: Invalid ABAP Syntax in BW 7.4
The declaration of local structures using the TABLES statement is allowed only for flat structures; the same applies to declarations using DATA together with LIKE.
The LIKE operator can easily be replaced by the type operator TYPE. Declarations made with the TABLES statement have to be changed to declarations with DATA.
The offset/length access that is typical for CHAR types, such as ls_e_range-low+6(2), is not allowed for string-based types and leads to syntax errors.
Offset and length specifications can be replaced by string operations (concatenation) or by the use of string templates, see Figure 1.2.
Other examples of implementations that run into syntax errors in NW 7.4 are listed in the section on syntax errors in Note 1823174 (BW 7.4 conversions and customer-specific programs).
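As a small illustration of the two rewrites mentioned above (the variable names and values are examples of mine, not taken from the note):

" Declarations: TABLES and LIKE no longer work once RRRANGESID is a deep structure.
" DATA lt_e_range LIKE rrrangesid OCCURS 0.      " old style, now a syntax error
DATA: ls_e_range TYPE rrrangesid,                 " use TYPE instead of LIKE
      lt_e_range TYPE STANDARD TABLE OF rrrangesid.

" Writing via offset/length to the now string-based LOW field is not allowed.
" ls_e_range-low+4(2) = '12'.                     " old style, now a syntax error
DATA: lv_year  TYPE c LENGTH 4 VALUE '2014',
      lv_month TYPE c LENGTH 2 VALUE '12'.
ls_e_range-low = |{ lv_year }{ lv_month }|.       " rebuild the value with a string template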

1.2     What and where are changes necessary?

  • Customer exit for variables
  • Customer-specific programs
  • Transformations / DTPs / InfoPackages

1.3  How can the places with syntax errors be found?

Okay, now we know which implementations cause syntax errors and in which areas we need to look.
  • But how do we find the affected places in customer-specific implementations?
  • Do all reports / includes / function modules / classes / ... have to be reviewed manually?
  • What are the options for correcting the errors that are found?

1.3.1  Customer exit for variables and customer-specific programs


The answer here is not clear-cut. If we look at the three areas that need to be investigated, we can state for the first two, the customer exit for variables and customer-specific programs, that they can be checked automatically.
Note 1823174 (BW 7.4 conversions and customer-specific programs) provides two Word documents that describe how customer-specific implementations can be examined for syntax errors using the Code Inspector (transaction SCI). The note offers two Code Inspector variants.
The Code Inspector pre/post variant can be executed before the upgrade and again after the upgrade. Using the report ZSAP_SCI_DELTA, the delta between the two runs can be determined and compared.
The second variant, CodeInspector_Post, described in the note, can be used if the syntax check could not be carried out before the upgrade.

Figure_1_3.jpg
Figure 1.3 shows the result of the Code Inspector variants.
The syntax errors found can be accessed directly from the Code Inspector result using forward navigation and corrected.
When correcting errors that occur within the customer exit for variables, customers are supported by SAP. In the blog New BAdI RSROA_VARIABLES_EXIT_BADI (7.3) I presented the new BAdI for processing customer exit variables that was introduced with BW 7.3. With SAP BW 7.4, SAP has extended the delivery by an additional BAdI implementation. In addition to the standard BAdI implementation SMOD_EXIT_CALL (Implementation: BAdI for Filling Variables), BW 7.4 ships the BAdI implementation CL_RSROA_VAR_SMOD_DIFF_TYPE (SMOD Exit Call with different Tables and Types) as an inactive version. The default BAdI implementation SMOD_EXIT_CALL continues to be delivered as the active implementation.
  • What is the difference between the two implementations?
  • When does which implementation have to be activated?
  • Can both implementations be active?
Both implementations serve as a wrapper (mediator) for calling the customer exit. Customers who start on a fresh system, have no "legacy" customer exit implementations in their system and want to add new implementations for processing exit variables should implement their own BAdI implementations, see New BAdI RSROA_VARIABLES_EXIT_BADI (7.3).
To explain the difference between the two implementations, we first look at the parameters of the customer exit EXIT_SAPLRRS0_001. The lower part of Figure 1.4 shows the importing and exporting parameters of the function module. In addition to the two parameters I_T_VAR_RANGE and E_T_RANGE, the two parameters I_T_VAR_RANGE_C and E_T_RANGE_C have been added. In I_T_VAR_RANGE and E_T_RANGE the components LOW and HIGH are based on the data element RSCHAVL and are therefore fields of data type SSTRING.
In I_T_VAR_RANGE_C and E_T_RANGE_C the components LOW and HIGH are based on the data element RSCHAVL_MAXLEN and are therefore of data type CHAR. The parameters I_T_VAR_RANGE_C and E_T_RANGE_C can therefore be used analogously to the original parameters I_T_VAR_RANGE and E_T_RANGE, since they are based on flat structures.
Figure_1_4.jpg
Figure 1.4: Optional Customer Exit Parameter
The parameter pairs I_T_VAR_RANGE_C/E_T_RANGE_C and I_T_VAR_RANGE/E_T_RANGE are two options. Which option is used depends on the currently active BAdI implementation. Figure 1.4 shows the relationship between the two SAP-delivered BAdI implementations SMOD_EXIT_CALL and CL_RSROA_VAR_SMOD_DIFF_TYPE and the parameter pairs they use. The BAdI implementation SMOD_EXIT_CALL (default) works with the parameters I_T_VAR_RANGE and E_T_RANGE, while the BAdI implementation CL_RSROA_VAR_SMOD_DIFF_TYPE works with the parameters I_T_VAR_RANGE_C and E_T_RANGE_C.

If the Code Inspector has found many syntax errors that can be attributed to the conversion of the data element RSCHAVL, SAP recommends using the optional BAdI implementation CL_RSROA_VAR_SMOD_DIFF_TYPE. Figure 1.5 shows which steps are necessary to use the optional implementation. Start transaction SE18 (BAdI Builder), select the option BAdI Name, enter the BAdI name RSROA_VARIABLES_EXIT_BADI and choose Display.

In the enhancement spot, expand the entry RSROA_VARIABLES_EXIT_BADI. Double-clicking on Implementations shows the list of BAdI implementations (1).

The yellow light indicates that the default implementation SMOD_EXIT_CALL is active. The grey light indicates that the implementation CL_RSROA_VAR_SMOD_DIFF_TYPE is not active. The definition of the BAdI allows several BAdI implementations to be active in parallel; therefore, the order of steps (2) and (3) is arbitrary.

In step two we first deactivate the default implementation. Double-clicking on the implementation in the list to the right of (1) opens the BAdI implementation (2). To deactivate it, the indicator Implementation is active must be deselected. Afterwards, the implementation must be activated.

In step three we activate the optional BAdI implementation CL_RSROA_VAR_SMOD_DIFF_TYPE. Double-clicking on the implementation in the list to the right of (1) opens the BAdI implementation (2). To activate it, the indicator Implementation is active must be selected. Afterwards, the implementation must be activated.

After the default implementation has been deactivated and the optional implementation has been activated, the colors of the lights should look as shown in (4). If this is not the case, refresh the display.
Figure_1_5.jpg
Figure 1.5: Switch BAdI Implementation

Now that the optional BAdI implementation is active, the customer-specific coding must be adapted to it. For this purpose, only the names of the objects used (structures, table types) have to be replaced within the customer's own implementations, as listed in Table 1. The editor function Search and Replace can be used for this.
Old                 New
i_t_var_range       i_t_var_range_c
e_t_range           e_t_range_c
rrrangeexit         rrs0_s_var_range_c
rrs0_s_var_range    rrs0_s_var_range_c
rrrangesid          rrs0_s_range_c
rsr_s_rangesid      rrs0_s_range_c
Table 1: List of object names to be replaced
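For illustration, a typical snippet inside the variable exit could look like this after the renaming (the variable name 'ZMYVAR' is only an example; the structures and parameters are the ones from Table 1 and Figure 1.4):

DATA: ls_var_range TYPE rrs0_s_var_range_c,  " was rrrangeexit / rrs0_s_var_range
      ls_range     TYPE rrs0_s_range_c.      " was rrrangesid / rsr_s_rangesid

READ TABLE i_t_var_range_c INTO ls_var_range " was i_t_var_range
     WITH KEY vnam = 'ZMYVAR'.
IF sy-subrc = 0.
  ls_range-sign = 'I'.
  ls_range-opt  = 'EQ'.
  ls_range-low  = ls_var_range-low.
  APPEND ls_range TO e_t_range_c.            " was e_t_range
ENDIF.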
The adaptation and conversion of the implementation described here only makes sense if the syntax errors found by the Code Inspector (see above) can be traced back to the conversion of the data element RSCHAVL.

If the Code Inspector has found no such errors, or the number of errors is very low, it is recommended to correct the syntax errors in the customer implementations and to keep using the default BAdI implementation.

In principle, the same applies to customer-specific programs as to the customer implementations within the customer exits. Keep in mind, however, that the data element RSCHAVL is also used in structures other than those listed in Table 1 (see above). For these structures there are no CHAR-based alternatives. If, for example, the structure RSDRD_S_RANGE (single selection for an InfoObject in a deletion criterion) is used in a customer-defined program, it must be determined whether syntax errors occur there. This can be checked with the Code Inspector in the same way as for the exit variables.

If necessary, the object list in the Code Inspector must be adjusted. This is the case, for example, if the implementation for the variable exit is organized in a different development package than the maintenance programs for housekeeping.

1.3.2      Transformations / DTPs / InfoPackages

In the third area, transformations / DTPs / InfoPackages, it depends on how the implementation is realized. From a technical perspective, a transformation is a report, also referred to as a generated program. Figure 1.6 shows how to get to the generated program of a transformation, and also what happens when you try to investigate the generated program for syntax errors using the Code Inspector.
Figure_1_6.jpg
Figure 1.6: Code Inspector and Transformations
The generated program is an SAP object and cannot be investigated by the Code Inspector.

For the sake of maintainability and reusability, many customers move the implementations of start, end, field or expert routines out of the transformation. Here you will find different approaches, from the simple use of includes to complex class models. Such outsourced implementations can be checked with the Code Inspector (transaction SCI), since they are customer-specific implementations.

If the implementation is located directly within the transformation, the corresponding transformation has to be checked manually.
Generated program of a transformation
From a technical perspective, a transformation is a report in which a local ABAP OO class is defined. The report and the local class are based on templates, and these templates are the basis of the generated program. The program code that is entered within the transformation definition as a start, end, field or expert routine is generated into this local class.

 

To avoid having to check each transformation individually and manually, transformations can be activated automatically using the report RSDG_TRFN_ACTIVATE. The report can also be scheduled as a background job.


Via the selection screen of the report RSDG_TRFN_ACTIVATE you can control whether a single transformation, a list of transformations, or the transformations with specific source or target objects are to be activated. Figure 1.7 shows how the report is run for one transformation. If all transformations are to be tested for syntax errors by re-activation, all selection fields must be left empty.


When a transformation is selected, the generated program is regenerated (this happens only when necessary) and checked for syntax errors. A transformation that contains faulty code cannot be activated.

Figure_1_7.jpg

Figure 1.7: Re-activation of transformations


The log result on the right-hand side of Figure 1.7 shows that activating the transformation first set the associated DTPs to inactive. The report then activated the dependent DTPs.

For the transformations that remain inactive it must now be checked manually why they could not be activated. As an entry point for this rework, the result in the application log (transformations) can be used (see Figure 1.7), or the table RSTRAN.

 

Code pushdown for transformations and DTPs
A positive side effect of re-activating the transformations is that a code pushdown is implicitly performed for the transformations where this is possible. This of course assumes that the database is SAP HANA.

 

The report RSDHA_TRFN_CHECK_HANA_PROCESS can be used to check which transformations are potential candidates for a code pushdown.

 

Routines in the DTP filter and in the InfoPackage
The structures that SAP currently uses for selections by BEx variables of type customer exit, or by ABAP routines in the DTP and/or InfoPackage, are not affected. This means that action is only required if, within your own implementation, you use a structure in which a component is based on the data element RSCHAVL.

 

2     SAP Notes

 

1823174 - BW 7.4 conversions and customer-specific programs

 

1943752 - SYNTAX_ERROR Dump Occurs when Executing a BW Query with Customer Exit Variable after Upgrading to BW7.4

 

 


 

2098262 - SAP HANA Execution: Only RownumFunction() calculated attributes are allowed

 

 

3      Links

 

http://help.sap.com/saphelp_nw74/helpdata/de/54/cf56a496a244518fd774e1bfc68bfd/frameset.htm

 

Programs for Activating BW Objects in a Productive System

In Berlin I attended two SAP BW on HANA sessions. On Wednesday there was a session on the SAP BW product and roadmap by Lothar Henkes, and on Thursday an end-to-end scenario session, again with Lothar Henkes but mainly Marc Harz. Additionally, I started the openSAP course this week after TechEd && d-code, where I saw a familiar face… Hello Marc!

In this blog I share some of my thoughts on what I saw in Berlin. As I found some things in the openSAP course quite important in reiterating what SAP BW is supposed to do, I have included a paragraph on that subject.

 

BW_Overview.jpg

Source: SAP.

 

The image above shows the main new things that have been developed in SAP BW 7.4 up to SP8.

In the first presentation the new developments were grouped into a couple of themes. As I was impressed at the time by the structured manner of the presentation, I will follow that structure and reference the other sessions.

SAP BW

But before we dive into the things I heard in Berlin, I will point to the openSAP course that started just a couple of days ago. As SAP BW is clearly going through some rapid changes, it was good to go back and look at what the goal of the application is. In one of the first slides in week 1 this overview was given:

opensap_overview.jpg
Source: SAP.

SAP BW is an application on top of a database. What it wants to do is help you manage the data warehouse.

As an application, BW basically lays an abstraction layer over the database. In the past, due to all kinds of technical constraints, BW often felt more like a technical exercise to get performance, or simply to get it to work at all.

Now that HANA is doing the heavy lifting, BW seems to be getting its focus back on what it was originally meant to do: create a business layer over your database to make it easier to build a data warehouse.

 

You can find the course here: SAP Business Warehouse powered by SAP HANA - Marc Hartz and Ulrich Christ

Try it. It is free and the first week looked very promising.

 

 

Virtual Datawarehouse

The virtual data warehouse is a layer that only works because of SAP HANA: only with SAP HANA do you get the performance you need to use virtual layers. What SAP BW delivers are ways to create virtual objects that leverage this technology. Using it, you can think of creating separate views for different departments without having to copy the data. Additionally, it creates more flexibility, as in the past reloading data was a big part of the time needed to get changes done.

BW_VIRT.jpg
Source: SAP.

In SP8 you have two main objects for your virtual view: the CompositeProvider and the Open ODS view. The latter is meant for virtual access to external sources.

The CompositeProvider looks like the main tool for modelling. It enables you to combine InfoProviders with joins (merging) or unions (appending). You can even use other CompositeProviders as a source; note however that this is currently union only.

Basically this means that in theory you can store data only once and build virtual layer upon layer on top of that.

Personally I think you will keep some kind of staging area around, for cases where you don't know whether the source system will retain the data, and to use transformations to create a persistent single version of the truth (things like cleansing and checking the data), and from there go with virtual layers.

 

Simplification

The picture seems clear enough. From a large number of objects we go back to only a couple:

DS_simplification.jpg

Source: SAP.

 

 

I was really enthusiastic about this, and now, after a few days, I still am. However, I do need to warn you that there is still a lot of complexity hidden within the objects. The advanced DataStore object (ADSO), for example, has three checkboxes that can be set independently of each other. These checkboxes determine which of the three tables underneath the application layer will actually be used. This means that you have 2^3 = 8 different setups to choose from. In the presentation there was a mention of templates for different situations, which should help. From an architecture point of view you have to look at the options and determine which ones should be used in which circumstances.

All in all it looks good. In the end-to-end session Marc Harz gave us a live demo of the CompositeProvider editor.

screenshot_composite.jpg

Source: SAP.

This looks a lot better than the old editors for MultiProviders. Now, with the ability to use a CompositeProvider as a source for other CompositeProviders, you can create simple building blocks that together form your application.

 

Big Data

For big data management SAP BW differentiates between three types of data based on the amount of usage: hot, warm and cold. Hot data will be in HANA, in memory; warm data will be in HANA, but on disk; and cold data is stored in near-line storage on separate servers.

This should help you achieve more efficient usage, as you are only investing in expensive equipment for the hottest data and can keep a more modest budget for the rest.

BW_datamanagement.jpg

Source:SAP.

In this image you see an example of how you could manage this. Basically you have different persistent objects that do or do not reside in memory. Based on usage, you move the less-used data to the warm objects. From these objects you get a flow to near-line storage based on age and/or usage.

 

Performance

To be short: run on HANA and hold on for dear life ;-)

Basically SAP BW was a two-tier system which you had to manage carefully to keep it performing. A lot of ABAP code was all about collecting a lot of data and changing it on the application layer. As a BW consultant you often used ABAP just to increase the performance a bit. For example, before the improved master data lookup, you actually avoided the standard transformation and used ABAP in the start routine to collect a lot of data in a variable, so that in the transformation you could use an ABAP routine to read that variable.

 

Now with BW on HANA, everything gets pushed down from the application server to HANA. This means that for performance you are best off avoiding your own coding as much as possible. Standard transformations can be pushed down to HANA; your own creations less so. For those, the old transfer to the application layer and back still applies.

 

In the presentation, note 2063449 was mentioned. This note tells you what has already been pushed down and what is still to come. But as a rule of thumb: develop as if it is already pushed down. Eventually it will be, and if you already did it the right way you won't have to redo it to get all the performance.

Planning

Here too a pushdown to HANA is taking place. The PAK should now be feature-complete in comparison to BW-IP. Furthermore, the FOX formula handling has been improved, and you can use a CompositeProvider for planning scenarios based on unions.

That you are also able to enter comments is a very nice feature; customers of Design Studio often ask for precisely this.

 

Conclusion

 

SAP BW is reinventing itself and focusing on its core function: offering an application, or business layer, over your database. HANA is the driving force behind this by providing the heavy lifting needed. In the future more and more functions will be executed in HANA itself. I am just wondering how they will balance between the customers on HANA and those on other databases.

The attribute change run (ACR) is the process of adjusting aggregates whenever there is a change in the master data used in those aggregates. In the BW data load process, the attribute change run plays a vital role after any master data attribute and hierarchy loads, in order to get correct data in the reports.


Many times during our batch loads we encounter the situation where the ACR gets stuck for a long time without any progress. As a result, all the other processes that use the same cube (e.g. delete overlapping requests, create/delete indexes, etc.) and run in other process chains start failing because of the lock created by the ACR on that cube.

 

As a workaround, we need to follow the steps below to correct the failure.

  1. Identify and kill all the jobs and sub-jobs of that attribute change run. This can be done through SM37 or by using the program RSDDS_CHANGERUN_MONITOR in SE38.
  2. Manually deactivate the aggregates of the InfoCube for which the job was stuck, from RSA1.
  3. Repeat the attribute change run, either through the process chain or manually via RSA1 -> Tools -> Apply Hierarchy/Attribute Changes -> Monitor and Start Terminated Change Runs.
  4. Wait for the change run job to finish (it should finish soon, as the aggregates are now deactivated).
  5. Repeat the other failed processes.
  6. Activate the deactivated aggregates (only after checking that no other process dependent on that cube is still pending and that a suitable time slot is available, as rebuilding the aggregates can take a lot of time).


But the real question is why the ACR jobs get stuck for so long, and how we can avoid these failures and workarounds.


To adapt the aggregates to the changes, the change run works based on certain strategies and parameters.


Strategies to adapt Aggregates:

There are three different strategies used to adapt aggregates in the change run:

  1. Rebuild the aggregate (Adapt by Reconstruction)
  2. Delta Mode (Adapt by Delta)
  3. Rollup from previously adapted aggregate


Note: For InfoCubes that have key figures with aggregation MIN/MAX, the aggregates can only be adapted by rebuilding them during the change run.

 

BW: Parameter for Aggregates

Parameters for aggregates can be set via the following path:

SPRO >> SAP Reference IMG >> SAP NetWeaver >> Business Intelligence >> Performance Settings >> Parameters for Aggregates (transaction RSCUSTV8).
IMG1.png

 

The parameters defined for the aggregates determine the adaptation strategy used during the change run. Based on the threshold value and the percentage of master data that has changed, either the reconstruction or the delta strategy is chosen.

IMG2.png

  • Limit with Delta: Threshold Value (0-99): Delta -> Reconstruct Aggregates

The value defined here determines the aggregate adaptation strategy used during the change run. If the percentage change in the master data is greater than the threshold value, the Adapt by Reconstruction strategy is used, which rebuilds the aggregates; otherwise the delta mode is used, in which the old records are updated negatively and the new records positively.

 

  • Block Size

If the E or F table of the source for the aggregate structure is larger than the BLOCKSIZE parameter in table RSADMINC, the source is not read all at once but is divided into blocks. This prevents an overflow of the temporary tablespace PSAPTEMP. A characteristic whose value range is divided into intervals is used to split the source into blocks; only the data of one such interval at a time is read from the source and written to the aggregate.

 

If no value is maintained for the BLOCKSIZE parameter in Customizing, or if the value is 0, the default value of 100,000,000 is used (exception: DB6 = 10,000,000).

 

  • Wait Time for Change Run Lock (in Minutes)

The waiting period (in minutes) specifies how long a process is to wait when it encounters a lock created by other parallel processes, such as for loading hierarchies or master data, another change run, or rolling up of aggregates.

If the system does not find a relevant lock, the change run waits the length of time specified here without creating its own lock.

 

As an example, the screenshot below from the change run monitor shows the changed and total records for the master data.
IMG3.png

Based on this, the percentage change in master data is calculated, which is 11.98 and 11.73 percent respectively.

This percentage is compared with the threshold value defined in the "Limit with Delta" parameter for aggregates (10 in this case).

 

As the "Limit with Delta" parameter set here is less than the percentage of master data changes, cubes X and Y use the rebuild (Adapt by Reconstruction) strategy to adapt their aggregates.


The standard value of "Limit with Delta" is 20. However, setting this value depends purely on the volume of changes that occur in the master data. It is recommended to keep the threshold value above the maximum percentage change expected in the master data, as rebuilding the aggregates can take an enormous amount of time, leading to the ACR running into deadlocks or getting stuck.

 

Many of you might already be aware of these concepts, but for those who run into errors and data load delays due to this issue, I hope this helps.

 

Regards,

Nikhil

Since using Trello for the first time I have been impressed with it, and I have started to use it even to manage my personal activities. For me the available features and the usability are fascinating, not to mention the constant improvements to the product.

As a BI consultant I started using it to track my activities in the projects I work on. As each project has its particularities, I always try to adapt or create a specific board for the project.

Today, with the help of lessons learned from the book "Efficient SAP NetWeaver BW Implementation and Upgrade Guide" (which I recommend to everyone involved in BW projects), I created a board following a few things discussed in the book.

I tried to keep the board as succinct as possible, so I avoided creating a lot of details and constraints in it, because, as I said, each project has its own specific needs and dynamics.


Link to example Board.

Capturar.JPG

Let's look at the summary of the board:


  • Reference: Space for all important and useful reference documentation for the project, such as SAP Notes, the project scope, etc.
  • Project Team: Defines the project team according to role and responsibility. It also contains the main contacts for each resource.
  • Analyze: Activities being analyzed by the BW team to gather requirements and create the necessary modeling documentation.
  • Modeling: Activities in the SAP BW development environment, creating objects in the various layers of the LSA.
  • Reporting: Activities in the SAP BO development environment, such as creating connections, Webi reports and dashboards.
  • Testing & Go-live: Activities that are already in the final phase of testing and preparation for going live.
  • Project Management: Activities related to the management and monitoring of the project.


And you? Have you used Trello? What did you think of the board? Leave a comment with suggestions and improvements so we can make it better.

Feel free to use it in your projects, it is public!

 

Links:

Trello Tour

Trello Development Board

Trello Resources

Issue:

There are instances when batch jobs loading to data targets through the PSA fail or do not complete. On checking the tRFCs, we find tRFCs failing with errors.

On deeper analysis we find short dumps in ST22 for each and every data packet that comes into BW for this particular data load.

 

Below is the snapshot from ST22.

 

Pic1.jpg

Looking at the source code extract, it becomes evident that the data load fails while creating a new partition.

 

Pic2.JPG

 

On further research it was discovered that the PSA table had already reached the maximum number of partitions. As seen in the snapshot below, the partition number for this particular DataSource (PSA) has already reached 9999.

 

Pic3.JPG

As per SAP Note 1816538 (UNCAUGHT_EXCEPTION in PSA_PART_EXIST because of PSA Partitioning), adding partitions beyond 9999 is not permitted.

 

When the data load tries to insert records into the PSA, it first creates a partition and then loads the data into that partition.
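To see how close a PSA already is to the limit, the partition counter can also be read from RSTSODS programmatically. This is only a sketch: the field name PARTNO and the example PSA name are assumptions based on the screenshot above, so verify them in SE11 before relying on this.

DATA lv_partno TYPE i.

" '8ZMY_DS_TEXT' is a placeholder for the PSA name as shown in RSTSODS.
SELECT SINGLE partno FROM rstsods
  INTO lv_partno
  WHERE odsname = '8ZMY_DS_TEXT'.

WRITE: / 'Current partition number:', lv_partno.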

 

 

Solution:

 

There might be times when dropping all data from the PSA fixes the issue (i.e. resets the partition number in the table RSTSODS).

However, when dropping the data does not reset the partition number, we need to follow the procedure below:

 

1. Make a note of the partition number for the PSA in the development system.

2. Drop all data from the PSA and replicate the DataSource.

3. Activate the DataSource and capture it in a transport along with the dependent objects (update rules/transformations/transfer rules/communication structure).

4. Check the partition number for the PSA again in the RSTSODS table; it should have changed and been reset to a lower number.

5. Before moving the change to the QA system, make sure to drop all data from the PSA in QA and also make a note of the partition number in QA.

6. After successfully moving the change to QA, check the activation of the dependent objects and then check the partition number in the RSTSODS table; it should have been reset as well.

7. Follow the same procedure before transporting to production: drop the data from the PSA, move the transport and then check the partition number in the RSTSODS table. It should have been reset, as shown in the snapshot below.

 

Pic4.JPG

I would like to share one of my experiences where I felt BW can do wonders and bring value to the customer. The views below are my personal views.

 

During one of the BW training sessions for end users, I got a very strange question to answer. In my presentation I had covered all the major reports and all the navigation functionality of the reports. The users were all happy, and for the first-time users it was a blockbuster movie. But this question was different from the normal lot. The question was: "How can I know the reason why my orders have gone down for a particular customer, and how are these reports going to help me increase my efficiency?" (My immediate thought was: do you expect the report to tell you that? You should know it anyway.)

 

It took me some time to absorb the question, and I had only one submission to make: these reports can tell you where you are now and whether you are on track or not, but they cannot answer why you are there.


After a few months I got a requirement for a new report on COPQ (cost of poor quality). The specification was a huge one and covered all the aspects of the business where there was a possibility of incurring cost of poor quality. After a lot of deliberation and discussion, a workable specification was finalized with the business. It was impossible to capture all of the parameters through the system. As the famous words of Albert Einstein go: "Not everything that can be counted counts, and not everything that counts can be counted."

 

As the specification cut across different modules, it had to be done in BW. This was when I realized that BW analytics can do wonders, and I could relate it back to the question I had previously been asked about how reports can increase efficiency. During the development of this report I learned that BW can not only provide an analysis of what has happened, but can also provide an indication of what has gone wrong. Of course there can be hundreds of reasons for that, but you can still provide this information, in a very readable format, using data that is already available in the system.

 

Let me explain with an example. If you are a sales manager and you see a downward trend in orders for a particular customer, what would you expect the report to provide? Firstly, perhaps the last two years' trend of how the customer has placed orders; then whether we are able to deliver the material to the customer on time (on-time delivery analysis), who is responsible for sales to this customer, how many rejections or returns we have from that customer, how many times we have cancelled invoices, whether my receivables are being taken care of, and the list goes on. These data are available in the system in one form or another; we only have to pick them up and present them in the right way.

 

Why BW?

 

The main reason why BW is needed is that only BW can bring all this information, spread across different modules, together in one place. Performance and easy navigation are further reasons. BW can provide a top-to-bottom analysis for a KPI, and this can easily be done through several methods, some of which are mentioned below:

 

  1. Web Application Designer – you can select a customer through a filter pane or drop-down box, and the analysis can be kept in a separate tab for each KPI.
  2. RRI – the Report-to-Report Interface is a good option if the user wants to drill down from one report to another.
  3. APD – for calculations which have to be done on a monthly basis or which would be difficult to calculate at runtime.
  4. Dashboards – if Business Objects tools are used, the data can be presented more vibrantly.

      

With the right functional specification, modeling and report design, BW can add more value to the business. It can highlight the pain areas using the transaction data and give the right direction to the business.

 

This has been one of my experiences with BW where I realized what a wonderful tool it is. I would like to hear about experiences where you felt the same; please share them in the comments.

 

Best Regards

 

Gajesh


Author(s): Saorabh Trilokinath Shivhare, Deepti Shetty.

Target readers: BW Consultant, BW Engineer, Basis Consultant.

 

Purpose of the document:

The scope of this document is to cover pre- and post-upgrade activities for BW. You can read what to do, how to do it and why it is done. Wherever an SAP Note is applicable it is mentioned, and any other reference for that point is also given. All points are based on our understanding and experience.


Introduction:

What is a BW upgrade?

Upgrading the BW server from the existing version to a higher version; in this scenario we upgraded from 7.0 to 7.4.


Why do we upgrade?

Most of the advanced features are available only in the higher version, e.g. semantically partitioned objects, or collecting the statistics of BEx queries in database tables.


Why should one read this document?

There is already a lot of material available on BW upgrade checklists explaining how to do each activity. This document also explains the reason behind doing these activities: along with each point below, read the "Why" part for the reason.

Note:

1. Some steps may differ in the pre- and post-upgrade activities depending on the version to be upgraded to, the particular features to be tested, and the environment.

2. These steps assume a BW upgrade from 7.x to 7.4.

 

Pre upgrade activities:

Each step below lists what to do, the relevant SAP Note (where applicable), how to do it, and why it is done.

1. Execute program RSUPGRCHECK to check DDIC (data dictionary) consistency.
How: If it shows any inconsistency in InfoObjects, DSOs, InfoCubes or transfer rules, activate the inconsistent objects. Otherwise non-activated DDIC tables may cause errors during the upgrade.
Why: DDIC objects are data dictionary objects: tables, views, data types, type groups, domains and search helps. The ABAP Dictionary describes and manages all data definitions used in the system. This report checks whether the DDIC tables needed for the BW metadata objects exist in active form, and it points out the incorrect objects in its log.
2. Get a list of all inactive objects.
How: Execute SE11 with OBJVERS = 'A' and OBJSTAT = 'INA'. The table name for each type of BW metadata object is listed in the appendix.
Why: The status of the metadata objects should be the same before and after the upgrade.

3. Clean up.
How: There is no single transaction code or program, as this is done in several separate activities: PSA clean-up, deleting log files, deleting objects such as InfoCubes and DSOs that are no longer required, and deleting aggregates that are empty.
Why: Housekeeping activity.

4. Stop the RDA (real-time data acquisition) daemon.
How: Use transaction code RSRDA.
Why: Needed when daemon services are used in BW.

5. Remove temporary BI tables and check for invalid temporary tables and objects.
How:
a) Check for invalid temporary tables in SE14 using the menu path Extras -> Invalid Temp. Table.
b) In SE38, execute the program SAP_DROP_TMPTABLES. The following objects can be deleted:
• Temporary tables (type 01/0P)
• Temporary views (type 03/07)
• Temporary hierarchy tables (type 02/08)
• Temporary SID tables (06)
• Generated temporary reports
Why: Part of housekeeping. These are DB objects such as tables, views, triggers and so on, with the /BI0/0 name prefix.
6. Check/repair the status of InfoObjects.
How:
1. Log on to the SAP system.
2. Call transaction RSD1.
3. Choose Extras -> Repair InfoObjects (F8).
4. Choose Execute Repair.
5. Choose Expert Mode -> Select Objects.
6. On the following screen, in addition to the default checkbox selection, activate the following checkboxes:
- Check Generated Objects
- Activate Inconsistent InfoObjects
- Deletion of DDIC/DB Objects
- Display Log
7. Execute the program.
Why: This activity is performed so that there are no erroneous objects. We perform consistency checks on the data and metadata stored in a BW system; this tests the foreign key relationships between the tables of the extended star schema (ref. SAP Help).

7. Clean/delete the messages in the error logs.
How: Run the reports RSB_ANALYZE_ERRORLOG and RSBM_ERRORLOG_DELETE.
Why: Housekeeping activity.
8. Check master data consistency.
How: Run the report RSDMD_CHECKPRG_ALL.
Why: It reports inconsistencies or missing records in the master data tables, i.e. the P/X tables, and in the dimension tables of the cubes where these master data objects are used.

9. Clean up background jobs.
SAP Note: 784969
How: Use the program RSBTCDEL2.
Why: Background jobs are scheduled for various activities; these can be V3 jobs and any other jobs.

10. Check table SMSCMAID.
SAP Note: 1413569
How: Before you start the Software Update Manager, check whether you use the table SMSCMAID. If so, see SAP Note 1413569 to avoid a termination of the upgrade due to duplicate records in the table.
Why: Table SMSCMAID is used for scheduling. If an index was added to table SMSCMAID, follow this note.

11. Check for missing TBLIG entries.
SAP Note: 783308
How: Run the report RSTLIBG.
Why: This report lists inconsistencies in existing InfoProviders.

12. Run SAP_INFOCUBE_INDEXES_REPAIR.
How: Execute SAP_INFOCUBE_INDEXES_REPAIR in SE38.
Why: To repair the indexes of InfoCubes.

13. Clear all logistics data extractions and clear the delta queues.
How: Check the delta queues in all source systems.
Why: Clear the delta queues and the logistics data extraction before the upgrade, as the underlying tables and the fields of the extract structures may change after the upgrade.
14. Check inactive transfer rules, InfoCubes, DSOs, aggregates etc. and list them.
How: For each DSO, check whether every request is in green status; before the upgrade, all requests in the DSOs should be green. Also check the status of the aggregates; according to the SAP recommendation they should be active before the upgrade. Rolling up the aggregates is not strictly necessary and depends on the business requirements.
Why: The list is kept as a reference for the post-upgrade check.

15. Repair inconsistent InfoPackages using RSBATCH.
How: Check whether any InfoPackage in active state is inconsistent; if so, repair it before the upgrade.

16. Take pre-upgrade snapshots of critical BEx queries.
How: Take screenshots of both the prompt selection and the output.
Why: Kept as a reference for the post-upgrade comparison.

17. Convert the data classes of InfoCubes.
SAP Note: 46272
How:
1. Call transaction SE16 and check the table RSDCUBE.
2. Select OBJVERS equal to 'M' and 'A', and check the entries of the fields DIMEDATCLS, CUBEDATCLS, ADIMDATCLS and AGGRDATCLS. All InfoCubes are listed with their assigned data classes.
3. Compare the data classes with the naming conventions for data classes described in SAP Note 46272.
If you find incorrect data classes, correct them as follows:
1. Set up a new data class as described in SAP Note 46272.
2. Execute the report RSDG_DATCLS_ASSIGN. It allows you to transfer a group of InfoCubes from the old data class to the new data class and assigns the InfoCubes to the right data class.
Why: We need to check whether the cubes are assigned to the correct data classes. Any DDART data classes (customer data classes) that do not follow the naming conventions described in SAP Note 46272 will be lost during the upgrade, after which the tables of the InfoCube cannot be activated; an error message is then displayed in the technical settings.

Appendix:


Table names used in step 2 (pre-upgrade) and step 5 (post-upgrade):

 

Table and the BW metadata object it stores:

RSUPDINFO: Update rule
RSTS: Transfer rule
RSISN: InfoSource
RSDS: DataSource
RSDODSO: DSO
RSDCUBE: InfoCube
RSQISET: InfoSet
RSDIOBJ: InfoObject
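As a sketch, the inactive-object check from step 2 can also be run as a small ABAP snippet against any of the tables above (shown here for RSDCUBE; the same WHERE clause applies to the other tables):

" List InfoCubes whose active version is flagged as inactive.
DATA lt_inactive TYPE STANDARD TABLE OF rsdcube.

SELECT * FROM rsdcube
  INTO TABLE lt_inactive
  WHERE objvers = 'A'
    AND objstat = 'INA'.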

 

 

Post upgrade activities:

 

Each step below lists what to do, the relevant SAP Note (where applicable), how to do it, and why it is done.

1. Run the extractors in delta mode to verify any technical errors.
How: Do a delta run for any DataSource. The delta queue should have been empty before the upgrade; once this data load has started, monitor this DataSource in the delta queue.
Why: To check that delta loads happen correctly.

2. Basic application checks.
How: Test the different BW flows: a master data object flow, a process chain, and one transaction data flow from the source system through to the BEx query.
Why: To test the different BW objects.

3. Sample data loads.
How: Run critical master data and transaction data process chains.
Why: To test transaction and master data loads.

4. Check RFCs and source system connections.
How: Execute transaction RSA1 and replicate all the source systems.
Why: To test the RFC connections to all source systems.

5. Check transfer rules, InfoCubes and aggregates, and activate inactive objects.
How: To get the list of inactive objects, execute SE11 with OBJVERS = 'A' and OBJSTAT = 'INA'; the table name for each type of BW metadata object is listed in the appendix.
Why: Compare the list of objects with the pre-upgrade list and activate the objects that were deactivated.
6. Activate transfer structures.
How: Run the program RS_TRANSTRU_ACTIVATE_ALL.

7. Reactivate all active update rules.
How: Run the program RSAU_UPDR_REACTIVATE_ALL.

8. Reactivate all 7.x DataSources.
How: Run the program RSDS_DATASOURCE_ACTIVATE_ALL.

9. Activate MultiProviders.
How: Execute the program RSDG_MPRO_ACTIVATE.

10. Validate the BEx query output.
How: Compare the pre-upgrade snapshots with post-upgrade snapshots of the BEx query output.
Why: To compare the BEx query output before and after the upgrade.

11. Correct operator syntax errors in custom programs.
How: Use the function module EXTENDED_PROGRAM_CHECK for custom programs. See http://scn.sap.com/thread/547645.
12. New functionality checks.
How: We created an SPO based on a cube/DSO and loaded data into it; it worked fine. See http://scn.sap.com/docs/DOC-34893.
Why: Semantic partitioning is one of the newer features available in 7.4. It lets you partition semantically based on a partition condition, and the data is stored in the data targets according to that condition.

13. TADIR entries for fact views.
How: Execute the program SAP_FACTVIEWS_RECREATE in SE38.
Why: TADIR is the table that maintains the directory of repository objects; this includes all dictionary objects, ABAP programs etc. If any fact views were dropped or missed during the upgrade, this program recreates them. It relates to 3.x objects, and since 3.x data flows are still used in most landscapes, a missing TADIR entry error may occur.

14. Ensure BI object consistency.
How: Go to transaction RSRV, which is used to perform consistency checks on the data stored in BW. Elementary and combined tests can be performed. See http://wiki.scn.sap.com/wiki/pages/viewpage.action?pageId=153388081.
Why: If any object was missed during the upgrade, these checks will list it. This has to be done manually and can be done for any BW metadata object; it is best to do it at least for the important cubes, e.g. the cubes related to COPA.
15. Ensure that the aggregates and indexes of the cubes are active.
How: Ensure that the aggregates and indexes of an InfoCube are maintained and active, as applicable; this can be checked for one cube. See http://scn.sap.com/thread/1932810.

16. Repair InfoObjects.
How:
1. Log on to the SAP system.
2. Call transaction RSD1.
3. Choose Extras -> Repair InfoObjects (F8).
4. Choose Execute Repair.
5. Choose Expert Mode -> Select Objects.
6. On the following screen, in addition to the default checkbox selection, activate the following checkboxes:
- Check Generated Objects
- Activate Inconsistent InfoObjects
7. Execute the program.
Why: This activity is performed so that there are no erroneous objects. We perform consistency checks on the data and metadata stored in a BW system; this tests the foreign key relationships between the tables of the extended star schema (ref. SAP Help).

17. Check the flag Record Statistics while maintaining the OLAP parameters in transaction RSRCACHE.
How: Click on Cache Parameters > Record Statistics.
Why: This is an architectural change in BW 7.4; we need to set the flag so that the statistics of the BEx queries are collected in the database tables.
18. Identify and repair inconsistencies in the PSA metadata.
SAP Note: 1489064
How:
1. Execute the report RSAR_PSA_NEWDS_MAPPING_CHECK in transaction SE38.
2. Execute it with the repair flag unchecked. This provides the list of all obsolete PSAs which are no longer used in a segmented DataSource.
3. Then execute it in repair mode, i.e. with the repair flag checked. This inactivates all the obsolete PSAs which are no longer used in a segmented DataSource.
See http://scn.sap.com/thread/3214082.
Why: Deletion of obsolete PSAs.

19. ABAP program correction for loading hierarchies.
SAP Note: 1912874
How: In BW 7.4, while loading a hierarchy using an InfoPackage, you may come across the error CALL_FUNCTION_NOT_FOUND. We implemented SAP Note 1912874 (CALL_FUNCTION_NOT_FOUND while loading hierarchy). The same error can occur while executing a BEx query. See https://scn.sap.com/thread/639887.
Why: The original, consistent value returned by the DataSource was changed by a conversion routine that is not consistent with the conversion exit. Solution: check that the correct conversion routine is entered, correct the conversion routine or the data, or activate automatic conversion in the transfer rule.

20. Execute the Code Inspector.
SAP Note: 1823174
How: The program ZSAP_SCI_DELTA is implemented and executed in SE38.
Why: In the BW releases before 7.4, the maximum length of a characteristic value is limited to 60 characters. As of release 7.4 SPS2, up to 250 characters are possible. To achieve this, the domain RSCHAVL was changed from CHAR 60 to SSTRING 1333. As a result, data elements that use the domain RSCHAVL are no longer possible (syntax errors) or they cause runtime errors in customer-specific programs. This SAP Note describes solution options for certain problem classes.

 


Assumptions:
1. These steps assume a BW upgrade from 7.x to 7.4.
Acknowledgements:
The contents are general and do not mention any business functions or technical BW metadata objects of a client source system.


References:
http://help.sap.com
http://wiki.scn.sap.com
http://scn.sap.com


BI in the organizations

Posted by andres diaz Sep 24, 2014

Today companies have experienced rapid growth in data flow, stored mainly in corporate information systems, and the need to obtain information from that data for making strategic decisions is becoming more and more evident.

 

 

Information is the main source of knowledge. Business growth is defined by the way processes are carried out, and companies are systems composed of different modules: finance, logistics, commercial and production.

 

 

Each of the modules or departments of a company needs real-time information to make strategic decisions based on the data held in its corporate systems.

 

 

Business intelligence is not just an IT process; it is a fundamental component of the sustainable growth of a company and requires complete knowledge of its business processes.


Problem Statement/Business Scenario


As a best practice SAP suggests deleting the change log data in DSOs, but what if there is business logic in the transformation/update rule that consumes data from the change log? There is no standard process to delete selected change log requests; you have to delete them manually.
Consider a scenario where a staging DSO feeds data to three different cubes; two of them are delta enabled and the third receives a full request every day with snapshot data. If the change log data for the full requests isn't deleted, it grows very quickly.

As per the data model below, the staging DSO feeds data to multiple cubes, among them a snapshot cube that is loaded with full requests, so there is no reason to retain those change log requests. Even in the worst case, reloading the data from the staging DSO to the cube does not require the data in the change log.

Untitled.jpg

 

 


Purpose
There are two ways to reduce the change log: either delete the requests manually or use an ABAP program that deletes the change log with selections (full requests). SAP provides an option to delete change log data via a process chain, but you cannot delete requests selectively when the data is loaded to multiple data targets. To avoid manual intervention, a custom ABAP report can be created to automate the process and run as a weekly/monthly/yearly housekeeping activity.

ABAP Report Design
An ABAP program can be created to look up all the requests in table RSSELDONE, which holds the request ID, load date, InfoPackage ID, system ID and update mode. Based on the InfoPackage ID, load date and update mode, all full requests can be identified and deleted from the change log table. The change log table name can be found via DSO > Manage > Contents > Change Log.

 

Untitled.jpg

Example: if we want to retain full requests for 30 days only, the program should delete from the change log all data belonging to requests whose date is older than 30 days.
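As a rough sketch of such a report, the snippet below selects the full requests older than the retention period from RSSELDONE and deletes the matching records from the change log table. The change log table name, the field names (RNR, DATUM and UPDMODE on RSSELDONE, REQUEST on the change log) and the direct mapping between the load request and the change log request are assumptions that must be verified in your system before any productive use:

REPORT zdel_changelog_full.

" Sketch only. The change log table name, the retention period and the
" field names used here are assumptions to verify in your system.
CONSTANTS: gc_chlog TYPE tabname VALUE '/BIC/B0001234000',
           gc_days  TYPE i       VALUE 30.

DATA: gt_rnr    TYPE STANDARD TABLE OF rsseldone-rnr WITH DEFAULT KEY,
      gv_rnr    TYPE rsseldone-rnr,
      gv_cutoff TYPE sy-datum.

" Requests loaded before the cutoff date qualify for deletion
gv_cutoff = sy-datum - gc_days.

" Collect all full requests ('F') older than the retention period
SELECT rnr FROM rsseldone
  INTO TABLE gt_rnr
  WHERE updmode = 'F'
    AND datum   < gv_cutoff.

" Remove the matching records from the DSO change log table
LOOP AT gt_rnr INTO gv_rnr.
  DELETE FROM (gc_chlog) WHERE request = gv_rnr.
ENDLOOP.

COMMIT WORK.

Depending on the release, the load request may first have to be mapped to the corresponding change log (activation) request before the delete, so treat this purely as a starting point.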


Below is a snapshot of the table RSSELDONE, which stores all the requests:

Untitled.jpg

 

Untitled.jpg

 

Thanks

Abhishek Shanbhogue

Hello friends,

 

Today I would like to talk about a common mistake that can cause major delays in incident processing times: incorrect component assignment, and how to identify the correct component for your SAP incident.

 

I often notice that new customer incidents are created under BW-BCT*, as BW-BCT has over 100 sub-components listing almost all applications and generic sub-areas of SAP products. This happens because the descriptions of the sub-components are not accurate in some cases and do not specify the area.

 

Please keep in mind that components under BW-BCT are reserved for issues related to the installation and operation of the SAP application extractors used to load data into SAP BW. Choosing the correct component is an important step during incident creation because it ensures the fastest processing of your request.

 

The following image shows an example: under the path BW -> BW-BCT there are over 100 sub-components with simple descriptions like "Customer Relationship Management" or "Documentation". In reality those components relate to the data extraction logic for the ECC application that deals with "Customer Relationship Management" and "Documentation": BW-BCT-CRM and BW-BCT-DOC, respectively.

Capture.PNG

 

Another reason this happens is that the customer selects BW as the application for the incident, which automatically expands the BW area for component selection and misleads the customer into choosing a component based on its description alone; that component will most likely be in the BW-BCT list, as it contains the extraction logic of almost all SAP applications.

 

How to find the correct component for my Incident?

In order for SAP to assign an engineer to work on a customer incident, the incident must be assigned to an active component that is monitored by SAP engineers.

Here are a few tips to identify which component is the one for your incident:


Note search

Perform a note search with the SAP SMP (Service Marketplace) search tools, using SAP xSearch or the Note search.
Use keywords related to your issue, such as the transaction code used to reproduce the issue, a table name and the name of the product. Let's try an example in xSearch: my BPC user has been locked due to incorrect login attempts and I'm trying to find a solution for it:

 

Capture2.PNG

 


Notice I'm searching for the text "user locked SU01"; the most suited areas would be BC, GRC and SV.
Now look at the narrowing example:

 

Capture.PNG

Notice that I narrow down the search by adding 'BPC' to the search text. This narrows the results from over 280 notes to only 5, which are for EMP-BCT-NW.

 

Knowledge Base Article(KBA) or Note

Customers often find a note related to the incident they are facing, or even create new incidents based on notes already provided by SAP.
As a general approach, the component the note or KBA was created for will be the most suitable component for creating a new incident:

 

KBA example:

Capture3.PNG

 

Note example:

Capture.PNG


 

You can find the Component of a KBA or Note under Header Data section, close to the bottom of the page.

 

Short Dump

Usually when a customer faces a short dump, you can see an application component assigned to the dump in its header, for example:

Capture.PNG

 

This is usually the correct component for the incident, but not in all cases.
In order to identify the correct component, you should analyze the dump text description and check the programs and the code section where it stopped.


Check the function module/report or Transaction code component

This is usually the best method to identify the component responsible for working on the issue, as it shows the exact component responsible for that part of the code.
There are several ways of doing that; I'm going to explain the one I believe most people will have authorization for and that is easy to do:
1. Open the transaction where you are facing the issue and navigate in it to one step before reproducing the issue.
2. Go to the menu System -> Status.

Capture4.PNG
3. Double click on the Program (Screen) value. The code for it will open.

4. Go to the menu Goto -> Attributes. A small popup will open.

Capture.PNG

5. Double click on the Package value. A new screen will open.
6. You will see the component in the Application Component value.

Capture6.PNG

 


Checking some of the steps mentioned above should help you identify the correct component. However, there isn't a single formula for all issues; each issue has to be carefully interpreted to find the appropriate component.
Sometimes even we at SAP have a difficult time identifying which component is correct for an issue. That is why it is imperative that customers provide a clear description of the issue, with a concrete example and the steps listed under the Reproduction Steps section.

 

 

Do you have another tip on how to identify the correct component? If so, please let me know in the comments section.

Hello altogether,

 

some days ago my first printed book was published by Espresso Tutorials and now it's also available as eBook:

 

Schnelleinstieg BW.jpg

This book is written for beginners in SAP Business Warehouse 7.3. It starts with a short introduction to Business Intelligence and Data Warehouses in general. Then it gives a short overview of Bill Inmon's CIF as the basis for SAP's Business Warehouse and explains LSA and LSA++ in a short section.

 

The main part is a real-world example: loading an IMS Health sample file into BW to get an answer to a business question.

As it's written using SAP Business Warehouse 7.3, I explain how you can use dataflow diagrams to build a very basic data flow to load the sample data.

Then it leads step by step, with lots of screenshots, through the data modelling. You begin by creating your first characteristics and key figures. The next step is to create a DataStore Object, an InfoCube and a MultiProvider in a simplified multi-layer architecture. The next chapter is about ETL; the term itself is briefly explained. Then I show how to create transformations and DTPs. Finally I show how to put all DTPs together in process chains for master data and transactional data.

The last chapter is about the Business Explorer. I explain how you can easily create a BEx query on top of your MultiProvider. With a few screenshots I show how you can slice and dice through your data. Exceptions and conditions are explained briefly.

 

The main benefit is that you get some help on how to avoid the most common pitfalls in data modelling. It also gives some help in case of data load errors. Two other goodies are ABAP programs:

1.) how to convert the key figure model into an account model

2.) how to easily create multiple key figures using BAPIs

 

The only disadvantage is that the book is available in German only. An English version is not planned at the moment.

 

Have fun reading it!

 

Cheers,

Jürgen

Ever wondered where your DTP/transformation spent its time?

 

This is how you can find out:

 

First you have to create a small program like the following:

 

REPORT ZTEST.

DATA: R_DTP       TYPE REF TO CL_RSBK_DTP,
      L_R_REQUEST TYPE REF TO CL_RSBK_REQUEST.

" Load the DTP; place the technical name of your DTP here
CALL METHOD CL_RSBK_DTP=>FACTORY
  EXPORTING
    I_DTP   = 'DTP_4YIEJ....'
  RECEIVING
    R_R_DTP = R_DTP.

" Create a request for this DTP
L_R_REQUEST = R_DTP->CREATE_REQUEST( ).

" Run it in sync mode (not in batch) so the trace covers the whole execution
L_R_REQUEST->SET_CTYPE( RSBC_C_CTYPE-SYNC ).

" Execute the request
L_R_REQUEST->DOIT( ).

 

Now run this program through transaction SE30 or SAT:

se30.png


After the execution of the DTP you will receive the result. In our case the time was spent in a SELECT on the PSP element. We created an index and the DTP needed only 2 minutes instead of 30.

 

ergebnis.png

Of course you can also simply call transaction RSA1 through SAT (using the processing mode "serially in the dialog process (for debugging)"). But doing it that way you have to filter out the overhead created by RSA1 in the performance log, and the data will not be written to the target (it is only simulated), so you might sometimes miss a bottleneck in your DTP/transformation.

 

thanks for reading ...
