
SAP Business Warehouse


Reporting on a transitive attribute, a.k.a. an attribute of an attribute, can be tricky to model if you don't want to materialize the transitive attribute(s) in your data provider.

 

Imagine we have an InfoObject Z__ATE which represents a calendar day and contains many date-related attributes, like fiscal period (FISCPER), calendar week (CALWEEK), etc.

 

[screenshot: pic2.jpg]

 

All of these attributes have a direct relationship with Z__ATE (calendar day) and have been created as navigational attributes. So whenever and wherever Z__ATE is added to a data provider (be it a MultiProvider, composite provider, etc.), its navigational attributes can be used.

 

Further imagine we also have an InfoObject called Z__SVL which contains Z_COUNTRY, Z_REGION and Z__ATE as navigational attributes.

[screenshot: pic5.jpg]

The above example implies that we can use Z_COUNTRY, Z_REGION and Z__ATE to navigate, but we are not able to use the attributes of Z__ATE to navigate. The attributes of Z__ATE, in this example CALWEEK and FISCPER, are so-called transitive attributes and cannot be used for navigation when using InfoObject Z__SVL.

 

When reporting on transitive attributes is required and you don't want to materialize them (in this case, add CALWEEK and FISCPER as navigational attributes to InfoObject Z__SVL), a composite provider might come in handy.

The composite provider below has been created (transaction RSA1, select InfoProvider, right-click on an InfoArea and select Create CompositeProvider), in which a proof-of-concept (POC) DSO is combined with multiple entities of InfoObject Z__DAT (which is the same as InfoObject Z__ATE).

[screenshot: pic3.jpg]

 

Instead of materializing (adding navigational attributes of Z__DAT to the POC DSO), a non-materialized link (left outer join) has been created multiple times.

For example: a "changed on" InfoObject (see the red box above) from DSO ZPOC has been added to the composite provider, and this InfoObject is joined with the master data InfoObject Z__ATE (see top right in the picture above). Via this modeling approach the transitive attributes (all navigational attributes of Z__ATE) can be used for reporting on this composite provider, without materializing them.
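Conceptually, the non-materialized link resolves the transitive attributes with a join at query runtime instead of storing them redundantly in the DSO. A rough Open SQL sketch of the idea (the table and field names below are simplified placeholders, not the actual generated composite provider code):

* What the composite provider effectively does at query runtime:
* join the fact data to the date master data, so the date's own
* navigational attributes (CALWEEK, FISCPER) become reportable
* without being stored in the fact table. Names are placeholders.
SELECT f~doc_number,
       f~changed_on,
       d~calweek,
       d~fiscper
  FROM zpoc_fact AS f
  LEFT OUTER JOIN zdate_attr AS d
    ON f~changed_on = d~dateid
  INTO TABLE @DATA(lt_result).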

 

(This blog has been cross-posted at http://www.thesventor.com )

As a member of Product Support, I have recently seen many incidents whose processing time could be improved if the necessary information for analysis were available when the incident is created. Therefore I decided to write this blog post describing the information that is most relevant to provide when an incident is created in my area.

 

What is necessary to provide when opening a BW-WHM* incident?

 

 

To be able to analyse incidents on the BW-WHM* component, Support needs some initial information; some of it depends on the specific component.

 

The wiki page below describes in detail what customers should provide to SAP Support when opening incidents:

 

 

SAP Support Guidelines Data Staging - SAP NetWeaver Business Warehouse - SCN Wiki

 

 

Here I will summarize the most relevant information for the most common components (for components not listed here, you can check the wiki above):

 

 

  • CROSS-COMPONENT:

A step-by-step description of how Support can reproduce the issue on its own.

Connection to the relevant systems.

Logon information updated in the secure area as described in note 508140.

 

 

  • BW-WHM-DST-DTP

Technical name of the DTP

The last load request that failed or should be checked

 

 

  • BW-WHM-DST-TRFN

Technical name of the transformation

 

 

  • BW-WHM-DST-DS

Technical name of the DataSource

Connection to Source and Target System

 

 

  • BW-WHM-DST-PC

Technical name of the process chain

The log ID of the last failed run

 

  • BW-WHM-DBA*

Technical name of the InfoObject or InfoProvider

For developers and consultants, finding ways to simplify access to different environments and development tools can be of great help. This is especially true for SAP BW consultants, who are required to frequently access the BW system as well as one or more source systems, through several different development environments and frontend tools. While this series of articles will provide examples from my experience as a SAP BW consultant, some of it will also be relevant for other consultants and power users using SAP systems.

 

In this first part I will showcase the basic idea of working with shortcuts by showing how to simplify access to BEx Web. I will also present some other helpful shortcuts. In the second part I will deal with SAP shortcuts.

 

The basic concept

 

[screenshot: short.jpg]

 

The concept involves creating a folder with batch scripts accessing the required environments and tools, and shortcuts referring to those scripts. The folder then has to be added to the Windows PATH environment variable. Once that is done, you can access any of the shortcuts by simply pressing Start+R and typing in the command. The idea is that the script handles as much of the environment and login details as possible (except, perhaps, authentication), while possibly allowing parametrization (such as launching a specific query from the command line).

 

If you don't get why this is of help, think about all the time spent in GUI screens searching for the system to log in to, or looking for a specific query. In organizations with many environments and/or many queries, this adds up to quite some time.


First example - a BEx Web 7 shortcut

 

Do note that the hyperlinks inside the scripts in this and the following examples should be modified before use.

 

In this simple first example we will create a script that calls BEx Web in the production environment with a query parameter. I'll get to the SAPGUI examples in the second part.

 

1. Create the shortcuts folder. If you want it to be available to other users, you should create it on a shared drive; however, do note that this prevents some personal customizations, as you'll see in the second part. For this example, we'll assume you created a folder on the C: drive called "shortcuts".

 

2. Add this folder to the PATH environment variable. Press Start+R and type:

 

setx PATH "%PATH%;c:\shortcuts"

 

You only have to run this command ONCE, unless the PATH variable gets overwritten in your organization when you restart the computer. In that case you could add a batch file with this command to the "Startup" folder in the Windows Start menu. (Note that running setx with just c:\shortcuts as the value would overwrite your existing user PATH; the %PATH% prefix preserves it.)

 

3. Create a script. I've dealt a bit with the structure of the BEx Web URL here, but I'll repeat the relevant parts:

The URL we're launching should look something like this:

 

http://host:port/irj/servlet/prt/portal/prtroot/pcd!3aportal_content!2fcom.sap.pct!2fplatform_add_ons!2fcom.sap.ip.bi!2fiViews!2fcom.sap.ip.bi.bex?QUERY=%1

 

Where host and port are the relevant host and port for your production system. %1 is the parameter which is replaced at runtime with the technical name of the query. Some details regarding this structure can be found here.

 

Open up a text editor (Notepad is fine for this), and paste in the following text, replacing host and port with the host and port of your production environment. If you're unsure regarding the host and port, launch a query in BEx Web from, say, the portal or Query Designer, and then grab them from the URL.

 

start "C:\Program Files\Internet Explorer\iexplore.exe" http://host:port/irj/servlet/prt/portal/prtroot/pcd!3aportal_content!2fcom.sap.pct!2fplatform_add_ons!2fcom.sap.ip.bi!2fiViews!2fcom.sap.ip.bi.bex?QUERY=%1

 

The "start" command causes the batch script window to close immediately after launching Internet Explorer.

Now save this as "bexw.bat" in c:\shortcuts.

 

4. Create the shortcut.

Open Windows Explorer and navigate to the shortcuts folder. Right-click bexw.bat and choose "Create Shortcut". Then right-click the new file, choose "Rename", and rename it to "bexw".

 

Assuming the technical name of a query is DAILY_REPORT, you should be able to launch the query in BEx Web, in the production environment, by pressing Start+R and typing:


bexw DAILY_REPORT

 

You could create similar shortcuts for other environments, or even parametrize the host name, if that is more convenient to you.


Some other useful examples

Want to start Query Designer with a query already open? Here's a script that can do just that (but still requires you to authenticate and choose the environment).


start "C:\Program Files\Internet Explorer\iexplore.exe" http://host:port/sap/bc/bsp/sap/rsr_bex_launch/bexanalyzerportalwrapper.htm?TOOL=QD_EDIT^&QUERY=%1

You can see a bit of documentation for this in this SAP help page .

Note that if you're on a split-stack environment, the port needs to be the ABAP port, not the Java port (the Java port is the one from the previous example).

 

You may want to configure your browser to automatically recognize the 3xbex files generated by the URL. In Internet Explorer 10, this is done by accessing the download window ( Ctrl+J ), right clicking the file, and unchecking "Always ask before opening this type of file".

 

A limitation of this method is that the query opened does not register in the BEx "history" folder.

 

You can also launch BEx Analyzer in a similar manner. In the second part I'll show how to open BEx Analyzer with the SAP environment already chosen.

 

start "C:\Program Files\Internet Explorer\iexplore.exe" http:/host:port/sap/bc/bsp/sap/rsr_bex_launch/bexanalyzerportalwrapper.htm^?QUERY=%1

 

What about BEx Broadcaster? Here's a script for launching it with a query. This time we'll need the Java port. You could also launch it with the technical name of the broadcast setting by replacing "SOURCE_QUERY" with "SETTING_ID".

 

start "C:\Program Files\Internet Explorer\iexplore.exe" http://host:port/irj/servlet/prt/portal/prtroot/pcd!3aportal_content!2fcom.sap.pct!2fplatform_add_ons!2fcom.sap.ip.bi!2fiViews!2fcom.sap.ip.bi.bex3x?system=SAP_BW&CMD=START_BROADCASTER70&SOURCE_QUERY=%1


Don't you get annoyed with how note numbers are mentioned without URLs, and then you have to cut and paste that note into the SAP search site?

Here's a script with a note as a parameter. As a side note, you may also want to install a certificate for the SAP service marketplace.


start "C:\Program Files\Internet Explorer\iexplore.exe" http://service.sap.com/sap/support/notes/%1

 

As a final note, if you're a Firefox user, Ubiquity is a neat alternative to the command line, although it is currently not in active development.

 

Next time: SAP shortcuts

Business requirement: for 0VENDOR_ATTR, get the vendor's e-mail address.

The e-mail address is stored in table ADR6, but this table does not contain the vendor number.

First, look up the vendor ID in table LFA1; this gives you the address number.

Example: vendor 00000001 has address number 1234567.

[screenshot: 1.PNG]

Go to the ADR6 table

[screenshot: 2.PNG]

Enter address number 1234567 and you will get the e-mail address.

  • The user wants to see this e-mail address in reports based on 0VENDOR_ATTR.

To meet this requirement, write the enhancement code in CMOD.

Then go to 0VENDOR_ATTR

[screenshot: 3.PNG]
Go to the extract structure and create an append structure, e.g. ZBW_EMAIL.

[screenshot: 4.PNG]

Function exit EXIT_SAPLRSAP_002 - master data attributes

[screenshot: 5.PNG]

CMOD CODE

WHEN '0VENDOR_ATTR'.

  FIELD-SYMBOLS: <FS_VEND> TYPE BIW_LFA1_S.

  DATA: IT_VENDOR TYPE STANDARD TABLE OF BIW_LFA1_S.

  TYPES: BEGIN OF LS_LFA1,
           LIFNR TYPE LIFNR,
           ADRNR TYPE ADRNR,
         END OF LS_LFA1.

  DATA: IT_LFA1 TYPE STANDARD TABLE OF LS_LFA1,
        WA_LFA1 LIKE LINE OF IT_LFA1.

  TYPES: BEGIN OF LS_ADR6,
           ADDRNUMBER TYPE AD_ADDRNUM,
           SMTP_ADDR  TYPE AD_SMTPADR,
         END OF LS_ADR6.

  DATA: IT_ADR6 TYPE STANDARD TABLE OF LS_ADR6,
        WA_ADR6 LIKE LINE OF IT_ADR6.

  IT_VENDOR[] = I_T_DATA[].

  IF IT_VENDOR[] IS NOT INITIAL.
*   Read the address number for each vendor
    SELECT LIFNR ADRNR FROM LFA1
      INTO TABLE IT_LFA1
      FOR ALL ENTRIES IN IT_VENDOR
      WHERE LIFNR = IT_VENDOR-LIFNR.

    IF SY-SUBRC = 0.
*     Sort by LIFNR so the binary search below works correctly
      SORT IT_LFA1 BY LIFNR.

*     Read the e-mail address for each address number
      SELECT ADDRNUMBER SMTP_ADDR FROM ADR6
        INTO TABLE IT_ADR6
        FOR ALL ENTRIES IN IT_LFA1
        WHERE ADDRNUMBER = IT_LFA1-ADRNR.

      IF SY-SUBRC = 0.
        SORT IT_ADR6 BY ADDRNUMBER.
      ENDIF.
    ENDIF.
  ENDIF.

  REFRESH IT_VENDOR[].

  LOOP AT I_T_DATA ASSIGNING <FS_VEND>.
    READ TABLE IT_LFA1 INTO WA_LFA1
      WITH KEY LIFNR = <FS_VEND>-LIFNR BINARY SEARCH.
    IF SY-SUBRC = 0.
      READ TABLE IT_ADR6 INTO WA_ADR6
        WITH KEY ADDRNUMBER = WA_LFA1-ADRNR BINARY SEARCH.
      IF SY-SUBRC = 0.
*       Fill the append-structure field with the e-mail address
        <FS_VEND>-ZZSMTP_ADDR = WA_ADR6-SMTP_ADDR.
      ENDIF.
    ENDIF.
    CLEAR: WA_LFA1, WA_ADR6.
  ENDLOOP.

0VENDOR_ATTR output with the vendor e-mail address:

[screenshot: 6.PNG]

Hope it will help.

Thanks,

Phani

Below are a few reasons why scheduled process chains do not start executing:

 

a. The factory calendar: only the working days defined in the factory calendar will be considered for the execution of already scheduled process chains.

 

 

 

b. Authorization: when a process chain is scheduled by user XXXX and the authorization for scheduling or executing process chains is later removed from that user, the process chain will fail to execute the next day. It is therefore recommended to always schedule process chains with the user "ALEREMOTE", as this user has SAP_ALL access and there is no chance of its authorization being removed.

DataSources fetch data into the BW system from various application systems via delta or full loads. A delta load captures the records changed since the previous load completed, based on the delta logic and the filters applied during delta initialization in BW.

        Sometimes manual checking of the delta queues is necessary, because delays occur when a huge number of records accumulates in a delta queue. The conventional process involves manually checking each delta queue separately to avoid delays in the regular data loads, which consumes a lot of effort. The approach proposed here addresses these gaps and provides the flexibility to select the type of application system or specific DataSources. Once the data is extracted, automatic e-mail alerts are sent to the support team by a program.

 

Solution:

A custom DataSource based on an extractor function module has been implemented to get the number of records for all delta-enabled DataSources. The DataSource fetches the names of all delta-enabled DataSources (standard or custom) and the number of records each will deliver in the next load. It runs a couple of hours before the regular data loads start, followed by a program that checks the number of records for each delta-enabled DataSource, compares it with a threshold limit, and triggers an e-mail to the IT Ops support team if a DataSource contains more records than usual.

 

To automate the regular monitoring of the delta loads, perform the following steps:

 

  1. Create a generic DataSource based on a function module in the application system:

              1. The DataSource structure contains the DataSource name, the number of records and the number of LUWs.

              2. Create a function module that determines the delta queue length (number of records and number of LUWs) for each DataSource. Use the standard function module 'RSC1_TRFC_QUEUE_READ' to fetch the number of records and LUWs for all delta-enabled DataSources in the given application system.
  2. Replicate the generic DataSource in the BW system.
  3. Create an InfoPackage and add it to the process chain.
  4. After loading the data to the PSA, run a program that checks the queue length of each DataSource available in the PSA table.
  5. Create a program that checks the loaded data in the PSA and compares it with the threshold limit for each DataSource.
  6. Set the threshold limit for each DataSource based on past data loads.
  7. The program checks the length of each DataSource in terms of the number of records; if any DataSource exceeds its threshold, it sends a mail to the members of a distribution list.
  8. Write a second program, for safety purposes, with the following steps:
  9. The second program waits for 20 minutes and then checks, in the process chain table, the status of the variant that loaded the DataSource data from step 1.
  10. If the status is not green (G), it sends a mail to the members of the distribution list.

Usually a delay occurs whenever there is a failure or a huge number of records is found.

  11. Finally, set global variables via transaction STVARVC for the waiting time and the threshold limits instead of hard-coding them. These global variables can be changed at any time in the production environment, as required. (A sketch of the threshold check follows below.)
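To illustrate steps 5 to 7 and step 11, here is a minimal ABAP sketch of the threshold check and e-mail alert. It is a sketch under assumptions: the TVARVC variable name ZDELTA_THRESHOLD, the DataSource name, the hard-coded record count and the recipient address are all hypothetical; in the real program the record count would be read from the PSA table filled by the custom DataSource built around RSC1_TRFC_QUEUE_READ.

* Minimal sketch: compare a DataSource's delta queue record count
* against a threshold maintained in STVARVC (table TVARVC) and send
* an alert mail via CL_BCS. Names marked as hypothetical must be
* replaced with real ones.
DATA: lv_low     TYPE tvarvc-low,
      lv_limit   TYPE i,
      lv_records TYPE i,
      lt_text    TYPE bcsy_text,
      ls_line    TYPE soli.

* Hypothetical: in the real program this comes from the PSA table
* filled by the DataSource based on RSC1_TRFC_QUEUE_READ.
lv_records = 150000.

* Read the threshold from the global variable (transaction STVARVC).
SELECT SINGLE low FROM tvarvc INTO lv_low
  WHERE name = 'ZDELTA_THRESHOLD'   "hypothetical variable name
    AND type = 'P'.
lv_limit = lv_low.

IF lv_records > lv_limit.
  ls_line-line = |Delta queue of 2LIS_11_VAITM holds { lv_records } records|.
  APPEND ls_line TO lt_text.
  TRY.
      DATA(lo_send) = cl_bcs=>create_persistent( ).
      lo_send->set_document( cl_document_bcs=>create_document(
          i_type    = 'RAW'
          i_text    = lt_text
          i_subject = 'Delta queue threshold exceeded' ) ).
      lo_send->add_recipient( cl_cam_address_bcs=>create_internet_address(
          'bw.support@example.com' ) ).  "hypothetical distribution list
      lo_send->send( ).
      COMMIT WORK.
    CATCH cx_bcs.
      " Handle the send error, e.g. write it to the application log.
  ENDTRY.
ENDIF.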

It has been a while since I last worked on an archiving solution; during 2008 to 2009 I used the standard SAP archiving solution (SARA) on BW 3.5. Now I got the chance to work on archiving with SAP BW on HANA. You can find plenty of how-to documents on the NLS archiving solution.

                As usual, SAP has improved the solution a lot and developed NLS with SAP Sybase IQ as a (columnar) database, compared to SARA with tape drives. Anyway, this blog is not meant to compare the two; I would like to share my learnings, tips and tricks for NLS archiving on SAP BW on HANA.

 

                Before proceeding with the blog, I would like to thank my onsite coordinator for all the knowledge transfer.

Tips & Tricks:

  • Before starting with NLS archiving, list all the InfoProviders that need to be archived. Prioritize based on business requirements, users' priorities, and the volume and size of each InfoProvider.
  • Get the volume, size and years of data stored in each InfoProvider from the Basis team, so that you can quickly decide what has to be archived and what is not needed.
  • Archive in phases: for example, run up to step 50 (verification phase) and schedule step 70 (deletion phase, from BW) later.
  • If InfoProviders are loaded with historical values, i.e. the daily delta brings changed data for previous years, such InfoProviders cannot be archived: once data is archived to NLS it is locked against change, and loading/changing old data is no longer allowed.
  • For large-volume InfoProviders such as Sales cubes, find characteristics other than time to divide the data for archiving. Otherwise archive jobs may take a long time and a lot of memory (remember, in HANA this is main memory, which is expensive). The system can run out of allocated memory if archive jobs run in parallel.
  • Schedule/take a regular backup of both the NLS and the existing SAP BW on HANA system before deleting data, so that in the worst case (if either system crashes) you can recover from the backups.
  • Keep a tracker that lists every InfoProvider with its current status and the steps completed; a sample is below, and it may vary per person/project requirement.

[screenshot: ArchivalTracker.jpg]

  • Most important: schedule the archive jobs instead of executing them manually. This way you save time and effort and use the system effectively.

There are two ways to do it.

 

I. Using Process Chains

 

  1. Go to transaction RSPC and create a new process chain.
  2. Expand 'Data Target Administration' and pull the 'Archive Data from an InfoProvider' process into the chain.
    • [screenshot: ProcessChain1.jpg]
  3. Create a process variant in the next prompt, enter the required details and find the desired InfoProvider.
    • [screenshot: ProcessChain2.jpg]
  4. On the next screen, enter the time slice (or any other characteristics) and select the step (here, up to step 50).
    • [screenshot: ProcessChain3.jpg]
  5. The final process chain looks like below.

[screenshot: ProcessChain4.jpg]

Note: for request-based archiving, use the option "Continue Open Archiving Request(s)". Explore this further on your own; in my case archiving is always based on a time slice.

 

Pros:

  1. Process chains are useful when archiving is done regularly and there is a requirement to archive data of a specific time/year to NLS.
  2. If the number of InfoProviders and time slices is limited, a process chain can be created and scheduled for each InfoProvider.

Cons:

  1. If the archiving is time-based and a one-time activity, it is tedious to create a chain for each InfoProvider and its time slice.

 

II. Using Job scheduling technique

A process chain is not well suited to our requirement, hence we went for the job scheduling technique: schedule the jobs one after the other, using the completion of the previous job as the start condition.

 

Pros:

  1. It is instant, quick and easy. Most importantly, each job can be planned for a time when the system is free, and the number of jobs can be increased or decreased based on the available system time. For example, you can schedule the archiving of just 6 months of data, or of 8 or 12 months.
  2. You stay flexible to change the archiving steps or the time slice, and there is no maintenance as with process chains.

Cons:

  1. In scenarios like a full memory, jobs may be cancelled while the subsequent jobs still start afterwards, again trying to fill the memory; the system might shut down, crash, or slow down other processes.
  2. Once scheduled, you may not be able to change the jobs due to limited authorizations in production.

 

Let's look at how to schedule:

  1. Go to RSA1, select InfoProvider, search for the desired InfoProvider and go to its manage screen.
  2. If a DAP (Data Archiving Process) has already been created, there will be a new tab in the manage screen named 'Archiving'.
  3. Click the Archiving Request button underneath; a new popup appears to enter the details of the archiving conditions. [screenshot: Job1.jpg]
  4. First select STEP 50 as shown, go to 'Further Restriction' and enter the criteria. [screenshot: Job2.jpg]
  5. For the very first job, select a date and time at least half an hour from the current time, so that you can still schedule the remaining jobs. Check and save, then click 'In the Background' to schedule.
  6. Click the Job Overview button (next to Archiving Request) in the Archiving tab of the manage screen to see the released job and its details.

         [screenshots: Job3.jpg, Job4.jpg]

7. For the next job, we will enter the released job's name as the start condition. Just copy the name of the job when you schedule the next one. This step is simple because there is only one job in status 'Released', and that job will be taken as the start condition.

[screenshot: Job6.jpg]

8. For subsequent jobs, we need the job ID of the released job. We can get this ID in two ways: either from the Job Overview (go to the released job, double-click it, then click on the job details to see the job ID), or by watching the footer (bottom left of the screen) for the job ID the moment you schedule the job.

[screenshot: Job5.jpg or Job8.jpg]


    • The reason we need this ID is that there will now be two jobs in status 'Released'. The third job must start after the completion of the second, and here is the trick: if you specify only the job name, the system will take the old/first job. So for the third job, just select a date and time and click the check button.
    • Then select the start condition 'after job'; a popup appears in which you select the correct predecessor. In this screen, pick the job whose job ID you noted for the second job.

[screenshot: Job7.jpg]

   9. Once all jobs are scheduled, the released job status looks like below.

[screenshot: Job9.jpg]

  10. Check the jobs once in a while via SM37; note that the jobs start one after the other.

[screenshot: Job10.jpg]
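As a side note, the same "start after predecessor" condition can also be set programmatically via the standard background job API. A minimal sketch, assuming a hypothetical wrapper report ZNLS_ARCHIVE_STEP50 that creates the archiving request, and hypothetical job names/IDs (verify the JOB_OPEN/JOB_CLOSE signatures in SE37 for your release):

* Minimal sketch: schedule a job that starts only after a predecessor
* job has finished. All names below are hypothetical.
DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_NLS_ARCHIVE_02',
      lv_jobcount TYPE tbtcjob-jobcount.

CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

* Hypothetical wrapper report that triggers the archiving request.
SUBMIT znls_archive_step50 VIA JOB lv_jobname NUMBER lv_jobcount
       AND RETURN.

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobcount          = lv_jobcount
    jobname           = lv_jobname
    pred_jobname      = 'Z_NLS_ARCHIVE_01'   "predecessor job name
    pred_jobcount     = '12345678'           "predecessor job ID
    predjob_checkstat = 'X'.                 "start only if it ended OK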

 

Start archive jobs in parallel only after considering the available system memory, and work efficiently while these jobs are in progress. Again, these are my own experiences; feel free to correct me if you know a more efficient way of achieving the task, or to add any steps if required.

 

Feel free to share your feedback, thanks for reading the blog.

 

Related Content:

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/60976558-dead-3110-ffad-e2f0af3ffb90?QuickLink=index&overridelayout=true&59197534240946

Executive Summary:


There is a failure in the BW data load due to a problem with system-generated code. The problem occurs while loading data from the source (master data) to the target (DSO). The system-generated code (in the transformations, code with names of the format GPxxxxxxxx) was not properly replicated into the production system after one of the change requests was moved there.


 

The screenshots below were taken from the quality and production systems. In the quality system the system-generated code is replicated properly; in the production system it is not.


In Quality System:


 

In Production System:


 

Resolution:


We need to re-transport the transformation that loads the data from the source (master data) to the target (DSO). By doing this, the system-generated code will be replicated properly in the production system.

This paper describes how to use proper modeling techniques to design process chains in order to reduce the data load runtime when data comes from different source systems. The modeling techniques include backend data model optimizations, utilization of work processes, using parallelization instead of serialization, proper design settings, process chain design, etc.


 

Below are the procedures followed to reduce data load runtimes when loading data through process chains.

 

  • Always load master data first and then transaction data. This reduces the runtime of the transaction data load by cutting the time spent on SID processing.

 

  • Use parallel object execution rather than serialization. This technique needs more work processes.

 

  • Avoid complex programming logic in the BW data model wherever possible by using formulas, the read-master-data concept, etc.

 

  • Maximize parallel processing of the data packets in the DTP execution via the settings at DTP level. This technique needs more work processes.

 

  • Use DataSources with a delta mechanism wherever applicable.

 

  • Avoid creating secondary indexes unless they are really required, because maintaining them consumes significant time.

 

  • If there is no reporting on a DSO, uncheck the SID generation flag in the DataStore object settings to save data load runtime.

 

  • Utilize the maximum available table space to achieve good data load performance.

 

  • Utilizing the maximum number of work processes in parallel has a positive effect on the data load runtime (i.e. the runtime decreases).

 

  • InfoCube index deletion: delete the cube indexes before loading data and regenerate them after loading (see the sketch after this list). This reduces the data load runtime.

 

  • Create separate master data process chains (attributes and texts) where data changes are rare, and schedule them on a weekly/monthly basis, depending on customer requirements, to capture these changes.
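As an illustration of the InfoCube index handling mentioned above, here is a minimal ABAP sketch using the standard function modules that the corresponding process chain steps rely on (the cube name ZSALESCUBE is hypothetical; verify the function module interfaces in SE37 for your release):

* Minimal sketch: drop InfoCube secondary indexes before the load and
* rebuild them afterwards. ZSALESCUBE is a hypothetical cube name.
DATA lv_cube TYPE rsinfocube VALUE 'ZSALESCUBE'.

* Drop the indexes before loading.
CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
  EXPORTING
    i_infocube = lv_cube.

* ... the data load (DTP execution) runs here ...

* Rebuild the indexes after loading.
CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
  EXPORTING
    i_infocube = lv_cube.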


1. BEx query discontinued features:-

 

If we run the program SAP_QUERY_CHECKER_740, we will get all the queries that are no longer supported in BW 7.4.

Run the report SAP_QUERY_CHECKER_740, preferably in the background as it may run very long. The spool output shows the queries that will no longer run correctly in 7.40. We will get output somewhat like this…

 

 

 

The following features are discontinued in 740:

 

  • Constant Selection with Append (CSA) is no longer supported and can no longer be modeled in the 7.00 BEx Query Designer. Some business scenarios use this feature to model an outer join.
  • The business requirement can be met by modeling an InfoSet. Using an InfoSet is highly recommended, especially if a selection condition that was evaluated via virtual InfoObjects in the CSA approach can be handed down directly to the SQL statement.
  • In a BW InfoSet, the join operation LEFT OUTER JOIN was until now not permitted with a BW InfoCube, because in this case SQL statements with poor performance may be created.
  • Formulas before aggregation are no longer supported. The report SAP_QUERY_CHECKER_740 analyzes both the calculated key figure definitions and the query definitions that use the calculated key figures.

Exception Aggregation:-

Exception Aggregation can be defined for a basic key figure, calculated key figures and formulas in the Query Designer. It determines how the key figure is aggregated in the query in relation to the 'exception' characteristic. There is always only one Reference Characteristic for a certain Exception Aggregation.

 


If we use Exception Aggregation, the following two points are important to know:

The Reference Characteristic is added to the 'Drilldown Characteristics', and aggregation is carried out by the OLAP processor for these 'Drilldown Characteristics'.
The Exception Aggregation is always carried out last, after all other necessary aggregations.

  • If the calculation operation commutes with the aggregation, the flag "Calculate After Aggregation" can be set. The option 'Calculate Before Aggregation' is obsolete now and shouldn't be used any longer. Calculating before aggregation results in poor performance because the database reads the data at the most detailed level and the formula is calculated for every record.
  • Aggregation and calculation occur at different points in time. By default, the data is first aggregated to the display level and afterwards the formulas are calculated (see the worked example below).
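A small worked example makes the difference concrete. Assume two line items, one with quantity 2 at price 10 and one with quantity 3 at price 20, and the formula quantity * price:

  Calculated after aggregation:  SUM(quantity) * SUM(price) = (2 + 3) * (10 + 20) = 150
  Calculated before aggregation: SUM(quantity * price) = 2*10 + 3*20 = 80

Only the second result is correct per line item, but producing it forces the calculation down to the most detailed level, which is exactly why calculating before aggregation is expensive and why exception aggregation over a chosen reference characteristic is the supported way to control the calculation granularity.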

   

 

The Exception Aggregation setting allows the formula to be calculated before aggregation over a chosen Reference Characteristic. The remaining aggregation is then executed using the defined Exception Aggregation, for example 'average' or 'last value'.

Calculate after Aggregation: This field is only displayed for Calculated Key Figures; it is not displayed for formulas.

 

If the operation has to be calculated at a certain granularity, use formula exception aggregation to specify the granularity at which the formula is calculated.

Thereby, you can create calculated key figures using a formula that itself uses exception aggregation (this is a nested exception aggregation).


2. Call Function not found post Upgrade:-

 

Process chain steps loading hierarchies will fail. After upgrading to 7.40, when we load hierarchies using InfoPackages, the loads fail if the InfoObject doesn't have a conversion exit defined. This is due to a program error.

To resolve the issue we need to implement SAP Note 1912874 - CALL_FUNCTION_NOT_FOUND.

 

On further analysis, it shows ABAP dumps.

 

 

 

Activation of SICF services in BW:-

 

During the upgrade, the Software Update Manager disables services of the Internet Communication Framework (ICF) for security reasons. After the upgrade to SAP BW 7.4, ICF services will therefore be inactive. The services need to be activated on an application-related basis only; this can be done manually (right-click, then Activate) by following the URL given in the error screen in transaction SICF.

 

In transaction SICF you can find most of the services that need to be activated after the BW 7.4 upgrade under default_host/sap/public, and the tree will open.

 

If, for example, you want to activate services for the /SAP/public/icman URL, you have to activate the "default host" service tree in transaction SICF. After that, you must activate the individual "sap", "public", and "icman" services.

 

You can activate an ICF service as follows:

1. Select the ICF service in the ICF tree in transaction SICF.

2. You can then activate the service in one of the following ways:

a) Choose "Service/Virt. Host" -> "Activate" from the menu.

b) Right-click to open the context menu and choose "Activate service".

 

 

If the "default host" node is inactive in transaction SICF, the HTTP request produces a "RAISE_EXCEPTION" ABAP runtime error stating that the HOST_INACTIVE exception condition was triggered. If a service is inactive in the SICF transaction, the error text "Forbidden" is displayed when you access this service.

 

Some services must be activated in the system, depending on the operational scenario:

Support for the Internet protocols (HTTP, HTTPS and SMTP) in the SAP Web Application Server: /default_host/sap/public/icman.

 

After you have installed SAP Web Application Server, you must ensure that this service is activated in transaction SICF.

 

We will face issues in the Metadata Repository, Maintain Master Data, etc. For these we need to activate the corresponding services in BW. For example:

 

 

 

Pre upgrade

 

 

Post upgrade BW 7.4:-

 

After the upgrade we will find a lot of services in inactive state; we need to activate them.

 

 

 

In the case of the Metadata Repository we need to activate the services.

 

 

 

 

 

 

Some of the important services need to be activated as part of Post upgrade Checks.

 

With the message server

 

/default_host/sap/public/icf_info

/default_host/sap/public/icf_info/logon_groups

/default_host/sap/public/icf_info/urlprefix

 

With the Web Dispatcher

/default_host/sap/public/icf_info

/default_host/sap/public/icf_info/icr_groups

/default_host/sap/public/icf_info/icr_urlprefix

 

Using Business Server Pages (BSP)

/default_host/sap/bc/bsp/sap

/default_host/sap/bc/bsp/sap/system

/default_host/sap/bc/bsp/sap/public/bc

 

 

Analysis Authorization:-

 

If we were using the reporting authorizations concept and upgraded from SAP NetWeaver 7.3, we have to migrate these authorizations to the new analysis authorization concept or redefine the authorizations from scratch.

In SAP BW 7.3, analysis authorizations are optional because reporting authorizations still work. But in 7.4 there are no reporting authorizations; analysis authorizations are mandatory. All BW roles should be migrated to analysis authorizations.

The authorization objects S_RS_ICUBE, S_RS_MPRO, S_RS_ISET and S_RS_ODSO were checked under reporting authorizations, but these objects are no longer checked during query processing in BW 7.4. Instead, the check is performed using the special characteristics 0TCAIPROV (authorizations for InfoProvider), 0TCAACTVT (activity in analysis authorizations) and 0TCAVALID (validity of an authorization). These are standard InfoObjects in BW.

These authorization objects are offered as a migration option during migration configuration. If you select them, authorizations for these special characteristics are generated according to the entries in the activity and the associated field for the corresponding InfoProvider, and then assigned to the users.

 

With this authorization concept we will not be able to access any query output; the system shows "You don't have sufficient authorization for the InfoProvider". Unless we add the 0BI_ALL object, we cannot access any query output, but per security policy it will not be given to any user. So we need to implement analysis authorizations to get the output of the queries.

 

The InfoObjects which are authorization-relevant:

 

 

When we check the authorization value status table: in the older version we have the 0BI_ALL authorization in the "Name of an Authorization" field.

 

 

But in the upgraded SAP BW 7.4 version we have:

 

 

0BI_ALL assigns all analysis authorizations to a user, equivalent to SAP_ALL in BI. It can be assigned directly via RSU01.

 

We can check in table RSDCHA (characteristic catalog) which InfoObjects are flagged as authorization-relevant. Whenever these InfoObjects are used in a query, the user will not get the output if he is not authorized.

 

Custom authorization objects can be created and assigned to users.

 

Exceptions are validity (0TCAVALID), InfoProvider (0TCAIPROV) and activity (0TCAACTVT), which cannot be removed and always have to be authorization-relevant.
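For orientation, a minimal analysis authorization (maintained in transaction RSECADMIN) could contain values like the following; the InfoProvider ZSALESCUBE and the 0CALYEAR restriction are hypothetical examples, not part of the original scenario:

  0TCAIPROV   I EQ ZSALESCUBE     "InfoProvider the user may query (hypothetical)
  0TCAACTVT   I EQ 03             "activity: display
  0TCAVALID   I EQ *              "valid without time restriction
  0CALYEAR    I BT 2014 2015      "data restriction on an authorization-relevant characteristic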

 

Some of the authorization issues faced after the upgrade concern semantically partitioned objects and writing ABAP routines.

The available user roles should be extended with the object S_RS_LPOA to give the required access. This is the authorization object for working with semantically partitioned objects and their subobjects.

 

 

 

 

When writing routines, we will not be authorized.

 

 

 

 

 

Authorization Objects for Working with Data Warehousing Workbench

 

We should have the authorization for authorization object ABAP Workbench (S_DEVELOP) with the following field assignment:

  • DEVCLASS: You can choose from the following program classes, depending on the routine type:
  • "BWROUT_UPDR": Routines for update rules
  • "BWROUT_ISTS": Routines for transfer rules
  • "BWROUT_IOBJ": Routines for Info Objects
  • "BWROUT_TRFN": Routines for transformations
  • "BWROUT_ISIP": Routines for Info Packages
  • "BWROUT_DTPA": Routines for DTPs
  • OBJTYPE: "PROG"
  • OBJNAME: "GP*"
  • P_GROUP: "$BWROUT"
  • ACTIVITY: "23"

-------------------------------------------

                                                           Fiscal Period description (text) showing wrong values

 

 

Issue:--

 

Fiscal period (0FISCPER) shows an incorrect description at InfoObject and report level.

 

For illustration purposes, here is an example.

 

Incorrect fiscal period description at report level:

[screenshot: Fiscal period(wrong).png]

 

Expected correct description:

 

Fiscal Year/Period   Description
001.2015             July 2014
002.2015             August 2014
003.2015             September 2014
004.2015             October 2014
005.2015             November 2014
006.2015             December 2014
007.2015             January 2015
008.2015             February 2015
009.2015             March 2015
010.2015             April 2015
011.2015             May 2015
012.2015             June 2015

 

 

Resolution:--

 

Go to the path below in SBIW and change the "Text Fiscal Year / Period" value to "2 Calendar Year", as shown in the screenshot below.

 

 

[screenshot: Setting1.jpg]

 

 

After modifying the setting, the fiscal period description shows the correct values.

 

Please find below screenshot.

  [screenshot: Fiscal period (Correct).png]

 

I hope this doc will help you guys!!

                                          SAP MRS (Multi Resource Scheduling) - A ready reference (Part 1)

 

 

 

 

 

Being a beginner in SAP MRS is a challenge. As a BW techie, when I started in the MRS module, I found a lot of scattered information, but there was no article or blog with consolidated information even on the basic terminology of SAP MRS.

 

This blog is my attempt to provide a novice with an insight into the basic aspects and terminology of SAP MRS.

 

 

 

 

 

Introduction to MRS

 

 

 

SAP Multiresource Scheduling enables you to find suitable resources for demands and assign them to the demands. Demands are units of work from the areas of Plant Maintenance or Project System for which a resource is to be planned.

 

It is an end-to-end scheduling process.

 

 

 

 

 

Salient features of MRS --

 

 

 

  • Effectively manage high volumes of resources and demands for your service, plant maintenance, or project business.

 

  • Get a real-time view of resources and project assignments with a user-friendly planning board.

 

  • Drag and drop to update project details.

 

  • Boost productivity and reduce operational downtime.

 

  • With the ability to view, analyse, and interpret order data, you can easily match technician skills to assignments – for better repair work and improved service quality.

 

  • We can integrate MRS with all SAP modules.

 

 

 

 

MRS runs fully integrated in the ERP system.

 

 

 

  • PS Integration
  • HR Integration
  • PM Integration
  • DBM Integration
  • cProjects Integration

 

PS Integration --

 

 

 

The diagram below explains the process from the creation of projects until approval.

 

[screenshot: Project process Overview.png]

 

 

 

The diagram below explains resource assignment in project planning, from the creation of the resource request until task completion.

 

[screenshot: Resource planning.png]

 

 

 

Each network activity is converted to a demand. The screenshots below show how resources are assigned to an activity and how resource planning works.

 

 

[screenshot: Activity.png]

 

 

 

 

MRS Planning board

 

 

 

The diagram below shows the planning board. We can see the network activities on the left-hand side and the number of resources next to each network activity (here it is 1).

 

 

 

The planning board is the main work area in MRS; here we can assign resources on the respective days.

 

We can create assignments for resources and split an assignment across multiple days.

 

 

 

 

 

We can do many other things like time allocation, leaves, and color configuration (to identify task status).

 

 

[screenshot: Planning Board.png]

 

 

 

Some important t-codes of MRS (PS- and PM-related) --

 

 

 

OPUU - Maintain Network Profile (Project Systems - Network and Activity)

CJ2B - Change Project Planning Board (Project Systems - Project Planning Board)

OPT7 - Project Planning Board Profile (Project Systems - Project Planning Board)

/MRSS/PLBOORGSRV - Planning Board, General (PM - Maintenance Orders)

PAM03 - Graphical Operational Planning, PAM (PM - Maintenance Notifications)

CJ2C - Display Project Planning Board (Project Systems - Project Planning Board)

 

 

Below are the important tables related to MRS; the comments column explains the table contents in layman's terms.

 

 

 

 

Table              Description                                                    Comments
/MRSS/D_CAG_CG_G   Type G Capacity Graphs: Basic Availability w/o On-Call Times   Resource Assigned Hours
/MRSS/D_CAG_CG_H   Capacity Graph Type H: Basic Availability                      Resource Available Hours
/MRSS/D_CAG_CG_B   Type B Capacity Graphs: W/o Cap. Assgmnts, w/ Reservations     Resource Adhoc Hours
/MRSS/D_CAG_CG_A   Type A Capacity Graphs: W/ Cap. Assignments & Reservations     Remaining Hours
/MRSS/D_RES_TA     Time allocations for resources                                 Time Allocation
/MRSS/D_BAS_ASG    MRS Basis Assignments                                          Booked Hours
/MRSS/D_DEM_INFO   Informative Fields for Demand Items                            Resource Utilization Hours
/MRSS/D_DEM_PLR    Data required for planning-relevant items                      Demand Hours
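Before building BW extractors or reports on top of these tables, a quick row-count sanity check from ABAP helps to verify which of them carry data in your system. A minimal sketch using only a table name from the list above (no assumptions about its fields):

DATA lv_cnt TYPE i.

* Count the booked-hours assignments (see the table list above).
SELECT COUNT( * ) FROM /mrss/d_bas_asg INTO lv_cnt.
WRITE: / 'Rows in /MRSS/D_BAS_ASG:', lv_cnt.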

 

 

 

In my next blog I will explain the BW MRS reports on top of these tables and the integration of other modules with MRS.

 

 

Other reference for MRS understanding:

 

http://help.sap.com/mrs

Note: I originally published the following post in my company's blog on software quality. Since it might be interesting for developers in the SAP BW realm, I re-publish it here (slightly adapted).


As my colleague Fabian Streitel explained in another post, a combination of change detection and execution logging can substantially increase transparency regarding which recent changes of a software system have actually been covered by the testing process. I will not repeat all the details of the Test Gap Analysis approach here, but instead just summarize the core idea: Untested new or changed code is much more likely to contain bugs than other parts of a software system. Therefore it makes sense to use information about code changes and code execution during testing in order to identify those changed but untested areas.

 

Several times we heard from our customers that they like the idea, but they are not sure about its applicability in their specific project. In the majority of these cases the argument was that the project mainly deals with generated artifacts rather than code, ranging from Python snippets generated from UML-like models and stored in a proprietary database schema to SAP BW applications containing a variety of artifact types beyond ABAP code. Even under these circumstances Test Gap Analysis is a valuable tool and may provide insight into what would otherwise be hidden from the team. In the following I explain how we applied Test Gap Analysis in an SAP BW environment.

 

The Starting Point

As you all know, in SAP BW a large amount of development is performed graphically in the BW Workbench. In cases where custom behavior is required, routines can be attached at well-defined points. As a consequence, it is very hard to track changes in a BW application. Of course, there is metadata attached to every element containing the relevant information, but seeing all changes that have occurred since a given point in time (e.g., the current production release) is not a trivial task. The same holds for execution information. Since we were already using Test Gap Analysis for transactional ABAP systems, we reached out to a team developing in BW and showed them some results for their own custom ABAP code.

 

[screenshot: only-manual-code.png]
Figure 1: Test Gaps in Manually Maintained ABAP code only

 

The picture shows all ABAP code manually maintained by the development team. Each rectangle having white borders corresponds to a package, and the smaller rectangles within correspond to processing blocks, i.e., methods, form routines, function modules, or the like. As explained in Fabian’s post, grey means the block was unchanged, while the colors denote changed blocks. Out of these, the green ones have been executed after the most recent change, while the orange ones have untested modifications with regard to the baseline, and the red ones are new and untested. Of course, tooltips are provided when hovering over a rectangle containing all the information necessary to identify which code block it represents - I just did not include it in the screenshot for confidentiality reasons.

 

Moving Forward

As expected, the feedback was that most of the development effort was not represented in the picture, since development mainly happened in Queries, Transformations and DTPs rather than plain ABAP code. Another insight, however, was that all these artifacts are transformed into executable ABAP code. Therefore, we analyzed the code generated from them, keeping track of the original objects’ names. The result was (of course) much larger.

 

[screenshot: all-code.png]
Figure 2: Code generated from DTPs (left), Transformations (middle top), Queries (middle bottom), and manually maintained code (far right)

 

Including all the programs generated out of BW objects, the whole content of the first picture shrinks down to what you can see in the right column now, meaning that it only makes up a fraction of the analyzed code. Therefore, we have two main observations: First, ABAP programs generated from BW objects tend to get quite large and contain a lot of methods. Second, not every generated method is executed when the respective BW object is executed. In order to make the output more comprehensible, we decided to draw only one rectangle per BW object and mark it as changed (or executed) if at least one of the generated methods has been changed (or executed). This way, the granularity of the result is much closer to what the developer expects. In addition, we shrink the rectangles representing these generated programs by a configurable factor. Since the absolute size of these programs is not comparable to that of manually maintained code anyway, the scaling factor can be adjusted to achieve an easier to navigate visual representation.

 

[screenshot: all-code-adjusted.png]
Figure 3: Aggregated and scaled view for generated code (left to middle) and manually maintained code (right)

The Result

With this visualization at hand, the teams can now directly see which parts of the application have changed since the last release in order to focus their test efforts and monitor the test coverage over time. This helps increase transparency and provides timely feedback regarding the effectiveness of the test suite in terms of change coverage.

A recap ...

 

Since NetWeaver Release 7.0, the SAP Java (J2EE) stack has been a component of the SAP BW reporting architecture and also formed the foundation for BW Integrated Planning (BW-IP). I also spent a lot of time creating documents and giving the main input for the BI-JAVA CTC template, which is described for 7.0x here - SAP NetWeaver 7.0 - Setting up BEx Web - Short ... | SCN - and for 7.3x/7.40 here - New Installation of SAP BI JAVA 7.30 - Options, Connectivity and Security

 

Then, not really recognized by the audience (not even by me ... ;-), an unspectacular SAP Note was released - Note 1562004 - Option: Issuing assertion tickets without logon tickets - which introduced an extension to the parameter login/create_sso2_ticket. See also the SAP Help background on that topic.

 

btw: the activation of SSL together with the SAPHostAgent can be found in this SFG document - SAP First Guidance - SAP-NLS Solution with SAP IQ | SCN

 

After understanding the impact, I updated the document - SAP NetWeaver BW Installation/Configuration (also on HANA) - and didn't give it much attention. The major difference in my active implementations for SAP BW 7.3x and 7.40 and SAP Solution Manager 7.1 was: no SSO errors at all, and no more connection problems between the ABAP and Java stacks.


Unfortunately, countless SAP Notes, SAP online help pages and SAP tools still refer to the old value login/create_sso2_ticket = 2, but since 7.40 the "correct" value is the system default: login/create_sso2_ticket = 3
btw: did you know that the parameter can be changed dynamically in transaction RZ11? This allows you to switch the parameter while the BI-JAVA CTC template is running and continue the configuration successfully.


[screenshot: create_sso2_ticket.JPG]

[screenshot: CTC_error.JPG]


Finding out the BI-JAVA system status


The easiest way is to call the SAP NetWeaver Administrator at http://server.domain.ext:<5<nr>00>/nwa and proceed to the "System Information" page.

[screenshot: NWA_status.JPG]

[screenshot: NWA_details.JPG]

Details of the "Version" line:

1000   7.40           10    2    2015 …
       Main Version   SPS   PL   Date

 

Note 1961111 - BW ABAP/JAVA SPS dependencies for different NetWeaver releases - is relevant for BI JAVA patch updates

 


Note 1512355 - SAP NW 7.30/7.31/7.40 : Schedule for BI Java Patch Delivery

 

 

 

Running the BI-JAVA CTC template

 


Call the wizard directly with the following URL - http://server.domain.ext:<5<nr>00>/nwa/cfg-wizard

[screenshot: FUN.JPG]

[screenshot: FUN_details.JPG]


Checking the result


Now that this hurdle is taken, we have to check the BI-JAVA configuration with the BI diagnostic tool or directly in the system landscape of the EP.

[screenshot: EP_SLD.JPG]

And the Result in the BI Diagnostic tool (version 0.427)

Note 937697 - Usage of SAP NetWeaver BI Diagnostics & Support Desk Tool

[screenshot: BI_DT.JPG]

To get to this final state, you additionally have to check/correct the following settings in the NetWeaver Administrator for evaluate_assertion_ticket and ticket, according to SAP Note 945055 (the note has not been updated since 2007).

[screenshot: NWA_ticket.JPG]

[trustedsys1=HBW, 000]

[trusteddn1=CN=HBW, OU=SSL Server, O=SAP-AG, C=DE]

[trustediss1=EMAIL=xxx, CN=SAPNetCA, OU=SAPNet, O=SAP-AG, C=DE]

[trustedsys2=HBW, 001]

[trusteddn2=CN=HBW, OU=SSL Server, O=SAP-AG, C=DE]

[trustediss2=EMAIL=  , CN=SAPNetCA, OU=SAPNet, O=SAP-AG, C=DE]


As we are now using assertion tickets instead of logon tickets, the RFC connection from ABAP to Java looks a bit different:

[screenshot: SM59_A_J.JPG]

Cross-check as well the entry for the default portal in the ABAP backend (transaction SM30 => RSPOR_T_PORTAL).

Note 2164596 - BEx Web 7.x: Field "Default Font" missing in RSPOR_T_PORTAL table maintenance dialog

[screenshot: RSPOR_T_PORTAL.JPG]

Check additional settings in the BI-JAVA configuration via the findings from the SAP Note below (even though it is SolMan-related at this time):

Note 2013578 - SMDAgent cannot connect to the SolMan using certificate based method - SolMan 7.10 SP11/SP12/SP13

The note deals with the missing P4/P4S entries in the configuration.

Note 2012760 - Ports in Solution Manager for Diagnostics Agent registration
Note 1898685 - Connect the Diagnostics Agent to Solution Manager using SSL



Activating the BEx Web templates


Ok, this is now solved as well. Now that the BI-JAVA connection technically works, we can check whether the standard BEx Web template 0ANALYSIS_PATTERN works correctly. Please remember that you have to activate the necessary web templates from the SAP BW-BC at least once, otherwise follow the SAP Note

Note 1706282 - Error while loading Web template "0ANALYSIS_PATTERN" (return value "4")

[screenshot: BW-BC.JPG]

Now you can call the report RS_TEMPLATE_MAINTAIN_70 via transaction SE38 and choose 0ANALYSIS_PATTERN as the Template ID.

[screenshot: 0ANALYSIS_PATTERN.JPG]


Running BEx Web from RSRT


Ok, this only proves that the BEx Web template can be called directly. But what happens when you call the web template or any query from transaction RSRT/RSRT2? You will encounter (as I did) that this is a completely different story. Transaction RSRT has three different options to show the result of a query in a web-based format, and we are interested in the "Java Web" based output.

[screenshot: RSRT.JPG]

The recently added "WD Grid" output is nice to use together with the new BW-MT and BW-aDSO capabilities with SAP BW 7.40 on HANA.


But what we see is this:

[screenshot: RSRT_error.JPG]

Hmm? We checked the BI-JAVA connection and the standard BEx Web Template and still there is an error? Is there a problem with RSRT? Is the parameter wrong?

No. Recently (again not really recognized by the audience) another SAP Note was released which also has an impact on SAP EP 7.3x/7.40:

Note 2021994 - Malfunctioning of Portal due to omission of post parameters

In this context the following SAP Note is also important to consider:

Note 2151385 - connection with proxy


After applying the necessary Corrections to the SAP BI-JAVA EP Instance, also the tx. RSRT finally shows the correct output:

[screenshot: RSRT_output.JPG]

 

If you encounter the following error:

 

"DATAPROVIDER" of type "QUERY_VIEW_DATA_PROVIDER" could not be generated

Cannot load query "MEDAL_FLAG_QUERY" (data provider "DP_1": {2})

[screenshot: EP_error.JPG]

 

This is solved by the SAP Note (to apply in the ABAP Backend) - Note 2153270 - Setup Selection Object with different handl id


If the following error ("classic RSBOLAP018") occurs, it can have different causes, e.g. outdated BI-JAVA SCAs, or a user or connectivity problem.

"RSBOLAP018 java system error An unknown error occurred during the portal communication"


Some of them can be solved directly with the following SAP Notes:

[screenshot: RSBOLAP018_2.JPG]


Note 1573365 - Sporadic communication error in BEx Web 7.X

Note 1899396 - Patch Level 0 for BI Java Installation - Detailed Information

Note 2002823 - User-Specific Broadcaster setting cancels with SSO error.

Note 2065418 - Errors in SupportDeskTool

More Solutions can be found here - Changes after Upgrade to SAP NetWeaver BW 7.3x

 


Finally ...


On this journey of finding the real connection, I also found some helpful KBAs, which I added to the existing SCN document - Changes after Upgrade to SAP NetWeaver BW 7.3x - in the 7.3x JAVA section. The good news here is: in the end, the documents are very nice for background knowledge but were almost not needed at all, if you stick to the automated configuration.


So, of course the BI-JAVA - EP configuration is a hell of a beast, but you can tame it ... ;-)

I hope this brings a bit of light into the successful BI-JAVA configuration.


Best Regards

Roland Kramer, PM BW/In-Memory

"the happy one's are always curious"


Business data is often viewed as the critical resource of the 21st century. The more current the business data is, the more valuable it is considered. However, historic data is not utterly worthless. To offer the best possible access to data (meaning the most performant, consistent and correct access) given a fixed budget, we need to know: who consumes which slice of our business data at what point in time? This blog is about how to find valid answers to this question from the perspective of a BW administrator.

Access to the data is granted via SAP BW's analytic engine: SAP BW users access the data via a BEx Query, and the analytic engine in turn requests the data from the persistency services. BW (on HANA) offers a multi-temperature data lifecycle concept: data can be stored in-memory in columnar format, managed via the non-active data concept, kept in HANA Extended Storage (aka Dynamic Tiering), moved to nearline storage, or archived; and, of course, you can delete the data.

Now given our fixed budget, how should we find out how to distribute the data across the different storage layers?

SAP BW on HANA SP 8 comes equipped with the "selection statistics", a tool designed to track data access and then assist in finding a proper data distribution. With the selection statistics you can record all data access requests of the analytic engine on your business data. The selection statistics can be enabled per InfoProvider. If enabled, then for each data access request the minimal and maximal selection date on the time dimension, the name of the InfoProvider, the name of the accessing user and the access time are stored.

One of the major use cases for the selection statistics is the "Data Aging" functionality in the Administrator Workbench (Administrative Tools -> Housekeeping -> Data Aging), which proposes time slices for shifting data to the nearline store. Technically, the "Data Aging" tool assists in creating:

  • Data Archiving Processes
  • Parametrizations (variants) of Data Archiving Processes, containing the proposed time slice
  • Process chains that schedule the Data Archiving Processes

The recording of selection statistics is currently limited to time slices only. This limitation was introduced to:

a)      keep the amount of recorded data under control;

b)      minimize the impact on the query runtime due to the calculation of the data slices;

c)      emphasize time filters, which are usually provided in all queries and are the most important criteria when it comes to data retention and lifecycle considerations.

If you agree with this, fine; otherwise feel free to post a comment and share your view.

 

Here are some screenshots that demonstrate the use of the tools:

1.)    Customizing the selection statistics (transaction SPRO)

[screenshots: Pic1.png, pic2.png]

 

2.)    Analyzing the selection statistics

 

[screenshot: pic3.png]
3.)    Using selection statistics for Data Aging

[screenshot: pic4.png]
