
SAP Business Warehouse


This is the second part of the interview with Juergen Haupt by Sjoerd van Middelkoop. The first part of the interview, covering LSA++, native development and S/4HANA topics, is available here >>

This blog is also available on my company website; it is cross-posted here to reach the SCN audience as well.

 

Q: BW is now more open to non-SAP sources than it was before. Is the main development focus now on supporting any data model and source in BW modeling, or is the focus more on hybrid scenarios?

We are continuously improving and extending BW’s possibilities with respect to supporting non-SAP data as well. That means we no longer force the use of InfoObjects, but enable straightforward modeling of persistencies using fields and the definition of data warehouse semantics using Open ODS Views on top of them. This allows customers to respond faster to business requirements. Next to that, we also support landscapes where customers use SAP HANA as a kind of Data InHub or landing pad, replicating data from any source to HANA and modeling natively on that data. From an LSA++ perspective these areas are like an externally managed extension of the Open ODS Layer.

 

When it comes to data warehousing, the customer can integrate this data virtually with BW data or stage it via generated data flows to BW in order to apply more sophisticated services.

Q: How did BW on HANA and LSA++ change the way you see BW development?

BW on HANA now provides the option to work a lot more with a bottom-up approach. It means that you can improve your models and your data in an evolutionary way, starting for example with fields that define Advanced DSOs in the Open ODS Layer and ending up with Advanced DSOs that also leverage InfoObjects to provide advanced consistency and query services. These Advanced DSOs are shielded by virtual Open ODS Views, allowing a smooth transition between these stages – if a transition is necessary at all. This flexibility is highly important for integrating non-SAP data in a step-by-step manner. I think this complements the proven but slow top-down approach in BW projects as we have seen them in the past.

Q: Talking about development in the current landscape. Customers that migrated to HANA a while ago and are remodeling their current LSA structures find it hard to keep up with developments in BW and the new functionality rapidly becoming available. How can customers develop and remodel without investing in objects that will soon become obsolete?

This is a real challenge. Not a technology challenge, but more of an architectural and functional challenge. How will my landscape of the future look, and what are the functions and features that provide the most value for my business users? I would advise customers to think of their EDW strategy from a holistic point of view. That means, for example, that you can’t see BW on HANA without considering SAP’s operational analytics strategy. Overall, BW is not an island any longer; BW is now more tightly connected than ever to other systems. So we have to think about the future role of all of our systems and what services they should provide.

So when customers think about going to BW on HANA, normally the first question is “Do we go greenfield or are we going to migrate?” This is a very understandable question, but I fear that it does not go far enough.

Q: Most customers, when on the decision point to migrate or greenfield, consider their current investments and make sure these investments will not be undone.

Yes. Very often, but not always. Recently we have seen a steady increase in customers choosing a greenfield approach. They see that introducing BW on HANA is more than just a new version that you upgrade to. They are aware that BW on HANA means running and developing solutions on a really new platform, and they do not want to bring their ‘old’-style solutions onto this new platform. So these customers go for a greenfield approach. This approach does not, of course, prevent you from transporting in some of your existing content that you want to keep and may have invested heavily in.

Q: This point of view is quite opposite of SAP’s ‘non-disruptive’ marketing strategy

What does non-disruptive mean? It is non-disruptive when it comes to migrating existing systems – yes. But does a ‘non-disruptive’ strategy really change the world into a better one? If you look at BW on HANA just as a new, better version, a non-disruptive migration would be your choice. But if you have the idea that BW on HANA is something really new, something that allows you to create value you could never offer before and that enables you to rethink the services you want your BW data warehouse to provide, bringing it to a new level, then you cannot be non-disruptive.

It’s like driving into the Netherlands from Germany: I only notice it by chance because the road signs are different – the border has disappeared, at least for car drivers. Compared to the EDW, I would say that the border we used to have between the EDW and its sources has always been a very strict one. These borders between systems are more and more disappearing. And this has a lot of influence on all systems and the solutions we build in future. And this is related again to disruption: I can continue to work like I did ten years ago, still stopping at borders that have disappeared in the meantime…

Q: With the Business Suite on HANA and S/4HANA, embedded BW is seen by many as a viable option to use instead of a standalone BW system. In what cases should customers opt for an embedded scenario?

The question here is a matter of your approach. Let’s assume you start with S/4HANA Analytics or HANA Live: you can do everything with these virtual data models as long as business requirements and SLAs are met. Then the question is what to do when we need data warehousing services. Why not use the embedded BW? Yes, especially for smaller companies, this will be an option. There are limitations of course. I think the rule of thumb here is that an embedded BW should not exceed 20% of the OLTP data volume. With the HANA platform it is a matter of managing workload.

But there is also a certain danger with this approach, and it does not derive just from the amount of BW data you should not exceed. The bigger the company is, the more likely you are to have more than a single source. In this case you should think about an EDW strategy from the very beginning; otherwise you will sooner or later start to move data back and forth between these embedded BWs. So the most important thing when making decisions about using the embedded BW is to have a long-term vision of the future DWH landscape. In this context it is important to mention that with SAP HANA SPS9 we have the multi-tenant DB feature that allows us to run multiple databases on the same appliance. So sooner or later we will see BW on HANA and S/4HANA running on different HANA DBs but on the same appliance, meaning that there will no longer be a boundary between BW on HANA and S/4HANA, and you can share data and models between them directly. This would offer the benefits of the embedded BW but with higher flexibility and scalability.

Q: So what you are saying is that embedded BW is an option for now in some cases, but that with HANA multi-tenant DB in the near future, and with multi-source requirements, stand-alone BW is the better option?

That depends on your situation and what you are developing. For smaller clients and simple landscapes I can imagine embedded scenarios functioning very well, even in the future. For most other scenarios, yes, I think stand-alone BW with a multi-tenant DB is the better option.

Thank you very much for this interview!

You are most welcome!

 

This concludes my two-part blog of the interview I conducted with Juergen Haupt. I would like to thank Mr. Haupt for his time and cooperation, SAP for their cooperation in getting this published, and the VNSG for bringing Mr. Haupt to Eindhoven.

Applies to:       SAP BW 7.X


Summary:      

 

This document gives a clear picture of how to handle (calculate) ‘Before Aggregation’ at BEx query level. This option was available in BW 3.x but is obsolete in BW 7.x.


Author:           Ravikumar Kypa

Company:       NTT DATA Global Delivery Services Limited

Created On:    24th July 2015


Author Bio  

Ravikumar is a Principal Consultant at NTT DATA from the SAP Analytics Practice.

 

Scenario:

 

In some reporting scenarios, we need to get the number of records from the InfoCube and use that counter in calculations. We can easily achieve this in a BW 3.x system, as there is a ready-made option provided by SAP (i.e. ‘Before Aggregation’ in the Enhance tab of a Calculated Key Figure) at BEx query level.

 

This option is obsolete in BW 7.x, so we can no longer use it. However, SAP provides a different mechanism to achieve the same result at BEx level.

 

The illustration below explains this scenario:

 

Data:

 

0DOC_NUMBER | MAT_DOC | MATERIAL | MAT_ITEM | PLANT | CALDAY   | PRICE | UNIT
12346       | 23457   | ABC      | 3        | 2000  | 20150102 | 30    | USD
12346       | 23458   | ABC      | 3        | 2000  | 20150102 | 30    | USD
12347       | 23459   | DEF      | 4        | 3000  | 20150103 | 40    | USD
12347       | 23459   | DEF      | 4        | 4000  | 20150103 | 40    | USD
12345       | 23456   | XYZ      | 1        | 1000  | 20150101 | 25    | USD
12345       | 23456   | XYZ      | 2        | 1000  | 20150101 | 25    | USD

 

The user wants to see the Price of each material in the report, and the format of the report is as shown below:

 

MATERIAL | Price / Material
ABC      | 30 USD
DEF      | 40 USD
XYZ      | 25 USD

 

 

If we execute the report in BEx, it gives the result below:

 

1.jpg

But the expected output is:

 

MATERIAL | PRICE OF EACH UNIT
ABC      | 30 USD
DEF      | 40 USD
XYZ      | 25 USD

 

We have to calculate this using a counter at BEx query level. In BW 3.x we can achieve this by using the ‘Before Aggregation’ option in the Enhance tab of the Calculated Key Figure (Counter).

 

Steps to achieve this in BW 3.X system:

 

The formula to calculate the price of each material is Price / Counter.
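As a quick worked example based on the sample data above: material ABC appears in two records, each with a price of 30 USD. After aggregation the price shows 30 + 30 = 60 USD, while the counter calculated before aggregation shows 1 + 1 = 2, so Price / Counter = 60 / 2 = 30 USD, which is the expected price per material.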

 

Create a new Calculated Key Figure (ZCOUNTER1) and give it the value 1.

 

2.jpg

 

In the properties of the Calculated Key Figure, click on the Enhance tab:

 

3.jpg

 

Keep the Time of Calculation as ‘Before Aggregation’, as shown in the screenshot below:

 

4.jpg

If we don’t select the above option, the counter value will be 1, which gives the output below:

 

5.jpg

So we have to calculate the price of each material with the ‘Before Aggregation’ property set (now the counter value will be 2):

 

Now the output of the query will be like this:

6.jpg

Now we can hide the columns ‘Price’ and ‘Counter (Before Aggr)’ and deliver this report to the customer as per the requirement.

7.jpg

This option is obsolete in BW 7.x (see the screenshot below):

 

Create a Calculated Key Figure as mentioned below (Give value 1):

8.jpg

In the Aggregation Tab, unselect the check box: ‘After Aggregation’.

 

9.jpg

You will get the below message:

 

Info: Calculated Key Figure Counter (Before Aggr) uses the obsolete setting ‘Calculation Before Aggregation’.

 

Steps to achieve this in BW 7.X system:

 

Create a Calculated Key Figure as mentioned below (Give value 1):

 

10.jpg

 

If we use this counter directly in the calculation, it gives the output below:

 

11.jpg

We can achieve the ‘Before Aggregation’ behavior in a BW 7.x system by following the steps below:

 

Create Counter1 with fixed value 1:

 

12.jpg

 

In Aggregation Tab select the below options:

 

          Exception Aggregation: Counter for All detailed Values

          Characteristic: 0MAT_DOC (Because we have different Material Documents (23457, 23458) for the material ABC):

 

13.jpg

Now the query output gives the correct value for material ABC, but the other two materials are still incorrect, as their records have the same material document (refer to the sample data):

 

14.jpg

 

Now create Counter2:

 

15.jpg

Aggregation Tab:

 

Exception Aggregation: Summation

Ref. Characteristic: 0MAT_ITEM (Because we have different Material Items (1, 2) for the material XYZ).

16.jpg

 

Now the output shows correct values for materials ABC and XYZ, but we still get wrong values for material DEF, as its records have the same material document and material item:

 

17.jpg

 

Now create Counter3:

 

18.jpg

 

    Exception Aggregation: Summation

    Ref. Characteristic: 0PLANT (Because we have different Plants (3000 and 4000) for the material DEF).

 

19.jpg

 

Now create New Formula: Price of Each Material

 

Price of Each Material  = Price / Counter3
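To see why Counter3 is needed, take material DEF from the sample data: its two records share the same material document (23459) and the same item (4) but have different plants (3000 and 4000), so Counter3 aggregates to 2. The aggregated price is 40 + 40 = 80 USD, and 80 / 2 = 40 USD, which matches the expected output.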

 

20.jpg

Now the output is:

 

21.jpg

 

Now we can hide the columns ‘Price’ and ‘Counter3’ and show the Price of each material in the output:

 

22.jpg

In general, we have to analyze the data in the InfoCube, identify the characteristics over which aggregation happens at BEx query level, and use them as the reference characteristics in the Calculated Key Figures; this way we can obtain the counter (the number of records aggregated).

Scenario:

While sending data to external systems via OpenHub or the Analysis Process Designer (APD), if there are negative numbers in the extracted data, the minus sign is positioned after the number in the output file. However, we want the minus sign to be positioned before the number.

 

 

Reason:

When the data is copied from the OpenHub interface, it is copied from the display in the internal format directly to the string that is finally written to the file. In the internal display of a negative number, the minus sign is displayed after the number.

 

SAP Note: 856619


For example, -9.21 would be stored in SAP as 9.21- (minus sign at the end). This is also what gets transferred to the external system or file.



Workaround:

Changing the OpenHub field setting from Internal to External does not help. However, we can add two simple lines of code to get around this issue.

For each field where you expect a negative sign to occur, put the code below in a field routine or end routine of the transformation. In my scenario it is 0NETVAL_INV (Net Value Invoiced).

 

 

Field Routine:

IF SOURCE_FIELDS-NETVAL_INV IS NOT INITIAL.
  RESULT = SOURCE_FIELDS-NETVAL_INV.
  IF RESULT < 0.
    " Move the trailing minus sign around to the front of the value
    SHIFT RESULT RIGHT CIRCULAR.
    " Remove the blanks between the minus sign and the number
    CONDENSE RESULT NO-GAPS.
  ENDIF.
ENDIF.




End Routine:

LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS> WHERE NETVAL_INV < 0.
  " Move the trailing minus sign around to the front of the value
  SHIFT <RESULT_FIELDS>-NETVAL_INV RIGHT CIRCULAR.
  " Remove the blanks between the minus sign and the number
  CONDENSE <RESULT_FIELDS>-NETVAL_INV NO-GAPS.
ENDLOOP.



You can have multiple variations of this code based on your scenario and the number of fields for which you want to change the sign. The basic ABAP keyword here is SHIFT ... RIGHT CIRCULAR, which moves the trailing minus sign around to the front of the field. You then CONDENSE the field to delete the gap between the minus sign on the left and the number.
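If the same sign handling is needed in several routines, the two statements can also be wrapped in a small reusable helper. The sketch below only illustrates that idea under my own assumptions: the class, method and parameter names are hypothetical and not part of the original transformation.

" Hypothetical utility (class, method and parameter names are my own):
CLASS ZCL_SIGN_UTIL DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS PUT_SIGN_IN_FRONT
      CHANGING CV_VALUE TYPE CLIKE.
ENDCLASS.

CLASS ZCL_SIGN_UTIL IMPLEMENTATION.
  METHOD PUT_SIGN_IN_FRONT.
    " Only touch values that actually contain a minus sign
    IF CV_VALUE CS '-'.
      " Move the trailing minus sign around to the front of the value
      SHIFT CV_VALUE RIGHT CIRCULAR.
      " Remove the blanks between the minus sign and the number
      CONDENSE CV_VALUE NO-GAPS.
    ENDIF.
  ENDMETHOD.
ENDCLASS.

" Example call from a field or end routine:
" ZCL_SIGN_UTIL=>PUT_SIGN_IN_FRONT( CHANGING CV_VALUE = RESULT ).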



Alternatively, we can create a function module in the BW system by copying CLOI_PUT_SIGN_IN_FRONT from ECC (not sure why this is not available in BW by default) and then call that function module. However, as the code is very simple, I would prefer to put it in the routine.

This blog has previously been published on my company's website, and posted here to reach the SCN audience as well.

 

At the High Tech Campus in Eindhoven, the Netherlands, Juergen Haupt, Product Manager SAP EDW (BW/HANA), gave a presentation for the Dutch user group (VNSG). In the morning before the meeting, I was fortunate enough to get the chance to sit down with Mr. Haupt for an interview.

About SAP BW on HANA, LSA++, Native development, S/4HANA Analytics and everything in between.

 

Juergen-Haupt         IMG_0683
Left: Juergen Haupt, SAP. Right: Sjoerd van Middelkoop, SOA People | Intenzz


Mr. Haupt, welcome to Eindhoven! Please introduce yourself to our readers. 


Well, thank you, Sjoerd! OK, my name is Juergen Haupt and I have now been with SAP for 18 years, working in the area of data warehousing. Before joining SAP, I worked at Software AG, where I had my first contact with data warehousing. Starting to work with the early releases of SAP BW, it quickly became clear to me that BW was a fully new BI approach, bringing business requirements into focus. Nevertheless, the first versions were primarily focused on OLAP, not on data warehousing as defined, for example, by Bill Inmon. Knowing about the impact of ‘stove pipes’ and encouraged by customers, I began pushing the idea of Inmon’s ‘single version of the truth’ and Kimball’s ‘conformed dimensions’ towards an architecture-driven BW approach. Around 2005 more and more customers positioned BW as their Enterprise Data Warehouse and asked for more guidance on how to set up a BW EDW. As a consequence we defined the Layered Scalable Architecture (LSA), which has become the standard for setting up a BW EDW on anyDB today.

 

But there is never a standstill. So at the moment we had reached a solid, generally accepted state of LSA on RDBMS, SAP HANA – and a little later BW on HANA – entered the scene. And this is the reason LSA++ for BW on HANA is the successor of LSA for BW on anyDB.

 

 

Q: So, if we compare the ‘traditional’ BW to BW on HANA – what are the major differences?


Well, first of all customers that moved to BW on HANA report tremendous performance gains with respect to data loads and querying. Then they notice the simplification through fewer InfoCubes. Further simplification comes in BW on HANA 7.40 SP8 through the new Advanced DSO that replaces traditional DSOs and InfoCubes. In addition to simplification comes flexibility through the new CompositeProvider, which allows combining any BW InfoProviders (DSOs, the new Advanced DSOs or InfoCubes) to create new virtual solutions. Even combinations with HANA native models outside of BW are possible.

 

But there are benefits at second glance that are maybe not so well known: let’s call it ‘the new openness of BW on HANA’. We all know what effort integrating non-SAP raw data into BW meant in the past. You always had to define InfoObjects and assign them to the raw data fields. This is no longer a prerequisite for integrating data into BW, as BW on HANA 7.40 comes with so-called field-based modeling. Field-based modeling means that you can now integrate data into BW with considerably lower effort than before. Regardless of whether you load data into BW or whether the data resides outside BW, you can now directly model and operate on field-level data without the need to define InfoObjects in advance and subsequently map the fields to the InfoObjects. This makes the integration of any data much easier. And how is this achieved? Well, the new Advanced DSOs allow storing field-level data in BW. Advanced DSOs can have only fields, a mixture of fields and InfoObjects, or just InfoObjects, like the old DSOs. On top of the BW Advanced DSOs with fields, or on any SQL or HANA view outside BW, you define the BW on HANA Open ODS Views to model reusable BW semantics, identifying facts, master data, and semantics of fields like currency fields or text fields. Furthermore, in Open ODS Views you can define associations between Open ODS Views and InfoObjects, which means you model virtual star schemas. Last but not least, you can use Open ODS Views in a query or combine them with other providers in a CompositeProvider, like any InfoProvider.

 

So in short, BW on HANA is capable of modeling and working on raw data regardless of where it is located, and we can integrate this raw data with the harmonized InfoObject world by associating InfoObjects with fields in Open ODS Views.

The idea of working with raw data in BW and the early and easy integration of raw data result in the new ‘Open ODS Layer’, which brings BW and the sources closer together.

 

 

Q: So what you are saying is that the functionality that has been developed for BW on HANA is actually created from an architectural point of view, and not from a technological point of view?


Exactly, this is an important driver. Knowing that HANA can work on data as it is, without transforming it into specific analytic structures, you should be able to work with virtual objects directly on any field-level data. Bringing the source systems closer to BW means that we need something intermediate between the source and the fully fledged, top-down modeled EDW described by InfoObjects. This is achieved by the Open ODS Layer.

 

 

Q: LSA++ is, as you stated, the successor of LSA for BW on HANA scenarios. What are the main differences between the LSA approach and LSA++?


No architecture stays forever. Any architecture has to be reviewed continuously, especially when the circumstances change. When HANA came along and a little later BW on HANA was released, colleagues asked me very early on: “Juergen, can you make an update of LSA for BW on HANA?” I hesitated, because it was clear that BW on HANA is more than just exchanging the relational database, more than the offering of the in-memory BW Accelerator. This is why just an ‘update of LSA’ was and is not adequate – I do not want to bore you with the discussions we had – we can see the results looking at BW on HANA 7.4 and LSA++ as the successor of LSA:

Bearing in mind what I said before about BW on HANA we can look at LSA++ from two different perspectives – the first I call LSA++ for simplified data warehousing.

This perspective deals with the traditional way of doing data warehousing: moving data to BW and organizing the data in a proper way. With LSA++ the architecture becomes far more streamlined and flexible. We find two major differences here with respect to the traditional LSA. First, making persistent data marts – BW InfoCubes – obsolete by using virtual composition of persistent data (CompositeProviders); the result is the LSA++ Virtual Data Mart Layer. Second, bringing BW closer to the source data through BW field-based modeling; the result is the Open ODS Layer.

 

The Open ODS Layer broadens our architecture options, as it may serve as an inbound layer not only for an EDW layer that is described mainly by InfoObjects. We can also stage the data in a DWH layer that is mainly described by fields. We call this a raw or domain data warehouse. A domain data warehouse is dominated by one leading source system, and all other sources integrate into the domain DWH with respect to this leading source. For example, an S/4HANA system can be such a leading source system. All other sources would then integrate into the related BW domain data warehouse with respect to the S/4HANA semantics and values. Defining InfoObjects is always necessary if you have to harmonize multiple equivalent sources – this is the well-known EDW case.

 

But LSA++ is more than just simplified data warehousing. It is an open architecture, allowing an evolutionary DWH approach. I call this LSA++ for logical data warehousing. It is a complementary perspective to the traditional LSA++ simplified data warehousing perspective: sources of any nature (operational sources, data lakes like Hadoop, or Open ODS as Data InHubs) play a role equivalent to the data warehouse: they are a basis for analytics. The logical data warehouse as described by Gartner provides analytics and reporting on the original data as long as you can keep the service level agreements and cover the business requirements. You move data to the data warehouse only if the service requirements are violated or the business requirements cannot be fulfilled.

 

LSA++ supports the logical DWH approach via an agile Virtual Data Mart Layer. Agility comes from two modeling options in BW on HANA. First, it comes through the CompositeProviders, allowing you to combine any BW provider with HANA models from outside BW, wherever they are located. Second, it comes through Open ODS Views of type fact, master or text, allowing you to define dimensional models on any data outside of BW defined by tables, SQL views or HANA views. You always have the possibility to switch a virtual Open ODS View source to a persisted BW Advanced DSO, as suggested by the logical DWH approach. Switching from virtual to persisted means that BW on HANA generates the data flow from the remote source to an Advanced DSO, and the Advanced DSO itself, based on the definition of the Open ODS View.

If you look at the virtual models on the source systems, such as those offered by HANA Live or S/4HANA Analytics, BW can then be considered an extension offering additional services – historic data, business-consistent views, et cetera – that the source cannot offer. The transition from the source model to BW can then happen in a very dynamic way.

 

 

Q: On SAP HANA you can define normalized DWH models like Data Vault directly. Data Vault is quite popular with Dutch companies. Do you think Data Vault modeling is a valid alternative for SAP ERP data?


We call our team SAP EDW Product Management, so that implies that we cover both BW on HANA and what we call native HANA data warehouse modeling. A native HANA data warehouse can be modeled using any known DWH model (e.g. dimensional, 3NF, data vaults). That means freedom, but also risk – especially for customers who decide about their future DWH architecture based on sentiments and a BW perception that is driven by the past. We find all kinds of BW perceptions in the market: people who love it and, for whatever reason, people who dislike it. I have a quite good idea why people may dislike BW, but one thing is clear to me: sentiment is a bad advisor. With a bad perception of the traditional BW in mind, we have already seen customers who tried to build a native HANA data warehouse for SAP Business Suite sources, saying “we have an SAP source system, and other SAP tools like PowerDesigner and Data Services, so we are going to ‘vault it’”. To make a long story short: this finally ended up as a nightmare, as you have to rebuild all the semantics, associations and annotations natively. And it offers no business value, because with BW you get all this for free: BW knows these semantics because of the tight dictionary integration between SAP sources and BW.

In addition, Data Vault modeling assumes that you should always expect the worst from your sources. It assumes that source-model changes can happen frequently and at any time, forcing you to change your DWH models, links and so on. But that is not the reality with SAP source systems. The SAP source models are in general pretty stable, making the dimensional BW model work very well. Vaulting SAP sources in general brings in complexity that cannot be justified.

 

 

Q: This is the case with standard SAP content. There is, however, not a single customer I know of without quite a bit of customization in their SAP system. And this inability to adapt to such changes is a strong part of the criticism of BW.


Yes, you are right, and these customizations could not be modeled flexibly enough in the past. But this is no longer true with BW on HANA. With BW 7.40 SP8 we can now model a kind of dimensional satellite of a BW entity using Advanced DSOs with Open ODS Views on top, or directly in a CompositeProvider. Let me give you an example: you have all the standard SAP attributes in your 0COSTCENTER InfoObject. You have the requirement to model country-specific attributes, let’s say for the UK only. Today you store these attributes in an Advanced DSO and define an Open ODS View of type master on top of it. In any Open ODS View of type fact, or in a CompositeProvider, you can then associate/join the different views of the entity cost center, regardless of whether they come from an InfoObject like 0COSTCENTER or from an Open ODS View.

From my point of view, this will solve most modeling challenges customers had with such scenarios in the past: you load attributes with different ownership independently, you create new attributes without impacting the existing model, and you associate different attribute views and can even create dedicated authorizations.

 

Overall: I don’t believe that it makes sense to create data vaults for SAP ERP operational systems, because it adds complexity but no value. BW on HANA is flexible enough to model the volatility of SAP source models caused by customization. On the other hand, if you have multiple, highly volatile non-SAP sources, you are free to create a data vault DWH natively on SAP HANA. The result would then be a hybrid architecture between BW and a native HANA DWH.

 

 

This blog is the first half of the interview I conducted with Juergen Haupt. The second half will be posted shortly!


This blog addresses beginners in the SAP BW space who are exploring and learning BW. Here are the ways to find the function module which loads data into a VirtualProvider.

 

Background:

 

When a customer reports a data mismatch issue, you may have to find the root cause. In the case of a MultiProvider that combines a VirtualProvider and standard InfoCubes, it is necessary to understand the DataSources and the data flow from the various source systems. But a VirtualProvider has no manage screen, nor does it show the data flow, so the source of the data is not obvious.

 

Solution:

       

To find the source of a VirtualProvider, use one of the ways below.

 

I. Using RSA1,

 

1. Goto RSA1 ==> InfoProvider ==> locate your InfoProvider.

2. Go to display mode of the InfoProvider and click on the Information icon, or press Ctrl+F5.

VirtualProvider1.jpg

3. In subsequent pop up screen, choose 'Type/Attributes'.

VirtualProvider2.jpg

4. In the subsequent pop-up, find the details of the VirtualProvider. Click on the details icon to find the name of the function module and its source system.

VirtualProvider3.jpgVirtualProvider4.jpg

 

II. Using Database Tables,

 

1. Goto SE16 ==> Enter the table name RSDCUBE.

VirtualProvider5.jpg

2. Pass the InfoProvider name as input along with OBJVERS = A.

VirtualProvider6.jpg

3. Field 'Name of Function Module/Class/HANA mod' - FUNCNAME will give you the Function Module name.

VirtualProvider7.jpg
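If you prefer to read this from a small program instead of SE16, a select on RSDCUBE along the lines of the steps above might look like the sketch below. The report and variable names are my own; only the table RSDCUBE, the fields INFOCUBE, OBJVERS and FUNCNAME, and the OBJVERS = 'A' filter come from the steps above:

REPORT ZFIND_VIRTUALPROVIDER_FM.

" Technical name of the VirtualProvider to investigate
PARAMETERS: P_PROV TYPE RSDCUBE-INFOCUBE OBLIGATORY.

DATA: LV_FUNCNAME TYPE RSDCUBE-FUNCNAME.

" Read the active version (OBJVERS = 'A') of the InfoProvider definition
SELECT SINGLE FUNCNAME FROM RSDCUBE
  INTO LV_FUNCNAME
  WHERE INFOCUBE = P_PROV
    AND OBJVERS  = 'A'.

IF SY-SUBRC = 0 AND LV_FUNCNAME IS NOT INITIAL.
  WRITE: / 'Function Module/Class:', LV_FUNCNAME.
ELSE.
  WRITE: / 'No Function Module found for', P_PROV.
ENDIF.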

 

Conclusion:

Either of these two ways will give you the source of the VirtualProvider along with its SourceSystem for further investigation.


Thanks for reading the blog; feel free to share feedback or other ways to find the same information.


It is possible to have a great many Calculated Key Figures and Restricted Key Figures against a particular InfoProvider.

 

b11.png

 

And this is not a welcome sight for anyone looking to create a query and consume one of the CKFs or RKFs.

 

Solution


There is an ABAP report, RSZ_COMPONENT_TREE, with which the CKFs and RKFs for a particular InfoProvider can be grouped for ease of use.

 

 

How to do it


 

(1) In SE38 transaction, execute the ABAP Report "RSZ_COMPONENT_TREE"

 

(2) Key in the InfoProvider and execute

 

b12.png

 

(3) The list of all CKFs and RKFs will be shown

 

b13.png

 

(4) Right click on the "Calculated Key Figures" and select "Create Node", giving an appropriate name

 

b14.png

 

(5) Drag and drop the needed CKFs inside the node

 

(6) Next, save the operation using the "Save Hierarchy" button

 

(7) The created nodes will now be reflected in BEx and Eclipse BW Query Designer

 

(8) The same can be done for RKFs

Recently I was assigned a reporting requirement to compare measures (e.g. sales figures) for this year's dates with the same dates from last year. The table below gives an idea of the reporting requirement and the required report format.

 

The reason for such a reporting requirement is that the client needs to compare how good sales are against the same day of last year. This is to make sure that a marketing strategy can be put in place, or certain product lines discontinued, when a drastic change in sales is observed. Nevertheless, there could be multiple ways to look at this data.

 

This Year Date | Last Year Date | This Year Sales | Last Year Sales
01.01.2015     | 01.01.2014     | 5,500.00        | 4,000.00
02.01.2015     | 02.01.2014     | 4,000.00        | 3,500.00

 

In order to meet such a reporting requirement, a single InfoCube that contains the daily sales is created. In this InfoCube there should be three different dates: the current date, the last year date and the next year date. These dates should be populated accordingly during the data load.

 

In order to view the report format mentioned above we need to combine two different InfoProviders, but to prevent duplicating the previous year's data, we can instead create an InfoSet from the same InfoCube that we created previously and name it Sales Figure (Last Year). This InfoSet will eventually be used in the MultiProvider that combines the original InfoCube and the new InfoSet.

 

The MultiProvider should combine two InfoProviders, that is, the original InfoCube and the InfoSet. Within the MultiProvider there should be the This Year Date and Last Year Date InfoObjects. Additionally, two sets of key figures should be included, that is, the sales figures from the InfoCube and from the InfoSet.

 

The assignment of the InfoObjects to the InfoProviders should look like the table below.

 

InfoProvider        | This Year Date | Last Year Date | This Year Sales | Last Year Sales
InfoCube            | This Year Date | Last Year Date | Sales Figure    |
InfoSet (Last Year) | Next Year Date | This Year Date |                 | Sales Figure

 

Explanation:

In order to get the last year sales figures from the InfoSet, the "This Year Date" should be assigned to the Next Year Date InfoObject of the InfoSet, and "Last Year Date" should be assigned to the This Year Date InfoObject of the InfoSet. In this manner we are able to read the last year sales figure without duplicating the data in another InfoCube.

 

I hope the explanation above is detailed enough and clear for your understanding.

 

This blog is based on my original post on my own blog, but I wanted to post it in this space as well in order to gain points for my contribution.

 

Best Regards

David Yee

Reporting on a transitive attribute, aka an attribute of an attribute, can be a tricky thing to model if you don’t want to materialize the transitive attribute(s) in your data provider.

 

Imagine we have an infoobject Z__ATE which represents a calendar day and contains many (date related) attributes, like fiscal period (FISCPER), calendar week (CALWEEK), etc

 

pic2.jpg

 

All of these attributes have a direct relationship with Z__ATE (calendar day) and have been created as navigational attributes. So whenever and wherever Z__ATE is added to a data provider (be it a MultiProvider, composite provider, etc.), its navigational attributes can be used.

 

Further imagine we also have an infoobject called Z__SVL which contains Z_COUNTRY, Z_REGION and Z__ATE as a navigational attribute.

pic5.jpg

The above example implies that we can use Z_COUNTRY, Z_REGION and Z__ATE to navigate, but we're not able to use the attributes of Z__ATE to navigate. The attributes of Z__ATE, in this example CALWEEK and FISCPER, are so-called transitive attributes and can't be used to navigate when using infoobject Z__SVL.

 

When reporting on transitive attributes is required and you don’t want to materialize the transitive attributes (in this case add CALWEEK and FISCPER as navigational attributes to infoobject Z__SVL), using a composite provider might come in handy.

The below composite provider has been created (tcode RSA1, select InfoProvider, Right mouse click on an Infoarea and select Create CompositeProvider) in which a Proof of Concept (POC) DSO is combined with multiple entities of infoobject Z__DAT (which is the same as infoobject Z__ATE).

pic3.jpg

 

Instead of materializing (adding navigational attributes of Z__DAT to the POC DSO), a non-materialized link (left outer join) has been created multiple times.

For example: a “changed on” infoobject (see the red box above) from DSO ZPOC has been added to the composite provider, and this infoobject is (inner) joined with master data infoobject Z__ATE (see top right in the picture above). Via this modeling solution the transitive attributes (all navigational attributes of Z__ATE) can be used for reporting on this composite provider without materializing them.

 

(This blog has been cross-posted at http://www.thesventor.com )

As a member of Product Support I have recently seen a lot of incidents whose processing time could be improved if the necessary information for analysis were available in the incident when it is created. Therefore I decided to create this blog post to describe some information that is very relevant to provide when an incident is created in my area.

 

What is necessary to provide when opening a BW-WHM* incident?

 

 

To be able to analyse incidents in the BW-WHM* components, Support needs some initial information, and part of that information depends on the specific component.

 

The wiki page below describes in detail what customers should provide to SAP Support when opening incidents:

 

 

SAP Support Guidelines Data Staging - SAP NetWeaver Business Warehouse - SCN Wiki

 

 

Here I will summarize the most relevant information for the more common components (for components that are not listed here, you can check the wiki above):

 

 

  • CROSS-COMPONENT:

A step-by-step description of how Support can reproduce the issue on its own.

Connection to the relevant systems.

Logon information updated in the secure area as described in note 508140.

 

 

  • BW-WHM-DST-DTP

Technical name of the DTP

Last load request that failed or that should be checked

 

 

  • BW-WHM-DST-TRFN

Technical name of the Transformation

 

 

  • BW-WHM-DST-DS

Technical name of the DataSource

Connection to Source and Target System

 

 

  • BW-WHM-DST-PC

Technical name of the Process Chain

Last log ID that failed

 

  • BW-WHM-DBA*

Technical name of the InfoObject or InfoProvider

For developers and consultants, finding ways to simplify access to different environments and development tools can be of great help. This is especially true for SAP BW consultants, who are required to frequently access the BW system as well as one or more source systems, through several different development environments and frontend tools. While this series of articles will provide examples from my experience as a SAP BW consultant, some of it will also be relevant for other consultants and power users using SAP systems.

 

In this first part I will showcase the basic idea of working with shortcuts by showing how to simplify access to BEx Web. I will also present some other helpful shortcuts. In the second part I will deal with SAP shortcuts.

 

The basic concept

 

short.jpg

 

The concept involves creating a folder with batch scripts accessing the required environments and tools, and shortcuts referring to those scripts. The folder then has to be added to the Windows PATH environment variable. Once that is done, you can run any of the shortcuts by simply pressing Start+R and then typing the command. The idea is that the script handles as much of the environment and login details as possible (except, perhaps, authentication), while possibly allowing parametrization (such as launching a specific query from the command line).

 

If you don't get why this is of help, think about all the time spent in GUI screens searching for the system to log in to, or looking for a specific query. In organizations with many environments and/or many queries, this adds up to quite some time.


First example - a Bex Web 7 shortcut

 

Do note that the hyperlinks inside the scripts in this and the following examples should be modified before use.

 

In this simple first example we will create a script that calls BEx Web in the production environment with a query parameter. I'll get to the SAPGUI examples in the second part.

 

1. Create the shortcuts folder. If you want it to be available to other users, you should create it in a shared drive, however do note that this prevents some personal customizations, as you'll see in the second part. For this example, we'll assume you created a folder in the c drive called "shortcuts".

 

2. Add this folder to the PATH environment variable. Press Start+R and type:

 

setx path "%path%;c:\shortcuts"

 

You only have to run this command ONCE (the %path% part preserves your existing PATH entries while appending the shortcuts folder), unless the PATH variable gets overwritten in your organization when you restart the computer. In that case you could add a batch file with this command to the "Startup" folder in the Windows start menu.

 

3. Create a script. I've dealt a bit with the structure of the BEx Web URL here, but I'll repeat the relevant parts:

The URL we're launching should look something like this:

 

http://host:port/irj/servlet/prt/portal/prtroot/pcd!3aportal_content!2fcom.sap.pct!2fplatform_add_ons!2fcom.sap.ip.bi!2fiViews!2fcom.sap.ip.bi.bex?QUERY=%1

 

Where host and port are the relevant host and port for your production system. %1 is the parameter which is replaced at runtime with the technical name of the query. Some details regarding this structure can be found here.

 

Open up a text editor (notepad is fine for this), and paste in the following text, replacing host and port with the host and port of your production environment. If you're unsure regarding the host and port, launch a query in Bex Web from say the portal or Query Designer, and then grab them from the URL.

 

start "C:\Program Files\Internet Explorer\iexplore.exe" http://host:port/irj/servlet/prt/portal/prtroot/pcd!3aportal_content!2fcom.sap.pct!2fplatform_add_ons!2fcom.sap.ip.bi!2fiViews!2fcom.sap.ip.bi.bex?QUERY=%1

 

The "start" command causes the batch script window to close immediately after launching Internet Explorer.

Now save this as "bexw.bat" in c:\shortcuts.

 

4. Create the shortcut.

Open windows explorer and navigate to the shortcuts folder. Right-click bexw.bat and choose "Create Shortcut". Then right click the new file and choose "Rename". Rename the new file to "bexw".

 

Assuming the technical name of a query is DAILY_REPORT, you should be able to launch the query in BEx Web, in the production environment, by pressing Start+R and typing:


bexw DAILY_REPORT

 

You could create similar shortcuts for other environments, or even parametrize the host name, if that is more convenient to you.


Some other useful examples

Want to start Query Designer with a query already open? Here's a script that can do just that (but still requires you to authenticate and choose the environment).


start "C:\Program Files\Internet Explorer\iexplore.exe" http://host:port/sap/bc/bsp/sap/rsr_bex_launch/bexanalyzerportalwrapper.htm?TOOL=QD_EDIT^&QUERY=%1

You can see a bit of documentation for this in this SAP help page .

Notice that if you're on a split-stack environment, the port needs to be the ABAP port, not the Java port (the Java port is the one from the previous example).

 

You may want to configure your browser to automatically recognize the 3xbex files generated by the URL. In Internet Explorer 10, this is done by accessing the download window ( Ctrl+J ), right clicking the file, and unchecking "Always ask before opening this type of file".

 

A limitation of this method is that the query opened does not register in the bex "history" folder.

 

You can also launch Bex Analyzer in a similar manner. In the second part I'll show how to open Bex Analyzer with the SAP environment already chosen.

 

start "C:\Program Files\Internet Explorer\iexplore.exe" http:/host:port/sap/bc/bsp/sap/rsr_bex_launch/bexanalyzerportalwrapper.htm^?QUERY=%1

 

What about Bex Broadcaster? Here's a script for launching it with a query. This time we'll need the JAVA port. You could also launch it with the technical name of the broadcast setting by replacing "SOURCE_QUERY" with "SETTING_ID".

 

start "C:\Program Files\Internet Explorer\iexplore.exe" http://host:port/irj/servlet/prt/portal/prtroot/pcd!3aportal_content!2fcom.sap.pct!2fplatform_add_ons!2fcom.sap.ip.bi!2fiViews!2fcom.sap.ip.bi.bex3x?system=SAP_BW&CMD=START_BROADCASTER70&SOURCE_QUERY=%1


Don't you get annoyed with how note numbers are mentioned without URLs, and then you have to cut and paste that note into the SAP search site?

Here's a script with a note as a parameter. As a side note, you may also want to install a certificate for the SAP service marketplace.


start "C:\Program Files\Internet Explorer\iexplore.exe" http://service.sap.com/sap/support/notes/%1

 

As a final note, if you're a Firefox user, Ubiquity  is a neat alternative to the command line, although currently not in active development.

 

Next time: SAP shortcuts

Business requirement: for the 0VENDOR_ATTR DataSource, get the vendor's email ID.

The vendor email ID is stored in table ADR6, but this table does not contain the vendor number.

First, look up the vendor ID in table LFA1; this gives you the address number.

Example: vendor 00000001 has address number 1234567.

1.PNG

Go to the ADR6 table

2.PNG

Enter the address number 1234567; now you will get the e-mail address.

  • But the user wants to see the email address in reports based on 0VENDOR_ATTR.

According to the user requirement, we write the code in CMOD.

Then go to 0VENDOR_ATTR

3.PNG
Go to the extract structure and create an append structure, e.g. ZBW_EMAIL.

4.PNG

Function exit – EXIT_SAPLRSAP_002 – Master data attribute

5.PNG

CMOD CODE

WHEN '0VENDOR_ATTR'.

  FIELD-SYMBOLS : <FS_VEND> TYPE BIW_LFA1_S.
  DATA : IT_VENDOR TYPE STANDARD TABLE OF BIW_LFA1_S.

  TYPES : BEGIN OF LS_LFA1,
            LIFNR TYPE LIFNR,
            ADRNR TYPE ADRNR,
          END OF LS_LFA1.
  DATA : IT_LFA1 TYPE STANDARD TABLE OF LS_LFA1,
         WA_LFA1 LIKE LINE OF IT_LFA1.

  TYPES : BEGIN OF LS_ADR6,
            ADDRNUMBER TYPE AD_ADDRNUM,
            SMTP_ADDR  TYPE AD_SMTPADR,
          END OF LS_ADR6.
  DATA : IT_ADR6 TYPE STANDARD TABLE OF LS_ADR6,
         WA_ADR6 LIKE LINE OF IT_ADR6.

  IT_VENDOR[] = I_T_DATA[].

  IF IT_VENDOR[] IS NOT INITIAL.
    " Read the address number for every vendor in the data package
    SELECT LIFNR ADRNR FROM LFA1
      INTO TABLE IT_LFA1 FOR ALL ENTRIES IN IT_VENDOR
      WHERE LIFNR = IT_VENDOR-LIFNR.

    IF SY-SUBRC = 0.
      " Sort by LIFNR so the BINARY SEARCH on LIFNR below works
      SORT IT_LFA1 BY LIFNR.
      " Read the e-mail address for every address number found
      SELECT ADDRNUMBER SMTP_ADDR FROM ADR6
        INTO TABLE IT_ADR6 FOR ALL ENTRIES IN IT_LFA1
        WHERE ADDRNUMBER = IT_LFA1-ADRNR.

      IF SY-SUBRC = 0.
        SORT IT_ADR6 BY ADDRNUMBER.
      ENDIF.
    ENDIF.
  ENDIF.

  REFRESH IT_VENDOR[].
  " Fill the appended field ZZSMTP_ADDR for every record of the data package
  LOOP AT I_T_DATA ASSIGNING <FS_VEND>.
    READ TABLE IT_LFA1 INTO WA_LFA1
      WITH KEY LIFNR = <FS_VEND>-LIFNR BINARY SEARCH.
    IF SY-SUBRC = 0.
      READ TABLE IT_ADR6 INTO WA_ADR6
        WITH KEY ADDRNUMBER = WA_LFA1-ADRNR BINARY SEARCH.
      IF SY-SUBRC = 0.
        <FS_VEND>-ZZSMTP_ADDR = WA_ADR6-SMTP_ADDR.
      ENDIF.
    ENDIF.
    CLEAR : WA_LFA1, WA_ADR6.
  ENDLOOP.

0VENDOR_ATTR output with Vendor Email id

6.PNG

Hope it will help.

Thanks,

Phani

Below are a few reasons why scheduled process chains do not start executing:

 

a. The factory calendar: only the working days defined in the factory calendar will be considered for the execution of already scheduled process chains.

 

 

 

b. Authorization: when a process chain is scheduled by a user XXXX and the authorization for scheduling or executing process chains is later removed for that user, the process chain will fail to execute the next day because the authorization is missing. So it is recommended to always schedule process chains with the user ALEREMOTE, as this user has SAP_ALL access and there is no risk of the authorization being removed.

DataSources fetch data into the BW system from various application systems based on the delta method or a full load. A delta data load captures the records changed since the previous data load completed, based on the logic derived and the filters applied in the delta initialization in BW.

Sometimes manual checking of delta queues is necessary, because delays occur when a huge number of records is found in the delta queue. The conventional process involves manual checks of the delta queues to avoid delays in the regular data loads, but checking each delta queue separately consumes a lot of effort. The proposed approach addresses these gaps and gives the flexibility to select the type of application system or specific DataSources. Once the data is extracted, automatic e-mail alerts are sent to the support team through program logic.

 

Solution:

A custom DataSource has been implemented, based on an extractor function module, to get the number of records for all delta-enabled DataSources. The DataSource fetches the names of all delta-enabled DataSources and the number of records each one will bring in the next load. It runs a couple of hours before the regular data load starts and is followed by a program which checks the number of records for each delta-enabled DataSource, compares it with a threshold limit and triggers an e-mail to the IT Ops support team if more records are found in any DataSource than the usual record count. (In other words: a custom DataSource providing a list of all other delta-enabled DataSources, standard or custom, and the number of records their delta will bring in the next data load.)

 

To automate the regular monitoring of the delta loads, the steps below need to be performed:

 

  1. Create a generic DataSource based on a function module in the application system, with the following steps:

              1. The DataSource structure contains the DataSource name, the number of records and the number of LUWs.

              2. Create a function module to find the delta queue length (number of records and number of LUWs) for each DataSource. Use the standard function module 'RSC1_TRFC_QUEUE_READ' to fetch the number of records and number of LUWs for all delta-enabled DataSources in the given application system.
  2. Replicate the generic DataSource in the BW system.
  3. Create an InfoPackage and add it to the process chain.
  4. After loading the data to the PSA, run a program to check the queue length of each DataSource available in the PSA table.
  5. Create a program to check the loaded data in the PSA and compare it with the threshold limits for each DataSource (a minimal sketch of this check follows the list below).
  6. Set the threshold limit for each DataSource based on past data loads.
  7. The program checks the length of each DataSource in terms of the number of records; if any DataSource length is found to be greater than the threshold limit, it sends a mail to the group of recipients in the list.
  8. Write another program with the following steps (this program is for safety purposes):
  9. The second program waits for 20 minutes and then checks, in the process chain table, the status of the variant for the DataSource load from step 1.
  10. If the status is not green (G), it sends a mail to the group of recipients in the list.

Usually a delay occurs whenever there is a failure or a huge number of records is found.

  11. Finally, set global variables through transaction STVARVC to define the waiting time and the threshold limits for the DataSources instead of hard-coding them. These global variables can be modified at any time, as required, in the production environment.
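A minimal sketch of the threshold check from steps 5 to 7 could look like the code below. The report name, the staging table ZBW_DELTA_QUEUE, the TVARVC variable ZBW_DELTA_THRESHOLD and the field names are hypothetical placeholders; only the use of table TVARVC for the limits and the idea of comparing a record count against a threshold come from the steps above, and the actual e-mail alert is only indicated as a comment:

REPORT ZBW_CHECK_DELTA_QUEUE.

" Hypothetical structure of the staged delta queue information
TYPES: BEGIN OF TY_QUEUE,
         DATASOURCE TYPE C LENGTH 30,   " DataSource name
         NUMRECORDS TYPE I,             " records waiting in the delta queue
       END OF TY_QUEUE.

DATA: LT_QUEUE     TYPE STANDARD TABLE OF TY_QUEUE,
      LS_QUEUE     TYPE TY_QUEUE,
      LV_THRESHOLD TYPE I.

" Read the threshold limit maintained as a global variable in table TVARVC
SELECT SINGLE LOW FROM TVARVC
  INTO LV_THRESHOLD
  WHERE NAME = 'ZBW_DELTA_THRESHOLD'    " hypothetical variable name
    AND TYPE = 'P'.

" Read the staged queue lengths (hypothetical table filled by the generic DataSource load)
SELECT DATASOURCE NUMRECORDS FROM ZBW_DELTA_QUEUE
  INTO TABLE LT_QUEUE.

LOOP AT LT_QUEUE INTO LS_QUEUE.
  IF LS_QUEUE-NUMRECORDS > LV_THRESHOLD.
    " A DataSource exceeds its usual record count:
    " trigger the e-mail alert to the support team here
    " (for example with the standard BCS mail classes).
    WRITE: / 'Threshold exceeded for', LS_QUEUE-DATASOURCE,
             'Records:', LS_QUEUE-NUMRECORDS.
  ENDIF.
ENDLOOP.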

It has been a while since I worked on an archiving solution; during 2008 to 2009, on BW 3.5, it was the standard SAP-provided archiving solution SARA. Now I have got the chance to work on archiving on SAP BW on HANA. I hope you will find plenty of how-to documents for the NLS archiving solution.

As usual, SAP has improved the solution a lot and developed NLS with SAP Sybase IQ as a (columnar) database, compared with SARA and tape drives. Anyway, this blog is not meant to compare the two; I would like to share my learnings and tips & tricks for NLS archiving on SAP BW on HANA.

 

Before proceeding with the blog, I would like to thank my onsite coordinator for all the knowledge transfer.

Tips & Tricks:

  • Before starting with the NLS archive, list all the InfoProviders that need to be archived. Prioritize based on business requirements, user priority, and the volume and size of each InfoProvider.
  • Get the volume, size and years of data stored in each InfoProvider from the Basis team, so that you can quickly decide what has to be archived and what is not needed.
  • Archive in phases, e.g. run up to Step 50 – Verification Phase, and later schedule Step 70 – Deletion Phase (from BW).
  • If InfoProviders are loaded with historical values, i.e. if the daily delta brings changed data for previous years, then such InfoProviders can't be archived, because once data is archived to NLS it is locked against any change; it is no longer possible to load or change the old data.
  • For large-volume InfoProviders such as Sales cubes, find characteristics other than time to divide the data for archiving. Otherwise archive jobs might take a long time and a lot of memory (remember it is main memory in HANA, which is expensive). The system often runs out of allocated memory if archive jobs run in parallel.
  • Schedule/take a regular backup of both NLS and the existing SAP BW on HANA system before deleting data, so that in the worst case, if either system crashes, you can recover from the backups.
  • Have a tracker that includes the list of all InfoProviders and their current status along with the steps completed; a sample is below, and it may vary from person to person and per project requirement.

ArchivalTracker.jpg

  • Most important: schedule the archive jobs instead of executing them manually. This way you save time and effort and use the system effectively.

There are two ways to do it.

 

I. Using Process Chains

 

  1. Go to transaction RSPC and create a new process chain.
  2. Expand 'Data Target Administration' and pull the 'Archive Data from an InfoProvider' process into the chain.
    • ProcessChain1.jpg
  3. Create the process variant in the next prompt, enter the required details and find the desired InfoProvider.
    • ProcessChain2.jpg
  4. On the next screen enter the time slice or any other characteristic and select the step (here up to 50).
    • ProcessChain3.jpg
  5. The final process chain looks like below:

ProcessChain4.jpg

Note: If it is request-based archiving, then use the option "Continue Open Archiving Request(s)". Explore more on this; in my case the archiving is always based on a time slice.

 

Pros:

  1. Using a process chain is useful when archiving is done regularly and there is a requirement to archive data of a specific time/year to NLS.
  2. If the number of InfoProviders and time slices is limited, a process chain can be created and scheduled for each InfoProvider.

Cons:

  1. If the archive is time-based and a one-time activity, it is tedious to create a chain for each InfoProvider and its time slices.

 

II. Using Job scheduling technique

A process chain is not well suited to our requirement, hence we have gone for the job scheduling technique: schedule the jobs one after the other, using the completion of the previous job as the start condition.

 

Pros:

  1. It is instant, quick and easy. Most importantly, each job can be planned for a time when the system is free, and the number of jobs can be increased or decreased based on the available time. For example, just 6 months of data can be scheduled for archiving, or the complete 12 months, or 8 months.
  2. It is always flexible to change the archive steps or the time slice. Also there is no maintenance as with a process chain.

Cons:

  1. In scenarios like a memory-full situation, a job might be cancelled while the subsequent job still starts afterwards, which again tries to fill the memory, and the system might shut down, crash or slow down other processes.
  2. Once scheduled, you may not have the option to change anything due to limited authorization in production.

 

Let's look at how to schedule the jobs.

  1. Go to RSA1 – InfoProvider, search for the desired InfoProvider and go to its manage screen.
  2. If a DAP (Data Archiving Process) has already been created, there will be a new tab in the manage screen named 'Archiving'.
  3. Click on the Archiving Request button underneath; a new pop-up appears in which to enter the archiving conditions. Job1.jpg
  4. The first thing is to select STEP 50 as shown, then go to 'Further Restriction' and enter the criteria. Job2.jpg
  5. For the very first job, select a date and time at least half an hour from the current time, so that you can schedule the remaining jobs. Check and save, then click on 'In the Background' to schedule.
  6. Click on the Job Overview button (next to Archiving Request) in the Archiving tab of the manage screen to see the released job and its details.

         Job3.jpgJob4.jpg

7. For the next job, we enter the released job's name as the start condition. Just copy the name of the job when you schedule the next one. This step is simple because there is only one job in 'Released' status, and that one will be used as the start condition.

Job6.jpg

8. For subsequent job scheduling, we need the job ID of the released job. We can get this ID in two ways: either from the Job Overview (go to the released job, double-click on it, and clicking on the job details will give you the job ID), or, the moment you schedule the job, watch the footer (bottom left of the screen) for the job ID.

Job5.jpgORJob8.jpg


    • The reason why we need this ID is that there will now be two jobs in released status. For the third job we need to specify the start condition as "after completion of the second job". Here is the trick: if you just specify the job name, it will pick the old/first job. Hence, for the third job, simply select the date and time and click on the check button.
    • Then select the start condition "after job"; a pop-up appears to select the correct job as the start condition. In this screen, select the job whose job ID you noted for the second job.

Job7.jpg

   9. Once all jobs are scheduled, the released job status looks like below.

Job9.jpg

  10. Check the jobs once in a while via SM37; note that the jobs start one after the other.

Job10.jpg

 

Start archive jobs in parallel only after considering the available system memory, and work efficiently while these jobs are in progress. Again, this is my own experience; feel free to correct me if you find a more efficient way of achieving the task, or to add any steps if required.

 

Feel free to share your feedback, thanks for reading the blog.

 

Related Content:

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/60976558-dead-3110-ffad-e2f0af3ffb90?QuickLink=index&overridelayout=true&59197534240946

Executive Summary:


There is a failure in the BW data load due to a system-generated code issue. This problem occurs while loading data from the source (master data) to the target (DSO). The system-generated code (in the transformations, code of the format GPxxxxxxxx) is not properly replicated in the production system after one of the change requests is moved to production.


 

The screenshots below are taken from the quality and production systems. In the quality system the system-generated code is replicated properly; in the production system it is not.


In Quality System:


 

In Production System:


 

Resolution:


We need to re-transport the transformation that loads the data from the source (master data) to the target (DSO). By doing this, the system-generated code will be replicated properly in the production system.
