
Data Warehousing


With effective in-memory solutions and Hadoop data warehousing techniques, systems will optimize themselves so quickly that human intervention becomes superfluous. Experts are confident that big data will keep growing in the coming years in two ways: it will not look like earlier data cases, and data execution will be much faster.

 

The next revolution is big data analytics tooling that will not only optimize data processing but also improve the quality of data storage and keep it safe and secure. Data handling is no longer as difficult as it was once considered.

 

Hadoop datawarehousing is one of the most effective data storage techniques where your data is 100 percent safe and can be accessed quickly.

 

Hadoop Data Warehousing and BI


Business intelligence has become commonplace, and every project integrates BI tools or utilities for more effective and reliable output. It would not be wrong to say that modern Hadoop data warehousing solutions and other BI tools have overtaken conventional data analytics. When we work with modern data analytics tools, human intervention is largely superfluous.

 

Applications are already designed around user behavior and all operations are automated; machines or apps no longer spend much time waiting for user responses or unpredictable behavior.

 

As for Hadoop itself, it should capture enough data sets to reduce the chance of errors and other issues. The best part is that Hadoop is not expensive to implement: it is a low-cost investment and everything can be done in simple steps.

 

Hadoop data warehousing is an advanced utility where you can store all types of data, as convenient, in the form of data sets. Group similar data sets together and understand how a document database works and how it may benefit you. It is an extremely fast data analytics approach that allows deeper analysis and quick data execution.

 

Thanks for reading. Hope you liked it. Let us know your thoughts by adding comments to this article. Also, don't forget to share it on your social profiles. See you in the next post.

After publishing a post called 3 tips to survive as a SAP BI consultant in 2016 and another one called Enterprise Data Warehouse + SAP Vora = High Value, I got a great question from Sivaramakrishnan M, basically asking for some documents or links that could help him get started with SAP HANA and SAP Vora integration. After a quick search I realized it was not really easy to get hold of all the information you need, so here it is: everything you always wanted to know about SAP HANA and Hadoop but were afraid to ask.

 

Before we get started on the link material, we can benefit from some context. I'm assuming you already know what SAP HANA is: the SAP in-memory platform that allows real-time transactional and analytical processing, composed of several different engines (Geospatial, Predictive, Business Function Library and so on), which allowed for a huge transformation in what we are able to do. More information can be found here at SCN: SAP HANA and In-Memory Computing

 

According to Hortonworks, the definition of Hadoop is: "... an open source framework for distributed storage and processing of large sets of data on commodity hardware. Hadoop enables businesses to quickly gain insight from massive amounts of structured and unstructured data". For those really interested in what the Hadoop (Hortonworks) architecture looks like, I'd suggest the following link: http://hortonworks.com/hadoop/. With the concepts of each component clear, the next step is to define what the integration between those two components could look like. It basically depends on the use case you have:

  • Smart Data Access --> in case you need to read data out of Hadoop, you can use SAP HANA Smart Data Access (SDA) to do it. SDA is widely used when it comes to hybrid models (SAP HANA + SAP NetWeaver BW powered by SAP HANA) or even Near-Line Storage (NLS) scenarios. You can basically access a "table" in a different repository (mainstream databases all included) from SAP HANA without actually having to bring the data over to SAP HANA. So you could keep your "hot" data in SAP HANA and your cold data in Hadoop, and with SDA a simple UNION brings the data from both "tables" together (see the sketch after this list).
  • SAP BusinessObjects Universe --> in case you only need to report on Hadoop data out of the SAP BusinessObjects suite, you can combine data from any source with Hadoop using the Universe, the SAP BusinessObjects semantic layer, to get the job done. There you can set up relationships, rules, etc.
  • SAP Data Services 4.1 (and above) --> in case you really need to bring data from Hadoop into SAP HANA and maybe apply some heavy transformation on the way, that is your path to go. SAP Data Services has been tuned to read and write huge amounts of data in both directions.
  • SAP Lumira --> in case you only need front-end integration and less complex data handling and transformation, that is an easy way to go. SAP Lumira can access and combine data from Hadoop (an HDFS data set, a Hive or Impala data set, or an SAP Vora data set) and SAP HANA.
  • SAP Vora --> in case you need to correlate Hadoop and SAP HANA data for instant insight that drives contextually-aware decisions, which can be processed either in Hadoop or in SAP HANA.
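
To make the SDA option a bit more concrete, here is a minimal sketch of the hot/cold UNION idea, written as ABAP using ADBC native SQL. It assumes a Smart Data Access remote source named HADOOP_HIVE already exists, and the table, schema and column names (SALES_HOT, SALES_COLD, sales_archive, DOC_ID, AMOUNT) are purely illustrative placeholders, not objects from this post.

* Minimal sketch only: remote source, table and column names are assumptions.
REPORT zsda_union_sketch.

TYPES: BEGIN OF ty_sales,
         doc_id TYPE c LENGTH 10,
         amount TYPE p LENGTH 15 DECIMALS 2,
       END OF ty_sales.
DATA lt_sales TYPE STANDARD TABLE OF ty_sales.

DATA(lo_sql) = NEW cl_sql_statement( ).
TRY.
    " One-time DDL: expose the Hive table as a virtual table in SAP HANA
    lo_sql->execute_ddl(
      `CREATE VIRTUAL TABLE "SALES_COLD" ` &&
      `AT "HADOOP_HIVE"."HIVE"."default"."sales_archive"` ).

    " Read hot (SAP HANA) and cold (Hadoop) data with one UNION
    DATA(lo_res) = lo_sql->execute_query(
      `SELECT DOC_ID, AMOUNT FROM "SALES_HOT" ` &&
      `UNION ALL SELECT DOC_ID, AMOUNT FROM "SALES_COLD"` ).

    GET REFERENCE OF lt_sales INTO DATA(lr_sales).
    lo_res->set_param_table( lr_sales ).
    lo_res->next_package( ).   " fetch the combined rows into lt_sales
    lo_res->close( ).
  CATCH cx_sql_exception INTO DATA(lx_sql).
    DATA(lv_msg) = lx_sql->get_text( ).
    WRITE: / lv_msg.
ENDTRY.

WRITE: / |{ lines( lt_sales ) } rows combined from SAP HANA and Hadoop|.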

 

With all those use cases in mind, Hortonworks drew a great picture of what the architecture could look like:

http://hortonworks.com/wp-content/uploads/2013/09/SAP_MDA_May_20142.jpg

 

With all that clear, I believe we can jump directly to the main topic of this post. Please find below useful links and their descriptions to bring you up to speed when integrating SAP HANA with Hadoop.

 

Description --> Content
Hadoop and HANA Integration Overview --> Hadoop and HANA Integration
How to Use Hadoop with Your SAP® Software Landscape from a CIO viewpoint --> How to Use Hadoop with Your SAP® Software Landscape
Different methods of integrating SAP HANA with Hadoop --> http://hortonworks.com/partner/sap/
SAP Press reference book --> Integrating SAP HANA and Hadoop
SAP Help Vora landing page --> SAP HANA Vora 1.1 – SAP Help Portal Page
SAP HANA Data Warehousing Foundation 1.0, to integrate Hadoop into your SAP HANA model for cold (not frequently used) data --> SAP HANA Data Warehousing Foundation 1.0 – SAP Help Portal Page
How to start SAP HANA Spark Controller --> Start SAP HANA Spark Controller - SAP HANA Administration Guide - SAP Library
Calling a Hadoop MapReduce function from SAP HANA --> Creating a Virtual Function - SAP HANA Administration Guide - SAP Library
Adding Ambari to your SAP HANA Cockpit (once you integrate SAP HANA and Hadoop, it is pretty smart to manage everything in one shop) --> Adding Ambari URL to SAP HANA Cockpit - SAP HANA Administration Guide - SAP Library

How to go from ZERO to a working application using SAP Lumira, SAP HANA and SAP Vora with Hadoop in 12 steps

SAP HANA Vora and Hadoop by Stephan Kessler & Óscar Puertas at Big Data Spain 2015
How to get access to SAP Vora Development Edition
SAP HANA Integration with Hadoop using SDI (Smart Data Integration) to power Smart Forms
SAP HANA Integration with Hadoop using SDA (Smart Data Access)
SAP HANA Integration with Hadoop using SAP Data Services
SAP HANA VORA & Hadoop

 

I'm very confident that once you reach the bottom of this post (and visit all or at least most of the links I have compiled here) you will be able to get your SAP HANA and Hadoop integration going. In case you have additional links that I should include on this post, please, let me know via comments and I'll be more than glad to add them here.

 

All the best,

You started using SAP NetWeaver BW back in 2001, when the ERP project team would go in and state formally: all reporting requirements will be covered by the "BW Team". Or even better, you are a customer who has been using an entirely different BI solution for years and now your company is looking with eyes full of desire at SAP HANA and all the innovations and possibilities it brings to the table: S/4HANA, cloud solutions, Platform as a Service, Suite on HANA, HANA Live, Digital Boardroom and many, many more.

 

In any case, if you are an analytics consultant you are wondering: what data should I keep where? Which data should I report against directly in S/4HANA, in SAP BW, or against this new beast called SAP Vora (plus an additional Hadoop environment)? My colleagues have already discussed at length the need for an Enterprise Data Warehouse and, as a consequence, the fact that SAP BW is not dead. On top of that, you have all seen the announcement of SAP Vora. So, does it change everything? Don't we need an EDW anymore once we have Hadoop and its OLAP engine powered by SAP in-memory technology?

 

The fact is, those are different things that serve different purposes. Your EDW environment will still be your single point of truth, allowing multidimensional data modeling to support your decision-making process and business querying, while Hadoop should be leveraged for storing all kinds of data, at the smallest possible granularity, from sources that demand constant streaming or constant updates of data. Yes, that is a rule of thumb, and yes, in an ideal world you should evaluate case by case, but it is a (good) starting point.

 

Let us discuss a couple of use cases:

  • Your customer wants to store temperature sensor data from thousands of different sensors across the world in order to analyze subtle temperature changes driven by ecological phenomena. In that case, I would go with a streaming acquisition tool to simply dump that sensor data into Hadoop, then use SAP Vora to analyze the data and integrate it into an EDW (BW on HANA) in order to run the advanced analytics and prediction algorithms that allow the customer not only to report on the past, but to find patterns and predict consequences. Why not put it all in the EDW then? Simple: the distributed architecture based on commodity hardware used by Hadoop is simply the most cost-efficient way to do it; you then use the premium solution (which also requires more investment) to do the advanced stuff.
  • Your customer has an e-commerce website where they sell their products. It works pretty well and now your customer wants to go one step further. They want access to all navigation data from everybody who visits the website (logged in or not) and to perform a cross-system analysis to try to identify which navigation patterns (searching for products, category A and then category B, or even visiting today and coming back 38 hours later, etc.) lead to certain buying behaviors and, based on that, create specific offers to try to speed up the purchase experience and ultimately increase revenue. In this case, all navigation data could be stored in Hadoop, the primary analysis would be powered by SAP Vora, and the cross-analysis with sales data could be done directly against S/4HANA to allow real-time decision making and identification of new patterns.


In short, new business models and business needs require that we are ready to face new challenges. It is up to us what we can offer our customers so that they can make the difference in the digital market.


All the best,

Dear Followers

 

 

Over the past weeks we have seen many incidents opened by BW customers who are receiving a syntax error in GP_ERR_RSTRAN_MASTER_TMPL_M when trying to activate or transport transformations.

 

 

Syntax error in GP_ERR_RSTRAN_MASTER_TMPL_M

Message no. RG102

 

 

There are many causes for this error: they can range from transformation routines, to inconsistencies in the mapping of fields, to program errors.

With this error in mind, I decided to create this blog to provide some notes that can be checked before opening an incident with SAP.

 

1725511 - 730SP8: Metadata of Hierarchy transformations corrupted

1762252 - Syntax error in GP_ERR_RSTRAN_MASTER_TMPL

1809029 - Automatic repair for incorr field in functional TRFN group

1818702 - 730SP10: Syntax error in GP_ERR_RSTRAN_MASTER_TMPL_M during activation of transformation

1889969 - 730SP11:Syntax error in GP_ERR_RSTRAN_MASTER_TMPL for RECORDMODE

1919139 - NetWeaver BW 7.30 (Support Package 11) - hierarchy error in transformation

1919235 - "Syntax error in routine" of a migrated transformation/during migration

1933651 - Syntax error in GP_ERR_RSTRAN_MASTER_TMPL for rule type "Time Distribution"

1946031 - Syntax error GP_ERR_RSTRAN_MASTER_TMPL during activation of transformation

2038924 - 730SP13:Transformation with target as outbound Infosurce of an SPO cannot be activated

2124482 - SAP BW 7.40(SP11) Activation failed for Transformation

2128157 - SAP BW 7.40(SP11) advanced DSO read in Transformation

2152631 - 730SP14: Syntax error during activation of Transformations

 

Regards,

Janaina Steffens

In my last blog post I covered Package Size & Impacts of Package Size in DTP. Continuing with the DTP theme, I am about to write on a topic discussed on the SCN forum on and off. I intend to write this blog post because I myself struggled to know exactly how the processing of data with the allocated processes happens inside a DTP, and which is the best available mode. After gaining a deeper understanding, I thought of writing a blog post on this very interesting topic. Let's start.

To start with, there are 3 modes of data processing available in a DTP, listed below:

 

1. Parallel Extraction & Processing

2. Serial Extraction & Immediate Parallel Processing

3. Serial Extraction & Serial Processing

 

Let us get into details one-by-one.


1. Parallel Extraction & Processing :

As the mode name suggests, it extracts and processes the data in parallel, i.e. simultaneously. In this mode, all available processes extract data in parallel and, as soon as extraction is completed, the data processing starts. Here, data processing means updating the data to the target. In this mode, one process might be updating a data packet to the target while others are still extracting. Hence, the work is done in parallel.

 

SAP also refers to this processing mode as P1.

 

Example: If we assign 6 parallel processes, each process extracts one data packet at a time and updates that same data packet to the target. Hence, in this case 6 data packets are processed simultaneously and updated to the target accordingly. This continues until all available packets have been processed.

 

2. Serial Extraction & Immediate Parallel Processing :

When we select this mode, one process extracts the data from the source and, once the extraction of a packet is complete, that data packet is processed by another process. In the meantime the first process starts extracting the next package, and the cycle continues.

 

Now, if the first process has not yet completed extracting the first package, a new process will start extracting, depending on the number of parallel processes assigned/allocated. Hence, as the name suggests, the data is extracted serially from the source and, as soon as extraction is finished, the packet is processed and updated to the target.

 

SAP also refers to this processing mode as P2.


Example: If we have 6 processes assigned for this mode, one process extracts the packets one by one while the other processes do the processing and updating to the target in parallel. As explained above, if the first process is still extracting, the second process jumps in to extract the next packet. Extraction therefore takes place serially, and the packets are then processed in parallel.

 

3. Serial Extraction & Serial Processing :

In this mode only one process first extracts a data packet from the source, and then the same process updates that data packet to the target without the involvement of any other process. Once the data packet has been extracted from the source and updated to the target, the process moves on to the next data packet.

This processing mode is rarely used because of its poor performance and is generally not recommended.


SAP also refers to this processing mode as P3.


Example: As we cannot allocate or define a maximum number of background processes for this mode, one process does the extraction and the update while the others do other jobs.


Summary & Performance Comparison:

Although the selection of the processing mode in a DTP depends on the combination of data source, transformations/mapping and data target, the processing mode should be chosen so as to optimize the performance of the DTP. Based on the above, I can state that P1 generally performs better than P2 and P3, whereas P3 has the lowest performance of the three.
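
To make that comparison a bit more tangible, here is a rough back-of-the-envelope sketch in ABAP. The per-packet timings and the simplified formulas are my own assumptions, purely for illustration (real behaviour depends on system load, transformations and so on); they just show why P1 tends to finish first and P3 last.

* Rough model only - assumed timings and simplified formulas, not SAP's algorithm.
CONSTANTS: lc_packets   TYPE i VALUE 24,  " data packets in the request (assumed)
           lc_processes TYPE i VALUE 6,   " parallel processes assigned (assumed)
           lc_t_extract TYPE i VALUE 10,  " seconds to extract one packet (assumed)
           lc_t_process TYPE i VALUE 20.  " seconds to process/update one packet (assumed)

" P1: every process extracts and processes its own packets in parallel
DATA(lv_p1) = ( ( lc_packets + lc_processes - 1 ) DIV lc_processes )
              * ( lc_t_extract + lc_t_process ).

" P2: one process extracts serially, the others process in parallel behind it
DATA(lv_p2) = lc_packets * lc_t_extract + lc_t_process.

" P3: one process extracts and processes every packet, one after the other
DATA(lv_p3) = lc_packets * ( lc_t_extract + lc_t_process ).

WRITE: / |P1 approx. { lv_p1 }s, P2 approx. { lv_p2 }s, P3 approx. { lv_p3 }s|.
" -> P1 approx. 120s, P2 approx. 260s, P3 approx. 720s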


Hope this blog helps you understand how data is processed in a DTP by the different processing modes, from extracting the data from the source, through processing it, to updating it to the data target.


Author: Shubham Vyas

Company: Mindtree Ltd. (Bangalore/India)

Created on: 27 June 2015














Many BW consultants have raised queries on SCN about the performance issues they face while running DTPs. To eliminate the issue of slow or long-running DTPs, it is essential to understand the concept of package size and its impact on the DTP. Once we are clear about it, we may be able to eliminate the issue of long-running DTPs.

 

To start with, understanding of Package & Package Size in DTP is important. Let's do that first.


What is a Package?

A package is a bundle of data; it contains a group of records.

 

What do you mean by Package Size?

It is the number of records a package can hold. By default, the package size of a DTP is 50,000. You can increase or decrease the package size in the DTP depending on the processing type, transformations, routines and data volume.

 

Do notice that there are two types of loading in SAP BI:

1. Infopackage loading:

    Default Package Size 20000


2. DTP loading:

    Default Package Size is 50000
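
As a quick illustration of what that default means in practice (the record count below is made up), the package size directly determines how many data packages a single DTP request is split into:

* Hypothetical numbers, only to show how package size drives the package count.
DATA(lv_records)      = 1200000.  " records selected by the DTP request (assumed)
DATA(lv_package_size) = 50000.    " default DTP package size
DATA(lv_packages)     = ( lv_records + lv_package_size - 1 ) DIV lv_package_size.
WRITE: / |{ lv_packages } data packages will be processed|.  " -> 24

Each of these packages then runs through the transformation (and any routines) as one unit, which is why package size interacts so strongly with look-ups and parallel processing, as discussed below.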


In this blog I will specifically talk about DTP packages. The DTP package size plays an important role in loading data to InfoProviders. A good consultant should consider package size as a major factor while designing a project skeleton, for the reasons below.


Impacts of Package Size in DTP:


  •      When to keep Package Size less than 50000:


1. If we are dealing with lots of look-ups in transformations, keeping the package size smaller helps the routines execute faster, because the look-up effort in a transformation is directly proportional to the number of records.

    As routines run at package level, the bigger the package size, the longer it takes to complete the look-up. Simply put: MORE SIZE = MORE TIME.

2. If we have a large volume of daily loads, reducing the package size is a good option. Data with high volume obviously requires more time for the transfer process. If the size is reduced, the processing time of each package falls considerably, eventually speeding up the DTP.


3. If we have allocated only a few parallel processes in the DTP and the data volume is huge, it is again better to reduce the package size: since fewer parallel processes are available, reducing the size keeps the processing time of each package manageable.


 

  • When to keep Package Size greater than 50000:


1. Obviously, if the data volume is small, keeping a larger package size won't affect the performance of the DTP.


2. Sometimes there are no look-ups/routines in a transformation and the mapping is direct. In this situation, increasing the data package size will not hamper the DTP.




Hope this blog helps you understand the concept of package size in a DTP along with its impact.


 

 

Author: Shubham Vyas  

Company: Mindtree Ltd. (Bangalore/India)

Created on: 19 June 2015

Courtney Driscoll

SAPPHIRE 2015

Posted by Courtney Driscoll May 1, 2015

With the annual SAPPHIRE NOW conference from May 5-7th in Orlando just days away, I want to share some fantastic Data Warehouse sessions that need to be on your agenda. Here are some of the top data warehouse sessions you shouldn’t miss.

 

 

  • Simplify Your Data Warehouse While Meeting Tough Requirements

When: May 5, 2015 2:30PM- 2:50PM  Theater 5

Customer: AmerisourceBergen

 

Explore ways to meet the new demands expected of a data warehouse, including handling greater data volumes with lower latency. See how drug wholesaler AmerisourceBergen consolidated business warehouses and accelerators into one 14 TB instance of the SAP BW powered by SAP HANA application with near-line storage and achieved 100% performance gains.

 

  • SAP BW on SAP HANA migration – A Story of Failed Start to Amazing & Successful Ending

When: Tuesday, May 5, 2015 12:30PM -1:30PM S330G

Customer: Under Armour

 

Under Armour started its SAP Business Warehouse (BW) on SAP HANA migration journey in 2013 but encountered significant issues and failures that resulted in shelving and cancelling the initiative. In 2014, a fresh approach and migration strategy were adopted, and the executed plan achieved amazing results: zero impact to the business, improved speed, and no post-go-live issues.

 

  • Panel: Get Real-World Insights from Our SAP HANA Innovation Award Winners

When: Wednesday, May 6, 2015 5:30PM- 6:00PM Theater 5

Customers: The SAP Innovation Award winners will be selected from the award finalists

 

Envision the possibilities for transforming work and everyday life with in-memory computing. Meet the winners of the SAP HANA Innovation Award, and learn how they are successfully using the SAP HANA platform to drive innovation and positively impact their workforce, partners, customers, and society.

 

  • Modernize Business with an In-Memory Data Warehouse Architecture

When: Wednesday, May 6, 2015 11:00AM - 11:20AM PT410

 

Recognize quantifiable, observable financial benefits with an in-memory data warehouse. Learn how the SAP Business Warehouse powered by SAP HANA application provides an approach to establishing a business case for modernizing your organization.

 

  • Manage Your Data by Value with Dynamic Tiering

When: Wednesday, May 6, 2015 5:00PM - 5:45PM PT415

 

This discussion looks at how dynamic tiering can transform businesses by managing all data costs effectively while maintaining performance. See how SAP HANA dynamic tiering lets business warehouse users create powerful analytic applications that can operate on large data sets while significantly reducing the in-memory footprint and lowering costs.

 

Of course, this is only a small selection of the many great HANA Platform and Data Warehouse sessions that will be available to you at SAPPHIRE NOW. More information for planning your agenda can be found here:

 

The SAP HANA experience at SAPPHIRE includes hundreds of sessions packed with insights from customers, partners and HANA experts; demonstrations showcasing the latest innovations; Special Interest Activities (SIAs) for networking and fun, and much more.

 

 

But, if I don't see you in Orlando, you can still be part of the action by watching online live or on-demand. Online broadcast

Business Scenario

 

An SAP BW system may need optimization and performance improvements so that the whole BW system remains intact and efficient enough to fulfill the business needs.

 

The following is a brief, high-level summary of points that may serve as a starting point: a general approach to assess the system and move forward with optimization techniques. There are a number of performance tuning techniques available in the BW area; we can incorporate all, any or some of them after getting familiar with these key points.

This may help a newcomer who has just joined the project, or an existing consultant. These key points can help define an optimization project for an existing system and are independent of the BW version.


SAP - BW Optimization Approach at a High Level


  1. Understand the system landscape.
  2. Get familiar with the BW architecture and the reporting environment.
  3. Understand thoroughly the business areas where the analytics are performed.
  4. Identify the KPIs and key reports (management & analytical).
  5. Collect and gather information and understand the roadblocks and business pain points.
  6. Analyze the optimization / performance tuning steps taken so far and understand the changes made in the system landscape.
  7. Gather information on the highly impacted areas and where the business needs the system to be optimized.
  8. Prioritize the areas where the optimization process should start.
  9. Segregate the improvement areas into data extraction (including the source side), transformation/staging, and the analytical/reporting side.
  10. Take an inventory of the objects pertinent to each of the areas defined above.
  11. Identify the key areas where the system can be improved/optimized by removing or enhancing the changes done so far.
  12. Analyze the impact of changing the existing optimization / performance tuning steps taken so far.
  13. Identify the best performance tuning techniques which can be implemented in the areas from point 9.
  14. Come up with a solution suggestion describing where the existing changes can be removed or enhanced in the areas defined in point 9.
  15. Analyze and identify how to keep the system live, up and running while performing the optimization, and make sure the business is not impacted at any point in time.
  16. Draw up an optimization/project plan based on the prioritized business areas and get the object list from the inventory, segregated by improvement area.
  17. Come up with a plan identifying the deliverables and timelines (project management work will be involved at this point).
  18. Collaborate and explain the plan internally, then seek approval from the business stakeholders.
  19. Create a unit test plan for Dev and a regression test plan for QA to accommodate the finalized plan.
  20. Start implementing the optimization as planned, with extra care, and make sure to test multiple times from both the BW perspective and the business perspective so that nothing breaks.
  21. Continue testing in Dev, meet with the internal team, and prove the performance/optimization achieved against the optimization done earlier.
  22. Finalize and confirm that the optimization plan works as expected, then move on to the next level in the system landscape.

 

Hope this gives beginners and entry-level consultants an idea of how to approach optimization/performance tuning projects.

Hi,

 

I decided to write this blog after I realized that it's not so easy to check which queries use a given aggregate. There was a need to optimize aggregates which had been created on different occasions, but to do this I had to know which queries work with them.

 

Checking whether an aggregate has been used by any query is possible in RSA1 via the "Maintain aggregates" option for a given cube. There we can see how many times an aggregate has been used since its last activation.

1. RSA maintain aggregates.png

 

But we don’t see which exact query used an aggregate.

 

To check that, two tables should be used:

1. RSDDSTAT_DM,

2. RSDDSTAT_OLAP.

  

We have to use both tables because neither of them contains all the fields that we need, i.e. the aggregate number and the query's technical name.

In RSDDSTAT_DM we have the aggregate number and the STEPUID field, which we use to join that table with RSDDSTAT_OLAP.

2. RSDDSTAT_DM.png

 

This table does not contain the query's technical name.

The query's technical name can be found in RSDDSTAT_OLAP, together with STEPUID.

 

3. RSDDSTAT_OLAP.png

The simplest possible way (no ABAP, no need to define a BW data source) to combine the data mentioned above is to use the SAP QuickViewer functionality (transaction SQVI).

 

All we need to do now is to define and run query joining the data.

1. Give a query code name

2. Start creating the query by pressing the "Create a query" button

4.SQVI create query.png

 

3. Write a query title

4. Define Data source as "Table join"

5.SQVI table join.png

 

5. Add both tables to the project

 

6.SQVI add table.png

 

6. Join the tables on STEPUID only.

 

7.SQVI connecting tables.png

Press F3 to exit the window after all is done.

 

7. Define which fields should be displayed in the query and which should be used in the filter. Use the appropriate checkboxes to do it.

7.SQVI list of fields.png

What was chosen can be seen here

8.SQVI list of fields ddd.png

 

And here:

9.SQVI list of fields ddd.png

 

8. Set Export as MS Excel

10.SQVI to excel.png

9. Save changes

11.SQVI to excel save.png

 

10. Run query

 

It’s important to restrict “Internal type of query …” to OLAP.

 

11.SQVI ograniczenie.png

 

11. After the data is displayed in MS Excel, the functionality for removing duplicates may be used.

11.SQVI wyswietlenie.png
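
For those who prefer a quick ABAP check over SQVI, a minimal sketch of the same join could look like the snippet below. Only the STEPUID join comes from the steps above; the column names for the aggregate (AGGREGATE) and the query technical name (OBJNAME) are assumptions on my side, so please verify them against the field lists of RSDDSTAT_DM and RSDDSTAT_OLAP in SE11 before using it.

* Sketch only: the join on STEPUID is as described above, but the AGGREGATE and
* OBJNAME column names are assumptions - check both tables in SE11 first.
SELECT dm~aggregate,
       olap~objname AS query_name,
       COUNT( * )   AS executions
  FROM rsddstat_dm AS dm
  INNER JOIN rsddstat_olap AS olap
    ON olap~stepuid = dm~stepuid
  WHERE dm~aggregate <> @space
  GROUP BY dm~aggregate, olap~objname
  INTO TABLE @DATA(lt_usage).

LOOP AT lt_usage INTO DATA(ls_usage).
  WRITE: / ls_usage-aggregate, ls_usage-query_name, ls_usage-executions.
ENDLOOP.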

 

Thank you for reading this blog.

I would be very grateful for any comment on it.

 

I'd like to give special thanks to Tomasz Piwowarski for helping me while creating the QuickViewer query.

 

Regards, Leszek

Hi,

 

I tried to make use of the function module RSD_IOBJ_USAGE to create a where-used list for InfoObjects that are relevant for analysis authorizations.

 

Could be useful for some of you.

 

I'm thankful for any hint concerning my ABAP.

 

 

Cheers




REPORT ZIOBJ_MATRIX.
**********************************************************************
TYPES:
       BEGIN OF ls_infoobj,
          RSDIOBJNM TYPE RSDIOBJNM,
       END OF ls_infoobj,
       tt_infoobj TYPE STANDARD TABLE OF ls_infoobj,
       BEGIN OF ls_report,
        infoobj   TYPE RSDIOBJNM,
        name      TYPE RSDIOBJNM,
        typ       TYPE STRING,
        counter   TYPE n,
       END OF ls_report.
DATA: lt_infoobj TYPE tt_infoobj,
      wa_infoobj TYPE ls_infoobj,
      lt_cube TYPE TABLE OF rs_s_used_by,
      lt_multi TYPE TABLE OF rs_s_used_by,
      lt_iobj TYPE TABLE OF rs_s_used_by,
      wa_used_by TYPE rs_s_used_by,
      lt_odso TYPE RSO_T_TLOGO_ASC,
      wa_odso TYPE RSO_S_TLOGO_ASC,
      lt_iset TYPE RSO_T_TLOGO_ASC,
      wa_asc TYPE RSO_S_TLOGO_ASC,
      lt_viobj TYPE RSD_T_VIOBJ,
      wa_viobj TYPE RSD_S_VIOBJ,
      lt_report TYPE TABLE OF ls_report,
      wa_report TYPE ls_report.
**********************************************************************
START-OF-SELECTION.
* 1. Part: Collection of relevant elements
CALL FUNCTION 'RSEC_GET_AUTHREL_INFOOBJECTS'
* EXPORTING
*   I_INFOPROV                      =
*   I_IOBJNM                        =
*   I_CONVERT_ISET_NAMES            = RS_C_TRUE
 IMPORTING
    E_T_IOBJ                        = lt_viobj
* EXCEPTIONS
*   COULD_NOT_GET_INFOOBJECTS       = 1
*   OTHERS                          = 2
          .
IF SY-SUBRC <> 0.
* Implement suitable error handling here
ENDIF.
LOOP AT lt_viobj INTO wa_viobj WHERE IOBJNM NS '0TC'.
    wa_infoobj-RSDIOBJNM = wa_viobj-IOBJNM.
    APPEND wa_infoobj TO lt_infoobj.
ENDLOOP.
* 2. Usage of relevant elements
LOOP AT lt_infoobj INTO wa_infoobj.
CLEAR: lt_cube, lt_multi, lt_iobj, lt_odso, lt_iset.
CALL FUNCTION 'RSD_IOBJ_USAGE'
  EXPORTING
    I_IOBJNM                    = wa_infoobj-RSDIOBJNM
*   p_IOBJNM
*   I_IOBJTP                    = RS_C_SPACE3
*   I_TH_TLOGO                  =
*   I_BYPASS_BUFFER             = RS_C_FALSE
    I_OBJVERS                   = 'A'
*   I_INCLUDE_ATR_IN_ISET       = RS_C_FALSE
*   I_INCLUDE_ATR_IN_REF        = RS_C_FALSE
IMPORTING
    E_T_CUBE                    = lt_cube
*   E_T_IOBC                    =
*   E_T_ISCS                    =
*   E_T_ISNEW                   =
*   E_T_TABL                    =
*   E_T_CMP_IOBJ                =
*   E_T_ATR_IOBJ                =
*   E_T_ATR_NAV_IOBJ            =
    E_T_MPRO_IOBJ               = lt_multi
*   E_T_NAIP_IOBJ               =
*   E_T_HIECHA_IOBJ             =
*   E_T_ICE_KYF                 =
*   E_T_AGGRCHA_IOBJ            =
*   E_T_CHABAS_IOBJ             =
*   E_T_UNI_IOBJ                =
    E_T_IOBJ                    = lt_iobj
*   E_T_CMP_KYF                 =
*   E_T_ISMP                    =
*   E_T_ISMP_INT                =
    E_T_ODSO                    = lt_odso
    E_T_ISET                    = lt_iset.
*   E_T_MPRO                    =
*   E_T_UPDR                    =
*   E_T_ANMO                    =
*   E_T_AQSG                    =
*   E_T_QUERY                   =
*   E_T_DAS                     =
*   E_T_KPI                     =
*   E_T_TRFN                    =
*   E_T_DTP                     =
*   E_T_HYBR                    =
*   E_T_DAP                     =
*   E_T_DMOD                    =
*   E_T_COPR                    =
*   E_T_BPF                     =
*   E_T_APPL                    =
*   E_T_FBP                     =
*   E_T_HCPR                    =
*   E_T_QPROV                   =
* EXCEPTIONS
*   ILLEGAL_INPUT               = 1
*   OTHERS                      = 2
IF SY-SUBRC <> 0.
* Implement suitable error handling here
ENDIF.
* Used in InfoCube
LOOP AT lt_cube INTO wa_used_by.
  wa_report-name = wa_used_by-tobjnm.
  wa_report-typ = 'Info Cube'.
  wa_report-infoobj = wa_infoobj-rsdiobjnm.
    APPEND wa_report TO lt_report.
ENDLOOP.
* Used in MultiProvider
LOOP AT lt_multi INTO wa_used_by.
  wa_report-name = wa_used_by-tobjnm.
  wa_report-typ = 'Multi Provider'.
  wa_report-infoobj = wa_infoobj-rsdiobjnm.
    APPEND wa_report TO lt_report.
ENDLOOP.
* Used in InfoObject
LOOP AT lt_iobj INTO wa_used_by.
  wa_report-name = wa_used_by-tobjnm.
  wa_report-typ = 'Info Object'.
  wa_report-infoobj = wa_infoobj-rsdiobjnm.
    APPEND wa_report TO lt_report.
ENDLOOP.
* Used in DSO
LOOP AT lt_odso INTO wa_asc.
  wa_report-typ = 'DSO'.
  wa_report-name = wa_asc-objnm.
  wa_report-infoobj = wa_infoobj-rsdiobjnm.
    APPEND wa_report TO lt_report.
ENDLOOP.
* Used in InfoSet
LOOP AT lt_iset INTO wa_asc.
  wa_report-typ = 'InfoSet'.
  wa_report-name = wa_asc-objnm.
  wa_report-infoobj = wa_infoobj-rsdiobjnm.
    APPEND wa_report TO lt_report.
ENDLOOP.
ENDLOOP.
* 3. Part: Report
WRITE: 'Where-used list:'.
LOOP AT lt_infoobj INTO wa_infoobj.
  WRITE: / wa_infoobj-rsdiobjnm.
ENDLOOP.
SKIP 1.
WRITE: / '',
       /(30) 'Info Objects', (30) 'Info Provider', (30) 'Typ'.
LOOP AT lt_report INTO wa_report.
  WRITE: / wa_report-infoobj, wa_report-name, wa_report-typ.
ENDLOOP.

Going around various organizations, I often hear complaints about how SAP BI/BW is "slow", "expensive", "rigid", and "carries long development cycles".  In the era of HANA, the prediction of BW's immediate demise resonates seemingly everywhere.

 

Yet, for all the years I have been working with SAP BW, I find it is a tremendous tool for both developers and self-servicing business users.  First coming to mind: It has a model driven GUI, and is database agnostic.  You don't need a DBA every time you want to touch a data model.  Even data staging, governance, and DB statistics come packaged within the graphical system administration tools and transactions.

 

Do you want to create a master data object with Attributes and Texts ? Go ahead open Tx RSD1, and fill in a few "forms" as in wizards.  The underlying tables and joins are created for you in the underlying database.  You don't need to have specific database and SQL knowledge.  Even in the wonderful world of HANA today, you need to bring a couple of wonderful tables into a join, define the join type as Text Join, and remember to add in the language column.

 

What about the "Long Development cycles" ? BW data model, even under the best practice, is a little extra compared to working directly with tables.  We see SAP trying hard to bring the "agile" and "fast" into BW:  Open ODS, ODP, Transient providers are a few.....


That still doesn't sound fast enough ? Well, have you visited a brick-and-mortar warehouse lately ? If you haven't, I encourage you to do so whenever you get a chance.  Large warehouses run by the likes of Fedex or UPS are a thing of beauty, if you enjoy watching well-run processes.

 

When I stopped by a warehouse the first time, I noticed one significant overhead--or a "waste of time": Incoming well-packed pallets were unpacked and had items individually scanned.  New pallets were built from scratch before being put onto the shelves.  It would have been obviously a lot quicker to directly put the inbound pallets onto the warehouse shelves, right ?   For that matter, shipping directly out of the manufacturing plant and skipping the warehouse step would be even faster ?  Yet, unless you are a mom-and-pop store-front operation, warehouse and warehouse processes are the necessary evil to bring out the efficiency, speed, and savings in supply chain management.

 

In many regards, data warehouse represented by SAP BW is the equivalent of physical distribution centers.  When organizations accumulate large amount of information, a well-designed and well-run BW enables the consolidation, self-service, and 360 degree view.  This "overhead" of data warehouse modeling, in turn, provides the streamlined and consolidated information delivery and discovery.

 

Digging in a little deeper into organizations where SAP BW's very existence is challenged, I realized that their BW system is often run under the shadow of SAP ERP.  For example, despite BW's elaborate data level security and task access which can distinguish owners, BW authorizations are setup based on the Transaction-code mode of ECC.   BW developers are starved from many useful BW transactions and capabilities, let alone the business community.

 

On the opposite end of the trajectory, organizations that fell in love with BW are those where the BW system is "open" and "inviting".  There are business users who write and navigate almost all of their own queries , which became extensive analytic workbooks/Web reports. 

 

Not dragging the story too long, bits and pieces came to mind during a recent conversation regarding how to start on the right track towards BI success.  Let me share them here, perhaps invoking some open discussions.

 

Let BI thrive under its own merits.

 

BI has many unique aspects that make it unfit with some traditional IT principles:

 

1) BI is access to ever-changing information and uncovering the dynamic truth underneath, beyond long-lasting static formatted reports.

 

2) BI is a decision support system.  BI is not necessarily a "must-have" to run a business.  The tagline is: ERP runs the business, BI manages the business.  Leadership and senior management sponsorship and adoption can go a long way to jump-start BI success.

 

3) Adoption: The ONLY success criteria for BI is business adoption.

 

ERP projects are measured by delivering on target, per functional requirement and process definitions.  While BI projects can be bench-marked against initial requirement, time, and budget, the true measure of BI success is whether the user communities adopt the BI deliverable as their own.

 

A few years ago, there were studies showing half of the dashboards delivered to users went ignored after a few months.

 

Two conclusions we can derive from the study:

 

      a) Dashboard lifetime may not necessarily be long...The initial design can have some short-term decisions built-in, as the outputs could well need to be updated soon after use.

 

       b) Even when those BI projects came in on-time and on-budget, unless the outputs are truly adopted and truly reflect users' REAL need, we can not claim success.

 

Coming to the execution, a couple of quick sparks:

 

1) BI, more than ERP, is about relationships between IT and business.  We should encourage the business to have a sense of Ownership in the BI process, not making them feel "us vs them". Consequently, the BI project methodology often works better when we allow Joint decisions, Flexibility, and Quick turn-around ---Agile.

 

Many modern BI tools, counting in BPC for example, are designed to allow the business to take center stage and ownership. That is also why we have seen a mushrooming of the likes of SAP Lumira, Tableau, and Qlikview. One of the three pillars for the SAP BI tool-set is agility.

 

BI team needs to directly connect to the business we are supporting.  We need to understand the personalities of the business process and the people.

 

Open dialogues, direct contacts, and regular workshops are key to putting BI into the users' hands. Many significant BI requirements come from informal or private inquiries/conversations, and from observing business operations.

 

If we only allow formal SDLC requirement process in place, we may discourage the business from coming forward with "immature" thoughts or ideas.   The maturity of decisions often comes after uncovering the truth through business intelligence, not before.

 

In practice, the emphasis of formal SDLC and IT ownership had often forced business to grab raw data, opening their own departmental data marts or BI shops...

 

Open is key.....

 

2) Following the Open theme: BI Access and security authorizations should attract people in, not keeping people out.

 

BW's analytic authorization model is a different concept from the ECC's Transaction Code model.

 

BW, BOBJ, and BPC have elaborate and extensive data and task level securities, which can be utilized effectively to protect the information.

 

In the meantime, BW transactions and BI tools should be open at the "maximum" level to BI developers.  Power users should be encouraged to report on their own.

 

Many of the finest improvements in BW came in the form of new transaction codes.  BW developers should be encouraged to explore those functionality to take advantage of the investments.

 

In principle, BW developers should be treated as DBA and ABAP developers in the BW clients, not simply as report developers. This is especially true when we want to emphasize lean and creativity in the BI groups.

 

 

Rounding back to the top of the paragraph:  BI's success is directly tied to the BI groups's creativity, user experience, and ultimately user Adoption.  The more organizations open up and foster those, the better we will be in the BI journey.

 

More to come.....

When you have to set up the customizing of the Aggregation task in the /POSDW/IMG transaction, you have to take into account the "Processing and Performance" configuration of this task. For that, you should know how this parameter works, so I am going to explain it.

 

The aggregation task has two steps: one to generate the aggregation itself and another to send or work with the aggregated data. In both of these steps you can configure the parameter.

 

 

 

  • Aggregation Task (STEP 1)

 

In this first step, the parameter represents the number of transactions that are taken into account for each LUW.

 

 

 

 

 

 

If you don't know what a LUW is, you can use the F1 help on this field:

 

 

 

 

BAdI /POSDW/BADI_AGGR_TRANSACTION (the BAdI where the aggregation is done) works with only one transaction per iteration; it receives one transaction and aggregates its information (retail information, for example). When the BAdI has finished processing all the relevant transactions, the result is an internal table which contains the aggregated data of all the processed transactions, and this information has to be updated in the aggregation database.


At this point you might think that the parameter has no effect on this task. However, when the aggregated data is updated in the database, the parameter becomes very important, because the method COPY_AGGREGATED_DATA_REC (of class /POSDW/CL_AGGREGATED_DATA) receives the entries of the aggregation internal table and, depending on the value of the "Number of items" parameter, is executed one or more times.






The entries of the internal table are processed in packages when the aggregation database is updated. Each package contains a number of entries which are the result of aggregating the data of some transactions. How many entries will each package have? That depends on the value of "Number of items" chosen in the customizing, but the parameter does not represent the number of aggregation internal table entries to take. Instead, it represents the number of transactions taken into account to produce those entries, so each package contains however many entries were produced by processing the chosen number of transactions. A single transaction may produce one or more aggregation internal table entries.



An example of this:

 

  1. We configure the parameter in the customizing as 5 ( 5 transactions). 

  2. We execute the aggregation task for 10 transactions (They produce an aggregation internal table with 50 entries).

 

 

   3.  When the information is updated in the database, it is split into packages. The first package has 30 entries, produced by the first block of 5 transactions. These entries are processed and their information is updated in the database; the task status of these 5 transactions becomes "PROCESSED". But we still have more entries in another package: these 20 entries come from the last 5 transactions, and they are processed and updated in the database, after which their task status also changes to "PROCESSED".



If the "Number of items" parameter is configured as '0' or INITIAL, it produces that all transactions have to be processed at the same LUW and the package will contain all the entries of the aggregation internal table.








 

  • Aggregation Task (STEP 2)

The second step is responsible for processing the information in the aggregation database (the OUTBOUND task), so, for example, we can send it via an ABAP proxy or do whatever we want with it. In this second step, we configure the parameter just like in the first step:




As you will notice, the value of this parameter can differ from the value in the first step, because in this second step we are not working with transactions but with entries of the aggregation database. The aggregation database contains entries produced by the aggregation task, so when we execute the OUTBOUND task to process these entries, the package size is very important because in each LUW we process a specific number of database entries. For instance, if we configure the parameter as 20 and we have 50 entries in the database, 3 packages will be created: the first package will have 20 entries, the second 20 entries and the last package 10 entries (see the sketch below).
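
Just to make that split tangible, here is a small, generic ABAP sketch (nothing POS-specific; the 50 entries and the package size of 20 are simply the numbers from the example above) that walks an internal table package by package, the way the outbound processing handles one LUW per package:

* Generic illustration only: the entry table and package size are taken from the
* example above, not from the /POSDW/ customizing itself.
CONSTANTS lc_package_size TYPE i VALUE 20.
DATA lt_entries TYPE STANDARD TABLE OF i.

DO 50 TIMES.
  APPEND sy-index TO lt_entries.
ENDDO.

DATA(lv_total) = lines( lt_entries ).
DATA(lv_from)  = 1.

WHILE lv_from <= lv_total.
  DATA(lv_to) = nmin( val1 = lv_from + lc_package_size - 1
                      val2 = lv_total ).
  " Entries lv_from..lv_to form one package; in the real task this block
  " is processed and committed as one LUW (here: 1-20, 21-40, 41-50).
  WRITE: / |Package with entries { lv_from } to { lv_to }|.
  lv_from = lv_to + 1.
ENDWHILE.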



The BAdI for the OUTBOUND task is /POSDW/BADI_PROCESS_PACKAGE, and we select the database entries using the method GET_PACKAGE_DATA. For the example of 50 entries, the method would be called 3 times.



 

 

As in the first step, if we configure the parameter as '0' or leave it initial, all entries are processed in the same LUW.

 

 

 

 

 

 

 

In summary, the package size is very important for processing and performance, so you should choose the option that best fits your purpose.

EasyQuery, What is it?

 

To put it simply, it is an automated process by which the SAP system generates all the necessary backend objects so that the data output of a normal BW query can be consumed programmatically, from a local or remote, SAP or non-SAP system, by calling an RFC-enabled auto-generated function module (FM).

 

How to?

 

Just tick the EasyQuery checkbox in the properties of a normal query in Query Designer and execute the query; all backend FMs and structures are then auto-generated. All you need to do is declare a variable of the auto-generated structure type and pass it to the auto-generated FM.

 

Transport issues

 

Here we reach the crux of this post. The standard method of transporting an EasyQuery is to simply collect the original query, transport it, tick the EasyQuery checkbox again in the target client and execute the query again. The issue is that there is no guarantee that the auto-generated objects will have the same names in the target client as they had in the source client. This creates problems in the consuming code, especially when it is consumed from SAP systems: a piece of code that addressed the auto-generated FM correctly in one client will obviously fail in the new client after transport if the auto-generated FM name is different.

 

The following fixes are possible.

 

1. One option is to open each client and correct the code so that it matches the auto-generated objects. This is not recommended at all, as it would mean opening the production client too.

 

data: t_grid_data type /bic/ne_4.       " the number 4 in this type will have to be
                                        " edited according to what number got
                                        " generated after transport in the target

call function '/BIC/NF_4'               " the same number will have to be
                                        " updated in the function name too
  exporting
    i_s_var_02ix_asondat   = wa_l_r_asondate
  tables
    e_t_grid_data          = t_grid_data
    e_t_column_description = t_col_desc
    e_t_row_description    = t_row_desc
    e_t_message_log        = t_message_log.

 

2. The second option is to use the FM RSEQ_GET_EQRFC_NAME to fetch the auto-generated FM name for a given EasyQuery name. You will still have to provide the matching structure as input to the auto-generated FM, which can be accomplished by using a CASE statement on the numeric suffix of the returned FM name and declaring the structure accordingly. This works because the auto-generated FM and structure use the same number as a differentiator.

 

lv_eq_name = 'IHRPA_C01_Q0013'.

call function 'RSEQ_GET_EQRFC_NAME'
  exporting
    i_query   = lv_eq_name
  importing
    e_rfcname = lv_fm_name.             " get the substring from this variable after
                                        " the first 3 characters and use it in a CASE

* Example - CASE lv_substring.
*             WHEN '4'.
*               DATA t_grid_data TYPE /bic/ne_4.
*             WHEN '5'.
*               DATA t_grid_data TYPE /bic/ne_5.

call function lv_fm_name
  exporting
    i_s_var_02ix_asondat   = wa_l_r_asondate
  tables
    e_t_grid_data          = t_grid_data
    e_t_column_description = t_col_desc
    e_t_row_description    = t_row_desc
    e_t_message_log        = t_message_log.

 

 

3. The third, and seemingly both most logical and cleanest, method is to take these auto-generated objects as templates and create custom Z objects, then use them in the consuming code. We can have a single custom FM for all EasyQueries created; this custom FM replicates the functionality of the auto-generated FM minus some checks (for example, a date check which makes sure the auto-generated FM matches the latest changes to the query). This custom FM can be made to accept the EasyQuery name, the custom structure (which has to be created separately for each EasyQuery, taking the auto-generated structure as a sample) and the as-on date.

 

data: t_grid_data type z_query_name_struct.   " created using /bic/ne_4 as template;
                                              " has to be repeated for each separate
                                              " EasyQuery and for regenerated EasyQueries

call function 'Z_QUERY_NAME_FUNCTION'         " created using /BIC/NF_4 as template,
                                              " but can be used for any EasyQuery going forward
  exporting
    i_s_var_02ix_asondat   = wa_l_r_asondate
    i_s_var_eq_name        = 'IHRPA_C01_Q0013'
  tables
    e_t_grid_data          = t_grid_data
    e_t_column_description = t_col_desc
    e_t_row_description    = t_row_desc
    e_t_message_log        = t_message_log.

 

I will be back with the implementation code for z_Query_Name_Function and the steps for z_Query_Name_Struct.

Hope the information provided above is useful. Please suggest further improvements on the subject as you see fit.

 

Regards

Darshan Suku

Ramya huleppa

Update from PSA Issue.

Posted by Ramya huleppa Dec 19, 2013

Hi,

 

A few days back I searched SDN for the same issue (mentioned below) and found many solutions related to it, but none of the solutions I found helped me resolve the issue.

 

Hope it helps someone.

 

 

The process chain (PC) failed due to an "update from PSA" issue.

 

When I checked the error message:

 

Details Tab : - last delta upload not yet completed. Cancel

 

In the monitor screen I could see the load staying in yellow status for longer than expected and then ending with a short dump; the extracted records were, for example, 0 from 52 records.
In the source system I could see that the job had also finished.
In the source system I can find that the job is also  finished.

 

Checks done:-

  • 1. Set the status to red and tried reloading, but the load failed again with the same error.
  • 2. Checked for IDocs in BD87, but no IDocs were stuck.
  • 3. Checked in SM58 whether any tRFCs were stuck, and found the error status below:

 

SQL error 14400 when accessing table /BIC/B00000XX000

 

TRFCs.png

 

When I checked ST22, I found the error below:

 

dump.png

 

What I tried:

 

I deleted the PSA table data and tried loading again, but the PC failed with the same error. However, when I changed the InfoPackage settings from 'only PSA' to 'data targets', the load was successful. This shows that there was some issue with the PSA table, as mentioned in the short dump.

 

Exact solution: as the short dump said "Inserted partition key does not map to any partition", as a trial-and-error approach I activated the transfer structure using the program RS_TRANSTRU_ACTIVATE_ALL, which worked, and the loads ran fine as normal.

On 04 Dec 2013 I received my invoice from Amazon (please note this can vary a lot depending on your usage, country, etc.):

 

This is the bill for 2 hours of usage: (currency USD)

 

 

 

 

I was looking for a trial version of SAP Lumira to play around with on my iPad, and I found this page:

 

http://global.sap.com/software-free-trials/index.epx

 

Here's a list of many products SAP offers as free or time-limited trials.

 

Well, I found the free Lumira version there, but I also found:

 

SAP NetWeaver Business Warehouse, powered by HANA

 

The temptation was big: I've used 7.3 but never on top of HANA. I've seen posts and documents, but I wanted to experience it for myself.

 

So why not? Immediately after signing up, you will be prompted for your Amazon EC2 account.

 

NOTE: You (yes, you, and from your own wallet) will have to pay for the Amazon EC2 hosting. Basically, EC2 is a service for renting virtual machines, so you will need to rent virtual machines to run your BW on top of HANA. Amazon's invoicing model is pretty complex, but the bottom line is: while your virtual machine is turned on (running) you are spending money. I used my BW 7.3 on top of HANA for 2 hours, and was loading data and playing around the whole time.

 

So far I don't see any charges in my Amazon EC2 account, but they will come; I will update this blog when I receive the charges.

 

Now I will just add some images and links from my experience and ask you to comment about yours.

 

Cheers!

 

The longest wait is the 10 to 20 minutes while your Amazon virtual machines are being created/activated:

 

Activating.png

 

And after some time (I was refreshing the web page every now and then) you see that it's done:

 

Activated.png

 

Then you can click the connect button and a RDP file will download with this name: "Instance of SAP NetWeaver BW 7.3 on SAP HANA SP06 with SAP BusinessObjects BI 4.0 SP07.rdp"

 

That's the great part: your Remote Desktop client will open and you will be connected to your virtual machine server.

(Default passwords here)

 

 

On your remote desktop, all the tools are installed and ready to use.

 

 

This is an actual screen shot:

Desktop.png

Then just double-click your SAP GUI and voilà, you are logged into your own BW 7.3 powered by HANA:

 

BW73.png

 

Now just have fun, experiment, and remember: if you log out and leave your server running you will be spending money, so please follow the instructions for shutting down your instance whenever you are not using it.

 

Also, share your experiences running the BW 7.3 on HANA trial on Amazon!
