SAP HANA Roadmap: An ASUG Webcast

On Friday, SAP provided an ASUG BI Community webcast covering the SAP HANA roadmap, hosted by Allison Levine of the Enterprise Data Warehousing Special Interest Group.  Thanks to SAP speakers Scott Shepard, Matthew Zenus, and Balaji Krishna for providing this presentation and responding to all the questions.  If you have further questions, see the related links below, or consider attending the ASUG sessions at the ASUG Annual Conference next month (listed at the end of this post).

 

  The general legal disclaimer applies: everything shown is subject to change.

 

1hanaroadmap.jpg

Figure 1: Source: SAP

 

Figure 1 shows the SAP product strategy and plan.

 

Phase 1 was “Introduction”: in-memory analytics.  This first phase included the BI Suite and SAP BW powered by HANA.

 

The next phase is “Innovation”.  This includes migrating line-of-business applications.

 

Longer term is “Transformation”: combining analytics and transactions on the same data platform.  This phase includes the plan of having the SAP Business Suite optimized to run on SAP HANA.

2hanaroadmap.jpg

 

Figure 2: Source: SAP

 

Figure 2 shows the planned roll-out and the benefits of running BI and ERP on the same platform.  The vision is to have HANA as the primary persistence layer for transactions and analytics.

 

3hanaroadmap.jpg

Figure 3: Source: SAP

 

 

Figure 3 shows what is coming in an upcoming release, including text search on structured and unstructured data.  This includes advanced statistics capabilities with R, plus predictive and business functions.  HANA is also part of the big data story, integrating with the Hadoop environment.

 

4hanaroadmap.jpg

Figure 4: Source: SAP

 

Figure 4 shows the planned innovations surrounding HANA.

 

In ramp-up today is the new SAP BusinessObjects Predictive Analysis software, which provides visualization capabilities.

 

5roadmap.jpg

Figure 5: Source: SAP

 

Future planned direction is shown in Figure 5.  The real-time data platform strategy spans the Sybase and HANA products.  This includes new SaaS applications, the big data story, and “openness” to third-party partners.

 

Related Links:

 

http://www.sap.com/hana - Official SAP HANA page with customer testimonials

http://www.experiencesaphana.com - SAP HANA collaboration space for customers

http://help.sap.com/hana_appliance/ - SAP HANA User Guides

http://service.sap.com/HANA - Installation and Implementation knowledge

http://www.sdn.sap.com/irj/sdn/in-memory - SAP Developer Network for HANA

http://service.sap.com/roadmap - Roadmaps on SAP Service Marketplace

 

Subset of Questions & Answers - special thanks to SAP and ASUG volunteers for a detailed Q&A


Q: Can you define tables with DDL and reference them in memory for loading and retrieval?

A: You can use DDL to create and load tables in HANA, and then use BI tools through the ODBC/JDBC/ODBO interfaces.
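As a minimal sketch of that flow (the schema, table, and column names here are made up for illustration):

-- Create a columnar table, load a row, and read it back (e.g. over ODBC/JDBC)
CREATE COLUMN TABLE "MYSCHEMA"."SALES_ITEMS" (
   "ITEM_ID" INTEGER PRIMARY KEY,
   "REGION"  NVARCHAR(20),
   "AMOUNT"  DECIMAL(15,2)
);
INSERT INTO "MYSCHEMA"."SALES_ITEMS" VALUES (1, 'EMEA', 1500.00);
SELECT * FROM "MYSCHEMA"."SALES_ITEMS";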
________________________________________________________________

 


Q: Can Informatica be used in place of Data Services?

    A: It is not certified by SAP, but customers have been using it.
________________________________________________________________

 


Q: Can I run BEx queries / workbooks against HANA?

A: Yes, when your BW system is running on the HANA database.
________________________________________________________________

 


Q: If HANA does what it promises to do, can the SAP BW layer be eliminated if we need to report data from the central ECC system?
A: Yes, you do have the option to report directly on ECC data once it is replicated into HANA.
________________________________________________________________

 


Q: When will BW 7.3 on HANA be GA?

A: BW on HANA went GA on April 10th.
________________________________________________________________

 


Q: We have SAP HANA and SAP BW 7.0, and now we want to upgrade to SAP BW 7.3 powered by HANA. Do we need an additional HANA license?
A: HANA is a separate product; it is a DB platform, so you need to procure a license.
________________________________________________________________

 


Q: What is the expected compression rate of data for sizing purposes?

A: Here are the sizing notes for BW on HANA and for standalone HANA (non-BW, non-SAP ERP):

 

https://service.sap.com/sap/support/notes/1514966

 

https://service.sap.com/sap/support/notes/1637145

 

These notes also include a presentation and the Quick Sizer.

 

________________________________________________________________

 


Q: For BW on HANA, without using the optimized InfoCube and DSO implementations, what is the storage format of the other normal BW database tables? Will they be in columnar or row format?

A: They will mostly be in column tables.
________________________________________________________________

 


Q: End-user workstations can view HANA content using Excel, Crystal Reports, BOE, BOBJ Dashboards, and the BOBJ BI suite?

    A: Correct.
________________________________________________________________

 


Q: Are there any configuration changes needed on the ECC system to enable SLT?  If so, what is the impact on the underlying system?

    A: There are some additional components that have to be installed, but we haven’t seen any impact on the ECC system.
________________________________________________________________

 


Q: If we eliminate BW, how can we get the BEx Query functionality?

A: You do not have to eliminate BW; HANA enhances BW, so all your current BEx queries will still run, and run much faster.
________________________________________________________________

 


Q: Can you run SAP HANA with BOBJ XI 3.1?

A: Yes, Web Intelligence 3.x can be used with HANA, but the BI 4.x release is optimized for it, so plan your deployment to get the best out of the in-memory platform.
________________________________________________________________

 


Q: Can you define tables with DDL? I'm interested in how to push an entire Sybase or SQL Server database into HANA.

    A: You have to use the replication techniques supported with HANA


________________________________________________________________

 

Q: Will all TREX features be incorporated into HANA?

A: HANA is built on top of TREX technology for the column store; much of the success of BWA has been reused.
________________________________________________________________

 


Q: If I have 20 million records to update in the same DSO, can it handle all those changes, or do I have to break the load into chunks (assume the 20 million come as changes, not as new records)?

A: BW on HANA has some amazing new capabilities that enable these types of workloads.  For example, with BW on HANA one can easily do remodeling without any of the previous challenges of dropping and reloading data, waiting on data model updates, etc.  We will demo this at SAPPHIRE/TechEd.  It is very cool and extremely useful for BW architects.
________________________________________________________________

 


Q: Did I understand correctly, that HANA will be available for ECC 6.0 at the end of 2012?

    A: Yes, we are working towards it; it will be in ramp-up at the end of 2012.
________________________________________________________________

 


Q: If we implement HANA connected directly to ECC, should we still invest time in building SAP BW cubes/data marts for reporting, or change our strategy to get the same reporting out of HANA-ECC?

    A: If you currently have an investment in BW, HANA will enhance the experience in terms of fast data loads, DSO activation, and query response.
________________________________________________________________

 


Q: With HANA as the database, we do not need any backend database such as Oracle, SQL Server, or DB2, right?

    A: That is the plan we are working towards
________________________________________________________________

 

 


Q: Can you show the BusinessObjects Predictive Analysis solution?

    A: We have a webcast on Predictive Analytics/HANA in June and presentations at ASUG Annual Conference/SAPPHIRE
________________________________________________________________

 


Q: Where can I get information on the SAP CO-PA Accelerator and SAP ERP operational reporting?

A: experiencesaphana.com
________________________________________________________________

 


Q: Can customers migrate a BW database into HANA themselves, or do we have to get SAP/partner services?

A: We recommend going through an OS/DB migration specialist.
________________________________________________________________

Q: Any timeline for ECC 6.0?   

A: It was in the presentation; the first limited/restricted shipment is planned for the end of this year.
________________________________________________________________

 


Q: Will HANA eventually become a "hub" technology that integrates data access for all SAP applications that now run on independent databases, like ERP, SCM, GTS, and CRM?

A: Yes.  It is planned for the entire business suite to run on HANA.  First will be ECC.
________________________________________________________________

 


Q: What is the ramp-up and GA availability of ECC on HANA?  Will this require 1.0 “SP4” or something beyond 1.0 SP3?

A: Planned for ramp-up Q4 2012.  Most likely with HANA SPS5.
________________________________________________________________

 


Q: What is the role of Information Composer compared to HANA Studio?

    A: It is web based, and end users can combine personal data (spreadsheets) with already-built HANA column views.
________________________________________________________________

 


Q: When will a roadmap with visibility beyond SP3 be released?   

A: SPS3 has already been released. We just covered some key functionality that is coming and not yet released.
________________________________________________________________

 


Q: Is the plan to replace other ERP and BW database platforms with HANA?

A: The plan is to have HANA as the real-time foundation for ERP and BW and support analytics as well.
________________________________________________________________

 


Q: How does HANA make BW remodeling easier?

A:  The workload will be reduced because of simpler data models, the elimination of aggregates, and reduced query tuning.  Also, building and maintaining BW data models is simplified using flattened HANA-optimized BW structures.

________________________________________________________________

 


Q: When will multi-tenancy be expanded so that BW and BOBJ/native HANA use cases don’t drive separate hardware? Technically multi-tenancy is possible today, but only for a narrow set of requirements.

A:  Multi-tenancy is supported with BW and other applications (including Custom Data Marts or SAP packaged applications) on the same hardware.

See note for the latest information: https://service.sap.com/sap/support/notes/1661202
________________________________________________________________

 

Q: Multi-tenancy is even more important when BW and BOBJ/Native HANA use-cases are augmented by ECC on HANA.
A: See note for more info on multi-tenancy: https://service.sap.com/sap/support/notes/1661202

________________________________________________________________

 


Q: I have a customer that invested heavily in Informatica to bring in data from non-SAP sources and is interested in leveraging HANA for performance reasons. In order to protect the investment already made in Informatica, can HANA support it?

A:  Third-party ETL products are not supported by SAP with HANA at this time.  However, this is being considered for the future.
________________________________________________________________

Q: Will HANA SLT support extracting data in real time using SAP Business Content Data Sources in ECC?
A:  Yes
________________________________________________________________

 


Q: Visibility into certification of hardware with partners is key. When will more than 512 GB per chassis be possible (a software constraint)? The chassis can handle up to 3 TB in the case of the IBM X5 3950.

A:  See SAP Product Availability Matrix for the latest info on the hardware partner certifications.

http://service.sap.com/pam

  ________________________________________________________________

Q: What would be the backup and disaster recovery strategy for SAP HANA, and does it work similarly to other DB models? Can HANA work in a cluster environment?

A:  See experienceSAPHANA site:

Backup and recovery: https://www.experiencesaphana.com/docs/DOC-1220

 

High Availability and Disaster Tolerance: https://www.experiencesaphana.com/docs/DOC-1221

 


________________________________________________________________

 


Q: Is HANA licensed by data size?  How does that work in BW where the data is replicated at the DSO and Cube layers?

A:  Yes, the SAP HANA software (including for BW on HANA) is licensed by memory size requirements.
________________________________________________________________

Q: Are there any plans for enhancements to allow for nearline storage for some of the data in SAP HANA?

A:  Yes.  That can be done today with Sybase IQ as a nearline datastore and Data Services.
________________________________________________________________


Q: Any plans to open front end tools up to other vendors to use SAP HANA as DB ?

A: Yes, there are plans to do this.

 

________________________________________________________________

 


Q: Can you provide further info on pushing calculations to the DB layer? Help me explain it to a mid-level manager, please.

A:  Instead of pulling data out of the database, you perform the processing and calculations in the in-memory database calculation engine. This allows the calculation to be parallelized.
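As a minimal SQL illustration of the idea (the table and columns are hypothetical): instead of fetching every row into the application and summing there, the aggregation is expressed in SQL, so HANA's engine computes it in parallel and only the small result travels back.

-- Only the aggregated rows return to the application
SELECT "REGION", SUM("AMOUNT") AS "TOTAL_AMOUNT"
FROM "MYSCHEMA"."SALES_ITEMS"
GROUP BY "REGION";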
________________________________________________________________

 


Q: What is the maximum storage capacity of HANA?

A: There is no maximum yet; it has been tested with 70 terabytes.
________________________________________________________________

 


Q: Can unstructured data like JPGs, Word docs, text files, PDFs, etc. be stored in HANA?

 

A:  Yes.  Structured and unstructured data can be stored in HANA.  This includes multimedia files like .jpg images, video, etc.

 

________________________________________________________________

 


Q: What is the minimum cost to have SAP HANA for BW 7.3, in terms of both hardware and licenses?

 

A:  From a hardware perspective you would have to confirm with your hardware vendor.  For SAP HANA software licensing, please speak with your SAP account representative.

 

________________________________________________________________

 


Q: What are the options for improving load performance?  Adding more SAP HANA nodes?  Other?

A: There are various options, including pushing more processing into HANA via Data Services using ELT.
________________________________________________________________


Q: Will SAP ever let QlikView be a front-end visualizer for a SAP HANA backend?
A: We are testing third-party programs right now and cannot be specific.
________________________________________________________________

 


Q: Is SAP planning to bring the replication technologies inside HANA instead of external tools like SLT and BOBJ Data Services?

 

A:  Yes.  There are some new planned data provisioning capabilities and enhancements in upcoming releases of SAP HANA.

________________________________________________________________

 


Q: We did not hear anything in the presentation about HANA and mobile computing. Are there any points to consider on this topic?

A: As the real-time data platform foundation, HANA supports mobility, including SAP's mobility via the SAP Mobile Enterprise Application Platform and the business analytics layer.

________________________________________________________________


Q: Hello... does SAP BW 7.3 come with the HANA option inherently (pre-packaged), or is HANA licensed separately for BW 7.3?

A: SAP HANA is a separately licensed component from SAP BW 7.3.
    ________________________________________________________________

 


Q: What is the fastest way to load non-SAP data into SAP HANA?  Flat file loads?  Data Services?  Other?

A:  SAP HANA offers multiple ways to load data: flat files, Data Services, SLT, Sybase Replication Server, etc.  It depends on what you want to do, how much data is to be loaded, how frequently, and so on.
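For flat files, for instance, HANA's SQL IMPORT statement can bulk-load a server-side CSV directly into an existing table. A sketch, assuming a hypothetical path and table, with the file accessible to the HANA server:

-- Bulk-load a CSV file into an existing column table
IMPORT FROM CSV FILE '/tmp/sales_items.csv'
INTO "MYSCHEMA"."SALES_ITEMS"
WITH RECORD DELIMITED BY '\n'
     FIELD DELIMITED BY ',';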


________________________________________________________________


Q: How about other suites (CRM, SCM etc) on HANA?

A:  The first Business Suite application on HANA will be ECC.  The other Business Suite solutions (e.g., CRM, SCM, PLM) will follow ECC; timing and order are TBD.
________________________________________________________________

 


Q: With the NetWeaver Business Suite running on HANA, how much transaction history will we be able to keep in the Business Suite, say 3, 5, 7, or 10 years? A separate DW is typically where historical transactions are kept.

A: HANA can scale so it will be capable of storing a lot of data, if necessary.

 

________________________________________________________________

 


Q: Does log-based replication through Sybase Replication Server support more than one ERP source database?
A:  See product availability matrix.  http://service.sap.com/pam
________________________________________________________________

 


Q: What is the planned timing to support multiple SAP BW systems leveraging the SAP HANA Production instance?
A:  It is still under testing. You'll hear more at SAPPHIRE.
________________________________________________________________

 


Q: What types of intellectual property from the Sybase databases are planned to be included in the SAP HANA DB?

   A: Some of the Hadoop integration will take advantage of IQ's existing technology.

Introduction

In my last blog, I introduced the topic of the ABAP Secondary Database Connection and the various options for using this technology to access information in a HANA database from ABAP. Remember, there are two scenarios where the ABAP Secondary Database Connection might be used.  One is when you have data being replicated from an ABAP-based application to HANA. In this case the ABAP Data Dictionary already contains the definitions of the tables you access with SQL statements.

 

The other option involves using HANA to store data gathered via other means.  Maybe the HANA database is used as the primary persistence for completely new data models.  Or it could be that you just want to leverage HANA specific views or other modeled artifacts upon ABAP replicated data.  In either of these scenarios, the ABAP Data Dictionary won’t have a copy of the objects which you are accessing via the Secondary Database Connection. Without the support of the Data Dictionary, how can we define ABAP internal tables which are ready to receive the result sets from queries against such objects?

 

In this blog, I want to discuss the HANA-specific techniques for reading the catalog, and also how the ADBC classes can be used to build a dynamic internal table that matches a HANA table or view.  The complete source code discussed in this blog can be downloaded from the SCN Code Exchange.

 

HANA Catalog

The first task is figuring out how to read metadata about HANA tables and views.  When accessing these objects remotely from ABAP, we need to be able to prepare ABAP variables or internal tables to receive the results.  We can’t just declare objects with reference to the Data Dictionary as we normally would.  Therefore we need some way to access the metadata that HANA itself stores about its tables, views, and their fields.

 

HANA has a series of Catalog objects.  These are tables/views from the SYS Schema. Some of the ones which we will use are:

 

  • SCHEMAS – A list of all Schemas within a HANA database.  This is useful because once we connect to HANA via the Secondary Database Connection we might need to change from the default user Schema to another schema to access the objects we need.
    SCHEMAS.png
  • DATA_TYPES – A list of all HANA built-in data types. This can be useful when you need the detail technical specification of a data type used within a table or view column.
    DATA_TYPES.png
  • TABLES – A list of all tables and their internal table ID.  We will need that table ID to look up the Table Columns.
    TABLES.png
  • TABLE_COLUMNS – A listing of columns in a Table as well as the technical information about them.
    TABLE_COLUMNS.png
  • VIEWS -  A list of all views and their internal view ID.  We will need that View ID to look up the View Columns. We can also read the View creation SQL for details about the join conditions and members of the view.
    VIEWS.png
  • VIEW_COLUMNS - A listing of columns in a View as well as the technical information about them.
    VIEW_COLUMNS.png
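These catalog views can be queried with plain SQL like any other view. For example, to list the columns of a given table (the schema and table names below are just placeholders):

-- Column metadata for one table from the SYS catalog
SELECT "COLUMN_NAME", "DATA_TYPE_NAME", "LENGTH"
FROM "SYS"."TABLE_COLUMNS"
WHERE "SCHEMA_NAME" = 'MYSCHEMA'
  AND "TABLE_NAME"  = 'SALES_ITEMS'
ORDER BY "POSITION";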

 

Now, reading these views from ABAP can be done exactly as we discussed in the previous blog.  You can use the Secondary Database Connection and query them with ADBC, for example. Here is the code I use to query the SCHEMAS view:

* open the ADBC connection to the secondary (HANA) database
gr_sql_con = cl_sql_connection=>get_connection( gv_con_name ).
create object gr_sql
  exporting
    con_ref = gr_sql_con.

* query the SCHEMAS catalog view
data lr_result type ref to cl_sql_result_set.
lr_result = gr_sql->execute_query(
  |select * from schemas| ).

* bind the returning table as output and fetch the result package
data lr_schema type ref to data.
get reference of rt_schemas into lr_schema.
lr_result->set_param_table( lr_schema ).
lr_result->next_package( ).
lr_result->close( ).


Personally I figured it might be useful to have one utility class which can read from any of these various catalog views.  You can download this class from here. Over the next few blogs in this series I will demonstrate exactly what I built up around this catalog utility.

ZCL_HANA_CATALOG_UTILITIES.png

 

ABAP Internal Tables from ADBC

I originally had the idea that I would read the TABLE_COLUMNS view from the HANA catalog and then use the technical field information to generate a corresponding ABAP RTTI object and dynamic internal table. My goal was to make queries on tables that aren’t in the ABAP Data Dictionary much easier.  As it turns out, I didn’t need to read this information directly from the catalog views, because ADBC already had functionality to support this requirement.

 

The ADBC result set object (CL_SQL_RESULT_SET) has a method named GET_METADATA. This returns an ABAP internal table with all the metadata about whatever object you just queried.  Therefore I could build a generic method which takes in any HANA table or view and does a select single from it.  With the result set from this select single, I could then capture the metadata for this object.

 

METHOD get_abap_type.
* select a single row only to obtain the result set's metadata
  DATA lr_result TYPE REF TO cl_sql_result_set.
  lr_result = gr_sql->execute_query(
    |select top 1 * from { obj_name_check( iv_table_name ) }| ).
  rt_meta = lr_result->get_metadata( ).
  lr_result->close( ).
ENDMETHOD.


For example if I run this method for my ABAP Schema on table SFLIGHT I get the following information back:

SFLIGHT_METADATA.png

 

Of course, the most value comes when you read an object that doesn’t exist in the ABAP Data Dictionary.  For example, I could also read one of the HANA catalog views: SCHEMAS.

SCHEMAS_METADATA.png

 

This metadata might not seem like much information, but it’s enough to in turn generate an ABAP RTTI (Runtime Type Information) object. From the RTTI, I can now generate an ABAP internal table for any HANA table or view in only a few lines of code:

 

* build a table descriptor from the remote metadata, then create the dynamic itab
DATA lr_tabledescr TYPE REF TO cl_abap_tabledescr.
lr_tabledescr = cl_abap_tabledescr=>create(
  p_line_type = me->get_abap_structdesc( me->get_abap_type( iv_table_name ) ) ).
CREATE DATA rt_data TYPE HANDLE lr_tabledescr.


This all leads up to a simple method which can read from any HANA table and return an ABAP internal table with the results:

 

METHOD get_abap_itab.
* Importing  IV_TABLE_NAME    TYPE STRING
* Importing  IV_MAX_ROWS      TYPE I DEFAULT 1000
* Returning  VALUE( RT_DATA ) TYPE REF TO DATA
* Raising    CX_SQL_EXCEPTION
  DATA lr_result TYPE REF TO cl_sql_result_set.
* limit the result set when a maximum row count was supplied
  IF iv_max_rows IS SUPPLIED.
    lr_result = gr_sql->execute_query(
      |select top { iv_max_rows } * from { obj_name_check( iv_table_name ) }| ).
  ELSE.
    lr_result = gr_sql->execute_query(
      |select * from { obj_name_check( iv_table_name ) }| ).
  ENDIF.
* generate a matching dynamic internal table and fill it from the result set
  DATA lr_tabledescr TYPE REF TO cl_abap_tabledescr.
  lr_tabledescr = cl_abap_tabledescr=>create(
    p_line_type = me->get_abap_structdesc( me->get_abap_type( iv_table_name ) ) ).
  CREATE DATA rt_data TYPE HANDLE lr_tabledescr.
  lr_result->set_param_table( rt_data ).
  lr_result->next_package( ).
  lr_result->close( ).
ENDMETHOD.


Closing

Between the HANA catalog objects and the ADBC functionality to read type information, I’ve now got all the pieces I need to perform dynamic queries against any HANA table or view. Ultimately I could use this functionality to build all kinds of interesting tools. In fact, I’m already playing around with a generic catalog/data browser, but that’s something to look forward to in a future blog.

HANA_CATALOG_BROWSER.png

Hello Everyone,


In this document, I try to explain the "Currency Conversion" functionality. Through it, I want to convey the general functionality of, and ideas behind, currency translation in SAP HANA.


Note:

When using this option, you need to replicate the standard tables TCURR, TCURC, TCURX, TCURF, TCURT, and TCURV into SAP HANA. If these standard tables are not available, you will not be able to perform currency conversion.
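A quick way to verify that the tables arrived is to check the SYS.TABLES catalog view; a sketch, assuming the replicated tables live in a schema called ECC_REPLICATION (replace with your own schema name):

-- Confirm the currency customizing tables exist in HANA
SELECT "TABLE_NAME"
FROM "SYS"."TABLES"
WHERE "SCHEMA_NAME" = 'ECC_REPLICATION'
  AND "TABLE_NAME" IN ('TCURR','TCURC','TCURX','TCURF','TCURT','TCURV');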


Procedure:


1. Create an Analytical view

  • Right Click on Analytical View > New

 

 

2. Enter “AN_TEST” for the name of the view

  • Under “Schema for conversion”, select the schema containing all the relevant tables used for currency conversion.
  • Click Finish

 

 

3. Select the table and click Finish.

 

 

4. Select the measures and attributes to be included in the analytical view.

 

 

5. Now, right-click on Calculated Measures and choose New.


 

6. Create the Calculated Measure, "Profit".

  • Select Decimal data type with length 13,2
  • Double-click the desired measure so it appears in the expression editor, then either type the minus sign or double-click the operator (see the example expression after this list).
  • Click OK.
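The resulting expression is simply the difference of two measures. With hypothetical measure names, it would look like this:

-- Calculated measure "Profit" (measure names are illustrative)
"GROSS_AMOUNT" - "COSTS"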

 

 

7. Create second Calculated Measure, "Profit_IN_USD".

  • Write the same expression as in the previous calculated measure.
  • Select “Currency/Unit of Measure” tab.
  • Enter the values as shown in below figure.
  • Click OK.

 

Note: You can select either an attribute or a fixed currency, according to the requirement.

In this example, I am using “USD” as the fixed currency and “Currency” as an attribute.

 

 

8. Save and Activate Analytical View.

 

 

9. Display the data in your Analytical View.

  • Right click on Analytical View and select Data Preview

 

 

Hope it is helpful.

 

Thanks for reading the document.

 

Regards,

Neha Singla

Dear Members,
As you are all aware, SAP HANA is SAP's new BI strategy and roadmap, and it is moving at a faster speed than expected. After demand from many of my friends, I decided to write some generic tips and helpful guidelines for the available HANA certification (HANA 1.0 SP03) at this point. I was recently certified and would like to share some thoughts:
    
As you all know, HANA is the gateway for SAP's innovative BI strategy and roadmap: no more DB bottleneck, and much more... (benefits and ROI are out of scope for this blog). I would also like to make sure I am not crossing any boundaries, so these tips are very generic but very helpful for your next milestone (certification in HANA 1.0 SP03 as of April 20, 2012).
 
Please follow these guidelines:
1. Your hands-on experience will play a big role in the certification questions – whether from an SAP sandbox, partner training, or SAP training programs.
2. The TZHANA training material is a good start, but make sure you cover the full scope of this certification (many members have already posted valuable tips and documents in many places; I would suggest Googling for more details, or see the LinkedIn HANA group if you are a member).
3. Take some extra precaution for each area, to make sure you have prepared well before facing all 80 questions:
1) Business Content = <8% (research it – you will not find much in the documents)
2) Data Modeling = >12% (lots of questions, mixed and matched with other areas too)
3) Data Provisioning = >12%
4) Optimization = 8-12%
5) Reporting = 8-12%
6) Security = <8% (you need to be careful – be very clear and precise, and prepare well)
Misc.:
  • Make sure to prepare for mix-and-match questions across some areas (combinations of 3 & 6 and 2 & 6 are common)
  • Read more about new/ongoing changes (like the persistence layer, data replication, etc.)
  • Make sure you are very clear on the external interfaces to HANA systems (SQL, MDX, BICS) – which one is used in which scenario – and on ODBO/ODBC/JDBC configuration and capabilities
  • Pay more attention to BO 4.0 and BODS features and capabilities
  • Make sure you are very clear about all the BO 4.0 reporting tools (and think about interface methods, the IDT, and other components used) and the Excel features with HANA
  • Make sure you understand the many analytical capabilities in the HANA system itself

 

4. Focus on HANA's internal components (the architecture inside the HANA box is very important) – be very clear about each and every component and its role. It looks simple, but try to go into the details; each component plays a crucial role (you will see the complete architecture in some of the documents I have suggested at the end of this post – do not leave any component untouched).

 

5. Pay attention to questions with multiple correct answers – they require very precise knowledge of the internal components (these are always known as classical answers). Remember, missing even one right answer means no score for that particular question – this is crucial in the test.

 

6. Do not forget your prior experience (if you are from an SAP BW background: ECC --> BW --> BO), as there are many logical questions about SAP architecture, connectivity, and applications (like RFC, IDoc, and CO-PA knowledge) and how they change with the new HANA DB architecture. If you are a seasoned SAP consultant and have worked on many full-lifecycle projects, it will definitely be easier for you.

 

7. Last but not least, I suggest you keep going through all the questions, but note down any you are unsure about and revisit them after you have completed all 80 questions. Once you finish all 80 questions, many doubts will clear up in the test itself. I know 3 hours is a long time, but believe me, you will pick up some bonus points before pushing the final submit button, and chances are you will get a good percentage too.

 

Suggested links: 

 

eLearning: http://www.sdn.sap.com/irj/scn/elearn?rid=/webcontent/uuid/e04da73b-3a72-2e10-969a-b032c46e7509

 

SCN  Blog: http://scn.sap.com/community/in-memory-business-data-management/blog

 

Couple of good books/documents:

 

For the rest, I would like to say good luck, and get ready for your next milestone.

 

I hope this blog will be helpful on your journey to certification in HANA technology.

 

 

Best Regards, 
Ravikar Prasad.
 

Well, by now you have heard about HANA from SAP.  In-memory technology is a significant revolution in database and real-time analytics technology.  The rest of the industry appears to be playing catch-up, because the relevance of this technology is significant and can’t be ignored.

The HANA database resides in memory in a columnar architecture.  All read/write activity occurs in memory.  The system also writes to a traditional hard-drive-based database, whose real purpose is to support system restarts, so the database can be loaded back into memory.

HANA technology has also taken the ceiling off the single-server resource limitation by offering multi-node scale-out configurations.  Now you can aggregate multiple systems’ memory together to create a massively scalable in-memory database.  Having a 100-terabyte+ system is not out of the question.  You will still require a storage system for the traditional database store, and of course now it is going to get bigger.  The benefit is that you won’t need a separate data warehouse for your analytics, since you will run them against the in-memory HANA database.  For a hosting company, or even your enterprise’s datacenter, this means you will consume half the infrastructure you used to in a more traditional model.

This last week, a friend of mine who is a veteran and pioneer of IT infrastructure told me about a new storage array system he recently deployed.  Michael Glogowski is the AVP of Core Network Systems at North Island Credit Union.  The storage array was from a company called Violin Memory.  It is a storage system that writes not to hard disk but to NVRAM.  There is no disk or even any cache involved; all data is written directly to NVRAM.  The performance gains were significant for Michael.  Jobs that ran for 5 hours now complete in 90 minutes.

I was so impressed that I arranged a conversation with the folks at Violin Memory.  Beyond the throughput and performance potential of the system, the more remarkable piece was the footprint. Remember that 100-terabyte HANA database that would also write data to a traditional storage array?  Well, that storage array could consume five full racks of hard-drive storage equipment.  The same 100 terabytes of storage in a memory-based storage system would consume a quarter of a single rack.

Violin Memory was founded by ex-Fusion-io and EMC executives.  This start-up has very experienced leadership from the data storage space.  In our discussions about the company and its investors, an interesting relationship was revealed: SAP Ventures is a significant investor in Violin Memory. Then it all started to make sense.

In one single rack of equipment you could have the computing power to run the largest enterprise business systems.  With a footprint this small, cloud providers can now operate greener and scale their infrastructures more efficiently.  This also means the age of the on-premise SaaS appliance has just become a reality.

Nowhere in this solution would a hard drive exist. This investment and direction in storage technology has tremendous implications for the IT industry. From my view, this completes a vision SAP has for its own future. I guess it’s safe now for me to dust off the box of old electronics in the garage that holds my View Finder, Polaroid camera, Sony Walkman, VHS machine, and floppy drive, and toss in my hard drive.  If you think the next generation of computing has yet to occur, think again.

The SAP HANA database team published an article in the IEEE Data Engineering Bulletin, Volume 35. We give a short overview of the architecture and focus on the column store engine.

 

You can find the bulletin here:

http://sites.computer.org/debull/A12mar/issue1.htm

 

Abstract

Requirements of enterprise applications have become much more demanding. They require the computation of complex reports on transactional data while thousands of users may read or update records of the same data. The goal of the SAP HANA database is the integration of transactional and analytical workload within the same database management system. To achieve this, a columnar engine exploits modern hardware (multiple CPU cores, large main memory, and caches), compression of database content, maximum parallelization in the database kernel, and database extensions required by enterprise applications, e.g., specialized data structures for hierarchies or support for domain specific languages. In this paper we highlight the architectural concepts employed in the SAP HANA database. We also report on insights gathered with the SAP HANA database in real-world enterprise application scenarios.

 

 

If you're interested in other publications of our team, have a look at our list.

Dear all,

 

I haven't been able to come up with a blog on SCN for quite some time now, just because HANA really keeps me pretty busy.

Anyhow, this piece is something I already published SAP-internally, and since it was well liked and the work was already done, I thought: why not spend another 5 minutes and put it up on SCN?

 

Hope you like it as well, although it really is a kind of quick'n'dirty post.
Here we go:

 

By chance I found out that our development colleagues have now implemented the syntax assistance feature, available in many Eclipse-based editors, in the HANA SQL and stored procedure editors as well.

 

By pressing the [CTRL]+[SPACE] keys together while you're coding your SQL, you get:

  • a lookup of DB objects in all schemas you have access to (you must have typed
    at least 3 characters already for this)
  • syntax help for any SQL commands or CE_ functions
  • example statements with dummy data to demonstrate the usage of commands

 

This picture gives you an example of what this looks like:

image_png.png
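For example, this is the kind of SQLScript coding, including the CE_ functions, that the assistant helps complete (a minimal sketch; the table and columns are made up):

-- Project and filter a column table with CE_ functions
products = CE_COLUMN_TABLE("MYSCHEMA"."PRODUCTS", ["PRODUCT_ID", "CATEGORY"]);
books    = CE_PROJECTION(:products, ["PRODUCT_ID", "CATEGORY"],
                         '"CATEGORY" = ''BOOKS''');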

 

Personally, I totally love this little well-hidden feature and hope it serves you as well as it serves me.

 

What's also pretty cool is that you can modify the syntax help list on your own.

 

Just open up the preferences window and navigate to "Administration Console" - "Templates":

image_png2.png

 

Isn't this cool?

 

Cheers, Lars

Some years ago I was invited to a due diligence weekend at SAP headquarters in Walldorf to see and experience a financial planning solution. The backend – MOLAP cubes on SQL Server – served the needs, though the authorization concept wasn’t ‘enterprise ready’. But the modeling capabilities, with strong power-user enablement, were impressive. With me was SAP colleague Holger Faber, the leading IT architect for our internal corporate financial planning process, which was based on an SAP SEM Business Planning and Simulation (SEM-BPS) installation in the central SAP Business Warehouse system – a very powerful solution and perfect for corporate processes, while the tool of choice for decentralized and regional planning was still Microsoft Excel.

Now – some years later – I run the SAP-internal BW powered by SAP HANA basis and infrastructure setup in the New BI program. Holger Faber is the project lead designing a new planning solution in this environment, and the product of choice became SAP BusinessObjects Planning and Consolidation 10.0 running as a BW add-on.

 

The first challenge: our new SAP-internal BW system landscape moved to SAP HANA back in September 2011. At that time SAP HANA SP3 already included standard BW optimizations for data flow and reporting. So Holger and his team started to use, architect, and optimize BPC on BW powered by SAP HANA long before the solution was announced in March this year (see also this blog from Jens Koerner). By November 2011 the solution was running – very impressive from my point of view, and proof of what you can achieve with a shared business/IT vision and a high-performance team of experts.

 

The second challenge: you can find a lot of blogs and chats asking one question: ‘Should I use SAP BW Integrated Planning OR SAP BusinessObjects Planning and Consolidation?’ With BW-IP you can achieve stability, promise reliability, guarantee integration, and provide standardized planning functionality. With BPC you strive for flexibility and scalability, but still with full compatibility with standard BW data flows and reporting. The first is for IT experts; the second enables business power users. It is corporate vs. federated. But do you work in a company which needs only one of them? I do not.

The idea is simple; the solution architecture is a little more complex. Use both – and use them integrated, with an interface layer, powered by SAP HANA.

 

The package:

  • BW-IP planning cubes for the corporate and core processes
  • BW DSOs (HANA-enabled) in the interface layer for transactional data exchange
  • BW architecture for central master data, customizing, and analysis authorizations
  • BPC planning cubes for flexible and decentralized processes
  • BW Virtual InfoProviders to provide join views without data redundancy
  • SAP HANA for sustainable performance and views on real-time data

 

BPC1.jpg

A planning application may not sound like a typical big-data solution. But I am sure some of you have already experienced the typical lifecycle of staged planning data, with several versions for periodic forecast processes and simulation needs. Then you know the challenge of providing stable, reliable performance and a consistent user experience in a fast-growing environment.

In this way SAP HANA as a database will not only enable reporting and planning performance, but also bridge business processes. The need to find a solution for two significantly different business requirements was the driver for Holger Faber and team to combine two strong products into one innovative solution. Now this solution will enable various business areas to cooperate much more closely on budgeting and forecasting – and it creates the need for them to align their processes and timelines much more than in the past.

One more highlight: the user interface is fully web- and Microsoft Excel-enabled on both IP and BPC. Did you hear rumors about a BPC mobile app? It will be available soon – stay tuned.

 

User Interface Example: Cell based comment feature

BPC3.jpg

Important note: when you move your BW 7.30 onto a HANA database, you might already have the BPC 10.0 add-on installed. It is very important to upgrade to BPC SP6 to leverage the full performance improvements!

 

Get connected: I will be available at the SAP Runs SAP booth at SAPPHIRE Orlando on May 14-16. Stop by if you want more insight into our BW powered by HANA projects and the setup of the high-availability environment.

 

Best regards,

Matthias Wild - proud to be part of SAP Global IT where SAP runs SAP.

Adeel Hashmi

Preparing for Sapphire

Posted by Adeel Hashmi Apr 16, 2012

 

  So many weeks have passed since I was mulling over the SQLScript problems. Eventually it turned out that SAP HANA does not support nested queries over OLAP views. In slightly simpler words: if you make an Analytic view in HANA and try to make a Calculation view with a nested query on top of it, the compiler will block it. Per SAP internal developer comments, this feature will be supported in future versions of HANA.
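In SQL terms, a statement of roughly this shape was rejected at the time (the analytic view name is hypothetical):

-- Nested query over an analytic (OLAP) view, blocked by the compiler
SELECT "REGION", SUM("SALES")
FROM ( SELECT "REGION", "SALES"
       FROM "_SYS_BIC"."mypackage/AN_SALES" )
GROUP BY "REGION";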

 

I went through a lot of tribulations to complete my HANA design, develop a complex load program, and then work on BOBJ to make the Xcelsius dashboard shine. Therefore, I am thinking of writing a short tutorial to take a newbie from A to Z through HANA and the associated systems. Time is scarce, as my HANA use case is the top priority.

 

These days I am working on a redesign of my original SAP IS-U use case. It has gone through several rounds of management review by IBM executives, and it looks like it will shine at the SAP SAPPHIRE conference next month. Keeping my fingers crossed.

The 2012 ASUG Annual Conference is getting closer each day (May 13-16). With so many sessions to choose from, your ASUG volunteers have selected the following sessions highlighting SAP NetWeaver Business Warehouse, SAP HANA, and Sybase.


Additionally, there are two pre-conference sessions (also called "In-Depth" sessions) on Sunday, May 13, related to this theme:

BusinessObjects Business Intelligence (BI) 4 Feature Pack 3 Tools - in a Single Day

This is a hands-on session led by SAP Mentor Ingo Hilgefort of SAP Canada

Full-Day Seminar
Sunday, May 13, 8:00 a.m. - 5:00 p.m.
$495.00 ASUG Member/$595.00 Non-Member

Register today

 

Getting Started with HANA: A Primer

Half-Day Seminar
Sunday, May 13, 1:00 p.m. - 5:00 p.m.
$295.00 ASUG Member/$395.00 Non-Member

Register today

 

The following is a snapshot of the education sessions hosted by the ASUG BI Community and its related Special Interest Group, Enterprise Data Warehousing (including SAP HANA):

#      Title
3801   Enterprise Data Warehouse (EDW) Influence Council
4401   SAP NetWeaver Business Warehouse: Powered by SAP HANA (2-Hour Session)
403    Overview of SAP NetWeaver BW 7.3 powered by HANA and Further Roadmaps
404    SAP BW Powered by HANA: Understanding the Impact of In-Memory Optimized InfoCubes
405    Sybase Analytics Solutions: Enabling Intelligence for Everyone
408    In-Sourcing Marketing Campaigns with SAP NetWeaver BW and SAP BusinessObjects Data Services
409    Johnson & Johnson's Demand Planning & Reporting - Lessons Learned & Best Practices
410    How to Load Data from SAP and Non-SAP Data Sources into SAP HANA in Batch Mode (ETL) Using Data Services
4409   SAP Runs SAP NetWeaver BW powered by HANA - Experiences, Roadmap, and Strategy
4410   Upgrade to SAP NetWeaver Business Warehouse 7.30
412    Eby-Brown Leverages SAP NetWeaver Business Warehouse on SAP HANA to Enhance Decision-Making
413    SAP HANA Project Implementation at The Charmer Sunbelt Group
414    NEW! SAP HANA™ Information Composer - SAP HANA Modeling for Non-Technical Users
4413   Supercharge SAP NetWeaver BW with SAP HANA™

 

 

If you have already registered, great; we look forward to seeing you there. If you have not registered, take a look at the sessions and see how they will help you and your organization now and into the future.

 

Don’t miss these Community Lounges:

Tuesday, May 15, 12:15-1:15: BEx Quo Vadis? (Where is BEx Going?)

Join SAP BEx/Analysis product owner Eric Schemer to find out the future of BEx.


Tuesday, May 15, 4-5 p.m.: Ask an SAP Mentor Community Lounge Session

Come with your most pressing SAP BI/ERP/BW/BusinessObjects integration questions and ask SAP expert Ingo Hilgefort.


Wednesday, May 16, noon: Web Intelligence/BW Round Table

Come ask SAP Solution Management your most pressing BW/Web Intelligence questions


  Register Now for the 2012 ASUG Annual Conference

 

Follow ASUG Annual Conference on Twitter at #ASUG2012

This article provides an understanding of how using SAP HANA as part of SAP BW can reduce the time IT and your system spend on data loading, maintenance, and modeling.


SAP HANA brings key changes and has the potential to alter how the business operates. The key changes for the business are improved reporting performance, real-time data access, and the ability to simulate and plan faster.

Today, the majority of the cost associated with managing and maintaining a data warehouse lies in the time and staffing it takes for IT departments to manage the business warehouse environment.

By opting for SAP HANA, IT departments can save time on the management and maintenance of SAP NetWeaver BW. This results in potential cost savings through the reduction of expensive operational tasks (e.g., indexing and tuning), increased modeling flexibility, simplified maintenance, and increased loading performance. These points are discussed in detail below:

Increased Modeling Flexibility

SAP HANA provides the ability to change data models to evolve with business requirements.

Currently, within SAP BW on a traditional RDBMS, modeling requires specialized skills, as dimensions and cardinality must be considered. Any changes to line-item dimensions or cardinality have to be handled explicitly (although the remodeling aspect of BI 7 has covered this).

However, with SAP NetWeaver BW on SAP HANA, the dimension tables are no longer part of an InfoCube definition, and the data is primarily stored in columnar format. This format allows you to quickly remodel by going to the Administrator Workbench (via transaction code RSA1), dragging and dropping dimensions in and out of InfoCubes, and activating the object. Performing this remodeling operation issues the change directly within the SAP HANA database and reorganizes the data as required.

As no aggregates are required with SAP NetWeaver BW on SAP HANA, there is no need to rebuild aggregates once the data model is changed — meaning IT departments can respond to business requests that involve changing the data model more quickly.

 

Decreased Load Windows

Business gets fast reporting, planning, and real-time data from SAP HANA.

However, what about the existing load processes? One common ailment in SAP NetWeaver BW environments has been the load window required to make data available as part of nightly batch loading cycles. With ever-increasing data volumes and pressure from the business to make data available within tighter service level agreements (SLAs), IT departments struggle to meet the load-window SLAs they must provide.

Most BW engagements take a beating on availability SLAs due to delays in existing load processes; ever-increasing data volumes have also contributed to this. Ultimately, IT departments face challenges meeting their load window/availability SLAs.

Thanks to SAP HANA's offerings, its two new "in-memory" objects offer a variety of improvements to loading performance:


  • The in-memory optimized DataStore object (DSO) has its delta calculation and activation logic implemented in SAP HANA instead of in the ABAP application layer (as shown in the figure below). Moreover, all the DSO data resides directly in in-memory column tables within SAP HANA. This leverages the in-memory and massively parallel processing (MPP) capabilities of SAP HANA to speed up the delta calculation and activation logic of a DSO.

       BW 7.3  - DSO new feature.png    

  • The in-memory optimized InfoCube comes with healthy changes to its schema. It has a simplified schema for optimizing data loads, in which dimension tables are no longer generated as part of the InfoCube schema (as depicted in the figure below). Additionally, SAP NetWeaver BW InfoCubes traditionally stored compressed data in an E fact table and uncompressed data in an F fact table. With in-memory optimized InfoCubes, the E/F fact tables are consolidated and partitioned as part of the InfoCube schema. This storage mechanism is internal to SAP HANA and doesn’t require any configuration or management by IT departments. The new schema provides faster load times into these InfoCubes, because dimension IDs no longer need to be generated by the system as part of the load process.

      HANA schema changes.png

 

The significance of in-memory optimized InfoCubes and DSOs is that there is improved performance within every step of the load process as follows:

  • When data loads into a DSO within SAP NetWeaver BW, the data is loaded directly into memory, as memory is the primary persistence for SAP NetWeaver BW on SAP HANA. The loading of data into a DSO provides performance improvement for the loading portion of the extraction, transformation, and loading function.
  • When activating the DSO to consolidate the changed data, the activation is processed within SAP HANA instead of the ABAP application tier, improving performance due to the activation taking place in memory and the activation being parallelized as part of the MPP computing architecture of SAP HANA.
  • When loading data from the in-memory optimized DSO to the in-memory optimized InfoCube, there are performance improvements when extracting from the DSO (as the data is being read from memory), as well as loading into the InfoCube. This is because dimension IDs are no longer required to be generated and the data is being loaded into an in-memory persistent column table.

We need not make any changes to the existing schedules; rather, a few migration steps turn our existing objects into "in-memory" objects. Thus we can achieve significant reductions in loading times, which helps meet the SLA criteria for loading.

Additionally, because SAP HANA’s in-memory architecture does not require indexing and aggregate tables to speed query response, this portion of the load time is reduced. Also, in the past, once the loading was complete, users had to roll up the data into aggregates or SAP Business Warehouse Accelerator (BWA) to achieve good reporting performance. With SAP NetWeaver BW on SAP HANA, this portion of rolling up data into SAP BWA is eliminated, further reducing the data load times.

Simplified Maintenance

Maintenance is simplified with SAP HANA because there is no special effort for indexing or database statistics maintenance to guarantee fast reporting. All the time spent building aggregates (for companies that didn’t have SAP BWA) is also not required, so there is simplification of maintenance activities as well.

Columnar-based storage with high compression rates reduces the database size of SAP NetWeaver BW.

With SAP HANA, SAP BWA is no longer required, eliminating the need for IT departments to maintain a separate BWA. The SAP NetWeaver BW application server remains separate from SAP HANA, but the role of the application server is diminished because data-intensive logic is pushed from the server to SAP HANA. Therefore, users will likely need fewer application servers as part of their overall sizing.

SAP has also simplified administration via one set of admin tools (e.g., for data recovery and high availability). Finally, companies need to consider their overall landscape topology. Within SAP NetWeaver BW environments, the landscape setup usually involves a central database server and numerous application servers to distribute workload. This workload is specifically for user queries and data loading. With the reduction in data load times and the acceleration of reporting queries, the overall workload on the system is reduced (i.e., the time that each operation takes), which leads to less concurrency and the ability to scale down use of some of the application servers.

Migrating to SAP HANA

SAP has standard migration tools available for the migration process, enabling customers to migrate from their existing environments.

As part of the OS/DB migration process, SAP HANA generates specific tables as column tables instead of row tables for objects that are read intensive, and for which there is a large data compression benefit from storing the data in a columnar format. Once the OS/DB migration is complete, SAP NetWeaver BW InfoCubes and DSOs remain unchanged. It is up to the user to convert these objects to in-memory optimized versions.

The conversion process can be done one by one or en masse, and is available by running ABAP program RSMIGRHANADB.


From April 10 to April 11, my team (Anne, Juergen, and myself) hosted an InnoJam in Boston. It was a really great event, but the data provided by the City of Boston wasn't exactly in the best shape, so it took a lot of effort (with help from the SAP gurus on site) to sanitize the data.

 

At some point, I was asked to use my regular expression skills (everybody knows I'm crazy about RegEx) to sanitize some data that came in a really weird format inside a field... something like this (I can't post the real data for obvious reasons):

 

Type: [This is a test] Area: [My Area] Description: [This data is not right]

 

What I needed to do was basically take each record and generate a new table with those 3 fields, ending up with something like this...

 

Type            Area      Description
This is a test  My Area   This data is not right

 

I began to think about which language would be best for the job... I thought of ABAP, Python... and then of course... R... so I chose R.

 

The problems that immediately arose were simple: how to pull data from HANA and how to send it back. Also, even though I consider myself a RegEx hero, R uses a not-very-standard RegEx scheme. I thought of downloading the data from HANA as a .CSV, cleaning it up, and then uploading the .CSV back to HANA... but then I decided the extra work wasn't worth it... so then... my good old friend RODBC came into the show. Even though it's not supported by SAP, I decided that for this particular case it would be just fine... I could read the data back and forth and have everything back in HANA very quickly.

 

Let's create a table called BAD_DATA... and just create 10 dummy records (I know... I'm lazy)...

 

HANA_Sanitizer_01.png

 

So this is the script:

 

R_Sanitizer.R

library("RODBC")

# Connect to HANA through the ODBC DSN and pull the raw records
ch <- odbcConnect("HANA_SERVER", uid = "P075400", pwd = "***")
query <- "select case_description from P075400.BAD_DATA"
CRM_TAB <- sqlQuery(ch, query)

SR_Type <- c()
SR_Area <- c()
Description <- c()

for (i in 1:nrow(CRM_TAB)) {
  # Match anything enclosed in square brackets
  mypattern <- '\\[([^\\[]*)\\]'
  datalines <- grep(mypattern, CRM_TAB$CASE_DESCRIPTION[i], value = T)
  getexpr <- function(s, g) substring(s, g, g + attr(g, 'match.length') - 1)
  g_list <- gregexpr(mypattern, datalines)
  matches <- mapply(getexpr, datalines, g_list)
  # Strip the brackets, keeping only the captured text
  result <- gsub(mypattern, '\\1', matches)
  var <- 0
  for (j in 1:length(result)) {
    var <- var + 1
    if (var == 4) {
      break
    }
    # First bracketed value -> Type, second -> Area, third -> Description
    if (var == 1) {
      SR_Type <- append(SR_Type, result[j])
    }
    if (var == 2) {
      SR_Area <- append(SR_Area, result[j])
    }
    if (var == 3) {
      Description <- append(Description, result[j])
    }
  }
  # Pad with NA when a record had fewer than three bracketed values
  if (length(SR_Type) > length(Description)) {
    Description <- append(Description, NA)
  }
  if (length(SR_Type) > length(SR_Area)) {
    SR_Area <- append(SR_Area, NA)
  }
}

# Build the clean table and write it back to HANA
GOOD_DATA <- data.frame(SR_Type, SR_Area, Description, stringsAsFactors = FALSE)
sqlDrop(ch, "GOOD_DATA", errors = FALSE)
sqlSave(ch, GOOD_DATA, rownames = "id")
odbcClose(ch)

 

So basically, what we're doing is grabbing all the information inside the brackets, then passing it to vectors to finally create a data.frame and send it back to HANA. If you wonder why I'm comparing the lengths of the different vectors to add NA values, it's very simple...we can have something like this...

 

Type: [This is a test] Area: [My Area] Description: [This data is not right |

 

The last bracket...it's not a bracket! It's a pipe, so the RegEx is going to fail, and that will leave the vector one element short, which would be messy...if that happens we can add an NA and at least have a value there...

 

So...when we run our program...a new table called GOOD_DATA is going to be created with all the data cleaned and sanitized.

 

HANA_Sanitizer_02.PNG

Nice, right?

 

See you in my next blog!

Introduction

In the first edition of this HANA Developer's Journey I barely scratched the surface of some of the ways a developer might begin the transition into the HANA world. Today I want to describe a scenario I've been studying quite a lot in the past few days: accessing HANA from ABAP in the current state. By this I mean what can be built today. We all know that SAP has some exciting plans for ABAP-specific functionality on top of HANA, but what everyone might not know is how much can be done today when HANA runs as a secondary database for your current ABAP-based systems. This is exactly how SAP is building the current HANA Accelerators, so it's worth taking a little time to study how these are built and which development options within the ABAP environment support this scenario.

 

HANA as a Secondary Database

The scenario I'm describing is one that is quite common right now for HANA implementations.  You install HANA as a secondary database instead of a replacement for your current database.  You then use replication to move a copy of the data to the HANA system. Your ABAP applications can then be accelerated by reading data from the HANA copy instead of the local database. Throughout the rest of this blog I want to discuss the technical options for how you can perform that accelerated read.

Replication.png

 

ABAP Secondary Database Connection

ABAP has long had the ability to make a secondary database connection. This allows ABAP programs to access a database system other than the local database; the secondary connection can even be to a completely different DBMS vendor. This functionality has been extended to support SAP HANA for all NetWeaver release levels from 7.00 onward. Service Notes 1517236 (SAP internal) and 1597627 (for everyone) list the preconditions and technical steps for connecting to HANA systems and should always be the master guide for these preconditions; however, I will summarize the current state at the time of publication of this blog.

 

Preconditions

  • SAP HANA Client is installed on each ABAP Application Server. ABAP Application Server Operating System must support the HANA Client (check Platform Availability Matrix for supported operating systems).
  • SAP HANA DBSL is installed (this is the Database specific library which is part of the ABAP Kernel)
  • The SAP HANA DBSL is only available for the ABAP Kernel 7.20
    • Kernel 7.20 is already the kernel for NetWeaver 7.02, 7.03, 7.20, 7.30 and 7.31
    • Kernel 7.20 is backward compatible and can also be applied to NetWeaver 7.00, 7.01, 7.10, and 7.11
  • Your ABAP system must be Unicode or Single Code Page 1100 (Latin-1/ISO-8859-1). See Service Note 1700052 for non-Unicode support instructions.

 

 

Next, your ABAP system must be configured to connect to this alternative database. You have one central location where you maintain the database connection string, username, and password. Your applications then only need to specify the configuration key for the database, making the connection information application independent.

 

This configuration can be done via table maintenance (Transaction SM30) for table DBCON. From the configuration screen you supply the DBMS type (HDB for HANA), the user name and password you want to use for all connections and the connection string. Be sure to include the port number for HANA systems. It should be 3<Instance Number>15. So if your HANA Database was instance 01, the port would be 30115.

DBCON.png

 

DBCON can also be maintained via transaction DBACOCKPIT. Ultimately you end up with the same entry information as DBCON, but you get a little more information (such as the default Schema) and you can test the connection information from here.

DBACOCKPIT.png

 

Secondary Database Connection Via Open SQL

The easiest solution for performing SQL operations from ABAP to your secondary database connection is to use the same Open SQL statements which ABAP developers are already familiar with. If you supply the additional syntax of CONNECTION (dbcon), you can force the Open SQL statement to be performed against the alternative database connection. 

 

For instance, let’s take a simple Select and perform it against our HANA database:

 

  SELECT * FROM sflight CONNECTION ('AB1')
    INTO TABLE lt_sflight
   WHERE carrid = 'LH'.

 

The advantage of this approach is its simplicity. With one minor addition to existing SQL statements, you can redirect your operation to HANA. The downside is that the table or view you are accessing must exist in the ABAP Data Dictionary. That isn't a huge problem for this Accelerator scenario, considering the data all resides in the local ABAP DBMS and gets replicated to HANA; in this situation we will always have local copies of the tables in the ABAP Data Dictionary. It does mean, however, that you can't access HANA-specific artifacts like Analytic Views or database procedures. You also can't access any tables which use HANA as their own/primary persistence.
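
To make that limitation concrete: an Analytic View is exposed on the HANA side as a column view in the _SYS_BIC schema, and a query against it has no Data Dictionary counterpart, so Open SQL cannot express it. The view name below is purely hypothetical:

-- "demo/AV_FLIGHT" is a hypothetical Analytic View; only Native SQL can reach it
SELECT "CARRID", SUM("PAYMENTSUM")
  FROM "_SYS_BIC"."demo/AV_FLIGHT"
 GROUP BY "CARRID";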

 

Secondary Database Connection Via Native SQL

ABAP also has the ability to utilize Native SQL. In this situation you write your database-specific SQL statements. This allows you to access tables and other artifacts which only exist in the underlying database. There is also syntax in Native SQL to allow you to call database procedures. If we take the example from above, we can rewrite it using Native SQL:

 

EXEC SQL.
    connect to 'AB1' as 'AB1'
  ENDEXEC.
  EXEC SQL.
    open dbcur for select * from sflight where mandt = :sy-mandt and carrid = 'LH'
  ENDEXEC.
  DO.
    EXEC SQL.
      fetch next dbcur into :ls_sflight
    ENDEXEC.
    IF sy-subrc NE 0.
      EXIT.
    ELSE.
      APPEND ls_sflight TO lt_sflight.
    ENDIF.
  ENDDO.
  EXEC SQL.
    close dbcur
  ENDEXEC.
  EXEC SQL.
    disconnect 'AB1'
  ENDEXEC.

 

It's certainly more code than the Open SQL option, and a little less elegant because we are working with database cursors to bring back an array of data. However, the upside is access to features we wouldn't have otherwise. For example, I can insert data into a HANA table and use a HANA database sequence for the number range, or built-in database functions like now().

 

    EXEC SQL.
      insert into "REALREAL"."realreal.db/ORDER_HEADER" 
       values("REALREAL"."realreal.db/ORDER_SEQ".NEXTVAL,
                   :lv_date,:lv_buyer,:lv_processor,:lv_amount,now() )
    ENDEXEC.
    EXEC SQL.
      insert into "REALREAL"."realreal.db/ORDER_ITEM" values((select max(ORDER_KEY) 
        from "REALREAL"."realreal.db/ORDER_HEADER"),0,:lv_product,:lv_quantity,:lv_amount)
    ENDEXEC.
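
As a side note, the database sequence used above is an ordinary HANA object. A hedged sketch of how it might have been created, reusing the names from the insert statements (the actual definition isn't shown in this blog):

-- Hypothetical definition of the sequence referenced by the inserts above
CREATE SEQUENCE "REALREAL"."realreal.db/ORDER_SEQ" START WITH 1 INCREMENT BY 1;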

 

The other disadvantage to Native SQL via EXEC SQL is that there are little to no syntax checks on the SQL statements which you create. Errors aren't caught until runtime and can lead to short dumps if the exceptions aren't properly handled.  This makes testing absolutely essential.

 

Secondary Database Connection via Native SQL - ADBC

There is a third option that provides the benefits of the Native SQL connection via EXEC SQL, but also improves on some of the limitations.  This is the concept of ADBC - ABAP Database Connectivity.  Basically it is a series of classes (CL_SQL*) which simplify and abstract the EXEC SQL blocks. For example we could once again rewrite our SELECT * FROM SFLIGHT example:

 

****Create the SQL Connection and pass in the DBCON ID to state which Database Connection will be used
  DATA lr_sql TYPE REF TO cl_sql_statement.
  CREATE OBJECT lr_sql
    EXPORTING
      con_ref = cl_sql_connection=>get_connection( 'AB1' ).
****Execute a query, passing in the query string and receiving a result set object
  DATA lr_result TYPE REF TO cl_sql_result_set.
  lr_result = lr_sql->execute_query(
    |SELECT * FROM SFLIGHT WHERE MANDT = { sy-mandt } AND CARRID = 'LH'| ).
****All data (parameters in, results sets back) is done via data references
  DATA lr_sflight TYPE REF TO data.
  GET REFERENCE OF lt_sflight INTO lr_sflight.
****Get the result data set back into our ABAP internal table
  lr_result->set_param_table( lr_sflight ).
  lr_result->next_package( ).
  lr_result->close( ).

 

Here we at least remove the step-wise processing of the Database Cursor and instead read an entire package of data back into our internal table at once.  By default the initial package size will return all resulting records, but you can also specify any package size you wish thereby tuning processing for large return result sets.  Most importantly for HANA situations, however, is that ADBC also lets you access non-Data Dictionary artifacts including HANA Stored Procedures.  Given the advantages of ADBC over EXEC SQL, it is SAP's recommendation that you always try to use the ADBC class based interfaces.

 

Closing

This is really just the beginning of what you could do with this Accelerator approach to ABAP integration with SAP HANA. I've used very simplistic SQL statements in my examples on purpose so that I could instead focus on the details of how the technical integration works. However, the real power comes when you execute more powerful statements (SELECT SUM ... GROUP BY), access HANA-specific artifacts (like OLAP views upon OLTP tables), or call database procedures. These are all topics which I will explore more in future editions of this blog.

Definition of Predictive Analytics

Predictive analytics is an area of statistical analysis that deals with extracting information from data and using it to predict future trends and behavior patterns. The core of predictive analytics relies on capturing relationships between explanatory variables and predicted variables from past occurrences, and exploiting them to predict future outcomes. It is important to note, however, that the accuracy and usability of the results depend greatly on the level of data analysis and the quality of the assumptions.

Introduction of R statistical programming language

R is a programming language and software environment for statistical computing and graphics. The R language is widely used among statisticians for developing statistical software and data analysis. R is an open source project initiated by academics in New Zealand in the mid-1990s. It is a statistical language without a graphical user interface. A number of vendors, including SAS, have been adding support for R.

With the announcement of BusinessObjects Predictive Analysis on April 3, 2012, SAP joins the likes of SAS Institute, IBM's SPSS division, and Oracle, as well as a number of startups such as Revolution Analytics, in building predictive analytics products around support for R.

Information Builders, Tibco Spotfire, IBM SPSS, SAS, and, most recently, Oracle have all embraced R, not to mention startups like Alpine Data Labs and Revolution Analytics that have built entire companies around the language.

Other Predictive Analytics products

Statisticians have traditionally worked on offline data files, whether SAS datasets or flat files, to analyze their data and build models.

Oracle Advanced Analytics bundles Oracle R Enterprise, providing support for R models to be processed inside the Oracle database.

SAS can push processing into Teradata, and it now also supports in-database processing with Netezza, EMC Greenplum, DB2, and AsterData.

With the new SAP BusinessObjects Predictive Analysis, processing can be pushed into HANA, the vendor's new in-memory appliance, which includes its own statistical function library.

Other predictive analysis products offer advanced data-visualization capabilities, including scatter plots, cluster analyses, parallel-coordinates charts, and other visualization options that have emerged in the big-data era.

Overview of SAP Business Objects Predictive Analytics

SAP's overall goal is to give the open-source R software an easier-to-use graphical user interface designed for SAP and SAP BusinessObjects customers, along with advanced visualizations and links to the BI suite. SAP BusinessObjects Predictive Analysis is also designed to work with HANA in-memory technology, tapping into the same data sources and extending visualization and predictive capabilities into decision-support scenarios.

With SAP Predictive Analysis there is nice integration with the BI platform, in that the tool can access a universe in either the version 3 format (.UNV) or the new version 4 format (.UNX). For example, SAP Smart Meter Analytics uses clustering and segmentation algorithms on energy consumption data.

Release strategy for SAP Business Objects Predictive Analytics

SAP BusinessObjects Predictive Analysis is currently in ramp-up, SAP's controlled-release approach in which production software is made available to a limited number of early customers. SAP has begun rolling out a series of specialized analytic applications that run on top of HANA, but also intends to have HANA support transactional workloads such as those generated by SAP's core Business Suite ERP system.

At the end of the day, what we will see is a set of predictive algorithms, plus the ability to use algorithms from the open-source R language for statistical analysis. The product is expected to be generally available later this year.

Target Audience

The target audience for BusinessObjects Predictive Analysis is line-of-business users and business analysts who want to slice and dice data and, using advanced visualization techniques, uncover business opportunities, insights, and risks. Big Data and HANA are the two areas on which SAP BusinessObjects Predictive Analysis will focus.

Useful Links

SAP Predictive Analytics Consulting

SAP - Predicting the Future Out of a Pile of Data

R (programming language)

SAP Jumps On Predictive Analytics Bandwagon

Five Things You Need to Know About SAP’s Move in Predictive Analytics

SAP Goes After SAS, IBM With Predictive Analytics Software

Oracle Set to Update Analytics Strategy, Go After SAP Customers

Conclusion

It's too early to say if this will have any impact on either SAS's or Oracle's share in this space. However, it will certainly improve the capabilities of SAP's analytic applications and make SAP a natural addition to the short list of any customer new to predictive analysis.


 



This blog provides insight into how SAP NetWeaver BW 7.3 is going to benefit from SAP's in-memory technology, SAP HANA. I have tried to collate all the available information into this one blog to help members get an overview of SAP NW BW 7.3 on SAP HANA.

I have briefly explained the new additions in SAP NetWeaver BW 7.3 and what SAP HANA is. Then we take a look at how BW 7.3 benefits from using SAP HANA. The blog also briefly highlights the various structural changes that BW objects undergo when SAP HANA is used as the underlying database.


SAP Netweaver BW 7.3 Features

 

SAP BW 7.3 is the latest version of SAP BW. With its enhanced modeling capabilities, development efforts and maintenance overheads can be reduced considerably. It comes with a lot of features:

  • Allows graphical design of data flows, giving a visual interpretation of how the flow will look when fully developed.
  • Wizard-based data flow modeling and generation of all related data flow objects.
  • Existing data flows can be copied with the Data Flow Copy tool to create similar data models.
  • A workbench-like UI to migrate data flows.
  • Introduces the HybridProvider, which combines real-time transaction data with historical data.
  • Introduces a new type of modeling object, the Semantically Partitioned Object (SPO).

 

SAP HANA Overview        

SAP HANA is a flexible, multi-purpose, data-source-agnostic in-memory appliance that combines SAP software components optimized on dedicated hardware. It includes a number of integrated SAP software components, including the SAP in-memory computing engine, real-time replication services, data modeling, and data services. SAP HANA enables accelerated BI scenarios on any data source; better operational planning, simulation, and forecasting; fast analysis and better decision making on accelerated SAP ERP transactional data; and better storage, search, and ad-hoc analysis of very large data volumes.

  • SAP HANA has the capability to analyze information in real-time at unprecedented speeds on large volumes of non-aggregated data.
  • SAP HANA can create flexible analytic models based on real-time and historic business data.
  • SAP HANA minimizes data duplication.

 

SAP BW 7.3 on SAP HANA

Until now, SAP HANA ran side by side with SAP NetWeaver BW. In November 2011, however, SAP launched SAP NetWeaver BW 7.3x running on HANA as the underlying in-memory database platform. As of version 7.30 SP5 it is possible to run SAP BW with SAP HANA as the database platform. This enables us to leverage the in-memory capabilities of HANA and create SAP-HANA-optimized BW objects.


SAP BW 7.3 on SAP HANA:

  • Provides an integrated engine for all data management and in-memory processing of analytical capabilities.
  • Allows the database and BWA to merge into one instance. BW on HANA delivers BWA functionality for BW objects locally; there is no need for a separate BWA.
  • Simple administration of the database and a single set of administration tools.
  • Improved load performance for DSOs.
  • Excellent query performance.
  • Accelerated in-memory planning functions.
  • Column-based storage, so the compression rate is higher and less data has to be materialized (a quick way to inspect this is sketched below).
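
As a rough way to see those compression effects, the column-store footprint can be inspected through the M_CS_TABLES monitoring view. This is a hedged sketch; the view and column names are as I understand them, so verify them against your revision's documentation:

-- SAPSR3 is a placeholder schema name
SELECT TABLE_NAME, RECORD_COUNT, MEMORY_SIZE_IN_TOTAL
  FROM M_CS_TABLES
 WHERE SCHEMA_NAME = 'SAPSR3'
 ORDER BY MEMORY_SIZE_IN_TOTAL DESC;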

 

Difference between BW 7.3 running on HANA and running on any other database:

  • On any database: supports standard DataStore Objects. On HANA: supports in-memory optimized DataStore Objects.
  • On any database: includes both a database server and SAP NetWeaver BWA. On HANA: includes just the SAP HANA in-memory platform.
  • On any database: supports standard InfoCubes. On HANA: supports in-memory optimized InfoCubes.
  • On any database: supports BW Integrated Planning. On HANA: supports the in-memory planning engine.
  • On any database: HANA data marts run side-by-side with BW. On HANA: objects are created in HANA studio, with BW staging from HANA.

 

SAP BW 7.3 powered by SAP HANA

To migrate to SAP HANA as the database, a pure DB conversion is possible; no separate implementation project is required.

Once the DB conversion has been completed, InfoCubes and DSOs need to be converted into the new HANA-optimized object types to make optimal use of the new in-memory technology. Objects can be converted on an object-by-object basis.

 

The diagram below depicts how the system architecture will look after the migration.

Pic 1.jpg

After the DB conversion, BW objects like DSOs and InfoCubes can also be converted to leverage the in-memory capabilities. These BW objects are then referred to as in-memory optimized DSOs and in-memory optimized InfoCubes respectively.

 

In-Memory optimized DSO

DataStore Objects form an integral part of a data model in SAP BW. A DSO is usually utilized to store information at a detailed level. A DSO is also useful if we want to extract delta information from DataSources. With huge amounts of data, the activation process and reporting on a DSO can take a lot of time, affecting overall performance.

In the current architecture the activation process calculates changes for each record, creating load on the database. The delta calculations are performed on the application server itself, and a lot of lookups/roundtrips happen between the application server and the database to calculate the delta information.

Pic 2.jpg

With an in-memory optimized DSO:

  • Delta calculation is completely integrated into HANA.
  • In-memory data structures are used, which allow faster access.
  • As the calculations are done in HANA itself, there are no round trips to the application server.
  • Storage of redundant data is avoided.
  • The existing data flow of the DSO remains unchanged; the DSO definition also remains unchanged after migration.
  • To activate in-memory capabilities in a standard DSO, a new setting "In-Memory Optimized" is available; when checked and activated, the standard DSO is converted to an in-memory optimized DSO.
  • A built-in tool is provided to convert existing standard DSOs to in-memory optimized DSOs.

 

In-Memory optimized Infocube

InfoCubes also form an integral part of a data model in SAP BW. They describe a self-contained dataset. In the current architecture an InfoCube is a set of relational tables arranged according to a star schema: a big fact table surrounded by dimension tables. The dimension tables are then linked to master data tables through SIDs.

Pic 3.jpg

With in-memory optimized InfoCubes:

  • In-memory optimized InfoCubes are flat structures without dimension tables or the E table.
  • The data modeling process is simplified, as the dimensions are not physically present.
  • Data loads to the cube are faster, as there are no dimension IDs.
  • To activate in-memory capabilities in a standard InfoCube, an option is available in the InfoCube properties; when selected and activated, the standard InfoCube is converted to an in-memory optimized InfoCube.
  • A built-in tool is provided to convert existing standard InfoCubes to in-memory optimized InfoCubes.

 

In-Memory Planning:

On an SAP BW server running on any other database, traditional planning runs planning functions on the application server. With in-memory planning, the planning functions are executed in SAP HANA.

Pic 4.jpg


  • This provides a performance boost for planning capabilities like aggregation, disaggregation, conversion, revaluation.
  • Performance boost for plan/actual analysis.
  • No changes required to the planning models

 

Query Performance on In-Memory:

With SAP HANA as the underlying database the query performance also improves significantly.

 

Query performance on InfoCube

  • Indexes on InfoCubes and master data are no longer required.
  • In-Memory based calculation engine.

Query performance on DataStore Objects

  • Acceleration via In-Memory Column storage.
  • Additional acceleration via Analytical views on top of DSO

 

Pic 5.jpg

Conclusion:

With SAP HANA as underlying database, SAP BW 7.3 system can achieve:

  • Excellent query performance.
  • Accelerated In-Memory planning capabilities.
  • Performance boost for ETL processes.
  • DB and BWA merging in one instance.
  • Simplified administration via one set of admin tools.
  • Column-based storage with high compression rates and significantly less data to be materialized.
  • Simplified data modeling and reduced materialized layers.
  • Integrated and embedded flexibility for Data Marts

 


Adeel Hashmi

SQLQuery Thoughts

Posted by Adeel Hashmi Apr 9, 2012

Sitting at the airport again, wondering more about the latest SQLScript problem than about one of my troublesome projects. Now that the solution is there for the complex inner join query, it won't work too well with HANA's column store; there are plenty of dos and don'ts for this. So the research is going on into making some stored procedures. I haven't done these procedures in ages, so I'm struggling to find some clues. BO is not connecting properly to HANA, and the best part is that the error is not reproducing for the Basis team. I've read plenty of PDFs on SQLScript. After reading the SQLScript developer's guide on service.sap.com/hana, the three different HANA views make sense. To bring the business logic closer to the database engine, SAP is rolling out SQLScript big time. Most of my ABAP team members are shocked they have to jump so deep into another version of SQL. That being said, I like all the flexibility SQLScript provides over regular SQL.

 

 

 

Till next time!

The biggest, best, and most inquisitive

The mission? To research, teach, and heal. Charité Berlin, the biggest university hospital in Europe, provides 150,000 inpatient and 600,000 outpatient treatments per year. The hospital’s 3,800 doctors and scientists are committed to the highest levels of healthcare and research – and the organization is equally committed to providing the accurate, timely reports and analysis required for success.

 

Charité already has a mature analytics program. This enables them to think creatively about how they use patient data, medical records, and study results within their business. Researchers wanted to look at millions of data points and ask questions in a flexible reporting environment – and they wanted to make their in-house analytics systems as fast and easy to use as a Google search. To make this possible, the hospital invested in SAP in-memory technology designed to harness the big data associated with medical records. Already, more than 600 users are taking advantage of the technology.

 

Getting creative within budget constraints

When the SAP HANA platform was first introduced at Charité, the hospital held workshops to brainstorm use cases, focusing on several key questions: What is possible, and where might the university make improvements? Where could the technology have the biggest impact on patient care and healthcare research? Can we use it to look for trends in patient cases?

 

For its first project with SAP HANA, the team decided to focus on the hospital’s cancer database and its use in selecting patients for clinical trials. A typical clinical trial involves a research partnership with a commercial sponsor such as a medical device or pharmaceutical company; the hospital has only a small window of opportunity in which to get the study, participants, and funding in place. The faster Charité can identify suitable study participants, the greater its chances of landing the study and conducting the research.

 

Rapid identification of clinical trial candidates

Charité now uses the SAP HANA Oncolyzer to analyze data merged from its cancer and medical admin databases to find the best candidates for each new trial. The Oncolyzer searches and examines information such as tumor types, gender, age, risk factors, treatments, and diagnoses – to find the best candidates based on the study criteria. In the future, when DNA is added to the data set, the Oncolyzer will analyze up to 500,000 data points per patient. Both structured and unstructured data is analyzed, accelerating the identification process greatly – and giving Charité a competitive advantage over other prospective research partners.

 

Creatively applying HANA to solve intriguing business problems

In an industry other than healthcare? SAP HANA is equally applicable to thousands of other business situations. For example, an oil company seeking to evaluate the potential of each well in its inventory can quickly analyze data that includes geography, flow history, current output, producer, supporting firms, and more, then match individual wells against particular objectives. Need to identify a well with certain characteristics or generate insights into deployed assets and maintenance cycles? No problem. Or suppose your company manages a large and thriving port. Need to optimize harbor management? An in-memory solution can analyze incoming ship data, meter information, average terminal time, and more. The applications are endless. Whatever the industry, whatever the type of information, SAP HANA can pull together structured and unstructured data and rapidly generate additional value out of every asset.

How do I get started?

What can SAP HANA do for you? Start by gathering your organization’s best minds to think about common industry problems. Think of assets in new ways, and ask new questions. What would you want to know if you could? SAP HANA combined with creative minds can take your company where it hasn’t gone before.

Need more inspiration? For ideas on how different industries have used SAP HANA and related technologies, check out the Experience SAP HANA site. Get inspired by insights from experienced practitioners – or get involved and share your experience.

Learn more

Business Analytics Services from SAP can help you explore the value of SAP HANA for your organization or industry. Now bring SAP HANA’s revolutionary in-memory technology together with your most thoughtful and innovative minds – and watch the insights, performance, and productivity thrive.

For more information on SAP HANA, please visit us online or watch a video on Charite.

 

About the author

What makes Ralph Richter run: Discovering new places.

What Makes me Run.jpg

 

Coming from the healthcare industry, I joined SAP in 2002 and currently head the SAP EMEA Business Intelligence Competency Center. I have been working with Charité since 2009 on in-memory technology, but am intimately familiar with hospital settings from my previous work as a controller in a university hospital. As a registered nurse (RN) with a diploma in hospital management, I am able to support this great work at Charité from a uniquely blended healthcare and technical perspective. Today I am doing strategy consulting in the BI area for oil and gas companies, hospitals, and many others – helping customers “discover new places” with in-memory computing. As a nurse, I learned to treat and take care of patients. At SAP, I’ve learned how to solve our customers’ business problems, bring value to their operations, and make them happy. This is what makes me run.


Hello Folks,

 

We all know that SAP (NYSE:SAP) is expanding its business intelligence application offerings with new predictive analysis software that helps businesses tap into big data.


The SAP BusinessObjects Predictive Analysis package includes predictive analysis algorithms, and development and visualization tools for building predictive data models used by analytical applications. SAP is making the software available to select customers now with general availability expected in September.

 

I thank the SAP HANA team for the PAL Library reference guide, which is self-explanatory and a very good place to start.

For PAL guide, refer to https://www.experiencesaphana.com/docs/DOC-1402

 

If you are using the HANA sandbox @Cloudshare, you can access the PAL libraries. If you have problems accessing them and receive the error message "No ScriptServer available", you need to implement Note 1650957 as a workaround, or you can ask inmemorydevcenter@sap.com to enable the script server on your Cloudshare account.

Coming to my example, we will discuss a sample scenario in which I use "ABC" analysis to classify the data.

For more information, refer to http://en.wikipedia.org/wiki/ABC_analysis

 

Now let us get back to our example:

The data set we are going to examine contains 150 instances, each referring to a type of plant.

  1. Number of Instances: 150
  2. Number of Attributes: 4 numeric, predictive attributes and the class
  3. Attribute Information:
  • Sepal length in cm
  • Sepal width in cm
  • Petal length in cm
  • Petal width in cm

  4. Missing Attribute Values: None

 

We will now try to distribute the data 33.3% to each of 3 classes.

 

Now I have created a table TESTDT_TAB as shown below:

 

1.png
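
If you can't make out the screenshot, the table can be reproduced with a minimal sketch like the one below. The column names and types are taken from the DATA_T table type defined in the script further down; the sample row just uses plausible values for the four attributes listed above:

CREATE COLUMN TABLE TESTDT_TAB ("PLANT" VARCHAR(100), "SepalLengthCM" DOUBLE, "SepalWidthCM" DOUBLE, "PetalLengthCM" DOUBLE, "PetalWidthCM" DOUBLE);
-- Item names must be unique strings (see the prerequisites below)
INSERT INTO TESTDT_TAB VALUES ('Iris-setosa-001', 5.1, 3.5, 1.4, 0.2);
-- ...and so on for the remaining 149 instances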

As per the reference guide given by SAP, below is the required information for using PAL libraries in SAP HANA.


Prerequisites:

  1. Input data cannot contain null values.
  2. The item names in the input table must be of string data type and be unique.

 

Interface (abcAnalysis):

Function: pal::abcAnalysis

This function performs the ABC analysis algorithm.

 

Function Signature:

pal::abcAnalysis ( Table<...> target, Table<...> args, Table<...> result)

 

Input Table:

2.png

Parameter Table:

3.png

Output Table:

4.png

Now, based on "SepalLengthCM", we will try to classify the data into 3 classes.

 

As we already have the PAL libraries installed, we now need to write a procedure that uses them and call it to classify the data. Execute the following SQL script:

 

SQL Script:

-- Table types for the input data, the control parameters, and the result
DROP TYPE DATA_T;
CREATE TYPE DATA_T AS TABLE("PLANT" VARCHAR(100),"SepalLengthCM" DOUBLE,"SepalWidthCM" DOUBLE,"PetalLengthCM" DOUBLE,"PetalWidthCM" DOUBLE);
DROP TYPE CONTROL_T;
CREATE TYPE CONTROL_T AS TABLE("Name" VARCHAR(100), "intArgs" INT, "doubleArgs" DOUBLE,"strArgs" VARCHAR(100));
DROP TYPE RESULT_T;
CREATE TYPE RESULT_T AS TABLE("ABC" VARCHAR(10),"ITEM" VARCHAR(100));

-- Wrapper procedure that hands the three tables over to pal::abcAnalysis
DROP PROCEDURE palAbcAnalysis;
CREATE PROCEDURE palAbcAnalysis( IN target DATA_T, IN control CONTROL_T, OUT results RESULT_T )
LANGUAGE LLANG
AS
BEGIN
export Void main(Table<String "PLANT", Double "SepalLengthCM",Double "SepalWidthCM",Double "PetalLengthCM",Double "PetalWidthCM"> "target" targetTab,
Table<String "Name", Int32 "intArgs", Double "doubleArgs",String "strArgs"> "control" controlTab,
Table<String "ABC", String "ITEM"> "results" & resultsTab) {
pal::abcAnalysis(targetTab, controlTab, resultsTab);
}
END;

-- Control table: which columns to use, how many threads, and the A/B/C percentages
DROP TABLE #CONTROL_TBL;
CREATE LOCAL TEMPORARY COLUMN TABLE #CONTROL_TBL ("Name" VARCHAR(100), "intArgs" INT, "doubleArgs" DOUBLE,"strArgs" VARCHAR(100));
INSERT INTO #CONTROL_TBL VALUES ('START_COLUMN',0,null,null);
INSERT INTO #CONTROL_TBL VALUES ('END_COLUMN',1,null,null);
INSERT INTO #CONTROL_TBL VALUES ('THREAD_NUMBER',2,null,null);
INSERT INTO #CONTROL_TBL VALUES ('PERCENT_A',null,0.33,null);
INSERT INTO #CONTROL_TBL VALUES ('PERCENT_B',null,0.33,null);
INSERT INTO #CONTROL_TBL VALUES ('PERCENT_C',null,0.33,null);

-- Result table, then run the analysis and display the classification
DROP TABLE RESULT_TBL;
CREATE COLUMN TABLE RESULT_TBL("ABC" VARCHAR(10),"ITEM" VARCHAR(100));

CALL palAbcAnalysis(TESTDT_TAB, "#CONTROL_TBL", RESULT_TBL) WITH OVERVIEW;
SELECT * FROM RESULT_TBL;

 

Execute the SQL script by pressing "F8".

Now RESULT_TBL is filled with the data, classified as per our requirements, as shown below:

5.png
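
Since the control table assigns 33% to each class, each of A, B, and C should end up with roughly 50 of the 150 items. A quick sanity check against the result table:

-- Count the items per class; expect roughly 50 in each of A, B, and C
SELECT "ABC", COUNT(*) AS ITEM_COUNT
  FROM RESULT_TBL
 GROUP BY "ABC"
 ORDER BY "ABC";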

In this example I took a data set of 150 records (a very small amount); it took SAP HANA approximately 203 ms to analyze and classify the data as shown above.

6.png

SAP plans to offer the Predictive Analysis tool set as a package combining the predictive analysis software licenses and HANA licenses. With this tool set, companies can spot fraudulent transactions and predict/forecast sales.

With high processing capability coupled with high performance, SAP HANA together with the Predictive Analysis tool set can raise market standards and help companies run better. So I am looking forward to the general availability of this tool.

 

Also read my blog on

SAP HANA: My Experiences with SAP Business Objects Predictive Analysis tool

http://scn.sap.com/community/in-memory-business-data-management/blog/2012/05/31/sap-hana-my-experiences-with-sap-business-objects-predictive-analysis-tool

 

Thanks for taking the time to read this blog. Do comment with your views.

March 9, 2012

 

The activation of SAP in-memory optimized DataStore objects (DSOs) may be controlled by an administrator to limit overall memory consumption and/or manage CPU resources to optimize system performance. SAP provides access to a set of system parameters, collectively called Memory Consumption Reduced (MCR), that allow fine tuning of the system during DSO activation.

 

By design, all DSOs are subject to the MCR's initial parameter settings ("_DEFAULT"), and as a result there is no need to modify these parameters unless standard DSO activations lead to errors due to lack of memory resources, and/or exceed time estimates because too few parallel processing threads are created to process the load. Fortunately, standard SAP functionality provides access to define specific system settings to achieve the desired performance.

 

Specific MCR settings can be made for a specific DSO object via the transaction code SM30 and the subsequent modification of table view V_RSODSOIMOSET.  Separate table entries may be created with unique system parameter settings that will be invoked for a specified DSO.  In the table below, the system delivered default “_DEFAULT” is listed along with specific entries for DSOs HSD_O01B, HSD_O01C, HSD_O01D, and HSD_O01E:

1.png

Additional entries may be added by entering the table maintenance mode and selecting the “New Entries” option:

2.png

When creating a new entry, the user will specify the technical name of the target DSO, the package size to be processed, and the number of packages to be processed in parallel.  A check box is provided to instruct the system not to  apply any MCR processing to the specified DSO. 

  3.png

Example of a completed parameter entry for DSO HSD_O01E specifying 200,000 records per package and 6 concurrent execution threads:

4.png

Test Sequence:

A series of tests was conducted to illustrate typical results for various MCR settings. These tests were conducted on a BW 7.3 SP07 system connected to SAP HANA appliance 1.0, with approximately 30 million records being processed:

   5.png

 

BW memory usage was determined by monitoring overall database memory consumption and time using transaction code STAD. Filter on the program "RSODSACT1" and (optionally) the user ID executing the data activation:

6.png

 

Test Procedure:

Table V_RSODSOIMOSET was modified for each test, changing the package size and/or number of package parameters.

The specified DSO was loaded with approximately 30 million records, then the data was activated. Database memory consumption and time were monitored using transaction code STAD, filtering on the program "RSODSACT1" and the user ID executing the data activation.

The number of activation processing streams is directly related to the "Number of Packages" parameter when using MCR. To view the generated process streams, enter the HANA system and select the top system identification node (the node in the Navigator system hierarchy directly above your CONTENT and CATALOG folders) by double-clicking it. This opens the system overview information screens. Click on the Performance tab and sort on the column "Application User". The ID of the user executing the activation process can now be monitored to view the generated process threads. Note: if you set the "Number of Packages" parameter to 10, you can expect to see 10+1 process threads in the Performance display; the extra thread provides a communication link back to BW. As each process thread completes, the data waiting for activation is eventually consumed, so the number of threads tends to decrease as the process executes.
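
If you prefer SQL over the Performance tab, the same threads can be watched through the M_SERVICE_THREADS monitoring view. This is a hedged sketch; the exact column set (in particular APPLICATION_USER_NAME) should be verified against your HANA revision:

-- 'BWREMOTE' is a placeholder for the user ID executing the activation
SELECT THREAD_ID, THREAD_TYPE, THREAD_STATE
  FROM M_SERVICE_THREADS
 WHERE APPLICATION_USER_NAME = 'BWREMOTE'
 ORDER BY THREAD_ID;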

  

 

Example Data:

Test Number:   DEFAULT

MCR Id:  _DEFAULT

Package size:  100,000 (system default)

Number of Packages:  8 (system default)

 

DSO loaded and waiting for data activation:

7.png

 

HANA Administration view prior to starting activation:

8.png

 

HANA Administration view after starting activation (note the 8 new ActivationQueuePackage threads generated):

9.jpg

 

Transaction code STAD in BW system to monitor memory consumption:

10.gif 

 

BW log for the DSO activation, used to determine overall activation time:

11.gif

 

Test “E” No MCR: 

A trial run was executed where MCR was turned off for the specified DSO:

  12.png

 

An out-of-memory condition was created on the BW (ABAP) side. The error appeared in the DSO activation log:

13.png

Error log text:

14.png

 

The STAD trace shows only the initialization of the activation process:

15.png

 

 

TEST F:

 

This test tried to push the limits of the HANA DB's processing capability by forcing very large package sizes. The net result was many additional HANA processing threads (which appear to be for master data lookups) that eventually overloaded the system; I counted over 1,100 process threads, which ultimately overwhelmed the system and created an out-of-memory condition on the HANA side:

16.png

Over 1,100 processing threads were created...resulting in an out-of-memory condition on the HANA side:

17.png

 

  

Test Summary/Conclusions:

  19.png

Conclusions:

  1. Test E (no MCR) illustrates the need for implementing MCR with large data loads to prevent BW overload.
  2. Comparing tests DEFAULT and A: decreasing package size increases activation time.
  3. Comparing tests DEFAULT and B: assigning fewer processors increases activation time.
  4. Comparing tests DEFAULT and D: increasing package size reduces activation time but may increase memory consumption.
  5. Test F (very large package size) shows there are limits to how much data the HANA system can process without exceeding memory constraints.

 

 

Additional material on this topic:

Note 1646723:  “BW on SAP HANA DB: SAP HANA-opt. DSO Activation Parameters”


All readers are encouraged to ask questions or express their own views on this topic.

Hi,

 

I got access to the SAP HANA sandbox.

 

Based on the online documents, I created tables and an Attribute view.

 

I want to create tables which can be used for Attribute, Analytic, and Calculation views.

 

Please advise me on the best approach to learn so that I can take the certification.

 

Thanks

Kanna

Hi,

 

How do I generate a BusinessObjects report using a HANA Analytic view from the HANA sandbox?

 

Does the HANA sandbox have a BusinessObjects designer?

 

Thanks

Kanna

Hello Folks,

In this document I would like to explain the "export" and "import" of system landscapes.

In our project we have different architectures for landscapes like quality, development, and production. This can be handled by adding multiple HANA systems, which connect to different server instances.

This feature helps you monitor the systems from a single HANA studio interface. In the example below, I have 2 systems named DCC and DCD, connecting to 2 different server instances. Now let us see how we export and import our landscapes. If you install a new HANA studio and want to get your existing landscapes into it, you can use this feature.

As shown above, you can see 2 systems named DCC and DCD.

 

Now go to “quick launch” and select “export” and choose “landscapes” as shown below:

1.jpg

Now press “Next”.

2.jpg

This will lead you to the below screen, where we need to select our destination as shown below:

3.jpg

Now the system generates "landscape.xml" as shown below:

4.jpg

Now let us check how to import the landscape; for this I am deleting the systems from my landscape as shown below:

6.jpg5.jpg

Now you can see my “navigator” which is empty as shown below:

7.jpg

Now let us try importing the previous landscape using the "import" feature, selecting "landscape" as shown below:

8.jpg

Now select the source where you have your “landscape.xml” file saved.

9.jpg

Now you can see the “import” is in progress as shown below:

10.jpg

But my navigator shows as below, because user names and passwords are not imported. So we have to provide our credentials to make the systems start again.

11.jpg

Now I am giving my credentials as shown below:

12.jpg

Now you can see that both the systems are running fine as shown below:

13.jpg

I hope you now understand the feature and its importance.

Thank you for reading my blog. Please do comment your views.
