
SAP HANA and In-Memory Computing


Introduction

Are you interested in getting certified as an SAP HANA database administrator? In this blog, I will explain how you can prepare for the SAP Certified Technology Associate - SAP HANA certification exam. This exam covers the traditional DBA topics like monitoring, operations, security, backups, etc.

 

The SAP Certified Technology Associate - SAP HANA certification is a prerequisite for the certification SAP Certified Technology Specialist - SAP HANA Installation, which you will need if you wish to perform SAP HANA installations as part of the Tailored Data Center Integration (TDI) program.

 

This blog is part of a series.

 

About the Certification

 

At the time of writing there are two editions of the SAP Certified Technology Associate - SAP HANA certification:

 

Typically you would go for the most recent exam: 141, i.e. the 1st edition 2014. There is not a lot of difference between the two exams; the distinction mainly concerns the topic of database migration using the Database Migration Option (DMO) of the Software Update Manager (SUM) tool, which is new in exam edition 141.

 

For the latest information, see the blog by Tim Breitwieser on SCN: SAP HANA Education: Course & Certification Program 2014. The SPS 08 edition can be expected at the end of this year.

 

Topic Areas

 

There are 80 questions divided over 10 topic areas. The cut score is 59%, which means that you need to answer at least 47 questions correctly. Below are the different topics and their relative weights.

 

10 questions or more (The Big Four)

 

  • Monitoring - Set-up and execute effective monitoring of SAP HANA using DBA Cockpit, SAP HANA Studio, and SAP Solution Manager.
  • Operations - Design and implement an administration and operation strategy for SAP HANA, e.g. transport management, patching and updating, etc.
  • Security and Authorization - Describe the authorization concept of SAP HANA and implement a security model using analytic privileges, SQL privileges, pre-defined roles and schemas. Perform basic security and authorization troubleshooting.
  • System architecture - Design a system architecture for a SAP HANA implementation including hardware sizing, network requirements and integration with existing architecture.

 

6 to 10 questions

 

  • Installation - Evaluate prerequisites for a SAP HANA installation, verify hardware and operation system, and describe the installation and post-installation tasks.
  • Troubleshooting - Troubleshoot SAP HANA and system performance by debugging SAP HANA, analyzing system locks, system files, and traces.

 

6 questions or less

  • High Availability and Disaster Tolerance - Design and implement a strategy for high availability and disaster tolerance.
  • Data Provisioning - Describe possible scenarios and tools for replicating and loading data into SAP HANA from different data sources, e.g. SAP Landscape Transformation (SLT), SAP Data Services, or Direct Extractor Connection (DXC).
  • Backup and Recovery - Design and implement a back-up and recovery strategy.
  • Database Migration to SAP HANA - Prepare and execute database migration to SAP HANA using DMO.

 

When you prepare for the exam, it is best to focus on The Big Four topics first. When I took the exam, there were 13 questions just on the topic of monitoring, whereas the four topics in the last category combined had only slightly more: 18 questions. Of course, your mileage may vary, but this may give you an idea.

 

Resources

 

The main resource is the SAP Education training HA200 - SAP HANA - Installation & Operations. This 5-day training covers all the topics mentioned above. Most topics correspond directly to the chapters from the course, e.g. Operations (Unit 8) or Backup and Recovery (Unit 9). Some topics are bundled (Security) or split (Monitoring and Troubleshooting). The only exception is the topic Data Provisioning, which is covered in the HA100 SAP HANA Introduction training.

 

Because of the direct relation of the exam with the HA200 training, participation is certainly recommended. However, it is not a requirement. I passed the first edition of the exam, the now retired E_HANATEC1, just by reading the SAP HANA documentation and working with the product. I have to say, I was a bit surprised at the time by the number of questions on topics like DBA Cockpit, CTS+, SAP Solution Manager, etc. While this material is clearly covered in the HA200 course with exercises and all, you will have to look for it a bit in the SAP HANA guides. So if you decide to skip the HA200 training, make sure you do not skip those topics.

 

Additional resources as preparation for the exam (in order of importance) on the publicly available SAP Help Portal (help.sap.com):

 

The Database Migration Option (DMO) of the Software Update Manager (SUM) tool is not included in the SAP HANA documentation set but is part of the System Landscape Toolset on the SAP Service Marketplace (requires login). A good start is the SCN document Database Migration Option (DMO) of SUM. There are only a handful of questions about this topic; there is no need to attend the 2-day course HA250 Migration to SAP HANA using DMO for the exam.

 

Note that the SAP Help Portal on http://help.sap.com/hana_platform only shows the latest documentation, SPS 08 at the time of writing. For SPS 06 and SPS 07, you need to go to the SAP Service Marketplace on http://service.sap.com/hana.

 

You also may want to take a look at SAP note 1905389 - Additional Material for HA200 and HA200R. This note contains additional information and documents. Most of the SAP Notes mentioned are also listed in the Server Installation and Update Guide under Important SAP Notes.

 

SAP HANA Academy

 

To help you prepare for the exam, I have recorded some tutorial videos on SAP HANA administration topics.

 

Sample questions

 

On the certification page, a link to a PDF with sample questions is included. Below, I have marked the answers in bold and included a reference to the source, with some tips and to-dos.

 

==

 

1. Which of the following can you use to analyze an expensive SQL statement? There are 2 correct answers to this question.

a. Open the Plan Visualizer.

b. Open the Plan Explanation.

c. Open the hdbcons tool.

d. Open the SQL Plan Cache.

 

Source: This information can be found in the Troubleshooting and Performance Analysis Guide: When you have identified a critical SQL statement and identified its overall key figures from the SQL plan cache analysis, you can have a closer look at the actual runtime behavior of the SQL statement. The following tools can be used for a more detailed analysis:

  • Plan Explanation - Creation of an execution plan
  • Plan Visualizer - Detailed graphical execution plan

 

More details are provided further in the guide: Analyzing SQL Execution with the Plan Visualizer and Analyzing SQL Execution with the Plan Explanation.
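If you want to try the Plan Explanation outside of the SAP HANA studio UI, an execution plan can also be generated in plain SQL. A minimal sketch (the table name MYTABLE is just a placeholder):

EXPLAIN PLAN SET STATEMENT_NAME = 'expensive_stmt' FOR
SELECT * FROM mytable WHERE id = 1;

-- the result is stored in the EXPLAIN_PLAN_TABLE view
SELECT operator_name, operator_details, execution_engine
FROM explain_plan_table
WHERE statement_name = 'expensive_stmt';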

 

The hdbcons tool serves another purpose and is documented in the Administration Guide. See also SAP Note 1786918 and note 1758890 - SAP HANA: Information needed by Product/Development Support.

 

The SQL Plan Cache is documented in the Administration Guide and the Troubleshooting and Performance Analysis Guide: As the SQL plan cache collects statistics on the preparation and execution of SQL statements, it is an important tool for understanding and analyzing SQL processing. For example, it can help you to find slow queries, as well as measure the overall performance of your system.
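As a starting point for that identification step, the plan cache can be queried directly. A minimal sketch against the M_SQL_PLAN_CACHE monitoring view (the sort criterion is just one possible choice):

-- list the ten statements with the highest total execution time
SELECT statement_string, execution_count, total_execution_time
FROM m_sql_plan_cache
ORDER BY total_execution_time DESC
LIMIT 10;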

 

Personally, I find this question to be a bit tricky. The idea is that the SQL Plan Cache should be used to identify expensive statements, while the Plan tools are there to analyze them. However, the documentation is less rigid on the distinction. Fortunately there are not too many questions of this kind.

 

To do: I highly recommend reading the SAP HANA Troubleshooting and Performance Analysis Guide. Monitoring and troubleshooting is an important topic in the exam, accounting for almost 25% of the questions. Make sure you are familiar with the tools by trying them out on your own system.

 

==

 

2. You are reviewing the execution plan of an SQL statement. You want to find out which plan operators (POPs) have been executed in parallel and for how long each of them has been active. How can you accomplish this?

 

a. Use the Visualize Plan functionality in the SAP HANA studio.

b. Use Job Progress in the SAP HANA studio.

c. Use Performance Trace in HDB admin.

d. Use EXPLAIN (graphically) in the DBA Cockpit.

 

Source: There is actually no reference to plan operators (POPs) in the SAP HANA documentation but by elimination (see below) the Visualize Plan functionality of the SAP HANA studio is the only right answer. The tool is documented in the Troubleshooting and Performance Analysis Guide, as mentioned above.

 

One reference I found to POPs is in a response to a question by CSA expert Lars Breddemann in this forum post: The SAP HANA Engines? As Lars also wrote a presentation on Understanding SAP HANA Performance, he may just have come up with the question. Lars?

 

You can certainly monitor the progress of jobs using HANA Studio but this will not give you any information about the execution plan of an SQL statement. Job monitoring is documented in the Administration Guide.

 

The mention of HDB admin is a bit curious. When executed from the command line as the <SID>adm user with an X-Windows environment available, it starts a UI similar to the TREX admin tool used for TREX/BWA. The tool is not documented for HANA, hence not supported, but in case you are interested, see the blog How to use HDBAdmin to analyze performance traces in SAP HANA by John Appleby.

 

About the EXPLAIN function in the DBA Cockpit, see the DBA Cockpit for SAP HANA documentation. The tool is similar but does not work with plan operators.

 

To do: Did I mention that monitoring and performance analysis is an important topic? Same advice as with question number one: Read the guide.

 

==

 

3. Which of the following update scenarios can be selected for execution in the SAP HANA lifecycle manager? There are 2 correct answers to this question.

 

a. Perform automated updates of SAP HANA and SAP HANA components

b. Apply Support Package Stacks

c. Update the SAP HANA studio on local machines

d. Update SAP HANA replication technology components.

 

Source: SAP HANA lifecycle manager is documented in the SAP HANA Update and Configuration Guide (SPS 07). This guide has been merged with the installation guide for SPS 08, as there have been some significant changes (see above under Resources for where to download the previous documentation set).

The lifecycle manager UI can be displayed inside SAP HANA studio or a browser, and there is also a command line interface (HLM). It is not to be confused with the SAP HANA lifecycle management tools, documented in the LCM Tools Reference. Below is a screenshot with the different menu options.

You can apply complete support package stacks (SPS) or update single components to a certain revision. This can be automated in the sense that the tool will download and apply the update after some user interaction. You cannot fully automate a scheduled update with this tool.

 

You can use lifecycle manager to update the SAP HANA studio version on the central update site; typically this will be the HANA server. However, you cannot use the tool to update local installations. Updating local SAP HANA studio installations is self-service: users can configure studio to check the update site for a new version at startup or at an interval. However, they will need to interact with a dialog to perform the update.

 

SAP Replication Server (SRS) and the SAP LT Replication Server (SLT) are examples of replication technology components. SRS is Sybase technology; SLT is NetWeaver-based. You cannot use SAP HANA lifecycle manager to update these components.

 

 

How to perform automated updates of SAP HANA and SAP HANA components using SAP HANA lifecycle manager

 

To do: It certainly helps to be familiar with installing SAP HANA for this type of question. I recommend getting yourself a developer system and practicing with the HLM and LCM tools. If this is not an option, you may want to review the playlist on SAP HANA installations - SPS 07 on the SAP HANA Academy.

 

==

 

4. Which combination of authorisations is required for this user?

 

a. USER ADMIN, SERVICE ADMIN, DATA ADMIN

b. USER ADMIN, CREATE STRUCTURED PRIVILEGE, RESOURCE ADMIN

c. USER ADMIN, SERVICE ADMIN, ROLE ADMIN

d. USER ADMIN, CREATE STRUCTURED PRIVILEGE, ROLE ADMIN

 

Source: SAP HANA system privileges are documented in the SAP HANA SQL and System Views Reference. It is not clear to me what "this" in "for this user" refers to. I hope that on the exam the question had some context, as all combinations are valid. For a user administrator, the privileges SERVICE ADMIN and RESOURCE ADMIN are not relevant; this leaves only answer d as correct.
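For reference, these system privileges are assigned with plain GRANT statements. A minimal sketch, assuming a hypothetical user administrator named ADMIN_USER:

GRANT USER ADMIN TO admin_user;
GRANT CREATE STRUCTURED PRIVILEGE TO admin_user;
GRANT ROLE ADMIN TO admin_user;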

 

To do: Security is a big topic for the exam. In the HA200 course, two units are dedicated to this topic: security (authentication, encryption, auditing) and maintaining users and authorizations (user management, roles, and privileges). To prepare for this topic, read the Security Guide and chapter 3 of the Administration Guide.

 

==

 

5. Which of the following are pre-delivered template roles? There are 2 correct answers to this question.

 

a. MONITORING

b. IMPORT

c. SAP_HANA_INTERNAL_SUPPORT

d. MODELING

 

Source: Documented in the Security Guide, Standard Roles. There is a role named SAP_INTERNAL_HANA_SUPPORT (not HANA_INTERNAL), but this role is not a template role.

 

To do: As mentioned, security is a big topic. Do you know the 7 restrictions of the SAP_INTERNAL_HANA_SUPPORT role, for example? What purpose it serves? It is a good idea to be familiar with the different types of privileges (e.g. object, system, analytic), how to display the privileges granted to a user, and what tools you can use for that.
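To check what a user can actually do, you can query the EFFECTIVE_PRIVILEGES system view, which resolves both directly granted privileges and those inherited via roles. A minimal sketch (the user name is a placeholder):

SELECT grantee, object_type, privilege, is_grantable
FROM effective_privileges
WHERE user_name = 'SOME_USER';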

 

==

 

6. A user cannot query an information model because of missing authorizations. What is the fastest way to find out which authorization is missing?

 

a. Query the system view EFFECTIVE_PRIVILEGES.

b. Investigate the authorisation trace.

c. Use the Authorization Dependency Viewer.

d. Check the assigned roles in the user editor.

 

Source: The Authorization Dependency Viewer is documented in the Administration Guide. As mentioned there, you can use the authorization dependency viewer as a first step in troubleshooting the following authorization errors and invalid object errors for these object types: NOT AUTHORIZED, INVALIDATED VIEW and INVALIDATED PROCEDURE.

The other answers are not wrong and may be needed as a second or third step, but they are certainly not the fastest.

 

To do: Did I mention security is a big topic? See above at question 4 and 5.

 

==

 

7. Which of the following actions are required in the SAP HANA studio to use the Enhanced Change and Transport System (CTS+)?

 

a. Create a delivery unit that contains all of the runtime objects.

b. Create the HTTP connection named CTSDEPLOY.

c. Configure the connection to the CTS in the preferences.

d. Use an authorised user to attach SAP HANA to the transport request.

 

Source: Transporting changes, including SAP HANA Application Lifecycle Management (HALM) and CTS+, is covered in the Operations unit of HA200. This is where you will learn about delivery units and packages and how to import and export content. HALM is documented in the SAP HANA Developer Guide, but CTS+ is not. See Resources on CTS+ on SCN and in particular the how-to guide How to Configure SAP HANA for CTS (SPS 07). Chapter 8 of that how-to guide shows you all the studio screens.

 

How to configure the CTS Deploy Web Service is documented in Software Logistics for SAP NetWeaver. This step is performed outside of the SAP HANA studio.

 

To do: You will probably get a few (but not many) questions about transporting changes. It is good to be familiar with the SAP HANA Application Lifecycle Management topic documented in the Developer Guide. CTS+, in my view, is more of a NetWeaver ABAP topic; if you are not familiar with NetWeaver or ABAP, this will be a bit of a learning curve. When short on time, I would focus on the Big Four topics mentioned above. Mastering Data Provisioning in a short time will also be challenging, as its scenarios all touch different technologies.

 

==

 

8. Where in the SAP HANA Studio can you change the path of the backup folder? There are 2 correct answers to this question.

 

a. Backup Catalog

b. executor.ini

c. global.ini

d. backup editor

 

Source: Documented in the SAP HANA Administration Guide, chapter Backup and Recovery. The backup editor in SAP HANA studio provides a user-friendly interface for making backup-related configuration changes. These changes are recorded in configuration files with the INI extension; for backup, most settings live in the global.ini file.
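For reference, the same change can also be made in SQL instead of the backup editor. A minimal sketch (the parameter names follow the [persistence] section of global.ini; the paths are just examples):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('persistence', 'basepath_databackup') = '/backup/data',
    ('persistence', 'basepath_logbackup') = '/backup/log'
WITH RECONFIGURE;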

 

The Backup Catalog stores location and time stamp plus some additional information about all the data and log backups made but does not contain any configuration.
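The catalog itself can be inspected in SQL via the M_BACKUP_CATALOG monitoring view; a minimal sketch:

-- most recent backups first
SELECT backup_id, entry_type_name, sys_start_time, state_name
FROM m_backup_catalog
ORDER BY sys_start_time DESC;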

 

With the executor.ini file you can enable the Executor Trace to collect internal details about SQL statement execution.

 

To do: Read the Administration Guide or watch the SAP HANA Academy playlist on Backup and Recovery. Expect a couple of questions on the topic. They are not too difficult if you are familiar with the material, so they can help boost your score.

 

 

How to change the path of the backup folder.

 

 

The Backup Catalog tutorial video may cover an exam question or two.

 

==

 

9. Which of the following can be performed in the SAP HANA lifecycle manager? There are two correct answers to this question.

 

a. Uninstall an SAP HANA system.

b. Rename an SAP HANA system.

c. Add an additional SAP HANA system.

d. Copy an SAP HANA system.

e. Change the SAP HANA license type.

 

Source: The SAP HANA lifecycle manager (HLM) tool is documented in the SAP HANA Update and Configuration Guide (SPS 07). See above, question 3, for more information about this guide and for a tutorial video about what you can do with this tool.

 

You can uninstall SAP HANA from the command line with the lifecycle management tool hdbuninst, but not with the lifecycle manager. Significant changes were made to the LCM tools and HLM with SPS 08, and I expect this to happen again with SPS 09.

 

How to copy an SAP HANA database is documented in the Administration Guide, chapter Availability and Scalability, section Backup and Recovery. A database copy is only possible using file-based backups or storage snapshots; there is no lifecycle management tool for this.

 

You can view and edit the SAP HANA license in SAP HANA studio by selecting a system in the Systems view, choosing Properties, and then License. A license can be of type Permanent or Temporary.
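The license details are also exposed in a monitoring view, which is handy for scripted checks. A minimal sketch using M_LICENSE:

-- PERMANENT is TRUE for a permanent license, FALSE for a temporary one
SELECT hardware_key, system_id, product_limit, product_usage, permanent
FROM m_license;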

 

To do: same advice as above with question 3.

 

 

Uninstall an SAP HANA system

 

 

Change the SAP HANA license type

 

==

 

10. What is the maximum number of master name servers that you can define in a distributed landscape?

 

a. 1

b. 8

c. 2

d. 3

 

Source: This is documented in the SAP HANA Administration Guide, chapter System Administration, section Monitoring SAP HANA systems:

 

When you install a distributed system, up to three hosts are automatically configured as master name servers. The configured nameserver role of these hosts is MASTER 1, MASTER 2, and MASTER 3. Additional hosts in your system are configured as slave name servers. The configured nameserver role of these hosts is SLAVE.
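You can verify the configured and actual roles of each host in SQL as well; a minimal sketch against the M_LANDSCAPE_HOST_CONFIGURATION monitoring view:

SELECT host, nameserver_config_role, nameserver_actual_role
FROM m_landscape_host_configuration;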

 

A configuration table showing a typical configuration for a distributed system is documented in the LCM Tools Reference Guide, chapter SAP HANA System Types.

 

To Do: Although this is documented in the monitoring section, this would probably be considered an architecture question. In HA200 it is discussed in Unit 3 Installation, Performing a Distributed Installation. To get a good picture of the SAP HANA architecture, the SAP HANA Master Guide is a good start.

 

 

SAP HANA Master Guide on HANA architecture

 

 

More Questions?

 

Unfortunately, but very reasonably, those who have passed the exam are not allowed to share the questions with anyone else, so I cannot share any particular question with you here. However, a close study of the resources mentioned should provide you with enough knowledge to pass the exam.

 

 

Did you succeed?

 

Feel free to post a comment about how the exam went. If there is any information missing, please let me know.

 

Success!

Suite on HANA, Simple Finance, and a really cool explosion video as a bonus.


The recent parallel announcements of SAP Simple Finance and the HP ConvergedSystem 900 for SAP HANA (CS900) motivated my blog on saphana.com about the increasing business value of Suite on HANA and the ability to run such a mission-critical system in a low-risk fashion. The blog covers the following topics in quite some detail:


  • SAP Simple Finance
    The blog explains the new SAP Simple Finance solution, the “trick” that it is using and the resulting business and IT benefits.

  • Customer Example: HP IT
    To put the move to SAP Business Suite on HANA into perspective, the blog gives a glimpse into the data sizing and preparation underway within the HP IT team.


  • HP ConvergedSystem 900 for SAP HANA (CS900) - see diagram below
    It also talks about the breakthrough HP CS900 system, the largest and most robust of its type with up to 12TB of RAM in a single system.


  • SAP HANA Datacenter readiness including comparison of storage and system level replication - see table below
    Most importantly, the blog busts any doubts that SAP HANA is not data center ready. With a broad range of high availability and disaster recovery offerings from HP and other vendors, business continuity of SAP HANA is ensured, if need be in a fully automated fashion.

 

And the explosion? A video shows a live datacenter failing over to a remote site within seconds, triggered by a small load of TNT.

For the full content, please read the detailed blog on saphana.com. Kindly share your thoughts on SAP Simple Finance, HANA datacenter readiness, and the HP ConvergedSystem 900 for SAP HANA, and I will be sure to reply to your comments.

Thanks,

Swen

 


Picture: System family for HP ConvergedSystem 900 for SAP HANA



 

Storage Replication vs. System Replication

Vendors
  • Storage replication: HP, IBM, Hitachi, CISCO, Dell, Fujitsu, NEC, VCE, Huawei, Lenovo (China only)
  • System replication: HP ServiceGuard; SUSE Linux Cluster (in beta)

Supported HANA use cases
  • Storage replication: scale-up, scale-out
  • System replication: scale-up, scale-out (only HP)

Replication strategies
  • Storage replication: synchronous, asynchronous
  • System replication: synchronous, asynchronous

Bandwidth requirements
  • Storage replication: higher; replication of partial transactions results in costly roll-backs when a transaction is cancelled, e.g. due to failure
  • System replication: lower; no transmission of cancelled transactions, replication only after full commit. In sync mode, only log files are transferred continuously with the transaction commit, the rest asynchronously, driving lower bandwidth

Disaster recovery take-over (*)
  • Storage replication: performance optimized: slow; cost optimized: slow
  • System replication: performance optimized: fast; cost optimized: medium

Openness
  • Storage replication: hardware vendor dependent
  • System replication: infrastructure agnostic

Additional capabilities
  • Storage replication: n/a
  • System replication: zero downtime management (aka NetWeaver connectivity suspend); cascading multi-tier system replication (only HP)

Key roadmap capabilities
  • Storage replication: n/a
  • System replication: Active/Active operation (read-only reporting on the secondary fail-over site)

(*) Performance optimized: the secondary system is completely used for the preparation of a possible take-over; resources are used for data pre-load on the secondary; take-overs and the performance ramp are shortened maximally. Cost optimized: non-prod systems operate on the secondary; resources are freed (no data pre-load) and offered to one or more non-prod installations; during take-over the non-prod operation has to be ended; take-over performance is similar to a cold start-up.

Table: High-level comparison between storage and system level replication with SAP HANA

The premise of this blog is that the HANA Cloud Platform (HCP) is going to relate to Suite on HANA in the HANA cloud the same way the ABAP Workbench related to the on-premise SAP ERP Suite. One of the reasons SAP ERP became the leading application for large enterprises 20 years ago was that it provided a comprehensive, ready-to-customize-and-use application for all areas of the business. In addition, it came with a really mature development environment to further extend the SAP-supplied functionality for customer-specific requirements. Even though it was a robust development environment, it was not used by customers to build stand-alone applications that had nothing to do with the SAP ERP application.

 

Fast forward to the present, and full SAP ERP functionality in the cloud is becoming more real every day. One way to extend the cloud functionality will be to extend it on the HANA Cloud Platform. Imagine the increased security and stability of a multi-tenant SAP ERP in the cloud with no customer-specific development (Z-code for some), combined with the flexibility for each customer to extend their business processes by developing extensions in their personal space on HCP. It is like having your cake and eating it too. I think SAP's strategy is brilliant. Hopefully the execution can be equally smooth.

 

Check out http://marketrealist.com/2014/08/why-saps-cloud-strategy-will-capture-cloud-space/ for a very high-level overview of why SAP's cloud strategy will deliver results for customers and investors.

We all know that BI applications generally contain many calculations.

 

As developers, we encounter multiple errors while handling such business calculations. One such frequently encountered and rather silly, irritating error is the 'Divide by zero' error.

 

In different technologies we have different methods using which we can handle or suppress the 'Divide by zero' error and display BLANK or zero in the result.

 

For SAP BI/BW, in the BEx Query Designer we have a specific data function designed to handle the scenario in which division by 0 occurs.

 

NDIV0 (<expression>) results in 0 if the <expression> causes a division by zero; otherwise, we get the result of the <expression> as the output.

Example: NDIV0 (100/5) would give us 20. NDIV0 (100/0) would give us 0.

 

The same situation can also be handled by using the NOERR (<expression>) function, which suppresses the error and again gives us 0 in case of an erroneous or undefined calculation.

 

I recently used a division expression in a calculated column in my calculation view, and I did not bother to handle division by zero. So here goes: I immediately got a 'Divide by zero' error.

(Screenshot: the 'Divide by zero' error.)

I was browsing through SCN to see if this had been encountered and handled by anybody in SAP HANA, but unfortunately I couldn't find any post/blog/document about such an encounter in HANA.

 

Hence this blog for my fellow developers who have possibly encountered, or are going to encounter, this issue, showing a simple solution to handle the error.

 

Though I know it's a very simple approach and others might have had the same idea, I thought I could just put it across.

 

In order to avoid the 'divide by zero' error in an expression like <numerator/denominator>, we just add a condition: if the denominator is zero, the result is 0; otherwise, perform the calculation.


Example: if my calculation is variable1/variable2, I would use the formula as below:


"IF ( variable2 = 0, 0, variable1 / variable2 )" - this would simple give the output as zero in case of division by zero, else perform the calculation.

 

Edit:

I was aware that we could also use the CASE function to achieve the same functionality, but I just realized that SAP suggests not using the IF function to avoid the divide-by-zero error. Check the link.


We would have to use the CASE function: case(B, 0, 0, A/B).
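The same guard also works in plain SQL, should you need it outside of a calculated column. A minimal sketch (table and column names are placeholders):

SELECT CASE WHEN variable2 = 0 THEN 0 ELSE variable1 / variable2 END AS ratio
FROM my_table;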


Hope this helps somebody!

 

Good Day!


More and more SAP customers want to migrate their applications to the in-memory database SAP HANA, but they don't know how. They were well advised to watch the presentation that Bernd Noll, a member of SAP Active Global Support, held at the beginning of July in Orlando at the SAPPHIRE conference. Together with SUSE, the SAP transformation specialist invited experts to hear an overview of the current SAP HANA migration options, "powered by SUSE and driven by SAP Landscape Transformation (LT) Software."


It is no wonder that SAP appeared together with SUSE at this event, especially since SAP recommends and supports the SUSE Linux Enterprise Server operating system for optimal use of SAP HANA. “SAP has already registered thousands of satisfied SAP HANA customers – which means there are also thousands of satisfied SUSE Linux customers – and this number is increasing every day,” stated Bernd Noll while summing up this close partnership.


Complex System Landscapes, Huge Data Volumes


For most companies, switching to SAP HANA presents the challenge of transforming complex system landscapes with huge data volumes. There are three approaches to this:


  1. “Lift and Shift”: This is regarded as the standard approach, whereby all master data and transaction data from one system is migrated to SAP HANA.
  2. “Carve Out”: Only selected data from one system is migrated.
  3. “Consolidation”: Only selected data from multiple systems is migrated.

Several Birds with One Stone

“Both the carve-out and the consolidation approach offer companies an advantage in that they migrate only the data that is actually required to SAP HANA, without having to migrate superfluous data,” emphasized Bernd Noll in his presentation. “Both approaches are optimally supported by the SAP Landscape Transformation (LT) tool, most notably the SAP Landscape Transformation Server.”

By using the SAP LT tools, customers can:

  • Combine multiple activities into one step, such as selective migration, data harmonization, system consolidation, etc.
  • Ensure process continuity
  • Retain flexibility by combining this migration with operating system and database migration, upgrades, and Unicode migrations.
  • Standardize and consolidate business processes    
  • Minimize downtime
  • Accelerate project success
  • Reduce TCO
  • Minimize risks


Watch the YouTube video.

VMware has been an SAP global technology partner for a few years, and we have had a lot of co-innovation, joint go-to-market activities, and events in the past. The biggest area of co-innovation has been virtualization of SAP applications leveraging VMware. But over the past two years we have expanded our partnership dramatically, especially around SAP HANA and its simplification.

 

SAP HANA & VMware:

VMware has become an important SAP HANA ecosystem partner with the ability to virtualize SAP HANA to create a lower-cost, higher-flexibility SAP HANA instance. Leveraging the two together, customers can innovate and simplify their data centers by achieving faster time-to-value, higher service levels, and lower total cost of ownership (TCO).

This started in November 2012, when SAP and VMware announced the capability of vSphere 5.1 to virtualize SAP HANA for test and dev purposes. Over the next two years we worked together very closely, leading to the announcement of controlled availability of virtualized SAP HANA for productive use in May 2014. Very soon after this, Pat Gelsinger (VMware CEO) announced in Bernd Leukert's keynote at SAPPHIRE 2014 the general availability of SAP HANA virtualization in production environments using VMware vSphere 5.5.

This development helps enable IT to run at the speed of business and lets the overall business focus on leveraging the immense power of SAP HANA and Intel to create true competitive advantage with the simplicity and flexibility of virtualization.

 

SAP @ VMworld 2014:

VMworld is VMware's annual conference, which has over the years become a mecca of virtualization learning and news. It's the place to explore VMware's complete portfolio of tools and technologies for automating virtualized data centers and extending them to the cloud, to discover how you can deliver apps efficiently and provide secure virtual workspaces, and to gain insights from organizations like yours and a vibrant ecosystem of partners and industry experts on how you can redefine IT with software.

This year, SAP will be part of VMworld 2014 (Aug 24th-28th) together with Intel and VMware, bringing in experts from the three companies to help answer all your questions and share the latest and greatest around SAP HANA, the Intel E7 chipset which powers SAP HANA, and VMware virtualization. Please visit the SAP booth, #1641.

 

We will have SAP speakers share more details around this at the following sessions:

60 Min breakout session: “Architecture for a Real-time Business in an era of big data” (Monday, Aug 25, 4:00 PM - 5:00 PM – Moscone West, Room 2024) https://vmworld2014.activeevents.com/connect/sessionDetail.ww?SESSION_ID=3526

 

20 Min Theatre session: “Innovating the Modern Data Center with SAP HANA” (Monday, Aug 25, 11:50 AM - 12:10 PM – Solutions Exchange Theater Booth 1901)

https://vmworld2014.activeevents.com/connect/sessionDetail.ww?SESSION_ID=3571

 

If you need to set up a 1:1 meeting with some of our experts to discuss your specific case, please reach out to Diane McSweeney: diane@bootstrap-mktg.com

 

Apart from this, a number of SAP-VMware ecosystem partners have additional sessions around SAP products and solutions, which will be a great opportunity to get even more details about our offerings and their implementation. You can visit the SAP booth to get details on all those sessions too.


View and WITH GRANT OPTION

Posted by Wenjun Zhou Aug 14, 2014

In this blog, I will show some examples of granting privileges on views to others and explain in what situation we need "WITH GRANT OPTION".

 

Motivation

The motivation for writing this blog comes from the question Re: insufficient privilege to select from database view. The scenario in that thread is kind of complex, and I will not explain it in detail here; if you are interested, you can take a look there.

 

Problem

Here is a simpler scenario/problem with the following steps.

1. There are three users A, B, and C, and each user has his/her own schema.

2. User A creates "table A" in schema A and grants the select privilege on "table A" to user B.

3. User B creates "view B" in schema B and "view B" is based on "table A".

4. Now here come the questions: Can user B grant the select privilege on "view B" to user C? Can user C select data from "view B"?

 

To answer the questions, let's first do some tests in SAP HANA. I am using SAP HANA SPS 08 Rev. 80.

Example 1

Step 1: SYSTEM creates three users, USER_A, USER_B and USER_C.

CREATE USER USER_A PASSWORD Initial1;
CREATE USER USER_B PASSWORD Initial1;
CREATE USER USER_C PASSWORD Initial1;






 

Step 2: USER_A creates TABLE_A under schema USER_A and grants the select privilege on that table to USER_B.

CREATE COLUMN TABLE USER_A.TABLE_A (ID INTEGER);
GRANT SELECT ON USER_A.TABLE_A TO USER_B;






 

Step 3: USER_B creates VIEW_B under schema USER_B and VIEW_B is based on TABLE_A.

CREATE VIEW USER_B.VIEW_B AS SELECT * FROM USER_A.TABLE_A;



 

Step 4: USER_B tries to grant the select privilege on VIEW_B to USER_C but fails.

GRANT SELECT ON USER_B.VIEW_B TO USER_C;



 

(Screenshot: the GRANT statement fails with an insufficient privilege error.)

 

So why can USER_B not grant the select privilege on VIEW_B (which he/she created) to USER_C???

 

The reason is very obvious. Although VIEW_B is created by USER_B, VIEW_B is based on TABLE_A, which USER_C has no privilege to select. Imagine if USER_B managed to execute the GRANT SQL: privileges would mean nothing, since users (e.g. USER_C) could use this "workaround" to get everything (e.g. TABLE_A) through others (e.g. USER_B).

 

The solution is also very simple. We just need to let USER_A "say something" to USER_B, something like:


"Hey buddy, you can play my basketball (TABLE_A) yourself and if you have a game (VIEW_B) with others (USER_C) you can also use my basketball (which means you can let others (USER_C) to touch my basketball (TABLE_A) in your game (VIEW_B))".

 

Hope you can understand this sentence well; it took me some time to create it. Now "WITH GRANT OPTION" can play a role here: it lets the grantee grant the privilege further to others, something like a "cascade connection" in this view scenario. So, let's try it.

 

Step 5: USER_A grants the select privilege on TABLE_A to USER_B WITH GRANT OPTION.

GRANT SELECT ON USER_A.TABLE_A TO USER_B WITH GRANT OPTION;

 

Step 6: USER_C can select VIEW_B successfully.

SELECT * FROM USER_B.VIEW_B;

 

Example 2

Now let's try another example, similar to the scenario in Re: insufficient privilege to select from database view. In this example, we will let USER_A grant the select privilege on TABLE_A to USER_C first.

 

Step 1: SYSTEM creates three users, USER_A, USER_B and USER_C.

CREATE USER USER_A PASSWORD Initial1;
CREATE USER USER_B PASSWORD Initial1;
CREATE USER USER_C PASSWORD Initial1;

 

Step 2: USER_A creates TABLE_A under schema USER_A and grants the select privilege on that table to USER_B and USER_C. Notice: There is no WITH GRANT OPTION in this step.

CREATE COLUMN TABLE USER_A.TABLE_A (ID INTEGER);
GRANT SELECT ON USER_A.TABLE_A TO USER_B;
GRANT SELECT ON USER_A.TABLE_A TO USER_C;

 

Step 3: USER_B creates VIEW_B under schema USER_B based on TABLE_A and grants the select privilege on the whole schema USER_B to USER_C.

CREATE VIEW USER_B.VIEW_B AS SELECT * FROM USER_A.TABLE_A;
GRANT SELECT ON SCHEMA USER_B TO USER_C;

 

Step 4: USER_C tries to select VIEW_B but fails.

SELECT * FROM USER_B.VIEW_B;

 

(Screenshot: the SELECT fails with an insufficient privilege error.)

 

Again, why??? Maybe you are confused now, as follows.

1. Since USER_A grants the select privilege on TABLE_A to USER_C, USER_C can select TABLE_A. That's true; USER_C can run the following SQL successfully.

SELECT * FROM USER_A.TABLE_A;

 

2. Since USER_B grants the select privilege on the whole schema USER_B to USER_C, USER_C should be able to select everything under schema USER_B. But is that true? From the error message, point 2 is not true. But why???

 

We can still use the basketball example. Imagine the following.

1. USER_A says to USER_B "Hey USER_B, you can play my basketball yourself."

2. USER_A says to USER_C "Hey USER_C, you can play my basketball yourself."

3. USER_B says to USER_C "Hey USER_C, you can always play basketball with me."

 

There is no problem if USER_C joins USER_B's games in which USER_B uses his own basketball. But if USER_B uses USER_A's basketball in a game, can USER_C join that game? Nope, since USER_A did not say to USER_B: "If you have a game (VIEW_B) with others (USER_C) you can also use my basketball (which means you can let others (USER_C) touch my basketball (TABLE_A) in your game (VIEW_B))". That's the reason. Hope you can also understand it well.

 

If you do not understand that reason, here is another one. Imagine the following, if you still think there should be no error in step 4.

1. If there were no error in step 4, USER_B would know USER_C could select TABLE_A.

2. If there were an error in step 4, USER_B would know USER_C could not select TABLE_A.


Users (e.g. USER_B) could use this "method/workaround" to know/infer some privileges of others (e.g. USER_C).

 

But why can USER_B know/infer this??? Does USER_A tell him? Nope. Does USER_C tell him? Nope. The privileges of USER_C should be a secret to USER_B!!! That's why USER_C cannot select VIEW_B so far. So, we still need "WITH GRANT OPTION" to solve the problem.

 

Step 5: USER_A grants the select privilege on TABLE_A to USER_B WITH GRANT OPTION.

GRANT SELECT ON USER_A.TABLE_A TO USER_B WITH GRANT OPTION;

 

Step 6: USER_C can select VIEW_B successfully now.

SELECT * FROM USER_B.VIEW_B;

 

Example 3

Now suppose there is a USER_D. USER_C wants to create VIEW_C based on VIEW_B under schema USER_C and let USER_D select VIEW_C. What will happen and what does the SQL look like? I will not explain more about this example; you can take it as an exercise.

 

I just pasted my code here.

--SYSTEM
CREATE USER USER_A PASSWORD Initial1;
CREATE USER USER_B PASSWORD Initial1;
CREATE USER USER_C PASSWORD Initial1;
CREATE USER USER_D PASSWORD Initial1;
--USER_A
CREATE COLUMN TABLE USER_A.TABLE_A (ID INTEGER);
GRANT SELECT ON USER_A.TABLE_A TO USER_B WITH GRANT OPTION;
--USER_B
CREATE VIEW USER_B.VIEW_B AS SELECT * FROM USER_A.TABLE_A;
GRANT SELECT ON USER_B.VIEW_B TO USER_C WITH GRANT OPTION;
--USER_C
CREATE VIEW USER_C.VIEW_C AS SELECT * FROM USER_B.VIEW_B;
GRANT SELECT ON USER_C.VIEW_C TO USER_D;
--USER_D
SELECT * FROM USER_C.VIEW_C;

 

Conclusion

Based on the above examples, we can answer the question at the beginning.

1. If your view is based on other objects which are not created by you and you want to let others read your view, you need the privileges on those dependent objects to be granted to you "WITH GRANT OPTION" by their owner.

2. In addition, having the select privilege on a whole schema does not mean you can select everything under that schema.

 

You can also find this in the SAP HANA Developer Guide: Object Privileges - SAP HANA Developer Guide - SAP Library

"Some database objects depend on other objects. Views, for example, are defined as queries on other tables and views. The authorization for an operation on the dependent object (the queried tables and views) requires privileges for the dependent object and the underlying object. In case of views, the SAP HANA database implements the standard SQL behavior. A user has the authorization for an operation on a view if the following is true:

  • The privilege for operations on the view has been granted to the user or a role assigned to the user.
  • The owner of the view has the corresponding privileges on the underlying objects with the option to grant them to others."

 

NOTICE: This mechanism/principle applies not only to SAP HANA but to other databases as well, e.g. Oracle.

 

Hope you enjoyed reading my blog and doing the exercise.

based on SAP HANA rev. 81

 

Just shortly before my vacation starts, I thought I'd leave you with another pearl of knowledge... *cough*cough*

Don't expect suspense or a proper three-act structure - it's just one of those techie blogs that you might put on your "read later" list and then forget about...

 

Anyway, that's how I tell this story:

 

Last week a colleague reached out to me and presented the following case:

 

"We have a data warehouse system with fact tables and master data tables.

Between fact and master data tables, foreign key constraints have been set up to ensure data consistency.

Now, whenever we load master data, the transaction (sic!) tables grow in size, while the number of records stays the same."

 

What could be going on here?

Quantum entanglement effects on SAP HANA column store tables?

(and this really is all I come up with to rectify the super-cheesy title... )

 

When I read this, I first thought it was likely a misobservation.

But, alas, sometimes you just have to try things out.

 

And so I did this:

 

1. Setup the tables

CREATE COLUMN TABLE masterdata (id INTEGER PRIMARY KEY, md_data NVARCHAR(20));

 

CREATE COLUMN TABLE transactions (id INTEGER PRIMARY KEY, data NVARCHAR(20)
                                , md_id INTEGER
                                , FOREIGN KEY (md_id) REFERENCES masterdata ON UPDATE CASCADE);

 

2. Load some dummy data

-- load some masterdata

INSERT INTO masterdata VALUES (1, 'MD1');

INSERT INTO masterdata VALUES (2, 'MD2');

INSERT INTO masterdata VALUES (3, 'MD3');


-- load some transactions

insert into transactions values (1, 'TX1', 1);

insert into transactions values (2, 'TX2', 2);

insert into transactions values (3, 'TX3', 3);

insert into transactions values (4, 'TX4', 1);


-- do some storage cleanup

UPDATE masterdata WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE');

UPDATE transactions WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE');


MERGE DELTA OF masterdata WITH PARAMETERS ('FORCED_MERGE'='ON');

MERGE DELTA OF transactions WITH PARAMETERS ('FORCED_MERGE'='ON');

 

3. Check the table storage

SELECT table_name, memory_size_in_total,
       record_count rec_cnt,
       raw_record_count_in_main rec_cnt_main,
       raw_record_count_in_delta rec_cnt_delta
FROM   m_cs_tables
WHERE  table_name IN ('MASTERDATA', 'TRANSACTIONS')
AND    schema_name = current_schema
ORDER BY table_name;

 

TABLE_NAME    MEMORY_SIZE_IN_TOTAL  REC_CNT  REC_CNT_MAIN  REC_CNT_DELTA
MASTERDATA    12295                 3        3             0
TRANSACTIONS  14863                 4        4             0

 

4. Check the column storage for TRANSACTIONS table

 

SELECT column_name, count, distinct_count
FROM   m_cs_columns
WHERE  table_name = 'TRANSACTIONS'
AND    schema_name = current_schema
ORDER BY column_name;


COLUMN_NAME  COUNT  DISTINCT_COUNT
DATA         4      4
ID           4      4
MD_ID        4      3



So, up to here everything is normal and as expected.

 

Now, we want to load some new master data.

 

A common approach is to run a full update and that's what I will do here as well.

 

To make things a little more handy, I set up a second table with our new master data, called MD_STAGING.

It contains the same records that are already present in table MASTERDATA, except for one updated record, plus two "new" records.

 

CREATE COLUMN TABLE md_staging (id INTEGER PRIMARY KEY, md_data NVARCHAR(20));


INSERT INTO md_staging VALUES (1, 'MD1');

INSERT INTO md_staging VALUES (2, 'MD2');

INSERT INTO md_staging VALUES (3, 'MD3_NEW');


-- the "new" data

INSERT INTO md_staging VALUES (4, 'MD4');

INSERT INTO md_staging VALUES (5, 'MD5');

 

5. Now let's "load" the new data

Loading the new master data basically consists of two steps:

 

  1. INSERT any actually new records and
  2. UPDATE the ones that we already have with the current data.

 

A well-known ETL software (Data Services and Data Quality) would probably do something similar to this:

 

UPDATE masterdata SET id = new.id,
                      md_data = new.md_data
                  FROM md_staging new
                  WHERE masterdata.id = new.id;

INSERT INTO masterdata
        (SELECT id, md_data FROM md_staging
         WHERE id NOT IN (SELECT id FROM masterdata));

 

 

So, let's do this...

 

Statement 'UPDATE masterdata SET id = new.id, md_data = new.md_data FROM md_staging new WHERE masterdata.id = ...'

successfully executed in 134 ms 456 µs  (server processing time: 97 ms 711 µs) - Rows Affected: 3

 

 

Statement 'INSERT INTO masterdata (SELECT id, md_data FROM md_staging WHERE id NOT IN (SELECT id FROM ...'

successfully executed in 97 ms 91 µs  (server processing time: 58 ms 243 µs) - Rows Affected: 2

 

 

Checking the numbers of affected rows, we see that 3 existing records have been UPDATEd, although only one of them had actually changed, and 2 records have been INSERTed.

 

Looks OK to me, I'd say (for now)...

 

Next, let's check the table storage again:

 

TABLE_NAME    MEMORY_SIZE_IN_TOTAL  REC_CNT  REC_CNT_MAIN  REC_CNT_DELTA
MASTERDATA    34349                 5        3             5
TRANSACTIONS  38205                 4        4             4

 

Compare that with what we had before:

 

TABLE_NAME    MEMORY_SIZE_IN_TOTAL  REC_CNT  REC_CNT_MAIN  REC_CNT_DELTA
MASTERDATA    12295                 3        3             0
TRANSACTIONS  14863                 4        4             0

 

No surprise for table MASTERDATA, but look what happened on the TRANSACTIONS table!

SHOCK, AWE and WONDER!

There are four records in the delta store now, although we didn't actually change any referenced data.

 

Checking on the column statistics for table TRANSACTIONS we find this: 

 

COLUMN_NAME  COUNT  DISTINCT_COUNT
DATA         8      4
ID           8      4
MD_ID        8      3

 

Now there are 8 entries for every column, although we only have 4 distinct ID values and, as we know, only 4 records in total.

 

What is going on here?   

 

This actually is the combined effect of two features in SAP HANA.

  1. UPDATEs are stored row-wise in the delta store and are performed regardless of whether any data was actually changed (see the small sketch after this list).

    Whenever we issue an UPDATE command, SAP HANA has to identify/find the record(s) to be updated first.
    Once this is done, the whole record is copied, all SET-parts of the UPDATE command are applied to the copied record, and the record is stored in the delta store. Finally, the old record gets marked as invalid and the new record becomes the new valid record.
    This is commonly called insert-only database storage.

    What's interesting for our blog is that SAP HANA does not check whether anything actually changes.
    Even if the SET-part of the UPDATE command sets the exact same values, the change gets executed and stored in the delta store (and of course also in the redo log).

  2. The UPDATE action for the referential constraint is set to CASCADE, so every update on a referenced column will lead to an update on the referencing table as well.
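You can observe point 1 in isolation with a no-op update; a small sketch reusing the tables from above:

-- nothing changes logically, yet the touched rows land in the delta store
UPDATE masterdata SET md_data = md_data;

SELECT table_name, raw_record_count_in_delta
FROM m_cs_tables
WHERE table_name = 'MASTERDATA'
AND schema_name = current_schema;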

 

Alright then.

So far we've learned that performing a full update on the MASTERDATA table can lead to a lot more records being touched than we would intuitively think.

 

Now you should be asking: "What can be done to prevent this?"

 

There's a couple of options:

a) Go without foreign key constraints for your data warehouse.

That's what most DW vendors do, since FKs really tend to complicate things with data loading once more than a few tables use the same master data.

E.g. SAP BW does it that way.

 

b) Drop and recreate the foreign key constraints before/after data loading.

SAP HANA currently does not allow disabling FK constraints or re-validating them.

This however is a nonsense option as exactly during the time of data modification - the time when you want the constraint to be active - it would just not be there.

 

c) Ensure that the referring column(s) - ID in our example - do not get updated.

This is actually not too difficult to achieve.

 

A small change to the UPDATE command we used above already does the trick:

UPDATE masterdata SET md_data = new.md_data
                  FROM md_staging new
                  WHERE masterdata.id = new.id;

 

The downside is that the otherwise practical UPSERT statement won't work here, since it needs values for ALL columns in any case.
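For contrast, a full-row UPSERT would look like the following sketch; because it writes the ID column as well, it would trigger the cascade just like the original full update:

-- inserts new rows and updates existing ones by primary key - including the ID column
UPSERT masterdata SELECT id, md_data FROM md_staging;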


That's it again.

 

I bet you didn't expect this or did you?

 

Cheers,
Lars

I just completed the Introduction to SAP HANA Cloud Platform course on https://open.sap.com/. This is a great course for an introduction to the SAP HANA world and how it works. It gives an overview of SAP HANA Cloud Platform, detailing how to create an account on the platform and how applications work within an account.

 

It covers the basics of Eclipse, how to use the Eclipse IDE, debugging and logging, and how to set up and use the Console Client. As well as watching the videos and reading the documents, I was able to install the Eclipse IDE and do all the configuration detailed in the course, which gave me a better understanding of how it works.

 

Each week is quite detailed and takes you through connecting to and using multiple databases, sharing applications, user authentication and security, and the services provided.

 

I would recommend it as a starting point for everyone who is interested in finding out more about SAP HANA Cloud Platform.

 

Thanks to Rui Nogueira and the openSAP team for providing this brilliant course.

Hi HANA Experts,

 

In this blog, I want to share my experience with and approach to gathering requirements and implementing for non-SAP source data in HANA.

 

HANA has interoperability strengths for various non-SAP sources like Oracle, MSSQL, and DB2, as it supports their data types.

 

The majority of HANA folks come from a traditional SAP background such as ABAP or BW and have strong insight into the tables and the relationships between those tables.

 

Even if we do not know the table names and relationships in SAP, there is still the SD11 transaction to provide that information.

 

The problem arises when there is a requirement to report or model on a non-SAP source.

 

How do we gather requirements from the business and the typical IT non-SAP super users?

 

Below are a few points which may help in our implementations:

 

The initial and most crucial step during the implementation is determining the type of data replication methodology to be used.

 

We know that there are several ways to replicate data, such as SLT, DXC, SAP Data Services, and the Sybase replicator. I will not get into the best options and procedures of these various replication methods.

 

DXC is not possible for non-SAP source systems. Depending on various factors, we choose one of SLT, SAP Data Services, or the Sybase replicator.

 

Once the data replicator is decided, the next major steps to work on with the non-SAP source team are:

 

     1. Identify the required tables, fields, and joins between the various tables (a very crucial step).

     2. Identify the measures and key attributes.

     3. In case of real-time reporting, opt for either SLT or the Sybase replicator.

 

Once the tables are in HANA, the next important step is to identify the joins between the tables.

 

The join type (inner, left outer, right outer, full) produces different result sets, so choosing it is the next very crucial step in modeling the HANA objects.

 

After identifying the tables, attributes, key attributes, and measures, the next step is data modeling in HANA: based on the requirements, design the attribute, analytic, and calculation models. Thereafter comes reporting, using a reporting tool like BO.

 

It looks simple, but many problems arise during implementation, so I also want to share the common issues faced and the checks to apply for non-SAP source data.


Common issues faced with non-SAP source data:

  1. Identification of primary keys in the different data sources: identify the primary key and convert it into the proper data type such as string, date, or numeric. For example, account number and sales order might have a different size and type in each source.

  2. The challenge of synchronizing data types: the date formats of Oracle and GDW differed from SAP's and appeared as text.

  3. Make proper, common formats/types for fields like currency, amount, and date.

 

Checks to apply for a non-SAP source:


COLUMN TABLE CHECK: Ensure that every table created for the non-SAP source system is of the COLUMN table type.


FIELD CONSISTENCY CHECK: Ensure that every field created in a table from the non-SAP source system which is to be merged with an SAP system field has the same, common data type. As a best practice, convert it into the SAP field format.


SCHEMA CONSISTENCY CHECK: Ensure that the schema names across the different HANA systems (production, development, and quality) are the same; this helps with easy maintenance of the models.
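The column table check, for instance, can be automated with a query against the TABLES system view. A minimal sketch (the schema name is a placeholder):

-- any row returned here is a non-column table that needs attention
SELECT table_name, table_type
FROM tables
WHERE schema_name = 'NON_SAP_SOURCE'
AND table_type <> 'COLUMN';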

 

I hope you enjoyed the read; please leave comments on this blog. In my next blog, I plan to cover best practices and naming standards.

 

Regards,

Santosh.

Seth Godin put it so nicely: Seth's Blog: Analytics without action

 

     "If you're not prepared to change your diet or your workouts, don't get on the scale."

 

meaning that if you don't want to actually do something with the insight you gain from analytics, then better not to bother doing analytics at all.

 

Yet here we are, working mainly on the technicalities of yet another platform that will revolutionize business, as if we hadn't had that often enough already (The "Mad Men" Computer, Explained By Harvard Business Review In 1969 | Fast Company | Business + Innovation).

 

Carsten Nitschke just reminded us with Too Busy to Innovate that we're focusing on the wrong stuff.

 

If there really is to be a major difference in how your organization does business, it won't help to just migrate to a new platform or tool.

Maybe funny to realize, but if the migration to the new technology platform is everything your team wants to do, then it actually doesn't matter what platform you choose. You may as well stick with your old one.

 

Looking at it from the vendor point of view (and that is the view I have to have, of course), every sold license is a win and generates a stream of revenue.

For vendors it's perfectly fine to focus on speed, features and shiny UIs. That's what we all sell.

 

However, that's not the whole story. And it's the easy part of the story, too.

 

The hard part is mainly on your side, as you have to change.

Frighteningly similar to having a session with your personal trainer, ain't it?

It was a collection of events that led me to finally blog again here on SCN; apologies for the absence! In the correct order: first came a nice mail from Mark Finnern which helped me find the right subject and angle. Then I came across this photo on LinkedIn and thought it was simply brilliant, since it described a reality with such direct beauty that it was almost brutal.

too busy tu improve?.jpg

What you see is the reality of many managers in IT. They are so busy "keeping the lights on" that they cannot focus on innovation. I come across such situations almost every day at customers. In a former job I used to joke that I would write the book "1001 reasons to not do anything", referring to the attitude of people who were unwilling, or perhaps unable, to think outside the box. In case somebody feels this is a rant: wrong! When we talk to customers or partners, we often find motivations on their side that lead them to say "No", and it is important to understand what those motivations could be and what "goodies" we can offer to steer that motivation in the right direction.

Hint: there is no penicillin at this point in time. Customers have different motivations and triggers that lead them to drive change in their organisation; for sure this will change over time as adoption grows.

Yesterday I saw on Twitter a link to the following ASUG survey: ASUG Member Survey Reveals Successes, Challenges of SAP HANA Adoption. Though it is just a preview, I did find some things in it that were really interesting.

  • 3/4 of the respondents who have not yet purchased SAP HANA said it is because they have not yet found the right business case for it
  • 65% of the respondents started their SAP HANA journey with BW on HANA

Please take the time to read the full document in the link provided. Whilst I have already seen comments that those numbers are "not good", I would be more positive and say: we have some points to work on, and there must be a starting point somewhere.

 

To me the first point is not really a surprise, which does not mean it is not an area that deserves much more focus. In a recent conversation, the IT director of a retailer told me: "You know, HANA is really expensive and I do not think that we need it." This is a very standard answer that I hear, yet it is the moment when the emotions rise and I like to ask some questions. Especially at a retailer, which often has 60,000-90,000 cost centers: how do you measure the profit of your SKUs? Can you tell me at what profit you are selling, for example, a 1.5l water bottle of brand X in your stores, differentiated by store? In 99.9999% of cases the answer is "No". Taking this a step further, it means that the IT department today cannot answer the business on one of the key KPIs of a retailer (profit of goods sold). Yet he says he does not need it. Where am I going with this?

 

I agree that customers need more guidance to understand the fundamental differences they can achieve with HANA. There are many, and that is actually the beauty of it. Is cost an important factor? For sure, but if you look at how customers operate today, with many disparate systems, ETL, and so on, there is huge cost in there as well, which after all delivers little to no added value. Being able to drive down cost is very important, and yes, we are doing it!

In most of my customer engagements we run a HANA journey and use Design Thinking to help them identify the areas where HANA would drive value for them. It is a very important step in a sales cycle, since it moves us out of pure IT and into a conversation with the business.

 

Screen Shot 2014-08-08 at 11.58.56.png

What I like about HANA is that it lets you take the journey at your own pace. It is your decision how much you get out of it. You can take a sports car just to drive in the city or at 100 km/h on a speed-limited highway, but you can also take it much further, and that is when you really get the most out of it.

 

Most of my customers have started the conversation with cost reduction and increased productivity. By significantly reducing the IT stack, we help lower the overall cost. Productivity is increased by use cases like BW: if a BW system is slow, users tend to use it only when there is no other way around it, or not at all. That makes the investment even more expensive, since there is no ROI!

I am very critical of using the speed factor of HANA as an argument. Yes, it is an argument too, but by far not the only one! If BW is fast and has a simpler structure (allowing users more drill-down), their satisfaction level, and thus their usage level, will rise.

 

These two points alone already justify the move towards HANA by themselves. But then we come to the areas of Innovation (data-driven, big data) and Transformation, where you do the really disruptive things. Remember Henry Ford's phrase: "If I had asked people what they wanted, they would have said faster horses." Yet he drove the revolution!

 

Screen Shot 2014-08-08 at 11.58.10.png

 

With traditional solutions, most people try to answer the three dark squares. The real value, however, lies in the area of "I don't know that I don't know". Many times this is related to IT. Coming back to my example of the retailer and the profit calculation: why is this not at the top of their priority list? Mainly because it was not thinkable to do such complex and data-intensive operations in a timely and cost-efficient manner, even though it is a core question of retail!

 

There is a lot more to do, and even more to explain and educate, but that is really on us.

 

Last but not least, this might be a good analogy:

 

35db59ca-1e71-11e4-beac-22000ab82dd9-original.jpeg

Of course this should not be done by one person alone!

This article is a continuation of the first part, which is about JavaScript in SAP HANA XS.
Examples of new possibilities are discussed here, as well as standard options, which should make the article interesting for beginners too. The difference from the first part is that here we reveal no code execution results. Why demonstrate them, when the interesting ones are worth trying out yourself and the obvious ones speak for themselves?

 

Code, description, and so on...

The first example shows how to observe an object: the non-standard watch method (a Mozilla JavaScript extension) lets you track any change to a property. The code will explain my point better than I can:

var o = { p: 1 };
var temp='';

o.watch("p", function (what, oldVal,newVal) {
    temp = temp + "o." + what + " changed from " + oldVal + " to "+ newVal +"<br>";
    return newVal;
});

o.p = 2;
o.p = 3;
delete o.p;

o.p = 4;
o.unwatch('p');
o.p = 5;
$.response.setBody(temp);    
$.response.contentType = "text/html";

The next example shows an alternative way to declare a function. You may never actually need it, but it is good to know it exists:

var a = "return "+ "x*function() {return y}()";
var multiply = new Function("x", "y", a);
$.response.setBody(JSON.stringify({test:multiply(2,4)}));
$.response.contentType = "application/json";

Scope is not always obvious to JavaScript beginners, and this example demonstrates that perfectly:

var a = 1;
function b() {
    a=10;
    function a() {return 5};
}
b()
var g = a;
$.response.setBody(g);    
$.response.contentType = "text/html";

 

 

But here <let> comes to help. Can you guess what the result will be?

var m=0;
var x=1;
var y=2;
let (x = x+10, y = 12) {
  m=x+y;
}
m=m-(x+y);
$.response.setBody(m);    
$.response.contentType = "text/html";

 

 

In this example I compare the performance of two ways of appending elements to an array. What do you think works faster: the standard "push" method, or writing a value to a not-yet-existing index? It seems that push and [<name>.length] should work the same way... and that is not entirely correct:

var res1 = [];
function times() {
    var a = [];
    var time1 = Date.now();
    for (var i = 1; i < 10000000; i++) {
        a.push(i);        // ex1: measure this variant first...
        // a[a.length] = i; // ex2: ...then comment ex1 out and measure this one
    }
    var time2 = Date.now();
    return (time2 - time1) / 1000;
}
for (var i = 0; i < 10; i++) {
    res1.push(times());
}
$.response.setBody(uneval(res1));
$.response.contentType = "text/html";

 

[2.035, 1.918, 1.939, 1.935, 1.933, 1.924, 1.945, 1.94, 1.948, 1.934] - push

[1.595, 1.564, 1.571, 1.556, 1.57, 1.554, 1.567, 1.57, 1.57, 1.574]- length

Collections (Set) enforce the uniqueness of their elements and can therefore sometimes be more useful than arrays:

var tem = new Set([1,2,'3']);
$.response.setBody(tem.has('3'));    
$.response.contentType = "text/html";

 

Default parameter values for functions:

function av(a=1) {return a};
$.response.setBody(av());    
$.response.contentType = "text/html";

When I attended the first job interview of my life, I was asked: "How do you swap the values of two variables without using a third one?"

Back then I knew only two ways: the arithmetic trick and the bitwise XOR trick. I did not know that you can also do it like this:

var foo = 1, bar = 2;
[foo, bar] = [bar, foo];
$.response.setBody('foo - ' + foo + ', bar - ' + bar);
$.response.contentType = "text/html";

Is there any difference between "in" and "of" in an iteration? Of course there is, and here's an example:

var arry=[1,2];

arry.someprop = 123;
var t=[];
for(var x in arry) {  // for(var x of arry)
 t.push(x);
}
$.response.setBody(uneval(t));    
$.response.contentType = "text/html";

If you ever need to combine a variable number of parameters with ordinary named ones, rest parameters are the answer:

function mult(m, ...th) {
  return th.map(function (x) {
    return m * x;
  });
}
var arr = mult(4, 1, 2, 3); 
$.response.setBody(uneval(arr));    
$.response.contentType = "text/html";

This is one way to define a getter and a setter. It is nothing new, but it could be useful to somebody:

var o = {a: 7, get b() {return this.a + 1;},set b(a) {this.c=a}};

o.a=2;
o.b=1;
$.response.setBody(o.b);    
$.response.contentType = "text/html";

A function parameter can be an object that is destructured on the way in, so this is possible:

var names = function({n1:n1,n2:n2}) { 
 return n1+' '+n2;
}
$.response.setBody(names({n1:1,n2:2}));    
$.response.contentType = "text/html";

What do you think the result of this function will be?

var a=1;
var b = (254,function() {return a+1}());
$.response.setBody(b+' a- '+a);    
$.response.contentType = "text/html";

Some labels:

var a = 1;
xxx: {
    a = 2;
    break xxx;
    a = 3;
}
$.response.setBody(a);
$.response.contentType = "text/html";

It is not obvious to everyone that there is also finally, is it?

var a=0;
try {
    a+=1;
} 
catch(e) {
    a+=2;
}
finally {
    a+=3;
}
$.response.setBody(a);    
$.response.contentType = "text/html";

This example is a little quiz, because three lines of code are missing. The returned result is "a = 2". We cannot assign the value directly to the variable. What could the missing lines be? I note right away that there is no error, and <kill_my_mind> has neither parameters nor brackets!

var a=0;
…
…
…
kill_my_mind;
kill_my_mind;
$.response.setBody('a=' + a);
$.response.contentType = "text/html";

Conclusion:

Thanks for your time.

 

P.S. The number of likes and followers encourages authors to write more ;)

In this blog, I would like to provide a basic introduction to the EXCEPT set operator, which was introduced with HANA 1.0 SPS07. It explains what the EXCEPT set operator is and how to write a simple statement with it. After following this blog, any reader should be able to write a simple EXCEPT query.

 

 

Let me first explain the use of the SQL EXCEPT clause/operator: EXCEPT combines two SELECT statements and returns only those rows from the first SELECT that are not returned by the second SELECT. In other words, it returns only the rows of the first SELECT that are not present in the result of the second.

 

The syntax of the EXCEPT set operator is:

   

     SELECT column1 [, column2] FROM table1 [, table2] [WHERE condition]

     EXCEPT

     SELECT column1 [, column2] FROM table1 [, table2] [WHERE condition]

 

For Example:

 

Except.png
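The screenshot is not reproduced here; as a minimal sketch of the kind of statement it showed (the sales table "SALES" is hypothetical; ARTNR is the article number and KNDNR the customer number), it could look like this:

-- Hypothetical table "SALES": return all articles except those
-- that were sold to customer 255.
SELECT "ARTNR" FROM "SALES"
EXCEPT
SELECT "ARTNR" FROM "SALES" WHERE "KNDNR" = 255;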

 

 

The above SQL returns all ARTNR values except those for which KNDNR = 255.

 

I feel that after reading this blog, readers will be able to use EXCEPT in situations where a "minus" operation needs to be performed. I encourage readers to try out the EXCEPT set operator. Stay tuned for my upcoming blog, in which I will introduce the GROUPING SETS function and how to use it in a HANA system.

 

 

 

A win-win situation for SAP & SAP HANA customers.


That's how I can describe the HANA operation expert summit in one sentence. My blog is late (by some months), but that doesn't matter; the message it carries stays relevant and important. I hope this event becomes an annual appointment on my agenda.

 

The event concentrated on the operations part of SAP HANA, and since many SAP experts are in or near SAP Walldorf, SAP gathered them up and delivered a two-day event including a keynote, a networking dinner, roadmap sessions, and group discussions with the experts from SAP.

 

I've said this before, and I really did get this impression: all the customers present who run SAP HANA were happy about the fact that they are running on SAP HANA. That's an important pointer. I hope all customers who have it and leverage it are happy with it. If not, all the more reason to attend this type of event, as it provides the opportunity to give direct feedback and discuss issues with SAP. Merely being there and giving feedback might already solve your problem.

 

The topics were picked with the help of surveys done up front by SAP (organizer Kathrin Henkel) to check what attendees were looking for in terms of information and which topics they wanted to discuss. Unfortunately I missed most of the keynote because I had to give a workshop presentation at a customer site, still in Belgium. I arrived late in Walldorf and walked in near the end of the keynote (roughly the last 10 minutes), but I could still get a sense of what was shown: some examples of non-traditional SAP HANA use, by which I mean scenarios not based on SAP BW or SAP ERP.

 

After the keynote, the networking dinner started. It was organized so that each table had two SAP HANA experts present, to provide an entry point for discussion and to let participants and experts get to know each other. The concept was good and it was an enjoyable evening. One of my Belgian colleagues was there with me, so we ended up continuing the discussion in a local pub near the center of Walldorf.

 

The next day, presentations were on the agenda, in which the experts explained the current status and roadmap for the different topics related to SAP HANA. Again, well organized and quite interesting from a participant's point of view.

 

After the presentations, break-out sessions were planned to discuss topics with the relevant experts, network, and exchange knowledge. The discussions were interesting and most of them yielded useful information and insights. SAP took notes to ensure follow-up.

 

Networking was left mostly to the participants because of rules and regulations around sharing personal information, so I believe there is room for improvement here. Perhaps participants could opt in up front to share their contact details with other participants, or a SAP JAM group could be created for longer-lasting social contact around the topic.

 

Participants could optionally receive a SAP HANA operation expert polo (check the video), which I personally like a lot, and a doggy bag with food for the road home. I really liked both. Provide me with good-looking sponsored clothing like that and I will wear it proudly. Why? Because I'm proud to be part of SAP's ecosystem and I believe in SAP HANA. I enjoy spending time on SAP HANA; it's as simple as that. It's exciting new technology that has already started to make an impact on the world around us.

 

You can have a look at the overview video to get an idea of what the event was about:

hana_expert_summit.jpg

 
