
SAP HANA and In-Memory Computing


The SAP HANA SQL and System Views Reference was updated to reflect the new licensing options available for SAP HANA. Altogether it documents 137 SQL statements, 155 SQL functions and 308 system views.

 

To make it easier to locate the information relating to options, the structure of the guide was changed to include separate sections for SQL Reference for Options and System Views Reference for Options.

 

sql_ref_options.png

Some of the highlights include:

 

Partitioning

A new type of multi-level partitioning called range-range allows you to use, for example, a year as the first-level partition specification and then create a number of range partitions within each year, for example all records from 1 to 20,000 and all records greater than 20,000. Additional data types supported for range partitioning include BIGINT and DECIMAL.
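As a rough illustration, here is a minimal sketch of a range-range partitioned table, assuming a hypothetical sales table and column names; check the SQL reference for the exact syntax supported by your revision.

CREATE COLUMN TABLE sales_orders (
   order_year INT     NOT NULL,   -- first-level range column
   order_id   BIGINT  NOT NULL,   -- second-level range column (BIGINT now supported)
   amount     DECIMAL(15,2),
   PRIMARY KEY (order_year, order_id)
)
PARTITION BY RANGE (order_year)
   (PARTITION 2013 <= VALUES < 2014, PARTITION 2014 <= VALUES < 2015, PARTITION OTHERS),
RANGE (order_id)
   (PARTITION 1 <= VALUES < 20000, PARTITION OTHERS);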

 

Table re-distribution

Table re-distribution now allows you to assign tables or partitions to a particular node in a distributed SAP HANA system.
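A hedged sketch of what this can look like in SQL (host name, port and table name are placeholders, not taken from the original post):

ALTER TABLE sales_orders MOVE TO 'hana-node02:30003';               -- move the whole table to another host
ALTER TABLE sales_orders MOVE PARTITION 2 TO 'hana-node03:30003';   -- move a single partition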

 

Regular Expressions

SAP HANA SPS 09 supports regular expression operators in SQL statements. The search pattern grammar is based on Perl Compatible Regular Expression (PCRE).
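For instance, a minimal sketch using the LIKE_REGEXPR predicate (table and column names are hypothetical):

SELECT employee_id, last_name
FROM emp
WHERE last_name LIKE_REGEXPR '^Mc[A-Z]';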

 

Table Sampling

The TABLESAMPLE operator allows queries to be executed over ad-hoc random samples of tables.

Samples are computed uniformly over the data items of a column-store base table.

For example, to compute an approximate average of the salary field for managers over a 1% sample of the employee (emp) table, you could run the following query:

 

SELECT count(*), avg(salary) FROM emp TABLESAMPLE SYSTEM (1) WHERE type = 'manager'

 

Note that sampling is currently limited to column-store tables, and repeatability is not supported.

 

Number Functions
Number functions take numeric values, or strings with numeric characters, as inputs and return numeric values.


BITCOUNT
Counts the number of set bits in the given integer or VARBINARY value

 

BITXOR
Performs an XOR operation on the bits of the given non-negative integer or VARBINARY values

 

BITOR
Performs an OR operation on the bits of the given non-negative integer or VARBINARY values

 

BITNOT
Performs a bitwise NOT operation on the bits of the given integer value
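As a quick, hedged illustration of all four functions (the values are chosen only to show the bit patterns):

SELECT BITCOUNT(10)  AS set_bits,    -- 1010 has two set bits -> 2
       BITOR(8, 2)   AS or_result,   -- 1000 OR 0010 -> 10
       BITXOR(10, 2) AS xor_result,  -- 1010 XOR 0010 -> 8
       BITNOT(10)    AS not_result   -- bitwise NOT of 10 -> -11
FROM DUMMY;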

 

More information on all of these features can be found in the SAP HANA SQL and System Views Reference on the SAP Help Portal.

 

Additional Resources

 

The HANA Academy also has a number of videos on these new SQL features and many more topics. Be sure to check them out:


SAP HANA Academy - SQL Functions: String_Agg


SAP HANA Academy - SQL Functions: TABLESAMPLE


SQL Functions: PERCENTILE_CONT and PERCENTILE_DISC


SAP HANA Academy - SQL Functions: BITCOUNT Bitwise Operation


SAP HANA Academy - SQL Functions: BITOR Bitwise Operation


SAP HANA Academy - SQL Functions: BITXOR Bitwise Operation

 

Additional SQL guides include:

 

SAP HANA Search Developer Guide


SAP HANA Spatial Reference


Backup and Recovery commands in the SAP HANA Administration Guide

We all noticed it: SAP HANA has grown.

A lot, actually.

 

Let's begin with a little sentimental look into the past

 

What began as an in-memory database management system with some funky development artifacts like attribute, analytic and calculation views has grown into a huge, feature-packed platform for data management and processing.

 

With it, the description of what SAP HANA does and how it can be used had to grow: the documentation.

Checking my Docu OLD folder, this is what I find:

old_docs.png

That's more than 18 times as much documentation content for SPS 8 as there was for SPS 2 back in 2011.

 

Size does matter or does it?

 

But it's not just about the sheer size of the documentation.

 

The documentation page itself has been improved quite a bit:

docu_head.png

Besides the better layout and content organization, the documentation page (as well as the documents themselves) now contains the revision the documentation belongs to and the date of the last update.

While it may not seem like a big deal, this is a departure from the "usual SAP" way of updating the documentation only together with SPS levels.

This means new and improved documentation can reach you much faster.

 

Surprises long wished for

 

The next thing is something I really love to have: the option to simply download the whole documentation package as a single zip file.

 

As you can tell from my Docu OLD folder above, I like to have the documentation locally available and not just online or inside SAP HANA Studio.

With the search functionality of Windows 7, having all the documents in one folder makes it really easy to run a quick search of the documentation for some information.

(Note: depending on your Windows and Acrobat Reader version, you might need to install the Adobe PDF iFilter 64-bit to index the contents of PDF files in Windows Search.)

 

If you're like me and barely ever use the table of contents links shown above, but just scroll down the page to find the actual links, then this is the section you are looking for from now on:

compl_docu_download.png

 

As far as I can tell, this section is now available on nearly all documentation packages, mostly "somewhere down the page"...

Maybe this can be changed to a better location some time.

 

Both of these features are things I (and I am sure others as well) had asked for, and seeing them now, I feel a little bit proud, too.

 

Spoilt for choice

 

Another new "thing" with SAP HANA SPS 9 is that there are now additional features and functionality that is not part of the core SAP HANA package.

 

These SAP HANA Options can be licensed separately and thus require their own separate set of documentation.

 

In order to keep the level of confusion as low as possible, the documentation developers put a new sub-section into the left-hand navigation bar.

All SAP HANA Options documentation can be found right there.

options_docu.png

 

 

And that's it again!

 

I really do like the improvements even though I wonder why I always have to find out about them by accident.

After all, we're all busy and if there's something new worth reading, I don't want to wait to find out about it a year later...

 

It would be a great thing to have more pointers to such things, just like the recent blog from the SAP HANA documentation developer team about the Information Map (Quick Access to the SAP HANA Development Documentation (for SAP HANA Studio))

 

There will always be more points on the wishlist (how would you feel about an RSS feed on the documentation page, announcing new document versions when they get uploaded? I'd love that!). And I find it a great thing to see them turning into reality one by one.

 

There you go - now you know!

 

Cheers, Lars

Bart Crider, Director of Enterprise BI, shares how Dell has revolutionized their sales and planning processes with real-time data for their entire sales force using a variety of real-time data sources.

 

HE10.jpg

 

We hope you enjoy hearing Dell’s first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.

To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please let us know.

 

Transcript: SAP HANA Effect Episode 10

Sponsored by:

xeon_i_ww_rgb_3000

Have you seen the newest version of the SAP HANA Developer Information Map? This version is available in the SAP Help Portal now at help.sap.com/hana_platform in the Development Information section.

 

This feature is available only in the online help version of the document.

 

In SAP HANA SPS 08, SAP published the first SAP HANA Developer Information Map to help you locate the correct document based on your specific needs. With the latest version of this guide (released in March this year), SAP provides direct access to the specific sections of the SAP HANA Developer Guide and related reference guides directly from the image in the HTML version. (Only a copy of the image follows here; the links are active in the online help.) Give this a try and let us know (using comments or 'Like' on this blog) whether you find this helpful and whether you would like to see more of these types of navigation aids in future documentation.

InterGraph.png

 

If you do not see the Note on the linked page to the guide, then clear your browser's cache and try the link again.

Following on from How to Install the Automated Predictive Library in SAP HANA,

I thought it would be useful to share when and how to uninstall the Automated Predictive Library (APL). Removing it is a lot easier and quicker, but it is not documented anywhere that I have seen.

 

When you update HANA revisions, for example from 93 to 94, you need to uninstall and re-install the APL.


You can't just install the APL again, because the installer detects that it is already installed (even though it won't work). So you need to manually remove the APL by deleting its directories.

 

I have done this via these commands:

 

rm -rf /usr/sap/IAN/exe/linuxx86_64/plugins/sap_afl_sdk_apl_1.1.0.20.0_1

rm -rf /usr/sap/IAN/SYS/global/hdb/plugins/sap_afl_sdk_apl

 

Now you can easily re-install with the original APL installer


mo-dad69e91a:/tmp/apl-1.1.0.20-linux_x64/installer # ./hdbinst


APL Uninstall.png

Having done some work with the unstructured text engine within the SAP HANA Platform, I wanted to capture and share how to do this. For this example I have used Twitter data looking at Formula One hashtags and F1 accounts.

 

The linguistic engine is just one of the engines in the HANA Platform and is not often talked about, but it is very easy to use for extracting structured information from unstructured text. This text could be held in a simple character field, or it could be within a binary document; many binary formats are supported, including TXT, RTF, HTML, PDF, DOC, DOCX, XLS, XLSX, PPT, PPTX and MSG. The official Text Analysis, Text Search and Text Mining documentation can be found here.

 

For this example I first used the Text Analysis (TA) engine straight out of the box, and yes, it works; the results were OK. But, as you would expect with any industry, line of business or sport, F1 has its own terms, the drivers and teams (constructors) being prime examples, so I wanted to create a custom dictionary to improve how these are recognized.

 

There's a good blog that shows the old way (HANA SP7) of doing this: SAP HANA Custom Dictionary


With SP9, this is even easier; there are really only 3 steps:

  1. Create the XML dictionary
  2. Reference the dictionary in a TA configuration file
  3. Call the Text Analysis Configuration with SQL

 

1.1 HANA Web IDE

Go to the HANA Web IDE, for me this is at

http://ukhana.mo.sap.corp:8001/sap/hana/ide/editor/

For others it would be

http(s)://<HANA HOSTNAME>:80<HANA INSTANCE>/sap/hana/ide/editor/


WebIDE-1.png

 

1.2 Create Dictionary File

Create a New "File" for the dictionary, I used the path sap.hana.ta.config

The file needs to end in .hdbtextdict

WebIDE-New FIle.png

 

1.3 Create the dictionary

Here's a snippet of mine; I have attached the full XML file below. Check that your XML file opens in a web browser, and also be careful with the double quotes (") in it: sometimes you may find "smart quotes" like “ and ”, which are not so smart for XML files!

 

<?xml version="1.0" encoding="UTF-8"?>
<dictionary xmlns="http://www.sap.com/ta/4.0">
   <entity_category name="F1 Driver">
      <entity_name standard_form="Lewis Hamilton">
            <variant name ="Lewis"/>
            <variant name ="Hamilton"/>
            <variant name ="HAM"/>
            <variant name ="@LewisHamilton"/>
            <variant name ="#TeamLH"/>
            <variant name ="LewisHamilton"/>
      </entity_name>
      <entity_name standard_form="Jenson Button">
            <variant name ="Jenson"/>
            <variant name ="Button"/>
            <variant name ="#JB22"/>
            <variant name ="@JensonButton"/>
            <variant name ="JensonButton"/>
      </entity_name>
      <entity_name standard_form="Kimi Raikkonen">
            <variant name ="Kimi"/>
            <variant name ="Raikkonen"/>
            <variant name ="Kimi Räikkönen"/>
            <variant name ="Räikkönen"/>
            <variant name ="Ferrari Kimi Raikkonen"/>
      </entity_name>
     </entity_category>
</dictionary>










Below you can see the full Dictionary XML file inside the WebIDE

Once you click Save, you should see in the black console (as above) that it gets saved and activated (compiled automagically) immediately.

F1.hdbtextdict.png


2.1 Create configuration file 

The easiest way is to choose one of the existing .hdbtextconfig files that you see, whichever one is the most appropriate.

This can be done easily: right-click, copy and paste. I chose the VOICEOFCUSTOMER one as I was initially using some Twitter data for the unstructured analysis. Give the new file a sensible name, and remember to keep the .hdbtextconfig extension.


2.2  Edit Configuration file

Open your newly copied file and scroll to the bottom.

Add an entry that references the dictionary file you created above. For me, I added:

 

<string-list-value>sap.hana.ta.config::F1.hdbtextdict</string-list-value>










2.3 Save configuration file

You should see it also activates at the same time, which will check for any errors too.

F1.hdbtextconfig.png

 

3.1 Database Table

You can now use your new configuration. I loaded some Twitter data using the HANA Data Provisioning Agent that's also part of HANA SP9. I created a simple table, F1-TWEETS, with 3 columns. The table must have a primary key and also a text field of type NVARCHAR, VARCHAR, BLOB or CLOB.

F1-TWEETS.png
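A minimal sketch of such a table definition, based on the column names visible in the INSERT below (the types and lengths are my assumptions, so adjust them to your data):

CREATE COLUMN TABLE "F1"."F1-TWEETS" (
   "Id"         BIGINT NOT NULL,    -- primary key, required for the full-text index
   "ScreenName" NVARCHAR(100),
   "Tweet"      NVARCHAR(500),      -- the text column that will be indexed
   PRIMARY KEY ("Id")
);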

INSERT INTO "F1-TWEETS" (
SELECT "Id", "ScreenName", "Tweet"  FROM "F1"."F1HANA-Twitter_Status");









3.2 Create FullText index with the new configuration

CREATE FULLTEXT INDEX "F1-TWEETS-FTI" ON "F1"."F1-TWEETS"("Tweet")
CONFIGURATION 'F1'
FAST PREPROCESS OFF
TEXT ANALYSIS ON;








This creates a new table, in my case $TA_F1-TWEETS-FTI, which contains the structured version of the unstructured data.

When you use a dictionary, the TA_NORMALIZED column contains the standard form defined in the dictionary for each matched entity, while TA_TOKEN still contains the text exactly as it appeared in the tweet.
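To get a feel for the results, you can query the generated $TA table directly. This is a hedged sketch: the TA_* column names follow the usual $TA table layout, and the entity category 'F1 Driver' comes from the dictionary above.

SELECT "TA_TOKEN", "TA_NORMALIZED", "TA_TYPE", COUNT(*) AS occurrences
FROM "F1"."$TA_F1-TWEETS-FTI"
WHERE "TA_TYPE" = 'F1 Driver'
GROUP BY "TA_TOKEN", "TA_NORMALIZED", "TA_TYPE"
ORDER BY occurrences DESC;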


3.3 Visualisation of the Results

To illustrate the difference that the dictionary makes, compare the 2 visualisations that I created with Lumira using a calculation view against the $TA_F1-TWEETS-FTI table.


Without the Dictionary - TA_TOKEN

Without-Dictionary.png

Without-Dictionary 2.png



With the Dictionary - TA_NORMALIZED

With-Dictionary.png

With-Dictionary 2.png

 

For me it is clear that there is enormous benefit in using Text Analysis to turn unstructured data into meaningful information, and when you combine that with custom dictionaries you have a very powerful tool.

This time last year, I wrote about experiences migrating SAP BW customers in 10 Golden Rules for SAP BW on HANA Migrations. Things change in a revolution around the sun, and over the last year, we have found a sharp increase in the number of customers going live with Suite on HANA. It turns out that whilst there is some transferability of skills from BW to Suite migrations, there are just as many differences.

 

With that in mind, here's my 10 guidelines for SoH Migrations - they do, of course, apply just as well to S/4HANA!

 

1) Start with BW on HANA

 

This isn't an absolute rule and greenfield customers should definitely go directly onto Suite on HANA, but customers with a complex SAP landscape should not go live with Suite as their first system. Why?

 

I met with Vishal Sikka at TechEd in 2011 and made the comment that if BW would soon run on HANA, that meant NetWeaver ran on HANA, which meant they could easily run the Suite. I asked him why SAP chose not to also announce Suite on HANA support. His comment was that if a large customer's BW system were to go down, they would open a support ticket and get it resolved. If their ECC system went down and they stopped the manufacturing facility, he would get a phone call from a CIO.

 

Whilst HANA is an incredibly stable database and ECC runs very well on it, HANA will be a new database to the organization and it is best to start with a system which is not transactional in its nature. I always recommend to do BW on HANA first - it allows the training of staff, implementation of infrastructure, backup, high availability and disaster recovery, monitoring processes etc. Once BW is in, the organization will have the maturity to support HANA, and ergo the maturity to migrate Suite.

 

If getting live on Suite is a priority then you can run parallel projects, going live with BW first, and with Suite 4-8 weeks later.

 

Some people asked me what they should do if they don't have BW. That's OK, doing BW first is just a neat way to build organizational process intelligence around HANA. If you don't have BW then you just need to be a little more structured around how you build that capability, to mitigate any risk. Things you need to consider include architecture, sizing, networks, updates, backup/restore, HA/DR and monitoring. There are processes like support and incident management, change management, release management and transport management, and people-centric items like support personnel and DBAs. Just the same as any other system.

 

2) Build an Integrated Schedule

 

This is important for any project, but with Suite on HANA it is essential. There will be connected systems like Supply Chain Management, forecasting systems like BPC, reporting systems like BW, third party interfaces and integration. There will be a raft of front end tools like SAP Gui, Portals, Web Stores. Cloud integration to SuccessFactors or Ariba or Salesforce.

 

You need to involve teams from Basis, Architecture, Infrastructure, Networking, Custom Development, Test Management, Finance, HCM and others. Suite touches the whole business.

 

We always build an integrated schedule that describes the project in a way that can be displayed on a single monitor screen, so everyone understands what is happening and when. Now ensure that everyone buys into this.

 

Make sure that your integrated schedule also contains a reference to other releases or projects which will run concurrently, so you can track them, any change freezes, or dependencies. It's important in an integrated schedule that you ask teams not to pad their times, but to provide realistic estimates for how long tasks will take. Then, as a project manager, you add in contingency which will allow some slippage for issue resolution.

 

3) Build mid-level and detailed plans

 

I like to have 3 levels of plans for a migration project. The integrated schedule which typically describes the project on a weekly basis is the first.

 

The second is the detailed plan, which is at a task-time-resource level. Detailed plans are very hard to read, and only experienced project managers really know how to work with them and interpret them, building Gantt charts with complex dependencies, resource costing, WBS and allowing burndown charts and earned value calculations. Typically only the PMO needs to use the detailed plan.

 

The third is a mid-level plan, which is at the task-day-team level. This allows you to explain to the project team what they need to do and when, every day. Why every day? Because this allows you to squeeze the plan, and shorter projects have better time to value and lower cost.

 

4) Have a communications plan and stakeholder map

 

This can be very straightforward, but a Suite on HANA project will have eyes from many places in an organization, and rumor travels faster than the speed of light. Decide who, how and when to communicate with, and do it regularly. I find that CIOs and other senior leaders often like a short weekly update - my rule of thumb is it should be readable with one swipe of the finger on a smart phone.

 

A weekly 15 minute all-hands call can be useful too - for anyone interested in getting an interactive update.

 

If you communicate regularly to all your stakeholders then you dramatically reduce the chances of misinformation spreading and causing disruption to your project.

 

5) Have a production-sized sandbox/pilot

 

The details of how this works will depend on your organization, landscape and complexity but once you enter the main development system, you will have a change freeze. The best way to keep this change freeze short is to be prepared, and you can't be prepared unless you have previously completed a production-sized migration.

 

So take a system copy of the production integration environment (BW, ECC, PI etc.) and then migrate the ECC system to HANA. Let your Basis team do this 2-3 times before you release it to the technical and functional teams if possible, so they can hone their process.

 

It's also possible to do this early on, prior to purchasing all the hardware (buy one sandbox which is roughly sized using the SAP Sizing Guide). If you do this, you can validate sizing so you have confidence in your Bill of Material, and do things like archiving and data aging to reduce your hardware requirements.

 

6) Consider having some skin in the game from SAP

 

SAP Active Global Support (AGS) and Professional Services Organization (PSO) have merged into one group, called ONE Support. Regardless of whether you are a Max Attention, Active Embedded or Enterprise Support customer, you can contract them to be involved in the planning and support of the project. In particular they have a service catalog available which has a service for planning and for custom code management. There are free services for pre- and post-go-live checks which you should book in 6 weeks ahead of time.

 

Having SAP bless your architecture, sizing and plan is a big bonus and they have good quality resources for this sort of work. In addition, there is a HANA Ambassador program available in North America which provides a resource which reports into the Global Customer Office at SAP. It's a good way to ensure your project gets the attention it requires.

 

7) Join the Customer Advisory Council

 

There is a HANA Customer Council run by Scott Feldman which meets periodically. It's available free of charge for senior IT folks and project sponsors to go and talk to other customers, hear what's going on on the ground, and gain some additional confidence. More details, including information on this Council plus how to join the international HANA Global Community, can be found in this blog.

 

8) Change Many, Test Once

 

This goes against some of the views of IT folks, but I am a big promoter of a change-many, test-once approach. SAP has an excellent tool for HANA migrations called DMO, which will upgrade, patch, perform a Unicode conversion and migrate to HANA in one step, all without touching your source system.

 

This does increase the amount of effort in root cause analysis of problems (which change caused the problem), but it provides a single test landscape. One of the biggest risks in any project is inadequate testing, and this approach allows you to have the conversation with the test team: I've reduced the number of UAT runs from 3 to one, please give me support!

 

9) Solution Manager is your friend

 

I don't think I ever thought I'd say this, but Solution Manager is your friend in a HANA migration! There is a SCN Wiki SAP Solution Manager WIKI - Custom Code Management - Solution Manager - SCN Wiki which has lots of useful pages, including the tooling available for HANA migrations.

 

This includes the Custom Development Management Cockpit (CDMC), which tells you what code has been customized and will break; Usage & Procedure Logging (UPL), which tells you what code is used and how much; and Clonefinder, which tells you what transactions have been cloned to custom, and how customized they are.

 

Custom code is not your friend in HANA migrations, especially clones, because cloned transactions won't get updated with all the nice HANA optimizations that come as part of SAP ERP 6.0 EhP7.

 

Remember that you need to patch to the very latest version of Solution Manager! Don't take an N-1 approach to this!

 

10) Test, test and test!

 

Talk to your test manager and ensure that you have a good test strategy. Do you have separate phases for unit, integration, performance, stress and user testing? Do you have test automation using a tool like HP QualityCenter?

 

How wide is your test coverage? Does it include front end solutions like portals and external web access?

 

And remember that there will be some effort in custom code remediation, so leave time to do this. Whilst HANA is an amazing database, it is a columnar database and some custom code will not run optimally if it was poorly written. Row-based databases can be much more forgiving than columnar databases for shoddy code!

 

11) Build an integrated cutover plan

 

I've heard so many teams within an organization talk about "my plan". There needs to be "the plan"! The way we do this is to build a cutover spreadsheet with a numbering system and forecast times for every activity, and an integrated playbook which matches the numbering system in the spreadsheet to the table of contents.

 

Then when you ask the Basis team where they are at 3am, they can tell you what number, and you can see where you are relative to the schedule in 5 seconds flat. You can replace forecast with actual and get a revised cutover plan as you go.

 

Final Words

 

Now I've written this blog and looked back on the BW blog, it is fascinating how different the motions are in a BW on HANA migration, but that shouldn't come as a surprise: ECC on HANA is a transactional system, with all the complexities that come with this. It runs the core systems of the most complex companies in the world.

 

One thing I've missed - it should be implicit - is that in a migration project, you need experienced resources. You need people that know your company, your processes, and external experts. Make sure you work with people you trust and want to work with, and can depend on to buy you a cup of coffee at 3am! And good luck!

 

I'm interested in your feedback and tips - what have I missed?

 

P.S. This blog takes me past 15k points to the "Diamond" badge in SCN. Thank you all so much for your support through the years, I truly appreciate your time in reading my work.

James Williams, Manager DBA & SAP Basis at Bloomin' Brands, explains how real-time planning and consolidation has Bloomin' Brands cookin' up innovation.

 

HE9.jpg

 

We hope you enjoy hearing Bloomin' Brands' first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.

 

To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

 

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please let us know.

 

Transcript: SAP HANA Effect Episode 9

 

Sponsored by:

 

xeon_i_ww_rgb_3000

Q1. Since Dynamic Tiering is claimed to be disk-based columnar storage, I would appreciate it if someone could shed some light on why, for every query that runs on an extended table, a remote Row Scan is used instead of a Column Search.


I understand that "remote" is used because SAP HANA treats the extended table as a virtual table.

 

Experiment A:

Select * from the Extended Storage table: based on the visualized plan, a remote row scan was used.

 

Experiment B:

Select a specific column from the Extended Storage table: based on the visualized plan, a remote row scan was used as well.

 

 

Experiments C and D: Replicate the queries from experiments A and B on a HANA columnar table. From below, we can see that a Column Search was used for both queries.

 

 

 

Q2. In RLV-enabled Extended Storage, I noticed that the delta is constantly high in log_es, as shown below.


Since RLV improves performance by enabling concurrent writes and acts like a delta store in HANA, when does the delta merge happen, and will the percent used shrink down when the delta merge happens?

 

Q3. I realize that we can enable the "fedtrace" trace for the indexserver to show us how the indexserver operates on the esstore. Is there any trace we can enable on the esserver to gain more insight into how exactly the esserver works?



All valuable inputs and questions are welcome and perhaps we can use this space as a central knowledge base for Dynamic Tiering.

 

Hopefully these questions are answered before we consider Dynamic Tiering as a solution for multi-temperature data.

 

Cheers,

Nichoals Chang

Heading off to the SAP SapphireNow and ASUG Annual Conference in Orlando in May? Have you registered for any pre-conference seminars? The pre-conference seminars are half or full day ASUG seminars available on a number of different topics. These will be held on May 4th 2015, a day before the regular conference kicks off from May 5th to 7th 2015.

 

 

Check out the following link for a listing of these seminars.

 

 

http://events.sap.com/sapandasug/en/pre-conference.html?bc=2%3

 

 

Our colleagues in the SAP HANA Product Management team as well as the Solution Management teams are preparing several seminars focused around SAP HANA. Be sure to take advantage of this opportunity! Some of the SAP HANA sessions that you will find interesting are listed below (be sure to check the above link for a complete listing of the pre-conference seminars). These will appeal to folks interested in getting the big picture on the business value as well as end-to-end coverage of HANA. In addition, two sessions are targeted specifically to application developers.

 

 

End-to-End SAP HANA Overview

Monday, May 4, 8:30 a.m. - 5:00 p.m.

 

Interested in learning about SAP HANA? Feeling overwhelmed with the depth and breadth of SAP HANA? Not sure where to start? Come join the SAP HANA product management team for a full-day, pre-conference session to get a primer on SAP HANA. SAP will provide an end-to-end overview of SAP HANA technologies, highlighting the must-have information for you to be SAP HANA ready and get you oriented for deeper-dive sessions in the main conference. Various SAP HANA product management team members will provide coverage across the many topic areas and will be on hand to answer your queries.

 

Building the Business Case for SAP HANA

Monday, May 4, 8:30a.m. - 12:00 p.m.

 

Understand the possible use cases for an SAP HANA implementation in an intensive, deep-dive working session, where details on determining business value and building a business case will be shared. Learn how to prioritize use cases and determine value drivers. Also, hear a customer testimonial on their experience with defining use cases, prioritizing them, and ultimately building the detailed business case.

 

Application Development Based on SAP NetWeaver Application Server for ABAP and SAP HANA

Monday, May 4, 1:00 p.m. - 5:00 p.m.


[This is a hands-on seminar.]

This session will provide an overview on how to leverage SAP HANA from SAP NetWeaver AS for ABAP applications that integrate with the SAP Business Suite. Speakers will explore concrete examples and best practices for customers and partners based on SAP NetWeaver AS for ABAP 7.4. This includes the following aspects: the impact of SAP HANA on existing customer-specific developments, advanced view building capabilities, and easy access to database procedures in the application server for ABAP; usage of advanced SAP HANA capabilities like text search or predictive analysis from the application server for ABAP; and best practices for an end-to-end application design on SAP HANA. Finally, with SAP NetWeaver 7.4, SAP has reached a new milestone in evolving the ABAP programming language of the application server into a modern expression-oriented programming language. The new SAP NetWeaver Application Server for ABAP features covered in this session will include inline declarations, constructor expressions, table expressions, table comprehensions, and the new deep move corresponding.

 

 

Hands-On Predictive Modeling and Application Development Using SAP HANA Predictive Analysis Library (PAL) and R

Monday, May 4, 8:30 a.m. - 12:00 p.m.

 

[This is a hands-on seminar]

At this session, you will learn how to create a Fiori-like application. When you build your own app with SAP Web IDE, the browser-based tool for rapid application development, you will benefit from a set of proven application patterns and UI controls. You will also experience a high degree of flexibility and control when developing with SAPUI5.

 

 

If you still need to register, please do take a look at the pre-conference seminars. If you are already registered, you should be able to add pre-conference seminars to your registration.

 

We look forward to seeing you in Orlando in May!

Hi,

 

You may have already heard about the recent release of SAP Predictive Analytics 2.0, but may not be aware that this also includes the SAP Automated Predictive Library (APL) for SAP HANA.

 

The APL is effectively the SAP InfiniteInsight (formerly KXEN) predictive logic optimized and adapted to execute inside the SAP HANA database itself for maximum performance - just like the SAP HANA Predictive Analysis Library (PAL) and Business Function Library (BFL).

 

Obviously when you already have data in SAP HANA it makes sense to perform heavy-duty processing such as data mining as close as possible to where the data resides - and this is exactly what the APL provides.

 

By way of comparison, the PAL provides a suite of predictive algorithms that you can call at will - as long as you know which algorithm you need - whereas the APL focuses on automation of the predictive process and uses its own in-built intelligence to identify the most appropriate algorithm for a given scenario. So the two are very much complementary.

 

There are a couple of ways to take advantage of the APL. Of course, you can exploit the APL when using the SAP Predictive Analytics 2.0 desktop application - whenever accessing SAP HANA as a data source. In this case usage is implicit.

 

However it's also possible to access the APL independently of SAP Predictive Analytics 2.0. You can access the APL explicitly using SQLScript or from the Application Function Modeler (AFM) in SAP HANA Studio. And, of course, you can embed APL capabilities into your own custom SAP HANA applications.


We've put together a series of SAP HANA Academy hands-on video tutorials to explain how to access the APL from SAP HANA Studio using SQLScript:

 

1. Reference Guide & Download

In this video, part of the SAP Automated Predictive Library (APL) for SAP HANA series, we will introduce the SAP Automated Predictive Library (APL), download the APL reference guide, then download sample data & code and extract them for later use.

 

2. Import Sample Data & Check Installation

In this video, part of the SAP Automated Predictive Library (APL) for SAP HANA series, we will use SAP HANA studio to import the provided sample data into a SAP HANA schema, ensure the SAP HANA script server is running, and verify that the APL has been correctly installed.

 

3.Create APL User & Table Types

In this video, part of the SAP Automated Predictive Library (APL) for SAP HANA series, we will create and authorize a SAP HANA database user so that it can make use of the APL. We will also set up APL table types and test the APL using the "ping" function.

 

4. Predicting Auto Insurance Claim Fraud

In this video, part of the SAP Automated Predictive Library (APL) for SAP HANA series, we will use the APL to predict auto insurance claim fraud.

 

This example shows how an insurance company assesses past insurance frauds in order to create a category of client characteristics that may be susceptible to make fraudulent claims.

 

The first step of the analysis is to prepare the main input tables, one of which contains data that has already been analyzed and includes some known fraud cases. This table is used to train the model. The results are used to indicate which variable(s) to use as the target and to describe the claims data.

 

After considering past data and past fraudulent claims, the customer uses that data to train the APL model and produce an updated model that will be applied to the new data in order to detect potential fraud risks.

 

After training the model, the APL function returns summary information regarding the model as well as indicators like the Predictive Power (KI) of the model, or the Prediction Confidence of the results (KR).

 

At the end of the data mining process, the “Apply Model” function produces scores in the form of a table that can be queried.

 

 

Or for the YT playlist follow this link: http://bit.ly/hanaapl

 

We hope these help you get started with the APL.

 

For a more in-depth discussion of the APL do check out the excellent blog by Ashish Morzaria.

 

Enjoy!

 

Philip

Michael Begala, Manager, Global BI & Analytics, shares how Commercial Metals Company completed their first SAP HANA project, Business Planning & Consolidation on SAP HANA, and set a path towards an integrated global data warehouse built on SAP HANA.

We hope you enjoy hearing CMC’s first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.

 

To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

 

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please contact me.

 

HE8.jpg

 

Transcript: SAP HANA Effect Episode 8

 

Sponsored by:

xeon_i_ww_rgb_3000

Having successfully installed HANA with multitenant database containers (see my previous blog), I wanted to find out if everything would run just as smoothly in the case of an update to SPS 09 with conversion to multitenant database containers. As in my first blog, multitenant database containers are abbreviated to MDC.


My starting point was a HANA database on revision 80 with SAP EHP 6 for SAP ERP 6.0, version for SAP HANA running on top of it. The BW system was running on a separate HANA that was still on revision 70. The idea was to get the ERP and BW systems running on two tenants in the same HANA.

1_Scenario_2_update_with_MDC_running_ERP_BW.png

Updating to SPS 09

 

I downloaded the latest software components from SAP Service Marketplace using the SAP HANA studio (at the time, this was revision 92), and then prepared the software archive for the update before executing hdblcmgui. All this is well described in the SAP HANA Server Installation and Update Guide.

 

Don’t be put off by the fact that you don’t see an option to migrate to MDC in the update wizard, as we did in the installation procedure. The conversion to MDC is a post-installation step (see section "Converting to MDC" below). And actually this makes sense, because many customers will want to introduce MDC after they have been working with the new support package stack for a while. The update to SPS 09 from a lower support package stack is always from a single container to a single container.

 

 

We ran into a few minor issues at operating system level, which were solved by ensuring that we had upgraded to the versions recommended in SAP Note 1944799 (our system landscape hadn’t been updated for a while).   We also migrated to CommonCryptoLib as described in SAP Note 2093286.

 

More serious was the fact that the ERP system wouldn’t start once the update had finished. This was because the new 3-digit HANA revision codes were not recognized:

2_Error_in_ERP_after_update_to_SPS_09.jpg

According to SAP Note 1952701, we needed 740 Patch Level 48 but unfortunately this version was no longer on SAP Service Marketplace, so we ended up upgrading the kernel from 740 to 741.

 

 

 

Converting to MDC

 

The conversion from a single database container to multitenant database containers worked as described in the documentation.  Make sure you don’t forget any of the pre- or post-conversion steps, and migrate - don’t remove - the statistics server.

For an example with screen shots, see this blog post by N. van der Linden:  http://scn.sap.com/community/developer-center/hana/blog/2014/12/17/convert-to-a-multi-tenant-hana-database .

 

The result is one system database and one tenant database inside a HANA system that supports multiple database containers (as opposed to installation with MDC, which gives you only the system database). The system ID and the name of the tenant database are the same: in our example, HN1. 

3_Converted_system_DB_and_tenant_in_studiio.jpg

 

We were gratified to see the schema of our ERP system in the catalog of the tenant database, and not under the system database:

4_ERP_catalog_in_ERP_tenant_following_conversion_to_MDC.jpg

 

We now started the ERP system, this time without issues.

 

The only issue we did notice was that the repository roles were missing in the system database: 

5_Roles_in_system_DB_after_conversion.jpg

 

 


It turned out that the problem had been caused by our shutting down the database while the thread ImportOrUpdateContent was still active in the system database (visible on the Performance tab):  

6_Thread_ImportOrUpdateContent.jpg

This thread was triggered as part of the conversion to MDC, when the command hdbnameserver –resetUserSystem was issued. The consequences can sometimes be more serious, so make sure you wait for the import of the delivery units to finish before shutting down. For more information, see SAP Note 2136496. As of Revision 94, ImportOrUpdateContent will no longer be triggered by this command. Moreover, development has told us that it plans to minimize the delivery unit import time to a fraction of what it is in SPS 09.

 

If you have other issues when converting to MDC, please consult SAP Notes 2140297 and 2140344.

 

 

 

Transferring the BW system to the same HANA as the ERP system

 

We started by creating a second tenant in the target system. We gave it the same name as the source system SID (HB1), but there is no technical reason why you have to do this.  It was then necessary to update the source system to SPS 09 and convert it to MDC, before backing up the tenant in the source system and recovering it into the target tenant and system. SAP HANA database backup and recovery is explained in the SAP HANA Administration Guide in SAP Help Portal. Thus, the process was as follows:

  1. Update target system to SPS 09 (see first section above).
  2. Convert target system to MDC (see second section above).
  3. Create second tenant in target system.
  4. Update source system to SPS 09.
  5. Convert source system to MDC.
  6. Back up the tenant in the source system.
  7. Recover this backup into the target system tenant created in step 3 (a hedged SQL sketch of steps 6 and 7 follows after this list).
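For orientation only, here is a minimal sketch of what steps 6 and 7 can look like when issued from the SQL console of the respective system database. The database name HB1 matches this example, but the backup file prefix is a placeholder and the exact options depend on your revision (see the SAP HANA Administration Guide):

-- Step 6: in the source system, back up the tenant
BACKUP DATA FOR HB1 USING FILE ('HB1_FULL');

-- Step 7: in the target system, recover that backup into the tenant created in step 3
-- (the tenant must be stopped while the recovery runs)
RECOVER DATA FOR HB1 USING FILE ('HB1_FULL') CLEAR LOG;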

 

One thing to note is that once we had done the recovery, the password of the tenant’s SYSTEM user reverted to what it had been in the source system, overwriting the password we had specified when creating the tenant in the target system. This is normal system behavior. For more information about the passwords of MDC systems, see the documentation.

 

The next step was to update the SAP HANA database client of the ERP as well as the BW system with hdbsetup.

 

Then, before restarting HANA or our BW system, we reconfigured the connection from BW to the new HANA database with the <SID>adm user using hdbuserstore. In the screen shot below, the turquoise rectangle represents the fully qualified domain name of the original HANA system and the yellow rectangles represent the fully qualified domain name of the HANA multitenant system.

8_hdbuserstore.jpg
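As a hedged sketch (all values below are placeholders; the screenshot above shows the real ones), the reconfiguration as the <SID>adm user of the BW application server looks roughly like this:

hdbuserstore SET DEFAULT <mdc-host-fqdn>:30041 <BW schema user> <password>
hdbuserstore LIST     # verify the stored connection details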

 

For more information about hdbuserstore, see the documentation here

 

In the above example, the SQL port of our BW tenant is 30041 because the instance number is 00 and 3<instance number>41 is the first SQL port assigned to a manually created tenant.

 

For more information about the ports of multitenant database containers, see the documentation here.  Note, in particular, that the ports for a converted tenant database are different from those of tenant databases that are added subsequently.

 


Enabling HTTP access to the correct database container

 

We now configured the internal SAP Web Dispatcher so that it would know which HTTP requests to send to which database container from the Web-based applications running on the XS engine.

 

Originally, we set up IP addresses for each tenant database but this is not necessary; it works fine with DNS alias host names.

9_webdispatcher.ini.jpg

The first entry (wdisp/system_0) initially looked like this:

SID=$(SAPSYSTEMNAME), EXTSRV=http://localhost:3$(SAPSYSTEM)08, SRCURL=/

This entry is for the converted tenant which, in our case, is the tenant on which the ERP system runs.

We changed it as follows because we required additional entries:

SID=$(SAPSYSTEMNAME), EXTSRV=http://localhost:3$(SAPSYSTEM)08, SRVHOST=<fqdn>

 

We added a second entry with the DNS alias for the BW tenant (wdisp/system_1) and another entry with the DNS alias for the system database (wdisp/system_3).

 

 

We also updated the XS properties for each of our database containers in order to be able to open and work with the SAP HANA cockpit from the SAP HANA studio.


System database: 
10_XS_properties_DNS_alias_system_DB.jpg


Converted tenant database (on which ERP runs):

11_XS_properties_ERP_tenant.jpg


Created tenant database (on which BW runs):

12_XS_properties_DNS_alias_of_created_tenant.jpg

 

You can find full step-by-step instructions on how to configure HTTP access to multitenant database containers in SAP Help Portal.

Over the last few months, I’ve been trying out various scenarios involving the new multitenant database containers in SAP HANA SPS 09, and I thought it might be helpful to share my findings and examples with others who want to get their feet wet with this new feature. So here goes…

 

“Multitenant database containers” is a bit of a mouthful, so for the rest of this article I’m going to use the abbreviation MDC.

 

 

 

The first scenario I tested was the installation of SAP HANA SPS 09 with MDC, followed by the installation of two ABAP systems on two HANA tenants: 

Scenario_1_MDC_install_2_ABAP_systems_corrected_cropped.jpg

 

 

Installing SAP HANA with multitenancy

 

I started by installing HANA with MDC using hdblcmgui. The installation procedures are well documented on SAP Help Portal, so I won’t go into all the details here. The only thing you do differently from a standard installation is change the database mode from single_container (the default) to multiple_containers:

 

HANA_installation_with_MDC.jpg


The result is a system database but no tenant databases inside a HANA system that supports multiple database containers. For the distinction between a tenant database and the system database, see the SAP HANA Master Guide.


I then added my system database to the Systems view in the SAP HANA studio:

Add_system_MDC_system_DB.jpg

 

 

Once I was logged on as the administrator of the system database, I was able to create a tenant database in the SQL console using the CREATE DATABASE statement:

Creating_tenant_DB.jpg

Created_tenant_DB_confirmation.jpg
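In plain text, the statement behind the screenshots is roughly the following (database name and password are examples only):

CREATE DATABASE DB1 SYSTEM USER PASSWORD Initial1234;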

 

 

I added the tenant database in the Systems view:

Add_system_MDC_tenant_DB.jpg

 

Then I created and logged on to a second tenant. The Systems view in the SAP HANA studio then looked like this:

 

Added_systems_in_studio_MDC.jpg

 

The system database had an additional SYS_DATABASES schema:

SYS_DATABASES_schema_in_system_DB.jpg

The SYSTEM user of the system database has the privilege DATABASE ADMIN for the execution of operations on tenant databases.

DATABASE_ADMIN_privilege.jpg

 

 

Installing NetWeaver on a HANA database tenant

 

The software provisioning manager SP 7 provided with SL Toolset 1.0 SPS 12 supports MDC, so I was able to install an SAP NetWeaver 7.4 SR 2 on each of the tenants. This involved specifying the name of the tenant database with the tenant database’s administrator password, as well as the password of the system database administrator. These steps are described in detail, with screenshots, in Stefan Seemann’s blog: http://scn.sap.com/community/it-management/alm/software-logistics/blog/2014/12/02/software-provisioning-manager-and-hana-multi-tenant-database-containers

 

Installation of the HANA client was part of the same procedure.

 

 

 

Stopping and starting tenant databases

 

Having backed up the tenant databases, I then stopped one of them from the SQL console of the system database:

Stopping_tenant_DB.jpg
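For reference, the statement shown in the screenshot is essentially the following (the tenant name is an example); the counterpart to start the tenant again later is included as well:

ALTER SYSTEM STOP DATABASE DB2;
-- and later, to start it again:
ALTER SYSTEM START DATABASE DB2;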

To open the administration console of the stopped tenant, I was prompted to log on with the credentials of the operating system user:

Logging_on_as_sidadm_user.jpg

It baffled me somewhat that the administration console of the stopped tenant database (DB2) should show the index server of the other tenant (DB1), but it’s because the operating system user (the “SID user”) can currently see the processes of all database containers in this view.

 

Tenant database DB2 in the process of stopping:

Tenant_DB_DB2_stopping.jpg

Tenant database DB2 when stopped:

Tenant_DB_DB2_stopped.jpg

 

 

Development has told us that improved visibility and transparency of the processes for different database containers is in the pipeline.

 

Enabling HTTP access to tenant databases

 

I also enabled HTTP access to the individual tenants, but more about that in my next blog.

Run powerful real-time monitoring that supports your real-time in-memory investment.


Still Time.... Register Here!

 

Tuesday, March 24th 2015 @ 10:00 PT / 1:00 ET

 

Join us, as we welcome guest speaker Dan Lahl, vice president, product marketing SAP, who will discuss how IT organizations run transactional and analytical applications on a single in-memory platform delivering real-time actionable insights while simplifying their IT landscape.  During this one-hour webcast, Bradmark’s Edward Stangler, R&D director, HANA products will showcase key essentials for effectively monitoring SAP HANA, including:

 

  • Tracking Key SAP HANA Features
    • Top Column Store Tables.
    • Status / progress on delta merges and full / partial loads.
    • Memory breakdown.
  • Reviewing Overall Health
    • CPU usage, volume I/O, memory breakdown, and instance information.
  • Familiar Metrics for Experienced DBAs
    • Statements.
    • Memory usage.
    • Space Usage.
    • Operations and transactions.
    • Connections. 
    • Network I/O, app tier, login, current SQL and SQL plan, etc. 
  • Alerting on HANA Resources
    • Space usage in volume data / log, long-running statements / transactions / SQL, delta growing too large (not merged fast enough) for column store tables, and more.
  • Flashback on Recent HANA Problems
    • Viewing historical data through real-time UI.

 

Register Today... to join us for this informative event.

 

And learn how Bradmark's Surveillance for SAP HANA satisfies an organization’s system management requirements across the SAP HANA computing platform, so you can maintain a production-ready environment and your peace of mind.

 

We look forward to seeing you online!
