The first time I logged on to the HANA environment, right after TechEd, my reaction was a big wow!

But once it became part and parcel of my daily work, I started comparing it with my core BW transaction codes and the corresponding ABAP logs.

Learning a new tool is of course interesting, but it is equally frustrating when you get stuck somewhere during the learning phase.

The incident I am going to share here is about the XML log in HANA, which was very new to me.

One fine day, my Analytic View kept failing on activation, and my interest was not to read or understand the XML log but simply to find out why it was failing. My impression as a core BW consultant was that reading an ABAP dump, fixing it, and going through the trace file were far easier than understanding the XML log. But later, once I understood the XML, I found it very simple.

Here is how anyone can work through the XML log in HANA Studio.

First of all, collapse all the nodes. It looks like this:

XML1.png

Then let's look at the Job Info node, which gives the start date, end date, elapsed time, system details and the result of the whole job.

XML2.png

Next, open the Execution Info node, which gives the status of the activation job.

XML3.png

Then we can see that a validation process has started on the client side:

XML4.png

The validation process is carried out according to the rules set for models in HANA Studio.

You can navigate to HANA Studio -> Window -> Preferences -> Modeler -> Validation Rules to see the list of rules.

XML5.png

Then we can go back to the XML log of the failed activation and expand the remaining message nodes to see the status of each rule.

The screenshot below shows some of them, where the status message is "OK".

XML6.png

But here is one of the messages where the step has failed with the status "Error":

XML7.png

After the client-side validation is through, the log shows the status on the server side as well.

XML9.png

 

Hope this helps the community :-)

Regards,

Tilak

In 2001, I started at SAP and joined a small IT team that worked very closely with the controlling line of business. One other colleague started in the team at the same time. Both of us learned the basics of CO in ERP and how to use Report Writer, but afterwards we moved on to different focus topics. He became the expert for Profitability Analysis (CO-PA) and I became the expert for Profit Center Accounting (PCA). PCA was the first module within SAP to move to Business Warehouse for reporting, with intraday delta loads, in 2002. CO-PA soon followed into BW, but it was a real challenge to build the right aggregates to speed up reporting on top of the complex and powerful data models of that time.

10 years later, with Business Warehouse 7.30 powered by SAP HANA, the possibilities are exciting. In the area of PCA we moved from summary records to line items (I need to tell you more about that some other time), and for CO-PA … we did something completely different.

One question before I go into details. You already know the story about SAP HANA in general, right? - No? Then let’s get an introduction from SAP’s Steve Lucas:

‘SAP HANA opens up a new generation of possibilities.’ The need for performance and reporting flexibility was the main driver for outsourcing transactional CO-PA processes and analytics into the BW environment in the past. But due to this system split, you did postings in the transactional system and waited for the upload into BW. Then you did the profitability analysis and checks in the reporting system, and had to go back for reconciliation and corrections in the transactional system. You do this multiple times during a financial closing, so the effort and the duration are significant.

What is the solution? Bring back analysis and checks into the transactional system!

OK, so for the last 10 years I told my business colleagues to extract data to BW for complex analytics. Today I am part of the team setting up the SAP HANA infrastructure to accelerate the business processes at SAP. For me this is a game changer (no kitties killed).

Back to the project facts:

  • The Global Controlling department at SAP rolled out the CO-PA Accelerator in November 2011.
  • The project set up a 1TB SAP HANA appliance side-by-side, connected to our central ERP system.
  • The side-by-side HDB, in combination with the classic DB, accelerates the standard CO-PA line item report and research (transactions KE24 and KE30).
  • Our new BW powered by SAP HANA hosts a Virtual InfoProvider to run some classic BEx reports, as well as the innovative SAP BusinessObjects Analysis for Microsoft Excel, which allows the business community to format, reorganize and perform calculations using familiar Excel features.

Picture1.jpg

Speed – not just performance – is still a big driver for establishing this kind of real-time environment. One of the proof points was to shorten processes in the financial month-end closing with the CO-PA Accelerator at SAP. But of course there is a high expectation that a solution like the CO-PA Accelerator powered by SAP HANA should deliver great performance as well. And it does!

Before the CO-PA Accelerator go-live, line-item or totals reporting on the basis of ~85 million line items was not executable. Aggregation did allow data access for research, but with limitations for drill-down and further analysis. Now the Controlling department at SAP has access to research on line item level with a response time of <10 seconds.

COPA Performance Results.jpg

And here is what a Group Controller at Provimi thinks about using CO-PA for his business processes (I like the beat!):

Is the global roll-out and the solution design completed? No, not yet. We started on a 1TB SAP HANA single-node appliance in 2011, but this does not fulfill the business requirements for a global roll-out and a high-availability solution. During Q2 2012 we will move to a scale-out 4TB multi-node SAP HANA environment and start the global roll-out.

What’s next? Now is the right time to bring processes back into the transactional system to fully leverage the power of CO-PA in ERP.

Do I have concerns regarding the role of BW? No! BW is moving into a new position with strong integration with SAP HANA and the SAP BusinessObjects platform. But there is nothing bad about integrated systems, is there?

In April my blogs will focus on the BW powered by SAP HANA system, especially on the Business Planning and Consolidation 10.0 add-on and how we set up a high-availability solution to service business-critical processes.

Best regards,

Matthias Wild - proud to be part of SAP Global IT where SAP runs SAP.

So late last year, I had the opportunity to load some reasonable data volumes into SAP HANA. We got about 16GB of data or 160m records, and ran some tests.

Well, now I have closer to 256GB of the same data to load, and only a 512GB SAP HANA appliance to spare, which already has an SAP BW system and a bunch of other things on it! So I thought it was time to optimise the data model and make it slicker.

We cut a few corners last time around because some of the data was a bit corrupt and so we had some text fields where there could otherwise be dates - and in addition, we were not judicious with the use of things like smaller integer data types, where there were only small numbers of distinct values.

I'm not sure how much value this has in SAP HANA because of the way it creates indexes, but text fields certainly take up a lot of unnecessary space. Today I'm going to do a bunch of testing with a 42m row test data set, and then use this as the basis of the final model - loading the full 2bn+ rows in-memory. And we'll see how it performs!

Step 1: Data Model optimisation

I now have no text values in my main fact table - only TINYINT, SMALLINT, INT, REAL, DATE and TIME. I'm going to load this file into the old fact table which isn't optimised, and compare for space to see how much we have saved. The model is already well normalised so I'm not expecting to be able to reduce this any further.

So we are moving from:

create column table "POC"."FACT"( "Col1" INTEGER, "Col2" INTEGER, "Col3" INTEGER, "Col4" INTEGER, "Col5" INTEGER, "Col6" INTEGER, "Col7" INTEGER, "Col8" INTEGER, "Col9" REAL, "Col10" INTEGER, "Col11" INTEGER, "Col12" INTEGER, "Col13" VARCHAR (32) default '', "Col14" VARCHAR (16) default '', "Col15" INTEGER, "Col16" INTEGER, "Col17" INTEGER, "Col18" INTEGER, "Col19" INTEGER);

To:

create column table "POC"."FACT2"( "Col1" INTEGER, "Col2" DATE, "Col3" TINYINT, "Col4" TINYINT, "Col5" SMALLINT, "Col6" SMALLINT, "Col7" TINYINT, "Col8" TINYINT, "Col9" REAL, "Col10" INTEGER, "Col11" TINYINT, "Col12" TINYINT, "Col13" REAL, "Col14" TIME, "Col15" SMALLINT, "Col16" TINYINT, "Col17" TINYINT, "Col18" SMALLINT, "Col19" DATE);

Now I've loaded the same data into both fact tables.

Fact Table 1: 1,327,747kb

Fact Table 2: 778,737kb

Wow - I'm really surprised by this. I'm guessing it was the text fields that were the killer, because this has halved the table size. This will make a big difference! I checked, and of course the column store has a limited set of data types, so things like TINYINT don't have much impact - integer data types are already compressed.

What's even more interesting is that we have reduced the load time from 88 seconds down to 26 seconds. I guess those text inserts were expensive in the column store.
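If you want to check the table sizes yourself, a query against the column-store monitoring view M_CS_TABLES along the lines below should do it - a minimal sketch; adjust the schema and table names to your own landscape:

-- compare the in-memory footprint of the two fact tables
SELECT table_name, record_count, memory_size_in_total
FROM M_CS_TABLES
WHERE schema_name = 'POC'
  AND table_name IN ('FACT', 'FACT2');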

Step 2: Load optimisation

HANA has two main load optimisation parameters - the number of parallel processes and the amount of data consumed before a commit. This SAP HANA box is a Medium, which means 40 CPU cores and 512GB RAM. In theory you should see an improvement up to 40 threads, tailing off after that.

This is really important right now because I know that the original 160m row test case takes 9 minutes to load. So we're looking at at least 2 hours to load the full set. Do you think we can make this much less?
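For reference, both parameters are set directly on the IMPORT statement, so each test below simply varies THREADS and BATCH on a command like this one (taken from later in this blog):

IMPORT FROM '/hana/FACT_05_2010.ctl' WITH THREADS 16 BATCH 200000;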

 

Load time by thread count and batch size (a dash means that combination was not recorded):

             Default   10k    50k    200k   500k
1 Thread     7m11s     -      -      -      -
2 Threads    3m33s     -      -      -      -
4 Threads    1m57s     -      -      -      -
8 Threads    1m13s     -      -      38s    -
16 Threads   44s       34s    28s    25s    26s
32 Threads   47s       32s    27s    25s    27s
40 Threads   47s       32s    43s    25s    42s

The learning here is really clear - it's easy to get down from 7m to about 30 seconds without being very clever, and then another 10-15% of performance can be had by fine-tuning. It also looks like there is no benefit in moving past about 16 load threads or a batch size of more than 200k, at least on this system. What's more, when you move to 40 threads the results start becoming inconsistent. I have no idea why.

Step 3: HANA Database Version

So I'm interested to see what happens if we update our HANA database version. I'm told there are improvements in both compression and performance as new releases come out. What do you think?

Well, I looked into it, and unfortunately the current version of HANA SP03 is Patch 25, which has a problem with data load performance. I guess this is going to have to wait for another day.

Update: we were on HANA SP03 Patch 20 and it wasn't as stable as I'd have liked, so Lloyd upgraded us to Patch 25. This seems to be much better and I'd really recommend keeping up to date. I'm not sure it actually fixed any of our problems, but it stopped a whole load of errors from appearing in the logs.

Step 4: The main load

OK - now we're ready to do the main load. I don't think this system can be optimised any further, and by my calculation of ~40m records in 25 seconds, we should be able to load the full 2bn in 20 minutes or so. Are you ready? Let's see.

The load is split up into 17 files - details for the load are below.

 

File      Size        Rows    Time
File 1    3925MB      42m     25s
File 2    1510MB      16m     11s
File 3    14304MB     152m    120s
File 4    13196MB     140m    144s
File 5    14239MB     152m    164s
File 6    13049MB     139m    107s
File 7    13569MB     144m    105s
File 8    15511MB     166m    177s
File 9    14156MB     151m    144s
File 10   14960MB     160m    156s
File 11   13449MB     146m    218s
File 12   15316MB     166m    419s
File 13   19843MB     214m    163s
File 14   17037MB     176m    415s
File 15   16399MB     183m    247s
File 16   15275MB     163m    198s
File 17   11959MB     128m    178s
Total     227697MB    2312m   2991s

 

As you can see, the performance is great initially and then starts to tail off in the later files. This leads to a fairly disappointing load performance (still massively faster than any other database, remember!!!) of 2991s, or 50 minutes, to load 2.3bn rows into memory. I was hoping for more than double this speed.

The reason for this seems to be that HANA puts the rows initially into a delta store and, after they have loaded, automatically merges them into the main column store. This happens concurrently with the load and seems to kill performance. So what I did was to join all 17 files into one big file and then try loading that. Let's see what happens:

 

File      Size        Rows    Time
File 1    227690MB    2312m   51m

Curiously, it's not any faster, and it turns out this is because the delta merge still happens during the load. You can disable this with the following statement:

ALTER TABLE "POC"."FACT2" WITH PARAMETERS ('AUTO_MERGE' = 'OFF);

Note that whilst auto merge is off (turn it back on when you're done loading) you can do manual merges with:

MERGE DELTA OF "POC"."FACT2";
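Putting these together, the basic bulk-load pattern I ended up with (the full script is at the end of this blog) is: switch auto merge off, import, merge manually, and switch auto merge back on.

ALTER TABLE "POC"."FACT2" WITH PARAMETERS ('AUTO_MERGE' = 'OFF');
IMPORT FROM '/hana/FACT_02_2010.ctl' WITH THREADS 16 BATCH 200000;
MERGE DELTA OF "POC"."FACT2";
ALTER TABLE "POC"."FACT2" WITH PARAMETERS ('AUTO_MERGE' = 'ON');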

Step 5: Learning about partitions

It's at this point that I learnt there is a 2bn row limit in a single SAP HANA table partition. Curiously, it allows you to load more than 2bn rows but then fails on the subsequent merge delta. This sounds like a pain but it's actually a blessing in disguise, for two reasons. First, it turns out that as the amount of data in the main store increases, so does the cost of a merge delta; and second, using partitions allows you to get the best out of HANA. If you turn automatic merge off and do it manually, look what happens:

An early merge takes just a minute:

Statement 'IMPORT FROM '/hana/FACT_05_2010.ctl' WITH THREADS 16 BATCH 200000' successfully executed in 1:50.816 minutes  - Rows Affected: 0

Statement 'MERGE DELTA OF "POC"."FACT2"' successfully executed in 1:04.592 minutes  - Rows Affected: 0

Wait a while and you see that the merge time has increased by a factor of 4, whilst the load time into the delta store is more or less linear. This makes sense of course, because the merge process has to insert the records into a compressed store, which is computationally expensive. It appears to increase at O(m·log(n)), where n is the size of the main store and m is the size of the delta store, which more or less makes sense based on my knowledge of search and sort algorithms.

Statement 'IMPORT FROM '/hana/FACT_09_2011.ctl' WITH THREADS 16 BATCH 200000' successfully executed in 2:23.475 minutes  - Rows Affected: 0

Statement 'MERGE DELTA OF "POC"."FACT2"' successfully executed in 4:11.611 minutes  - Rows Affected: 0

And since it turns out that you can partition easily by month, I emptied the table and decided to repartition it like this. I now have 26 partitions, one for each month, plus an extra partition for anything else that doesn't fit.

ALTER TABLE "POC"."FACT2" PARTITION BY RANGE (DATE)

(PARTITION '2010-01-01' <= VALUES < '2010-02-01',

……

PARTITION '2012-02-01' <= VALUES < '2012-03-01',

PARTITION OTHERS);

Note that HANA makes it easy to add new partitions and move data between partitions. Data management, even at large volumes, won't be a problem, you will be glad to know. And look at the load times - the delta merge is completely linear right up until the last partition. This is a major improvement compared to a legacy RDBMS, where you get very slow batch load times unless you drop indexes - massively slowing concurrent read performance.

Statement 'IMPORT FROM '/hana/FACT_02_2012.ctl' WITH THREADS 16 BATCH 200000' successfully executed in 1:46.066 minutes  - Rows Affected: 0

Statement 'MERGE DELTA OF "POC"."FACT2"' successfully executed in 53.618 seconds  - Rows Affected: 0

What's more it loads a total of 3.7bn rows in 73 minutes - including the merge delta exercise, which I wasn't even counting before.
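By the way, since I mentioned above that HANA makes it easy to add new partitions: adding one for the next month should be a one-liner along these lines (a sketch - I haven't needed it for this data set yet, so check the syntax against the SQL Reference):

ALTER TABLE "POC"."FACT2" ADD PARTITION '2012-03-01' <= VALUES < '2012-04-01';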

Step 6 - Using Table locks

Another HANA tip is to spend some time reading the SQL Reference Manual. It has lots of stuff in it, much of which you won't find documented anywhere else. I found a little option called TABLE LOCK which should allow you to load data faster. Let's try it; the SQL syntax looks like this:

IMPORT FROM '/hana/FACT_02_2010.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

Note that you only want to do this on initial batch loads, because it locks the entire table, but it's unlikely you will want to load billions of rows in one go after the initial load. What's really interesting here is that this feature puts the data directly into the main table - bypassing the delta store and the need to do a merge delta - but it is some 16% slower than a regular load followed by a merge delta. Why? Who knows!

Step 7 - Conclusions

Well, I'm sure you have figured out a lot of this as you read this blog, but here are the takeaways that I got out of this exercise. First and foremost: despite HANA being fast, it is definitely worth your time to optimise your SAP HANA scenario.

1) Spend time optimising your HANA data model. It will reduce the size of your database for the big tables, improve performance and also reduce cost, because HANA is licensed by appliance memory size.

2) Test to optimise your load parameters, but don't spend too much time here. It's not that sensitive to small changes, so get it nearly right and move on.

3) Choose your partitioning scheme carefully. And if you are loading a lot of data, load each partition from its own file and do a manual merge in between partitions. You don't have to, but it will speed up end-to-end load performance and allow for speedier issue resolution.

SAP HANA is pretty amazing technology and if you have worked with any other kind of RDBMS you will know that even the slowest of these times is far faster than anything else.

But first and foremost, two things stand out for me. First, tuning SAP HANA is just as important as with any other system - with a bit of work you can achieve performance that you wouldn't have believed possible before. And second, performance tuning SAP HANA is different to other systems - you tune for parallelisation and memory usage rather than for I/O. But remember this: performance tuning is about finding the performance envelope of a system and working around the weakest point. In that sense SAP HANA is no different to any other computer system in the world.

And in case you are in any way confused, here is the final SQL I used to create and load the table optimally:

drop  table "POC"."FACT2";

create column table "POC"."FACT2"(

          "Col1" INTEGER,

          "Col2" DATE,

          "Col3" TINYINT,

          "Col4" TINYINT,

          "Col5" SMALLINT,

          "Col6" SMALLINT,

          "Col7" TINYINT,

          "Col8" TINYINT,

          "Col9" REAL,

          "Col10" INTEGER,

          "Col11" TINYINT,

          "Col12" TINYINT,

          "Col13" REAL,

          "Col14" TIME,

          "Col15" SMALLINT,

          "Col16" TINYINT,

          "Col17" TINYINT,

          "Col18" SMALLINT,

          "Col19" DATE) NO AUTO MERGE;

ALTER TABLE "POC"."FACT2" PARTITION BY RANGE (DATE)

(PARTITION '2010-01-01' <= VALUES < '2010-02-01',

PARTITION '2010-02-01' <= VALUES < '2010-03-01',

PARTITION '2010-03-01' <= VALUES < '2010-04-01',

PARTITION '2010-04-01' <= VALUES < '2010-05-01',

PARTITION '2010-05-01' <= VALUES < '2010-06-01',

PARTITION '2010-06-01' <= VALUES < '2010-07-01',

PARTITION '2010-07-01' <= VALUES < '2010-08-01',

PARTITION '2010-08-01' <= VALUES < '2010-09-01',

PARTITION '2010-09-01' <= VALUES < '2010-10-01',

PARTITION '2010-10-01' <= VALUES < '2010-11-01',

PARTITION '2010-11-01' <= VALUES < '2010-12-01',

PARTITION '2010-12-01' <= VALUES < '2011-01-01',

PARTITION '2011-01-01' <= VALUES < '2011-02-01',

PARTITION '2011-02-01' <= VALUES < '2011-03-01',

PARTITION '2011-03-01' <= VALUES < '2011-04-01',

PARTITION '2011-04-01' <= VALUES < '2011-05-01',

PARTITION '2011-05-01' <= VALUES < '2011-06-01',

PARTITION '2011-06-01' <= VALUES < '2011-07-01',

PARTITION '2011-07-01' <= VALUES < '2011-08-01',

PARTITION '2011-08-01' <= VALUES < '2011-09-01',

PARTITION '2011-09-01' <= VALUES < '2011-10-01',

PARTITION '2011-10-01' <= VALUES < '2011-11-01',

PARTITION '2011-11-01' <= VALUES < '2011-12-01',

PARTITION '2011-12-01' <= VALUES < '2012-01-01',

PARTITION '2012-01-01' <= VALUES < '2012-02-01',

PARTITION '2012-02-01' <= VALUES < '2012-03-01',

partition others);

IMPORT FROM '/hana/FACT_02_2010.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_03_2010.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_05_2010.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_06_2010.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_07_2010.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_08_2010.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_09_2010.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_10_2010.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_11_2010.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_12_2010.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_01_2011.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_02_2011.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_03_2011.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_04_2011.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_05_2011.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_06_2011.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_07_2011.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_08_2011.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_09_2011.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_10_2011.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_11_2011.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_12_2011.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_01_2012.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

IMPORT FROM '/hana/FACT_02_2012.ctl' WITH THREADS 16 BATCH 200000 WITH TABLE LOCK;

MERGE DELTA OF "POC"."FACT2";

ALTER TABLE "POC"."FACT2" WITH PARAMETERS ('AUTO MERGE' = ON);

Next Steps

Well, what's next? I think you know - I've optimised this data model for load performance, but how did I do for query performance? I'm already wondering about this because HANA is a parallel processing engine. If you have 27 months of partitions and run a query over a wide date range, it should kick off a bunch of parallel processes. If you query a single month, will you hit less data but with fewer parallel processes?

What's the optimal data model for query performance, and how do your joins, CE functions and SQLScript affect this?

Acknowledgements and thank yous.

As is often the case with this sort of post, there are thanks to dole out. Lloyd Palfrey from Bluefin for being the HANA Ops guru, fixing the database every time I filled up disks or corrupted it, and tuning the box. Vijay Vijaysankar, Vitaliy Rudnytskiy and David Hull from IBM, HP and SAP for bouncing ideas. Margaret Anderson and Serge Mutts from SAP CSA for helping with issue resolution. Markus Friedel from SAP AGS for helping resolve errors - mostly between my keyboard and chair.

And let's not forget Aleks Barilko, Marilyn Pratt, Gail Kling Schneider and the rest of the SCN crew for recovering most of this blog, which got lost in the move from the old to the new SCN whilst I was on vacation.


It all started in October 2010 with the launch of SAP HANA at the Sapphire conference. HANA (High-Performance Analytic Appliance), an in-memory appliance for business intelligence, was introduced to allow access to real-time analytics and transactional processing.

Then October 2011 saw the launch of another in-memory appliance from Oracle, named Oracle Exalytics, positioned by the vendor as a preconfigured, clustered appliance to be used as a building block in clouds with elastic computing abilities.

In short, both are much the same.

They have similar characteristics:

  1. In-memory computing
  2. Processing of massive quantities of real-time data in main memory
  3. Mobile support for the iPad and iPhone and even Android too.


What are HANA and Exalytics?

According to SAP, "SAP HANA is an integrated database and calculation layer that allows the processing of massive quantities of real-time data in main memory to provide immediate results from analyses and transactions."

Oracle terms Exalytics "the industry's first in-memory BI machine that delivers the fastest performance for business intelligence and planning applications", which reduces operational cost and risk and delivers lightning-fast performance alongside the world's fastest database machine.

 

Price to purchase HANA or Exalytics

SAP has not publicly released specific pricing information regarding HANA, but early estimates indicate customers can initially have HANA up and running for under $300,000, including hardware, software, and services. Depending on scale, pricing can reach $2 million or more. HANA is not capable of storing petabyte levels of data; however, due to its advanced compression capabilities, HANA deployments can store tens of terabytes of data or more, which are considered large data volumes in most current SAP customer environments.

The Exalytics appliance is $135,000, plus $29,700 per year for premier support and other fees, according to a price list published last month by Oracle. To this are added TimesTen license fees of $138,000 ($34,500 for each of the four processors), plus $30,360 per year ($7,590 per processor) for premier support.

Hardware involved in HANA

Various vendors (Dell, IBM, Cisco, HP, Fujitsu) sell SAP HANA hardware in different configurations.

The numbers below may differ for each vendor.

Vendor    Model               SAP HANA size   CPU                            RAM
Dell      PowerEdge R910      Medium          4 x Intel E7-4870 / 32 cores   512 GB
IBM       System x3850 X5     Small+          4 x Intel X7560 / 32 cores     256 GB
Cisco     C460 M2             Medium          4 x Intel E7-4870 / 40 cores   512 GB
HP        ProLiant DL580 G7   Medium          4 x Intel E7-4870 / 40 cores   512 GB
Fujitsu   PRIMERGY RX600 S6   Small           2 x Intel E7-4870 / 20 cores   256 GB

 

Hardware involved in Exalytics

The Exalytics hardware is an Oracle Sun Fire server with 1 terabyte of RAM and four Intel Xeon E7-4800 processors with a total of 40 cores. High-speed (40-Gbps InfiniBand and 10-Gbps Ethernet) connectivity is designed to work hand-in-hand with Oracle's Exadata appliance, enabling the adaptive caching feature to move high-demand data out of that disk-based appliance and into Exalytics memory.

Customers using HANA & Exalytics

This is not a complete list, and the numbers keep increasing.

HANA

Nongfu Spring

Red bull

Bosch

Siemens

Adobe

P&G ....and many more

Exalytics

Key Energy services

Polk, Inc.

HBO

BNP PARIBAS and many more…..

Should a customer invest in HANA or Exalytics?

Current SAP customers should strongly consider deploying SAP HANA to add real-time analytic and transactional processing capabilities on top of existing SAP ERP and other business systems deployments. Non-SAP customers unsatisfied with their current EDW environment should also evaluate HANA, weighing the benefits of adding near real-time analytic capabilities against the cost of migrating to a new system and new vendor. It is also important to evaluate where real-time analytics will most benefit your enterprise. Not all business problems can be solved via real-time analytics, and systems such as HANA should only be deployed where significant business value can be achieved.

Among the important differences compared to SAP HANA: Exalytics is designed to run on Sun-only hardware, it is a mash-up of various existing Oracle technologies, and there are few, if any, systems in production. As with all Oracle technologies, the risk of vendor lock-in is high, and the cost is significantly higher than for comparable HANA deployments.

Conclusion

Obviously there is no love lost between these companies. The reason is not hard to guess. Each is aiming at a specialized market within the IT universe. In spite of SAP's greater focus on transactional applications, both SAP and Oracle are looking at largely the same customers. These customers are firms that need high-horsepower appliances and are willing to spend into the millions on the hardware and software capable of meeting their needs.

Related Links

Oracle releases Exalytics to take on SAP's HANA

Why SAP HANA is a Better Choice than Oracle Exalytics

Oracle's Exalytics now available, set for showdown with SAP's HANA



Hello SCN,

Welcome, folks, to the new SCN look; I am all ready to write my first blog in this upgraded version of SCN. I would like to thank SCN for acknowledging my contribution with a Silver badge. At the same time I am happy to share that I am now a SAP Certified HANA Consultant. I started my journey with SCN last year, and when I see how that journey is proceeding, I am happy to share that I am now SAP BI certified, SAP HANA certified, an SAP active blogger, an SCN Active Contributor Bronze and now an SCN Active Contributor Silver.

This community is certainly a great community filled with technology geeks. I have made many new friends here and have supportive mentors like Tammy Powlas and Arun Varadarajan, who help me to understand things and encourage me to write new blogs and to contribute more to the community.

Now, with this new look, the "Twitter"-style "follow" and "Facebook"-style "like" features are awesome. They help and encourage contributors to contribute more, and make it easier for learners to find the blogs and articles they are interested in.

Though I had a rough time initially understanding how the new SCN works and moving my blogs over, as I continue to work I feel more interested and enticed to work on the new SCN. And I hope to hit "Gold" soon. So that's it - I have written my first blog on the new SCN.

Thank you friends for reading my blog.

The SAP HANA 1.0 education portfolio for HANA 1.0 SPS03:

  • TZHANA - SAP HANA 1.0 –  Introduction (2 day course)
    This is a technical overview course addressing all target audiences.
    Delivery: Instructor led in physical or virtual classroom.
  • RHANA - SAP HANA 1.0 –  Introduction (4 hour course)
    This is the online equivalent of TZHANA and provides a technical overview course addressing all target audiences.
    Delivery: Self-paced online course (recorded classroom training).
  • TZH300 - SAP HANA 1.0 – Implementation & Modeling (2 day course)
    This course provides more in-depth knowledge on implementing information models in SAP HANA and replication of data using SAP Landscape Transformation (SLT) addressing primarily implementation consultants.
    Delivery: Instructor led in physical or virtual classroom.
  • TZBWHA - SAP BW 7.3 on SAP HANA 1.0 (1 day course)
    Participants of this course will gain an overview how to migrate and use SAP BW 7.3 with SAP HANA 1.0. Primary target audiences are consultants and customers with SAP BW experience.
    Delivery: Instructor led in physical or virtual classroom.

  • C_HANAIMP_10 - SAP Certified Application Associate - SAP HANA 1.0 (3-hour exam)
    The certification builds on the knowledge gained through the related SAP HANA training, preferably refined by practical experience. Target audiences are partner consultants, SAP consultants, and customers who engage in SAP HANA projects in modeling and implementation roles.

If you haven't read the FAQ for SAP HANA InnoJam Online 2012, I recommend you do it right now...after reading my blog, of course.

So...Phase 2 will come to an end on March 29th...by that time, all teams should have uploaded their videos and project descriptions to the InnoJam Group on Vimeo. Please keep in mind that there are no exceptions...any team that fails to upload its video and project description will be disqualified.

So...when is Phase 3 going to happen? From March 29th to April 2nd, our Selection Committee will have the hard and demanding job of watching and reviewing all the submitted videos and project descriptions.

On that same April 2nd, we will announce by mail and in an SCN blog the 8 finalists who will compete for the big prize in Palo Alto on April 12th.

The 8 finalists will have 6 minutes to show their demos live...no PowerPoints allowed...just working demos. After that, 3 minutes will be used for Q&A with the judges.

The clock is ticking...there's only 1 month left until the big day...but only 18 days to submit both the video and the project description to the InnoJam Group.

I would like to wish the best of luck to all teams. I know that every team will deliver a project, but sadly only 8 teams can qualify for the finals.

Have fun and see you soon!

Try this demo on your iPhone or iPad and visit Experience SAP HANA to check out other cool ways to visualize SAP HANA data.
