If you have spent much time with HANA at all, then you will have encountered the Out of Memory error.

[MEMORY_OOM]  Information about current out of memory situation: (2013-12-17 16:24:39 413 Local)

OUT OF MEMORY occurred. Failed to allocate 786815 byte.

This happens because HANA requires the data you are operating on, plus all of the temporary space needed for aggregation and calculation, to be available in physical RAM. If there is not enough physical RAM, your query will fail. That means there are three major factors:

- The amount of RAM available to the HANA system

- The volume of data on which you are operating

- The amount of temporary space required by the calculation

The first factor is fixed, and the other two are a function of the design of your model (and of concurrency, since each user requires their own temporary space). This is why SAP recommend retaining roughly 50% of RAM for tables and 50% for calculation memory, though the exact split can vary from system to system.
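
As a quick sanity check on that split, you can compare how much memory your loaded column tables are consuming against the allocation limit of your services. A minimal sketch using the M_CS_TABLES and M_SERVICE_MEMORY monitoring views (column names as I remember them - check them against your revision's documentation):

-- Memory currently consumed by loaded column tables, in GB
SELECT ROUND(SUM(MEMORY_SIZE_IN_TOTAL) / 1024 / 1024 / 1024, 2) AS TABLES_GB
FROM M_CS_TABLES;

-- Allocation limit and current usage per service, in GB
SELECT SERVICE_NAME,
       ROUND(EFFECTIVE_ALLOCATION_LIMIT / 1024 / 1024 / 1024, 2) AS LIMIT_GB,
       ROUND(TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024, 2) AS USED_GB
FROM M_SERVICE_MEMORY;

If the first number is creeping well past half of the indexserver's limit, you are leaving yourself short of calculation memory.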

Fortunately there are several things you can do to reduce or eliminate Out of Memory errors. As a bonus, every one of these will also help your overall application design and performance.

1) Upgrade to the latest HANA Revision

Newer HANA Revisions are consistently more memory-efficient, both in how they store tables and in how they process data. How much difference a particular Revision makes depends on a number of factors, but for instance, if you use ODBC/JDBC connections and SQL for aggregations without using Analytic Views, Revision 70 may improve performance by 100x and reduce temporary space by 100x compared to Revision 69.

In addition, Revision 70 seems to be a bit more frugal with memory, and it releases RAM back to the Operating System more aggressively than previous Revisions. Either way, take the time to update regularly - the HANA development team are constantly refining specific scenarios.
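
If you're not sure which Revision you are on, you can check it straight from SQL rather than digging through the Studio dialogs:

-- Returns the build version string, e.g. 1.00.70.xxxxx for Revision 70
SELECT VERSION FROM M_DATABASE;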

2) Migrate the Statistics Server

If you are on HANA SP07, you can now migrate the Statistics Server into the main Index Server process. You can read about this in SAP Note 1917938. This should save you around 2GB of memory, which, if you have a 16GB HANA AWS instance, will be much appreciated.
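
The note has the exact procedure, but as far as I recall the switch itself is a single nameserver.ini parameter, something along these lines (treat this as a sketch and verify it against Note 1917938 before running it on your system):

-- Enable the embedded statistics service; the separate statisticsserver process is then retired
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM')
SET ('statisticsserver', 'active') = 'true' WITH RECONFIGURE;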

3) Never use row organized tables

HANA supports both row- and column-organized tables, but you should never use row tables. You can double-click on a table to check - it should say Type: Column Store. I discuss this in more detail in my blog 6 Golden Rules for New SAP HANA Developers, but the important point here is that row-organized tables don't compress as well as column tables, and they always reside in memory. You can't unload row tables, and they are a memory hog.
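
It's worth hunting down any row tables that have crept into your own schemas. A simple sketch against the TABLES system view (leave the SYS and _SYS% schemas alone - HANA needs its own row tables):

-- Find row store tables outside the system schemas
SELECT SCHEMA_NAME, TABLE_NAME
FROM TABLES
WHERE TABLE_TYPE = 'ROW'
  AND SCHEMA_NAME <> 'SYS'
  AND SCHEMA_NAME NOT LIKE '_SYS%';

If memory serves, you can then convert an offender with ALTER TABLE table_name ALTER TYPE COLUMN - test that on a copy first.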

4) Set your tables not to automatically load

You can configure tables in HANA not to pre-load automatically on startup using ALTER TABLE table_name PRELOAD NONE. This is good practice in development systems: the HANA system restarts faster (because tables aren't preloaded) and uses less memory by default.
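
For example, for a hypothetical large table SALES_HISTORY in schema MYSCHEMA, and assuming your revision exposes the PRELOAD flag in the TABLE_COLUMNS view as mine does:

-- Stop the table preloading on startup
ALTER TABLE "MYSCHEMA"."SALES_HISTORY" PRELOAD NONE;

-- See which columns are still flagged for preload
SELECT SCHEMA_NAME, TABLE_NAME, COLUMN_NAME
FROM TABLE_COLUMNS
WHERE PRELOAD = 'TRUE'
  AND SCHEMA_NAME = 'MYSCHEMA';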

5) Partition large tables

HANA loads tables into memory on demand, on a column-per-partition basis. I discuss this in some detail in the document How and when HANA SP7 loads and unloads data into memory. If you have a large unpartitioned table with a column containing many distinct values - I checked one of the test tables in my system and found it had a 12GB column - then that entire column will be loaded into memory the moment any part of it is accessed.

If you partition a table by a sensible key (date, or an attribute like active/inactive), then only those partition-columns that are actually accessed will be loaded into memory. This can dramatically reduce your real-world memory footprint.
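
As an illustration, here is how you might range-partition that hypothetical SALES_HISTORY table by date - the table and column names are placeholders, and you should check the range syntax against the SQL reference for your revision:

-- Yearly range partitions; only the partitions you touch get loaded
ALTER TABLE "MYSCHEMA"."SALES_HISTORY"
PARTITION BY RANGE ("ORDER_DATE")
( PARTITION '2012-01-01' <= VALUES < '2013-01-01',
  PARTITION '2013-01-01' <= VALUES < '2014-01-01',
  PARTITION OTHERS );

If there is no natural range key, HASH partitioning (PARTITION BY HASH ("ID") PARTITIONS 4) at least spreads the data, though it won't give you the same access-driven loading benefit.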

6) Redesign your model

HANA doesn't have a temporary table concept like a traditional RDBMS such as Oracle or MS SQL. Instead, the temporary tables required for calculations are materialized in-memory, on demand. HANA has a lot of clever optimizations that try to avoid large materializations, and a well-designed model won't force them.

There is a need for a new advanced modeling guide, but the best one available for now is the HANA Advanced Modeling Guide by werner.steyn2. In particular, avoid using Calculation Views until you have mastered the Analytic View - always try to solve a problem with an Analytic View first. This is because Analytic Views don't materialize large datasets, unless you materialize the whole Analytic View.
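
One way to spot an accidental materialization before it takes out your system is to look at the estimated intermediate result sizes. A rough sketch using EXPLAIN PLAN against a hypothetical aggregation query (PlanViz in Studio gives a much richer picture; the EXPLAIN_PLAN_TABLE columns here are from memory, so check them on your revision):

-- Capture the plan for the problem query
EXPLAIN PLAN SET STATEMENT_NAME = 'oom_check' FOR
SELECT CUSTOMER_ID, SUM(AMOUNT)
FROM "MYSCHEMA"."SALES_HISTORY"
GROUP BY CUSTOMER_ID;

-- Large OUTPUT_SIZE values on intermediate operators hint at big materializations
SELECT OPERATOR_NAME, OPERATOR_DETAILS, OUTPUT_SIZE
FROM EXPLAIN_PLAN_TABLE
WHERE STATEMENT_NAME = 'oom_check'
ORDER BY OUTPUT_SIZE DESC;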

7) Make sure your Delta Merges are working correctly

Nice spot, marcel.scherbinek - I forgot to add this. HANA keeps a compressed "main" store for column tables and an uncompressed "delta" store for newly written records. Periodically, a process called mergedog combines the two in a DELTA MERGE. This set of design tradeoffs gives fast reads, fast writes, and good compression.

Sometimes, however, your tables may not be configured to automerge (you can issue ALTER TABLE table_name ENABLE AUTOMERGE), and occasionally mergedog fails and you have to restart HANA. Either of these can cause the delta store to become very large, and because it is uncompressed, this is a big issue if you don't have much RAM.

If you double-click on a table and open the Runtime Information tab, you will see two memory consumption numbers: Main and Delta. In correct operation, Delta should always be much smaller than Main. You can also right-click a table at any time and select "Perform Delta Merge".
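
You can also do this check, and the merge itself, in SQL - handy if you have a lot of tables. A minimal sketch against M_CS_TABLES, again with placeholder table names:

-- Tables with the biggest uncompressed delta stores
SELECT SCHEMA_NAME, TABLE_NAME,
       ROUND(MEMORY_SIZE_IN_MAIN / 1024 / 1024, 1) AS MAIN_MB,
       ROUND(MEMORY_SIZE_IN_DELTA / 1024 / 1024, 1) AS DELTA_MB
FROM M_CS_TABLES
ORDER BY MEMORY_SIZE_IN_DELTA DESC
LIMIT 10;

-- Force a merge on an offender (equivalent to "Perform Delta Merge" in Studio)
MERGE DELTA OF "MYSCHEMA"."SALES_HISTORY";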

Final Words

HANA requires that your working set of tables (those accessed frequently), plus sufficient calculation memory for the temporary tables materialized during your queries, fits in memory. When memory runs short, HANA unloads partition-columns of tables on a Least Recently Used (LRU) basis.
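
You can watch this unloading happen in the M_CS_UNLOADS monitoring view - if the REASON column shows LOW MEMORY rather than EXPLICIT, HANA is evicting columns under memory pressure and you are sailing close to an OOM:

-- Most recent column unloads and why they happened
SELECT UNLOAD_TIME, SCHEMA_NAME, TABLE_NAME, COLUMN_NAME, REASON
FROM M_CS_UNLOADS
ORDER BY UNLOAD_TIME DESC
LIMIT 20;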

If you see OOM errors and you have done (1-5), then the chances are that your model is materializing a large dataset. Because temporary tables in HANA aren't compressed, they can be 5-10x larger than the equivalent column table, which means a single query can balloon to consume large volumes of RAM. Even with a 512GB HANA system and a 50GB table, you can easily run out of RAM with a badly designed model.

It feels to me like there's a need for a new HANA modeling guide focused on avoiding OOM errors. Maybe that's the subject of a future blog.

Have I missed anything? What are your tips?
