
"Other" Component SYS.M_HEAP_MEMORY_?

former_member184795

We've identified a SQL Script procedure that is consuming a huge amount of used memory (200 GB+) in our development environment.

Our development environment is a single-node, 500 GB system running HANA 1.00.74.

So, we have two issues.

1. The SQL Script procedure consumes huge amounts of memory and eventually fails with memory allocation errors. Used memory spikes to 95% (visible on the overview screen), or can be observed by executing the following query:

  SELECT ROUND(SUM(TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024), 2) AS "Used Memory GB"
  FROM SYS.M_SERVICE_MEMORY;

Alerts are generated that the box is running out of memory, and the M_CS_UNLOADS view shows column tables and individual columns being unloaded from memory.
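For reference, the recent unloads can be listed with something like the sketch below (assuming the standard M_CS_UNLOADS columns; exact columns can vary by revision):

  -- Most recent column/table unloads, newest first
  SELECT TOP 50 UNLOAD_TIME, SCHEMA_NAME, TABLE_NAME, COLUMN_NAME
  FROM SYS.M_CS_UNLOADS
  ORDER BY UNLOAD_TIME DESC;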


2. After the procedure fails, used memory on the box is still quite high, and I'm finding it hard to pinpoint where the (intermediate query results?) are holding all this memory.

In the past I've seen that model-generated translation tables can be tracked using SYS.M_DEV_JOIN_TRANSLATION_TABLES_. However, this doesn't seem to apply to SQL Script, as the volume remains low there. The only visible entry I've seen is the "Other" category in the heap memory view, as below.

The SQL Script procedures being triggered are predominantly CE functions, with some SQL in the procedures higher up the call stack. Originally we made a conscious effort not to mix both in the same procedure, but the reason I'm back working in development on this project is a bug in revision 74 around a CE_CALC function, which I need to change to a SELECT statement [story for a different day!].

In any case, for this discussion I'm more interested in finding out how SQL Script procedures generate intermediate/temporary tables and how these can be monitored, both from a system administration point of view and from a development perspective with regard to efficient SQL execution.
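As a starting point, explicit temporary tables can at least be listed with a sketch like the one below; as far as I can tell, SQL Script intermediate results (internal itabs) won't necessarily show up here, which is part of the question:

  -- Explicit temporary tables currently known to the system;
  -- RECORD_COUNT is assumed to be available on this revision
  SELECT SCHEMA_NAME, TABLE_NAME, RECORD_COUNT
  FROM SYS.M_TEMPORARY_TABLES
  ORDER BY RECORD_COUNT DESC;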

Any help/comments welcome!

Accepted Solutions (1)


Former Member

Hello Sean,

From an administrator perspective, I would recommend running the SQL statement "HANA_Memory_Overview" (SAP Note 1969700) while the problem exists. In the heap section you should then see the top heap allocators. SAP Note 1999997 provides additional information regarding SAP HANA memory consumption.
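If you want a rough manual equivalent of the heap section, a sketch like the following against SYS.M_HEAP_MEMORY should show the top allocators (the mini check in the note is more complete):

  -- Top heap allocators by exclusive size in use
  SELECT TOP 20 HOST, PORT, CATEGORY,
         ROUND(EXCLUSIVE_SIZE_IN_USE / 1024 / 1024 / 1024, 2) AS "Used GB"
  FROM SYS.M_HEAP_MEMORY
  ORDER BY EXCLUSIVE_SIZE_IN_USE DESC;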

Kind regards

Martin

former_member184795

Thanks Martin, I will check out the notes listed and let you know if they have what I was looking for.

former_member184795

Update:

I've gone through most of the queries, and there are some very useful scripts there for sure.

Using HANA_Memory_Overview before execution (screenshot attached).

While executing the bad SQL Script procedure, I can see from HANA_Memory_Overview that the following pool objects (see attachment) are taking up all the used memory. This grows until it hits the ceiling and the procedure fails with a memory allocation error.

When it fails, the JECreateNTuple entry is automatically released, but it takes some time before the second Pool/JoinEvaluator object is returned to the pool.

Is there any way of finding more information relating to these objects? Also, is there a way of manually releasing the remaining object back to the pool?
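For reference, I'm watching these allocators while the procedure runs with a variation of the heap query (a sketch):

  -- JoinEvaluator allocators only, largest first
  SELECT CATEGORY,
         ROUND(EXCLUSIVE_SIZE_IN_USE / 1024 / 1024 / 1024, 2) AS "Used GB"
  FROM SYS.M_HEAP_MEMORY
  WHERE CATEGORY LIKE 'Pool/JoinEvaluator%'
  ORDER BY EXCLUSIVE_SIZE_IN_USE DESC;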

Thanks again,

Sean.

For some reason I couldn't embed the screen images while editing, so I had to attach them. Apologies.

Former Member

In SAP Note 1999997 you can find more information on the top allocators. The translation tables are not automatically purged, but you could set the translator_cache_size parameter to a lower value to reduce the size of this pool. If the high memory consumption is not justified by the SQL statement complexity and data volume, you can additionally check SAP Note 2000002 and make sure that you haven't manually set a high value for the late materialization thresholds.
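A sketch of how the parameter could be lowered; the section/key names assume the usual indexserver.ini layout, and the value 500 is only an example, not a recommendation:

  -- Example only: reduce the number of cached translation tables
  ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
    SET ('joins', 'translator_cache_size') = '500' WITH RECONFIGURE;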

former_member184795

Thanks again Martin for the prompt response. I've read through a lot of the notes around used memory on the support portal, and there's some excellent documentation on the subject. As for the bad SQL Script procedure, it was easily corrected, but I actually kept the bad version around for testing and to help with future support.

And yes, our dev box had a very large value for the late materialization threshold per SAP Note 1975448 (I'm trying to check why/who changed it).
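To see the current value and which layer set it, I'm running something like the sketch below (the key pattern is my assumption; adjust to your revision):

  -- Which ini file/layer carries the late materialization settings
  SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
  FROM SYS.M_INIFILE_CONTENTS
  WHERE KEY LIKE '%late_mat%';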

However:

  1. This example doesn't involve translation tables; those I can see easily in M_DEV_JOIN_TRANSLATION_TABLES_ (aside: is there any way to purge them automatically? Table redistribution seems to work, but it would be nice not to have to do that each time; the check I'm running is sketched after this list).
  2. Are there any scripts that can give us more information about these objects, and is there any way they can be manually purged, especially in the case where we had a memory allocation error and this large amount of used memory hadn't been returned to the pool?
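For item 1, this is roughly the check I'm running (M_DEV_JOIN_TRANSLATION_TABLES_ is an undocumented dev view, so treat this as a sketch and expect columns to vary by revision):

  -- Number of translation tables currently held
  SELECT COUNT(*) AS "Translation Tables"
  FROM SYS.M_DEV_JOIN_TRANSLATION_TABLES_;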

Former Member

There is no way to purge the translation tables (apart from adjusting the translator_cache_size parameter). With SPS 09 there will be an improved hash-based translation table implementation (in fact, it is already there with SPS 08, but not active by default and not officially supported), which will reduce memory consumption significantly.

There are no official tools available to dig deeper into these memory allocators. You would have to use some hdbcons commands, but you would probably not be able to interpret the results.

former_member184795

Thanks again Martin, the improved hash-based translation table implementation sounds promising for sure.


In the meantime, on SPS 07 or 08, if for some reason we do encounter an issue with translation tables occupying a large amount of used memory, is the recommendation to manually change the translator_cache_size parameter to a value low enough that it forces a purge of sorts (based on LRU)?


Is the translator_cache_size parameter exclusive to "Pool/JoinEvaluator/TranslationTable" entries? Would it have been preferable to cap the actual memory size that can be allocated rather than the number of tables that can be generated?


In the example above, the large volumes were reported under "Pool/JoinEvaluator/JERequestedAttributes/Results" in the HANA_Memory_Overview script. From testing, these appear to be tied to SQL Script procedures. Is there any way to limit the size/number of these through configuration?


Thanks again,

Sean.

Former Member

The only option to reduce the translation table cache is to lower the translator_cache_size parameter (which defines the number of translation tables, not a maximum memory size). This parameter is exclusive to Pool/JoinEvaluator/TranslationTables. The size of other areas like Pool/JoinEvaluator/JERequestedAttributes/Results is linked to the actual (intermediate) results of database requests, so there is no way to adjust it with central parameters.

