
SAP HANA Performance Tuning

Former Member

I am running the following update statement on SAP HANA via Apache JMeter (a load-testing tool) over a JDBC connection, and I am getting fairly low throughput: roughly 20 requests per second while simulating 10 clients, each iterating the update about 100 times.

Is there a way to tune HANA so that I achieve higher throughput? CPU and memory usage are quite low on my system, so I am also at a loss as to where the time is going.
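For context: 10 concurrent clients producing roughly 20 requests per second in total works out to about 500 ms per update round trip per client (10 / 20 = 0.5 s), assuming no think time between iterations, which is far more than a single-row update should need.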

Accepted Solutions (1)


lbreddemann
Active Contributor

A common mistake is not using prepared statements.

Make sure you do that.

- Lars
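For illustration, here is a minimal sketch of the pattern Lars describes: prepare the statement once and only rebind the parameters per execution. The host, credentials, commit behaviour and the TPC-C-style WAREHOUSE table are assumptions for the example; the SAP HANA JDBC driver (ngdbc.jar) is expected on the classpath.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class WarehouseUpdateClient {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - adjust host, port, user and password.
        try (Connection con = DriverManager.getConnection(
                "jdbc:sap://hanahost:30015/", "USER", "PASSWORD")) {
            con.setAutoCommit(false);
            // Prepare once, outside the loop, so the cached plan is reused.
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE WAREHOUSE SET W_YTD = W_YTD + ? WHERE W_ID = ?")) {
                for (int i = 0; i < 100; i++) {
                    ps.setBigDecimal(1, new BigDecimal("10.00"));  // bind payment amount
                    ps.setInt(2, 1);                               // bind warehouse id
                    ps.executeUpdate();
                    con.commit();  // committing each iteration releases the row lock quickly
                }
            }
        }
    }
}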

Former Member

Hi Lars, I am already using prepared statements. I am currently connected to the HANA server over a VPN; could it be a network-related issue?

lbreddemann
Active Contributor

Could be a lot of things. Without measuring, all one can do is guess...

Measure where the time is spent.

With the network latency you already have an idea of where to start.


- Lars
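One way to get a first number, in the spirit of the advice above (connection details again placeholders): time a trivial round trip over the same JDBC connection. Since SELECT 1 FROM DUMMY does essentially no work on the server, its average duration is a rough lower bound per statement execution, dominated by network latency and session overhead.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RoundTripProbe {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:sap://hanahost:30015/", "USER", "PASSWORD");
             PreparedStatement ps = con.prepareStatement("SELECT 1 FROM DUMMY")) {
            try (ResultSet rs = ps.executeQuery()) { rs.next(); }  // warm-up, not timed
            final int loops = 100;
            long start = System.nanoTime();
            for (int i = 0; i < loops; i++) {
                try (ResultSet rs = ps.executeQuery()) { rs.next(); }
            }
            long avgMicros = (System.nanoTime() - start) / loops / 1000;
            System.out.println("Average round trip: " + avgMicros + " microseconds");
        }
    }
}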


Answers (1)


Former Member

I ran a network test using TTCP: transmitting a buffer of 8192 bytes 2048 times gives a throughput of about 722.21 KB/sec. Is that a good enough rate for HANA, and what is the guidance on network requirements for self-hosted apps accessing HANA in the cloud or in a remote data center?

lbreddemann
Active Contributor

OK, you measured your throughput. Apart from the fact that for your update it is most likely the latency that matters, this misses the point.

The point was to find out where the time is spent while processing your update statements.

Why don't you use the available tracing options for SAP HANA?

There's JDBC tracing available, and as far as I know JMeter also lets you see detailed timing information.

Maybe it's not even the network transfer, but e.g. some locking effect.

What are the execution statistics for the update statements?
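For example, the execution and lock-wait statistics asked about here can be read from the M_SQL_PLAN_CACHE monitoring view over the same JDBC connection. This is only a sketch (connection details assumed), sorting statements by accumulated lock-wait time:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PlanCacheStats {
    public static void main(String[] args) throws Exception {
        // Show the statements that spent the most time waiting for locks.
        String sql = "SELECT STATEMENT_STRING, EXECUTION_COUNT, AVG_EXECUTION_TIME, "
                   + "TOTAL_LOCK_WAIT_COUNT, TOTAL_LOCK_WAIT_DURATION "
                   + "FROM M_SQL_PLAN_CACHE "
                   + "ORDER BY TOTAL_LOCK_WAIT_DURATION DESC LIMIT 10";
        try (Connection con = DriverManager.getConnection(
                 "jdbc:sap://hanahost:30015/", "USER", "PASSWORD");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%s | exec=%d | avg=%d us | lock waits=%d | lock wait=%d us%n",
                        rs.getString(1), rs.getLong(2), rs.getLong(3),
                        rs.getLong(4), rs.getLong(5));
            }
        }
    }
}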

Former Member

I checked M_OBJECT_LOCK_STATISTICS_RESET for lock statistics (the full results were attached as a screenshot).

I have singled out this update statement for the locking on the WAREHOUSE table:

Update WAREHOUSE SET W_YTD = W_YTD + ? WHERE W_ID = ?

For the above results we have 100+ terminals trying to update just 2 warehouses concurrently.

FYI: we got the above results in about two hours of testing.

From HANA SQL Plan Cache I got the following statistics for the above update statement:

Statement_String: Update WAREHOUSE SET W_YTD = W_YTD + 1 WHERE W_ID = 1
Execution_Count: 1000
Total_Cursor_Duration: 962091
Avg_Cursor_Duration: 962
Total_Execution_Time: 962091
Avg_Execution_Time: 962
Total_Lock_Wait_Count: 107
Total_Lock_Wait_Duration: 172066

What other statistics should I collect? And if I have the above contention, is it normal, and what can I do to reduce it?
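To put those numbers in perspective (assuming the plan-cache durations are reported in microseconds, as is usual for these columns): 107 of the 1000 executions had to wait for a lock, the total lock-wait time of 172,066 µs averages to roughly 1.6 ms per waiting execution, and lock waits account for about 172,066 / 962,091 ≈ 18% of the total execution time.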

lbreddemann
Active Contributor

If you have access to SAP Notes, especially the HANA performance-related ones, it would be rather straightforward to tell you how to analyze this situation.

Looking at just the few statistics you posted, I'd say that lock contention is certainly a factor here.

How do you separate the W_IDs across the different clients?
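To illustrate what this question is getting at (this is an assumption about the test design, not something stated so far): if each simulated client is given its own W_ID, the updates stop competing for the same row lock. A sketch in plain JDBC terms, with the same placeholder connection details as above:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PartitionedWarehouseUpdate {
    // clientNo would come from the load tool, e.g. the JMeter thread number.
    static void runClient(int clientNo, int iterations) throws Exception {
        int warehouseId = clientNo;  // one warehouse per client -> no shared row lock
        try (Connection con = DriverManager.getConnection(
                 "jdbc:sap://hanahost:30015/", "USER", "PASSWORD");
             PreparedStatement ps = con.prepareStatement(
                 "UPDATE WAREHOUSE SET W_YTD = W_YTD + ? WHERE W_ID = ?")) {
            con.setAutoCommit(false);
            for (int i = 0; i < iterations; i++) {
                ps.setBigDecimal(1, new BigDecimal("10.00"));
                ps.setInt(2, warehouseId);
                ps.executeUpdate();
                con.commit();  // commit promptly so the row lock is held only briefly
            }
        }
    }

    public static void main(String[] args) throws Exception {
        runClient(Integer.parseInt(args[0]), 100);  // client number passed on the command line
    }
}

Whether spreading the W_IDs like this is acceptable depends on the workload being modelled; with 100+ terminals hitting only 2 warehouses, some serialization on those two rows is unavoidable.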

Former Member

As per the remark from Lars above, it looks like lock contention is at least one part of your problem.

The note that is a mine of great scripts is: 1969700 - SQL statement collection for SAP HANA

From this note you'd be interested in the HANA_Locks_LockWaits_History.txt script.

Run the SQL in Studio and you'll get a nice overview of lock waits by object.

SELECT
  HOST,
  LPAD(PORT, 5) PORT,
  TO_CHAR(BLOCKED_TIME, 'YYYY/MM/DD HH24:MI:SS') || '-' || TO_CHAR(SERVER_TIMESTAMP, 'YYYY/MM/DD HH24:MI:SS') WAIT_TIMEFRAME,
  LPAD(SECONDS_BETWEEN(BLOCKED_TIME, SERVER_TIMESTAMP), 7) WAIT_S,
  LPAD(BLOCKED_CONNECTION_ID, 12) WAIT_CONN_ID,
  LPAD(BLOCKED_TRANSACTION_ID, 13) WAIT_TRANS_ID,
  LPAD(LOCK_OWNER_CONNECTION_ID, 12) HOLD_CONN_ID,
  LPAD(LOCK_OWNER_TRANSACTION_ID, 13) HOLD_TRANS_ID,
  WAITING_SCHEMA_NAME SCHEMA,
  WAITING_TABLE_NAME TABLE_NAME,
  WAITING_RECORD_ID RECORD_ID,
  LOCK_TYPE,
  LOCK_MODE
FROM
( SELECT
    T.HOST,
    T.PORT,
    T.BLOCKED_TIME,
    MAX(T.SERVER_TIMESTAMP) SERVER_TIMESTAMP,
    T.BLOCKED_CONNECTION_ID,
    T.BLOCKED_TRANSACTION_ID,
    T.LOCK_OWNER_CONNECTION_ID,
    T.LOCK_OWNER_TRANSACTION_ID,
    T.WAITING_SCHEMA_NAME,
    T.WAITING_TABLE_NAME,
    T.WAITING_RECORD_ID,
    T.LOCK_TYPE,
    T.LOCK_MODE,
    BI.ORDER_BY
  FROM
  ( SELECT                       /* Modification section */
      TO_TIMESTAMP('2015/02/25 00:00:00', 'YYYY/MM/DD HH24:MI:SS') BEGIN_TIME,
      TO_TIMESTAMP('2015/02/27 23:59:00', 'YYYY/MM/DD HH24:MI:SS') END_TIME,
      '%' HOST,
      '%' PORT,
      '%' SCHEMA_NAME,
      '%' TABLE_NAME,
      10 MIN_WAIT_TIME_S,
      'TIME' ORDER_BY           /* TIME, TABLE, DURATION */
    FROM
      DUMMY
  ) BI,
    _SYS_STATISTICS.HOST_BLOCKED_TRANSACTIONS T
  WHERE
    T.HOST LIKE BI.HOST AND
    T.PORT LIKE BI.PORT AND
    T.WAITING_SCHEMA_NAME LIKE BI.SCHEMA_NAME AND
    T.WAITING_TABLE_NAME LIKE BI.TABLE_NAME AND
    ( T.BLOCKED_TIME BETWEEN BI.BEGIN_TIME AND BI.END_TIME OR T.SERVER_TIMESTAMP BETWEEN BI.BEGIN_TIME AND BI.END_TIME ) AND
    ( BI.MIN_WAIT_TIME_S = -1 OR SECONDS_BETWEEN(T.BLOCKED_TIME, T.SERVER_TIMESTAMP) >= BI.MIN_WAIT_TIME_S )
  GROUP BY
    T.HOST,
    T.PORT,
    T.BLOCKED_TIME,
    T.BLOCKED_CONNECTION_ID,
    T.BLOCKED_TRANSACTION_ID,
    T.LOCK_OWNER_CONNECTION_ID,
    T.LOCK_OWNER_TRANSACTION_ID,
    T.WAITING_SCHEMA_NAME,
    T.WAITING_TABLE_NAME,
    T.WAITING_RECORD_ID,
    T.LOCK_TYPE,
    T.LOCK_MODE,
    BI.ORDER_BY
)
ORDER BY
  MAP(ORDER_BY, 'DURATION', SECONDS_BETWEEN(BLOCKED_TIME, SERVER_TIMESTAMP)) DESC,
  MAP(ORDER_BY, 'TABLE', WAITING_TABLE_NAME),
  BLOCKED_TIME DESC

Kindest Regards,

Amerjit