Introduction

In the past two weeks I was involved in an Oracle database and OS platform scalability and performance test for a non-SAP banking application. The application uses basic LOBs for storing XML messages, and from time to time the end users notice a performance issue when writing the LOB data.

My client built up a (performance) test environment with the same Oracle database and OS version to simulate a large write load on basic LOBs with different settings, but the test results neither correlated with production nor seemed logical at all. So we started to discuss the environment, the LOB settings in production and how the test environment was built up. This blog focuses on a special case of basic LOBs and their "redo generation" in both environments.

The LOB definition / issue

The application schema was deployed by the software vendor of the banking / finance application. It uses the default storage settings for basic LOBs (in our case the important ones are "nocache" and in-line storage enabled). The XML message size varies between just a few bytes and more than 4000 bytes, so both in-line and out-of-line storage are used.
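Which settings are actually in effect can be verified in the data dictionary; a minimal sketch (the schema name UTEST is taken from the statements further below, the column list is shortened):

SYS@T11> select TABLE_NAME, COLUMN_NAME, CACHE, LOGGING, IN_ROW, CHUNK
                      from dba_lobs where OWNER = 'UTEST';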

The LOB data is written by direct path writes (nocache clause) when storing the LOB data in its "external" LOB segment (out-of-line storage for data larger than 4000 bytes). I immediately thought about caching the LOB content and eliminating the bottleneck of direct path writes in such a response time critical environment, but my client mentioned that they want to benefit from less redo for directly written LOB data. They provided me a PDF document from an Oracle consultancy which basically states that you generate nearly no redo for out-of-line stored LOBs when using the nocache clause. I immediately doubted that statement, but the document includes a test case and the results were also reproducible in the (performance) test environment. I did not understand how Oracle would be able to recover such LOB data if no "full" redo record was written for it. So I decided to do further research in my virtual environment with the same client database settings.

The LOB test case

The following tests were run with an Oracle 11.2.0.3 database on OEL 6.2.

General database and LOB settings

The database is running in "force logging" mode, which means that DML / DDL operations are recorded in the redo log files even if you specify the nologging clause or hint. The LOB itself is created with its default settings for cache and logging (nocache, logging) and with in-line storage enabled.
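The exact DDL is not shown here, but a minimal sketch of a matching setup (table, column and segment names taken from the statements below; all other storage clauses are assumptions) could look like this:

SYS@T11> alter database force logging;
UTEST@T11> create table TAB_LOB_NOCACHE (ID number, DATA blob)
                        lob (DATA) store as basicfile LOB_BASIC_NOCACHE
                        (enable storage in row nocache chunk 8192);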

The LOB test data

shell> dd if=/oracle/T11/11203/bin/oracle of=/tmp/data.dd bs=1024 count=5
SYS@T11> create or replace directory LOBDATA as '/tmp';
SYS@T11> grant read, write on directory LOBDATA to UTEST;

I created a binary file of 5120 bytes at OS level that I want to store in the "external" LOB segment. I used this approach as SQL*Plus limits the LOB content size by design.

First test case with Oracle database in NOARCHIVELOG mode (like the test environment of the client)

UTEST@T11>
DECLARE
  SRC_FILE BFILE := BFILENAME('LOBDATA', 'data.dd');
  BLOB_POINTER BLOB;
BEGIN
  -- insert a row with an empty LOB locator and keep the locator
  INSERT INTO TAB_LOB_NOCACHE VALUES (1, EMPTY_BLOB())
    RETURNING DATA INTO BLOB_POINTER;
  -- load the 5120 byte OS file into the LOB segment
  DBMS_LOB.OPEN(SRC_FILE, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADFROMFILE(dest_lob => BLOB_POINTER,
                        src_lob  => SRC_FILE,
                        amount   => DBMS_LOB.GETLENGTH(SRC_FILE));
  DBMS_LOB.CLOSE(SRC_FILE);
  COMMIT;
END;
/

I ran a script called Snapper (by Tanel Poder) in a separate session while this PL/SQL block inserted one record into the table / external LOB segment. The following session statistics regarding redo were captured for one insert of roughly 5120 bytes.
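If you do not have Snapper at hand, the relevant redo figures can also be queried manually from V$SESSTAT, sampled before and after the insert (a minimal sketch; the SID of the loading session has to be filled in):

SYS@T11> select sn.NAME, st.VALUE from v$sesstat st, v$statname sn
                      where st.STATISTIC# = sn.STATISTIC# and st.SID = <SID>
                      and sn.NAME in ('redo size', 'redo entries');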

Oracle has written only 52 bytes of redo content for storing over 5000 bytes of LOB data. That sounds pretty low, right? So let's dump and verify the redo log content.

SYS@T11> select OBJECT_ID, DATA_OBJECT_ID from dba_objects
                      where OBJECT_NAME = 'LOB_BASIC_NOCACHE';
 OBJECT_ID DATA_OBJECT_ID
---------- --------------
     83384          83388
SYS@T11> select lf.MEMBER from v$logfile lf, v$log lo
                      where lf.GROUP# = lo.GROUP# and lo.STATUS = 'CURRENT';
MEMBER
-------------------------------------------------------------------------
/oracle/T11/origlog/redo_T11_g2_01.log
/oracle/T11/mirrlog/redo_T11_g2_02.log
SYS@T11> alter system dump logfile '/oracle/T11/origlog/redo_T11_g2_01.log';
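The dump is written to the session's trace file (and not to the SQL*Plus session), which can be located like this on 11g:

SYS@T11> select VALUE from v$diag_info where NAME = 'Default Trace File';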

Let's first verify the redo record length (hex 0x0034 = decimal 52). You can see that only 52 bytes are used for that redo record, although we inserted much more LOB data (roughly 5120 bytes). The interesting part is the section called "Direct Loader invalidate block range redo entry". Such a redo record is written in case of a nologging operation, but we defined the LOB, the table and the tablespace with the logging clause, and on top of that the database is running in "force logging" mode.

We are not able to run a valid scalability and performance test on external LOB segments if the Oracle database in the (performance) test environment is running in NOARCHIVELOG mode. Let's re-run this test in ARCHIVELOG mode for verification.

Second test case with Oracle database in ARCHIVELOG mode (like the production environment of the client)
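The switch to ARCHIVELOG mode itself is the standard procedure (the database needs to be in mount state):

SYS@T11> shutdown immediate;
SYS@T11> startup mount;
SYS@T11> alter database archivelog;
SYS@T11> alter database open;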

I again ran Snapper (by Tanel Poder) in a separate session while the same PL/SQL block inserted one record into the table / external LOB segment. The following session statistics regarding redo were captured for one insert of roughly 5120 bytes.

Oracle has now written 8272 bytes of redo content for storing 5120 bytes of LOB data. Roughly 8000 bytes is reasonable, as DML writes a complete LOB chunk (even if we only store 5120 bytes) and I used the default chunk size of one data block (= 8 KB) on my test database: 8272 bytes correspond to one full 8192 byte chunk plus about 80 bytes of redo record overhead. Let's dump and verify the redo log content for completeness.

SYS@T11> select lf.MEMBER from v$logfile lf, v$log lo
                      where lf.GROUP# = lo.GROUP# and lo.STATUS = 'CURRENT';
MEMBER
---------------------------------------------
/oracle/T11/origlog/redo_T11_g3_01.log
/oracle/T11/mirrlog/redo_T11_g3_02.log
SYS@T11> alter system dump logfile '/oracle/T11/origlog/redo_T11_g3_01.log';

Let's verify the redo record length again (hex 0x2050 = decimal 8272). You can see that 8272 bytes are used for that redo record even though we only inserted 5120 bytes of data (check the 00 values at the end of the redo record), but this works as designed. Now we have written a valid / full redo record and are able to recover the database to any point in time.

Thinking about test results

Why does Oracle behave like that? It seems strange from an application point of view, but it makes absolute sense from a database technology point of view. You never have the option to perform a point-in-time recovery if you are running the Oracle database in NOARCHIVELOG mode. Oracle needs the redo log content for instance / crash recovery only, and in the case of direct path writes the LOB data is already written to the data files. So Oracle does not need to log a valid / full redo record for such operations (call it an optimization or whatever).

I cross-checked my findings with the PDF document and found a footnote stating that the database was running in NOARCHIVELOG mode as well. Unfortunately it seems like the author of that paper did not think through his observations and their consequences: no production Oracle database should run in NOARCHIVELOG mode, and you generate the expected amount of redo (full chunks) if you are running the Oracle database in ARCHIVELOG mode.

However, my client now tests the application with cached LOBs, as it was proven that the original assumption (and its consequences) about reduced redo for non-cached LOBs is simply wrong for any Oracle database running in ARCHIVELOG mode.
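For reference, an existing basic LOB can be switched to caching without rebuilding the table; a minimal sketch for the test table from above:

UTEST@T11> alter table TAB_LOB_NOCACHE modify lob (DATA) (cache);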

Summary

You need to set up your test environment exactly the same way as production if you want reliable and meaningful test results. Even a seemingly negligible setting like the archive log mode (of a test environment) can impact the database behavior drastically. In the test case above (with NOARCHIVELOG mode) you can produce much more LOB I/O load than would ever be possible in production. In production you usually hit log buffer / log file sync issues much earlier, long before you reach the load of the test environment.

If you have any further questions, please feel free to ask, or get in contact directly if you need assistance with troubleshooting Oracle database performance issues.
