
archive redo log too small

benoit-schmid
Contributor
0 Kudos

Hello,

On one ECC 6.04 system running Oracle 11.2.0.2, we regularly get "Checkpoint not complete" messages in the alert log.

Before increasing the size or the number of redo log groups again, I have checked the sizes of the archived redo logs:

ls -altrh /oracle/PRD/oraarch

total 4269436

-rw-r--r--   1 oraprd   dba            0 Jun 15  2011 .ch.unige.nsr.ignore

--w-------   1 root     root        500M Jun 21  2011 2_DELETE_ME_ON_ARCHIVER_STUCK

--w-------   1 root     root        500M Jun 21  2011 1_DELETE_ME_ON_ARCHIVER_STUCK

-rw-r--r--   1 root     root         791 Jun 21  2011 0_DELETE_ME_ON_ARCHIVER_STUCK.README

-rw-r-----   1 oraprd   dba          28M Apr  3 22:31 PRDarch1_10262_761304167.dbf

-rw-r-----   1 oraprd   dba          35M Apr  3 23:27 PRDarch1_10263_761304167.dbf

-rw-r-----   1 oraprd   dba          40M Apr  3 23:27 PRDarch1_10264_761304167.dbf

-rw-r-----   1 oraprd   dba          40M Apr  3 23:27 PRDarch1_10265_761304167.dbf

-rw-r-----   1 oraprd   dba          35M Apr  3 23:27 PRDarch1_10266_761304167.dbf

-rw-r-----   1 oraprd   dba          30M Apr  3 23:27 PRDarch1_10267_761304167.dbf

-rw-r-----   1 oraprd   dba          33M Apr  3 23:27 PRDarch1_10268_761304167.dbf

-rw-r-----   1 oraprd   dba          29M Apr  3 23:28 PRDarch1_10269_761304167.dbf

-rw-r-----   1 oraprd   dba          31M Apr  3 23:28 PRDarch1_10270_761304167.dbf

-rw-r-----   1 oraprd   dba          29M Apr  3 23:28 PRDarch1_10271_761304167.dbf

-rw-r-----   1 oraprd   dba          34M Apr  3 23:28 PRDarch1_10272_761304167.dbf

-rw-r-----   1 oraprd   dba          40M Apr  4 00:13 PRDarch1_10273_761304167.dbf

-rw-r-----   1 oraprd   dba          30M Apr  4 00:30 PRDarch1_10274_761304167.dbf

-rw-r-----   1 oraprd   dba          32M Apr  4 00:30 PRDarch1_10275_761304167.dbf

-rw-r-----   1 oraprd   dba          31M Apr  4 01:42 PRDarch1_10276_761304167.dbf

drwxr-xr-x  23 oraprd   dba           42 Apr  4 02:23 ../

-rw-r-----   1 oraprd   dba          28M Apr  4 04:32 PRDarch1_10277_761304167.dbf

-rw-r-----   1 oraprd   dba          28M Apr  4 06:03 PRDarch1_10278_761304167.dbf

-rw-r-----   1 oraprd   dba          30M Apr  4 06:30 PRDarch1_10279_761304167.dbf

-rw-r-----   1 oraprd   dba          28M Apr  4 06:32 PRDarch1_10280_761304167.dbf

-rw-r-----   1 oraprd   dba          28M Apr  4 06:33 PRDarch1_10281_761304167.dbf

-rw-r-----   1 oraprd   dba          28M Apr  4 06:35 PRDarch1_10282_761304167.dbf

-rw-r-----   1 oraprd   dba          28M Apr  4 06:36 PRDarch1_10283_761304167.dbf

-rw-r-----   1 oraprd   dba          28M Apr  4 06:38 PRDarch1_10284_761304167.dbf

-rw-r-----   1 oraprd   dba          28M Apr  4 06:39 PRDarch1_10285_761304167.dbf

-rw-r-----   1 oraprd   dba          28M Apr  4 06:55 PRDarch1_10286_761304167.dbf

-rw-r-----   1 oraprd   dba          35M Apr  4 06:56 PRDarch1_10287_761304167.dbf

-rw-r-----   1 oraprd   dba          37M Apr  4 06:56 PRDarch1_10288_761304167.dbf

-rw-r-----   1 oraprd   dba          35M Apr  4 06:56 PRDarch1_10289_761304167.dbf

-rw-r-----   1 oraprd   dba          40M Apr  4 06:56 PRDarch1_10290_761304167.dbf

-rw-r-----   1 oraprd   dba          40M Apr  4 06:56 PRDarch1_10291_761304167.dbf

-rw-r-----   1 oraprd   dba          32M Apr  4 07:32 PRDarch1_10292_761304167.dbf

-rw-r-----   1 oraprd   dba          27M Apr  4 09:32 PRDarch1_10293_761304167.dbf

-rw-r-----   1 oraprd   dba          28M Apr  4 10:36 PRDarch1_10294_761304167.dbf

drwxr-xr-x   2 oraprd   dba           40 Apr  4 11:04 ./

-rw-r-----   1 oraprd   dba          28M Apr  4 11:04 PRDarch1_10295_761304167.dbf

It shows that the database is performing log switches before the maximum redo log size is reached.

Would you know what could explain this weird behavior?
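For reference, the configured online redo log size and the switch frequency can also be cross-checked directly in the database; a minimal sketch (assuming SYSDBA access on the PRD instance, standard 11.2 views):

SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$log;

-- log switches per hour, derived from the log history
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;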

Thanks in advance for your answer.

Accepted Solutions (1)


stefan_koehler
Active Contributor
0 Kudos

Hello Benoît,

Are you sure that you have an issue with "Checkpoint not complete"? Many customers mix this up with the alert log entries "Thread <X> cannot allocate new log, sequence <X> - Private strand flush not complete".

However, regarding your question about log switches happening before the maximum redo size is reached, there are three main reasons for this behavior:

  1. Manual log switch by third party tools
  2. Expected behavior (preemptive redolog switches) like described in sapnote #1627481
  3. Init parameter ARCHIVE_LAG_TARGET (http://docs.oracle.com/cd/E11882_01/server.112/e25513/initparams009.htm#REFRN10003)
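Reasons 2 and 3 can be checked quickly from SQL*Plus; a minimal sketch (a value of 0 for ARCHIVE_LAG_TARGET means the time-based forced switch is disabled):

show parameter archive_lag_target
show parameter log_buffer

-- or, without the SQL*Plus shortcut:
SELECT name, value FROM v$parameter
 WHERE name IN ('archive_lag_target', 'log_buffer');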

Regards

Stefan

benoit-schmid
Contributor
0 Kudos

Hello Stefan,

Stefan Koehler wrote:

Are you sure that you have an issue with "Checkpoint not complete"? Many customers mix this up with the alert log entries "Thread <X> cannot allocate new log, sequence <X> - Private strand flush not complete".

...

  1. Manual log switch by third party tools
  2. Expected behavior (preemptive redolog switches) like described in sapnote #1627481
  3. Init parameter ARCHIVE_LAG_TARGET (http://docs.oracle.com/cd/E11882_01/server.112/e25513/initparams009.htm#REFRN10003)

You are asking whether I am sure that it is "Checkpoint not complete".

This is exactly what I see in my alert log: "Checkpoint not complete".

Does that answer your question?

It is definitely not a manual switch.

ARCHIVE_LAG_TARGET is deactivated in my configuration.

See you,

Answers (2)


benoit-schmid
Contributor
0 Kudos

Hello,

I will resize the redo logs.

What are the pros and cons of reducing the log_buffer size as recommended in SAP Note 1627481?

Thanks in advance for your help.

Former Member
0 Kudos

Hi Benoît,

If you reduce the log_buffer size, the redo log buffer has to be flushed to the online redo log files more often, and the additional I/O can reduce system performance.
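Before shrinking it, it may be worth checking whether sessions already have to wait for the log writer; a minimal sketch using standard v$sysstat statistics (steadily growing values argue against a smaller buffer):

SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo buffer allocation retries', 'redo log space requests');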

Best regards,

Orkun Gedik

Former Member
0 Kudos

Hi Benoit,

A well-tuned database should not perform more than one redo log switch per minute.

Your system, however, generates many archived redo logs per minute during high workload.

Depending on the size of your origlog and mirrlog filesystems, consider increasing the redo log size, for instance to 500 MB per group.
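Since online redo logs cannot be resized in place, the usual approach is to add new, larger groups and then drop the old ones; an illustrative sketch only (group numbers and file paths are examples, adapt them to your origlog/mirrlog layout and member count):

ALTER DATABASE ADD LOGFILE GROUP 5
  ('/oracle/PRD/origlogA/log_g5m1.dbf', '/oracle/PRD/mirrlogA/log_g5m2.dbf') SIZE 500M;
-- repeat for the remaining new groups ...
ALTER SYSTEM SWITCH LOGFILE;          -- move the current log out of an old group
ALTER SYSTEM CHECKPOINT;              -- make sure the old group is no longer needed for recovery
ALTER DATABASE DROP LOGFILE GROUP 1;  -- then drop the old, smaller groups one by one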

Regards

Leo

benoit-schmid
Contributor
0 Kudos

Hello Leopoldo,

Leopoldo Capasso wrote:

Hi Benoit,

A well-tuned database should not perform more than one redo log switch per minute.

Your system, however, generates many archived redo logs per minute during high workload.

Depending on the size of your origlog and mirrlog filesystems, consider increasing the redo log size, for instance to 500 MB per group.

Regards

Leo

The problem with setting 500 MB is that a single redo log may then contain several hours of typical production work, because on average this is a small ECC system.

In that case, if you lose a redo log you lose several hours of work.

I agree that I have to increase the size.

But the answer is not just to increase the size.

I guess I also have to reduce the log_buffer size, and I may use ARCHIVE_LAG_TARGET, which Stefan mentioned.

What is really sad about this redo problem is that I have checked the ABAP program that is redo intensive.

It does a full delete and a full insert instead of updating only a few table rows in the database.
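To illustrate the difference in redo volume (the table and column names below are made up, not the actual report):

-- current pattern: every row is deleted and re-inserted, generating redo for all of them
DELETE FROM zreport_data;
INSERT INTO zreport_data SELECT * FROM zreport_staging;

-- targeted alternative: only the rows that actually changed generate redo
UPDATE zreport_data d
   SET d.amount = (SELECT s.amount FROM zreport_staging s WHERE s.id = d.id)
 WHERE EXISTS (SELECT 1 FROM zreport_staging s
                WHERE s.id = d.id AND s.amount <> d.amount);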

I also expect our developers to change it in the future, but life is not like in the books.

Thanks in advance for your feedback.

Former Member
0 Kudos

Dear all,

I don't agree with setting the parameter ARCHIVE_LAG_TARGET, because it is not mentioned in any SAP Note.

Regarding the risk of losing a redo log, I simply think that your storage and backup strategy must ensure that no redo log file is lost in case of a crash or any other kind of problem.

I usually prefer to make the redo logs as large as needed to obtain maximum performance.

Regards

Leo

benoit-schmid
Contributor
0 Kudos

Hello,

Leopoldo Capasso wrote:

Regarding the risk of losing a redo log, I simply think that your storage and backup strategy must ensure that no redo log file is lost in case of a crash or any other kind of problem.

You can have the best storage and backup strategy.

If you do an rm -r /oracle/PRD, you lose everything, including the current redo log.

With huge redo logs and no other tuning, you can lose a lot of production work, because the corresponding archived redo logs have not been generated yet.
See you,

former_member182307
Contributor
0 Kudos

Hello Benoit,

This is normal behavior and is described in the following notes:

Note 998675 - Archive log of smaller size than the original redo log

and Note 1627481 - Preemptive redolog switches in Oracle 11.2 (which Stefan mentioned).

Regarding your "Checkpoint not complete" issue, I would:

First: increase the redo log file size to 100 MB.

Second: add more redo log groups if that is not enough.

Regards,

Steve.