
Transaction log very high when dropping a column on popular table

pietercoene
Explorer

Hi,

This week I had to drop a column from a table (the table has 40 columns and 3600 rows). So nothing special, eh? It's not a big table in size or row count, but it is referenced in more than 400 objects (stored procedures, triggers, views, SQL functions), and these objects are constantly used by our users...

Dropping the column is very fast, but the transaction log almost instantly grows by about 400MB (normally the transaction log on this system grows about 150MB/hour). We have 1024MB configured, and for the next half hour the log keeps growing faster than normal (our full database dump takes about 50 minutes, so that's a long time to watch the transaction log closing in on 1024MB).

Why is this? Are all objects that reference this table recompiled, perhaps? And why does that affect our transaction log?

And maybe the most important question, what can we do about it?

PS: I know I can increase our transaction log, but that's not the issue right now...

Greetz

Pieter

Adaptive Server Enterprise/15.7/EBF 21985 SMP SP110 /P/X64/Windows Server/ase157sp11x/3546/64-bit/OPT/Fri Nov 08 01:38:31 2013

Accepted Solutions (1)

former_member188958
Active Contributor

Hi Pieter,

For the ALTER TABLE ... DROP, do you have the database option "select into" set, or are you using the "full logging for alter table" setting?

In general, I think you are probably right that much of the log growth has to do with the compiled objects. The ALTER TABLE marks those objects to be renormalized (and thus recompiled) the first time they are executed. The compilation itself (creating a new query plan in procedure cache) does not grow the log, but the renormalization does rewrite the query tree stored in sysprocedures, which involves (on a 2K page size server, anyway) about 2K of log for each sysprocedures row for the object. There can be a surprising number of such rows; something as simple as "create procedure p select c1, c2 from t1" produces 7 rows.

(sysprocedures rows are wider than they look; they have a hidden column that contains the query tree in binary form.)
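To see how many such rows an object has, you can count them yourself (a quick check, using the example procedure p from above):

-- Count the sysprocedures rows backing a procedure; renormalization
-- rewrites the query tree stored across all of them, logging each rewrite.
select count(*)
from sysprocedures
where id = object_id("p")
go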

These log records would be generated as each object is executed for the first time after the alter, so that probably generates a lot of activity very quickly, then trails off.

Each of these would be a separate transaction, so it should generally be possible to do a DUMP TRANSACTION to truncate the log after the ALTER (and perhaps some of the more immediate activity) has finished.
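For example (a sketch; the dump device path here is hypothetical, substitute your own):

-- Truncate the log once the post-ALTER renormalization burst has passed:
dump transaction test to "/dumps/test_tran.dmp"
go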

Here is a little demonstration:


1> create table t1 (c1 int, c2 int)
2> go
1> insert t1 select id, id from sysobjects
2> go
(32 rows affected)
1> create procedure p as select c1 from t1
2> go
1> dump tran test with truncate_only
2> go
1> select count(*) from syslogs
2> go
-----------
          20
(1 row affected)
1> alter table t1 drop c2
2> go
(32 rows affected)
1> select count(*) from syslogs
2> go
-----------
          79
(1 row affected)
1> execute p
2> go
c1
-----------
           1
           2
[snip]
   752002679
(32 rows affected)
(return status = 0)
1> select count(*) from syslogs
2> go
-----------
         110  <---- execution of p grew the log by 31 records due to rewriting the query tree
(1 row affected)

pietercoene
Explorer

Hi Bret,

Thanks for the clarification. The 'select into' option is ON, and 'full logging for alter table' is OFF.

It's not possible to dump the transaction log after the drop column (it's a minimally logged operation), so we need to take a full database dump first. That's why it takes a while to clear the transaction log.

But it's good to know; I'll keep it in mind the next time I need to drop a column.
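For reference, both options can be checked and changed with sp_dboption (a sketch using the example database name test; sp_dboption must be run from master):

-- Show the database and its current option settings:
sp_helpdb test
go
-- Make ALTER TABLE fully logged, so DUMP TRANSACTION stays possible
-- (at the cost of much more log activity during the ALTER itself):
use master
go
sp_dboption test, "full logging for alter table", true
go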

Greetz

Pieter

Former Member

Bret,

This opens a new question. Assume I have a 2-hour window for the database in which I am going to perform the ALTER on one table (the table is referenced in more than 500 objects, and the ALTER statement itself takes only 15 minutes), and the business wants the referenced objects to be in a renormalized state before the database is handed back to the app/business team.

That leaves me 1 hour 45 minutes to perform the remaining operations (renormalization and full backup).

1) How can I renormalize all referenced objects manually?

2) Is there a way to find the list of objects that reference this object? If so, how?

This would help avoid log-full issues and minimize log segment utilization during operating hours. During the window I can set the database to 'trunc log on chkpt' and 'select into/bulkcopy', so that the ALTER and the renormalization won't use much log space.

Once all my activity is completed, I can revert the DB option changes, take a full DB backup, and hand the DB back for processing, so the application can use the objects and log segment usage will be as before (e.g., for this case, 150MB/hour).

former_member188958
Active Contributor

You can use the dbcc upgrade_object command to force renormalization.

use test
go
create table t1 (c1 int, c2 int)
go
create procedure p1 as select c1 from t1
go
dump tran test with truncate_only
go
alter table t1 drop c2
go
select count(*) from syslogs
go
dbcc upgrade_object(test, "p1", force)
go
select count(*) from syslogs  -- note the increase in log records from normalization here
go
execute p1
go
select count(*) from syslogs  -- note no increase here now
go


The sysdepends table will generally tell you which objects depend on another, provided the depended-on object (the table in this case) hasn't been dropped and recreated since the referencing object (the procedure) was created. (The table stores dependencies by object id, so when you drop the table the row is deleted, but when you re-create the table a new row isn't inserted.)
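As a quick sanity check before relying on sysdepends (t1 is the example table from the script above):

-- If this returns 0 even though procedures reference the table, the table
-- was likely dropped and recreated and sysdepends is no longer reliable.
select count(*)
from sysdepends
where depid = object_id("t1")
go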

You can generate a dbcc upgrade_object script from sysdepends for every object that depends on a table:

select "dbcc upgrade_object(test, '"
     + object_name(id)
     + "', force)"
from sysdepends
where depid = object_id("t1")
go

-bret

Former Member

Thanks a lot, Bret. Very helpful for me.

pietercoene
Explorer

Hi,

Seems like a good solution, but only if you have a maintenance window. We normally don't have a lot of maintenance windows, so since we can't go offline, dbcc upgrade_object won't help us either.

Also, setting the 'trunc log on chkpt' option is not possible for us, as we use Sybase Replication (we have a warm Active/Standby environment).

Still, the simplest thing for us to do is take a full database dump; the only downside is that it takes 50 minutes to complete. After that, I don't care if the transaction log grows fast, because a transaction dump takes only seconds to complete... (and we take advantage of the 'threshold' mechanism to take transaction dumps automatically).
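For reference, that threshold mechanism looks roughly like this (a sketch: the procedure name and dump path are hypothetical; the four-parameter signature is what ASE passes to threshold procedures):

-- Called by ASE when free space on the named segment falls below the
-- threshold; here it simply dumps the transaction log.
create procedure sp_thresh_dumplog
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
    dump transaction @dbname to "/dumps/tranlog.dmp"
go
-- Fire it when fewer than 2048 free pages remain on the log segment:
sp_addthreshold test, "logsegment", 2048, sp_thresh_dumplog
go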

But it's nice to learn about the dbcc upgrade_object trick.

PS: you can also use the ASE system procedure sp_depends.

Greetz

Pieter

pietercoene
Explorer

Good news,

When using the 'no datacopy' option to drop a column, no full database dump is needed. The only pitfall is that to reclaim the space you have to run a reorg rebuild yourself, but we can run that on our standby environment, so no user is affected, yay!

PS: one limitation -> 'no datacopy' does not work for computed columns...
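For reference, the syntax looks like this (using the example table names from earlier in the thread):

-- Drop the column without copying the table; fast and light on the log:
alter table t1 drop c2 with no datacopy
go
-- Later, in a quiet period or on the standby, reclaim the space:
reorg rebuild t1
go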

Greetz

Pieter

Former Member

Pieter,

Thanks for sharing another method. With the 'no datacopy' method, does the ALTER statement (for the column drop) require free space in the relevant data segment equal to the actual table size?

Consider: I have a 10GB table and I would like to drop 1 column.

1) Does a normal drop column statement require around 10GB of free space in the data segment?

2) The 'no datacopy' option won't bother about free space? Will it succeed if I have only 5GB of free space?

former_member188958
Active Contributor

It doesn't require free space when you run it, because with 'no datacopy' the table isn't copied at all. ASE just marks the column as "logically deleted", so it stops being seen. However, the data is still there, using up space, until you do a subsequent REORG REBUILD or an ALTER TABLE that does make a copy of the table.

1) Yes. Possibly somewhat less if your table is very fragmented.

2) Yes.
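For reference, you can gauge the space involved before a copying ALTER TABLE (t1 and test are the example names from earlier posts):

-- Space currently used by the table (data and indexes):
sp_spaceused t1
go
-- Device fragments and free space for the database:
sp_helpdb test
go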

Former Member

Thanks, Bret.
