
Efficient way to alter a large table with limited space on the server

Former Member

Hi All,

I joined a new organization a week back and was assigned a task on my very first day. The task is to alter a table, which appeared simple at first, but as I started working on it I encountered a lot of hurdles.

Here, the table in question is 91 GB in size and has an identity column with a lot of identity gaps, which were never fixed until now. The actual issue is that when this table was created, the identity column was given a precision of 10, i.e., numeric(10,0). The data has grown so much that this range is almost exhausted, and hence they want to increase the precision to 12 within a two-week deadline. To increase the precision, they asked me to alter the table.
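For context, numeric(10,0) tops out at 9,999,999,999, so a quick query shows how much headroom is actually left. This is only a hedged sketch; trade_activity and id are placeholder names, not the real table or column:

```sql
-- Hypothetical headroom check; trade_activity and id are placeholder names.
-- A numeric(10,0) column can hold values up to 9,999,999,999.
select max(id)              as current_max,
       9999999999 - max(id) as values_remaining
from trade_activity
```

If the remaining range is smaller than the expected insert volume before the maintenance window, that tells you how urgent the change really is.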

If my understanding is correct, altering such a huge table requires more space than the table itself occupies; if I read the documentation right, it should be 120% of the existing table size. I had two thoughts in mind:

- Fire an alter table with the bulkcopy db option enabled, or

- bcp the data out and back in, retaining the existing identity values

Things would have been easier if we had space on the server, but I had another surprise: the UAT server had only 20 GB of free space, which is not sufficient for either of the above options. Frustrated, I checked the production server, which had 95 GB free.

Since my previous ideas were not feasible in the current scenario, I came up with a different solution. My plan is to use SELECT INTO to move the data into a new table in production with the bulkcopy db option enabled. Once the data is copied, create a clustered index on the new table with the sorted_data option and drop the old table. My team was convinced by this idea, as the additional storage we requested will take time to be approved and made available.

I have one question here.

1. Once I create the new table, I will be left with only 4 GB of free space. I have to create a composite clustered index on this new table, and I am thinking of creating it with the sorted_data option, which builds the index during a single scan of the table. My understanding is that index creation normally needs extra space, but since I am specifying that the data is already sorted, it should not need to copy or move the data pages.

-> Please let me know if my understanding on this point is correct.

The SELECT INTO query is given below:

select new_identity_column = identity(12), [column_list]
into new_table
from old_table
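Putting the whole plan together, an end-to-end sketch might look like the following. This is a hedged outline, not a tested script: all database, table, and index names are placeholders, and the exact sp_dboption option name ('select into/bulkcopy/pllsort' on 12.5.x, shortened on some versions) should be verified against your server:

```sql
-- Hypothetical cutover sketch; trading_db, new_trade_activity,
-- old_trade_activity, and ix_trade are placeholder names.
use master
go
exec sp_dboption trading_db, 'select into/bulkcopy/pllsort', true
go
use trading_db
go
checkpoint
go
-- Minimally logged copy; identity(12) generates a fresh numeric(12,0) column.
select new_id = identity(12), [column_list]
into new_trade_activity
from old_trade_activity
go
-- sorted_data skips the sort/copy step only if rows are already in key
-- order; if ASE detects out-of-order rows, the command aborts.
create clustered index ix_trade
    on new_trade_activity (new_id, name, rownum)
    with sorted_data
go
drop table old_trade_activity
go
exec sp_rename new_trade_activity, old_trade_activity
go
```

One design note: because identity(12) assigns values in the order rows are copied, the leading index key (new_id) should already be in ascending order, which is what makes the sorted_data shortcut plausible here.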

Sybase ASE version: 12.5.3

The table has 400,000,000 (400 million) rows at present.

I am still worried about doing this, as I did not get a chance to test it in UAT, and I am also worried about the transaction log.

Please let me know if my understanding is correct and if there is a better way to do this. I may be completely wrong; please guide me, as I am new to Sybase ASE database administration.

Thanks

ADNAN

Former Member

Hi Mark,

Thank you for assisting me.

Please find all the requested information below:

- What is the table's current identity gap setting? (select identitygap from sysindexes where id = object_id('<table_name>') and indid < 2)

The identity gap is zero for the table.

- What is the current max(ident) value? (select max(<ident_col_name>) from <table_name>)

Min - 1790259173

Max - 9630809127
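A max of 9,630,809,127 against only 400 million rows means the gaps, not the rows, have consumed most of the numeric(10,0) range. It may therefore be worth capping the identity gap on the new table so the numeric(12,0) range lasts. A hedged sketch, assuming the new table is named new_trade_activity (a placeholder) and using an arbitrary example gap of 1000:

```sql
-- Hypothetical: cap how many identity values can be lost per allocation
-- on the new table; 1000 is an arbitrary example value.
exec sp_chgattribute 'new_trade_activity', 'identity_gap', 1000
go
-- The server-wide default burning behavior can be inspected with:
exec sp_configure 'identity burning set factor'
go
```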

- What is the table's current locking scheme ... allpages, datapages, datarows? (see output of sp_help <table_name>)

Allpages locked table

- Does the table currently have a clustered index? (sp_helpindex <table_name>)

Yes. The table has a composite clustered index with three columns:

ID: numeric(10,0)

Name: varchar(40)

RowNum: numeric(10,0)

- Do you need to maintain the current identity values if/when copying the data to a new table?

No. Our team has agreed to regenerate the identity values. I have added the identity() function to the SELECT INTO to generate new identity column values during the copy.

- How big of a maintenance window do you have in which to perform the change?

I have to complete this by next week. I can do it either this weekend or the next weekend.

- How much of the current table is accessed on a normal basis? Is older data updated on a regular basis or is older data basically read-only? (Wondering if older data could be moved prior to your maintenance window; would require more logging for the actual maintenance window but could free up some wall clock time if you can move a large chunk of the data prior to the maintenance window.)

The table is not accessed much; it is only used to record all the trade activity. The old data is still present in the table. I asked the same question about whether we could move the old data out, but they are more focused on resolving this issue than on moving the data.

- Are rows ever deleted from this table and if so, how many are deleted over a given period of time (eg, how many deleted in a week, in a month, in a year)?

I learned that they tried to archive the old data some time back, but it failed for some reason, and there have been no deletes since.

I hope my answers are to the point and address all your questions. Please let me know if I am going in the right direction, or correct me if I am wrong.

Thanks a lot in advance.

Regards

ADNAN
