7 Replies | Latest reply: Dec 31, 2009 12:54 PM by Lars Breddemann

bottleneck of fast incremental backups

Clovis Wichoski

Hi,

 

I need a way to guarantee that a change in a production system doesn't cause users to lose data. Since MaxDB doesn't have an audit feature, I'm thinking of activating an incremental backup every 5 minutes; then, if for some reason I need to check changes, I can just restore those incremental backups and compare them one to the next.

 

To better understand my problem, imagine you use a persistence tool such as Hibernate, OJB, Castor, etc. These tools may handle complex object graphs differently, so in some cases a value can be lost or changed incorrectly while persisting: for example, in the program you set a reference on an object to a certain value, but the persistence framework has an unknown bug and writes NULL instead of the value the user intended.

The idea here is: if that occurs, how can I get back the correct data the table held before the persistence framework wrote the wrong value? One of my ideas is stated above.

 

Any tips are welcome.

 

best regards.

 

Clóvis

 

  • Re: bottleneck of fast incremental backups
    Lars Breddemann

    Hi Clovis,

     

    using incremental backups does not seem to be a good idea to me.

    Of course you can use backups to recreate the old values, but for that it's sufficient to take daily data backups and roll the recovery instance forward using the log backups.

    So I wouldn't complicate things further by adding incremental backups.
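
    Just to illustrate that approach with dbmcli (untested sketch; the medium names, paths and user are placeholders, and the exact command syntax can differ between MaxDB versions):

    // define backup media once
    dbmcli -d MYDB -u dbadmin,secret medium_put BackData /backup/MYDB_data FILE DATA
    dbmcli -d MYDB -u dbadmin,secret medium_put BackLog  /backup/MYDB_log  FILE LOG

    // daily complete data backup
    dbmcli -d MYDB -u dbadmin,secret backup_start BackData DATA

    // let MaxDB write log backups automatically whenever a log segment is full
    dbmcli -d MYDB -u dbadmin,secret autolog_on

    // to inspect old values: restore the data backup into a separate recovery
    // instance and roll forward with recover_start ... LOG up to the point in time you need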

     

    However, this kind of data 'protection' rather calls for some data versioning.

    In fact, what you describe is: "Keep the old data and store the new data as well."

     

    So in this case you may think of creating triggers and versioning tables for your tables.

    (Yes, this would be one of the few places where triggers are sensible to use).

     

    For every INSERT, UPDATE, or DELETE statement you can then insert the old data together with a timestamp into a versioning table.

    Should you require the old data, you may just query this table then.
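
    Just as an untested sketch of what such a versioning table and trigger could look like for a single table (I'm using the hotel.city table from the demo schema purely as an illustration; exact syntax may vary with your MaxDB version):

    CREATE TABLE hotel.city_hist
    (zip        CHAR(5) NOT NULL,
     name       CHAR(30),
     state      CHAR(2),
     change_op  INT,
     changed_at TIMESTAMP DEFAULT TIMESTAMP)
    //

    // store the old row image on every UPDATE (change_op = 2);
    // the INSERT and DELETE triggers would follow the same pattern
    CREATE TRIGGER city_hist_upd FOR hotel.city AFTER UPDATE EXECUTE (
      VAR OLDZIP CHAR(5);
      VAR OLDNAME CHAR(30);
      VAR OLDSTATE CHAR(2);
      TRY
        OLDZIP   = OLD.zip;
        OLDNAME  = OLD.name;
        OLDSTATE = OLD.state;
        INSERT INTO hotel.city_hist (zip, name, state, change_op)
          VALUES (:OLDZIP, :OLDNAME, :OLDSTATE, 2);
      CATCH
        IF $RC <> 100 THEN
          BEGIN
          STOP ($RC, 'city_hist trigger failed');
          END; )
    //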

     

    regards,

    Lars

    • Re: bottleneck of fast incremental backups
      Clovis Wichoski

      Hi Lars,

       

      Is there any kind of SQL script that can automate the creation of this type of trigger, or do I have to do it by hand for every table?

       

      regards

       

      Clóvis

       

      • Re: bottleneck of fast incremental backups
        Lars Breddemann

        Hi Clovis,

         

        there's no built-in feature in the MaxDB tools for this, no.

        But I remember that CASE tools offered this kind of "data-design macro" back when CASE and RAD were hip and trendy.

         

        Anyhow, it's not that difficult to create such a trigger.

        And depending on what your data looks like, you may even think of a general kind of logging table.

        For example:

        Your table may look like this:

        CREATE TABLE city
        (zip   CHAR(5)  PRIMARY KEY,
         name  CHAR(30) NOT NULL,
         state CHAR(2)  NOT NULL)

         

        And you create a logging table for that like this one:

        CREATE TABLE LOG
        (table_name  VARCHAR(30) NOT NULL,
         change_id   INT NOT NULL DEFAULT SERIAL,
         change_op   INT NOT NULL CHECK (change_op BETWEEN 1 AND 3),
         before_data CLOB,
         PRIMARY KEY (table_name, change_id))

         

         

        Let's assume that change_op encodes the change operation like this: 1=INSERT, 2=UPDATE, 3=DELETE.

        Now you create three triggers (one each for INSERT, UPDATE, and DELETE), and in each trigger you call the same stored procedure that actually performs the insert into the logging table.

        The before image itself is just stored as a concatenated string of the old column values (before_data).

        With that you have a central logging table that would not immediately lead to heavy locking situations and that will still stay rather compact.

        Also the logging code needs to be written only once - leaving the actual table data concatenation and the call of the stored procedure as the only coding you need to create for each table.

         

        You can also think of adding a timestamp or a user field to the table if this is required.
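
        For example (a small sketch; the column names are just suggestions, and DEFAULT TIMESTAMP / DEFAULT USER fill the new columns automatically on insert):

        ALTER TABLE LOG ADD
        (change_ts   TIMESTAMP   DEFAULT TIMESTAMP,
         change_user VARCHAR(32) DEFAULT USER)
        //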

         

        That's the way I would approach this issue (without further information about the exact data and the requirements).

         

        regards,

        Lars

        • Re: bottleneck of fast incremental backups
          Clovis Wichoski

          Hi, Lars,

           

          I understand. If you don't mind, is there a way to post a sample of that stored procedure and the triggers here?

           

          regards

           

          Clóvis

           

          • Re: bottleneck of fast incremental backups
            Lars Breddemann

            > I understand. If you don't mind, is there a way to post a sample of that stored procedure and the triggers here?

             

            Since Christmas is just over, I'm still giving out presents, so let's see:

             

            create table hotel.hist_data(
            table_name varchar(30) not null,
            change_id int not null default SERIAL,
            change_operation int not null check (change_operation between 1 and 3),
            old_data varchar(4000),
            primary key (table_name, change_id)
            )
            //
            
            // GENERIC LOG-INSERT PROCEDURE
            
            create dbproc hotel.log_change (IN TABLENAME CHAR(30), IN CHANGEOP FIXED(1), IN OLDDATA char(4000))
             AS 
             TRY
             INSERT INTO HOTEL.HIST_DATA (table_name, change_operation, old_data)
                  values (:TABLENAME, :CHANGEOP, :OLDDATA);
              RETURN 0;
            CATCH
            IF $RC <> 100 THEN
                BEGIN
                STOP ($RC, 'proc unexpected error');
                END;  
            //
            
            // UPDATE TRIGGER FOR TABLE HOTEL.CUSTOMER
            create trigger log_upd_customer for hotel.customer after update
            execute (
                 VAR OLDDATA char(4000);
            try
                  OLDDATA = chr(OLD.CNO) 
                  & ', ' & chr(OLD.TITLE)  
                        & ', ' & chr(OLD.FIRSTNAME)
                        & ', ' & chr(OLD.NAME)
                        & ', ' & chr(OLD.ZIP)
                        & ', ' & chr(OLD.ADDRESS); 
                  call hotel.log_change ('CUSTOMER', 2 , :OLDDATA);     
            catch
            IF $RC <> 100 THEN
                BEGIN
                STOP ($RC, 'unexpected error');
                END;       )
            //

             

            Now let's change some data:

            // -- was MUELLER before
            update hotel.customer set name='Porter' where cno=3000
            commit
            
            select * from hotel.hist_data
            TABLE_NAME; CHANGE_ID; CHANGE_OPERATION; OLD_DATA
            CUSTOMER;           1;                2;  3000, Mrs, Jenny, Mueller, 10580, 1340 N. Ash Street, #3
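
            For completeness: the DELETE trigger is just a variation of the same pattern (change_operation = 3, only OLD values exist), and the INSERT trigger would use the NEW values with change_operation = 1. An untested sketch:

            // DELETE TRIGGER FOR TABLE HOTEL.CUSTOMER
            create trigger log_del_customer for hotel.customer after delete
            execute (
                 VAR OLDDATA char(4000);
            try
                  OLDDATA = chr(OLD.CNO)
                        & ', ' & chr(OLD.TITLE)
                        & ', ' & chr(OLD.FIRSTNAME)
                        & ', ' & chr(OLD.NAME)
                        & ', ' & chr(OLD.ZIP)
                        & ', ' & chr(OLD.ADDRESS);
                  call hotel.log_change ('CUSTOMER', 3 , :OLDDATA);
            catch
            IF $RC <> 100 THEN
                BEGIN
                STOP ($RC, 'unexpected error');
                END;       )
            //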

             

            Hope that's what you wanted.

             

            regards,

            Lars

             

            p.s.

            Because of the not-so-comfortable CLOB handling in MaxDB, I just used a large VARCHAR field to store the old data.

            • Re: bottleneck of fast incremental backups
              Clovis Wichoski

              Hi Lars,

               

              thanks for the gift.

               

              Just one more question: what if I change hist_data and add one more field to store the name of the column that was changed, like this:

               

              create table hotel.hist_data(
              table_name varchar(30) not null,
              field_name varchar(30) not null,
              change_id int not null default SERIAL,
              change_operation int not null check (change_operation between 1 and 3),
              old_data varchar(4000),
              primary key (table_name, field_name, change_id)
              )
              //

               

              and then in the trigger I call the dbproc for each field that was changed, something like:

               

              IF OLD.FIRSTNAME <> NEW.FIRSTNAME THEN BEGIN
                  call hotel.log_change ('CUSTOMER', 'FIRSTNAME', 2 , :OLD.FIRSTNAME);
              END;

               

               

              This will increase the number of rows, but it only logs what really changed. What is your view on this change?

               

              best regards and happy new year!!!

               

              Clóvis

               

              • Re: bottleneck of fast incremental backups
                Lars Breddemann

                > and then in the trigger I call the dbproc for each field that was changed, something like:
                >
                > IF OLD.FIRSTNAME <> NEW.FIRSTNAME THEN BEGIN
                >     call hotel.log_change ('CUSTOMER', 'FIRSTNAME', 2 , :OLD.FIRSTNAME);
                > END;
                >
                > This will increase the number of rows, but it only logs what really changed. What is your view on this change?

                 

                Hi Clovis,

                this totally depends on your needs.

                Of course you can check for all changed columns and just save these ones.

                It's really just about what you require.

                 

                Personally I would try to have one log entry for one change action - so I'd modify the trigger code and only concatenate the changed columns. Of course one has to store the column no./name in that case as well...
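
                Just to illustrate that idea with an untested sketch (as a replacement for the simple update trigger from the earlier post, reusing the log_change procedure and the CUSTOMER columns from there; NULL handling and the remaining columns are left out for brevity): concatenate only the columns that actually differ and prefix each value with its column name, so one UPDATE still produces one log row:

                create trigger log_upd_customer for hotel.customer after update
                execute (
                     VAR OLDDATA char(4000);
                try
                      OLDDATA = 'CNO=' & chr(OLD.CNO);
                      IF OLD.FIRSTNAME <> NEW.FIRSTNAME THEN
                          BEGIN
                          OLDDATA = OLDDATA & ', FIRSTNAME=' & chr(OLD.FIRSTNAME);
                          END;
                      IF OLD.NAME <> NEW.NAME THEN
                          BEGIN
                          OLDDATA = OLDDATA & ', NAME=' & chr(OLD.NAME);
                          END;
                      call hotel.log_change ('CUSTOMER', 2 , :OLDDATA);
                catch
                IF $RC <> 100 THEN
                    BEGIN
                    STOP ($RC, 'unexpected error');
                    END;       )
                //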

                 

                As I wrote: it's completely up to you and your requirements.

                I only tried to provide a little primer for your own solution.

                 

                Have a fine New Year's party and a happy 2010!

                 

                cheers,

                Lars
