Technology Blogs by Members
Explore a vibrant mix of technical expertise, industry insights, and tech buzz in member blogs covering SAP products, technology, and events. Get in the mix!

High Availability - LOG SHIPPING

High Availability Features

------------------------------------

* To reduce or avoid server or database downtime, so that the applications/users can continue working with the server, SQL Server supports 3 types of DR features.

                1. Server Level High Availability features

                                * Clustering

                2. Database Level High Availability features

                                * T.Log Shipping

                                * Database Mirroring

                3. Object Level High Availability features

                                * Replication

  1. Log Shipping

--------------------

* It is a simple method of keeping a copy of an entire database on a different server.

* It works with Backup, Copy and Restore jobs.

* We need 3 servers

                * Primary server

                * Secondary server

                * Monitor server (optional)

* In the primary server the database is in an online state, where the applications or users are connected.

* In the secondary server the database is in standby mode, where we can read data.

* In the secondary server, T.Log backups are applied with either

                * Norecovery or

                * Standby

* We can have multiple secondary servers.

Advantages

-----------------

                * To reduce downtime for the applications or users

                * To implement load balancing, i.e. we can allow reads from the secondary server and manipulations in the primary server.

Points to Remember:

-----------------------------

We can ship all databases except master, model, read-only databases, and databases whose recovery model is SIMPLE.

    We should have sysadmin privileges on both servers.

    Changing the recovery model from FULL or BULK_LOGGED to SIMPLE will break the log shipping process.

    We cannot take ad-hoc transaction log backups from the primary server, as that breaks the log shipping log backup chain.
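The last two points can be checked and worked around from T-SQL. A minimal sketch; the database name and backup path are illustrative, following the example used later in these notes:

```sql
-- Check that the database uses a log-shipping-compatible recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'suresh';

-- If an ad-hoc log backup is unavoidable, COPY_ONLY (SQL Server 2005+)
-- preserves the log backup chain used by log shipping
BACKUP LOG suresh TO DISK = 'F:\SQL_BACKUPS\suresh_adhoc.trn' WITH COPY_ONLY;
```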

Architecture

------------------

Log Shipping Jobs

------------------------

* Log shipping supports 4 jobs

                * Backup Job

                * Copy Job

                * Restore Job

                * Alert Job

  1. Backup Job

-------------------

                * It is created in primary server for every log shipping configuration.

                * It takes backup of T.Log file periodically and deletes old backups and old history information.

                * We have to grant read-write permissions on the backup folder to the primary server service account, and read permissions to the secondary server account.

  2. Copy Job

----------------

                * Created in secondary server for every log shipping configuration.

                * Copy the backup files from backup folder into copy folder.

                * It deletes old files and old history from copy folder.

                * We have to grant read permission on the backup folder and read-write permissions on the copy folder to the secondary server account.

  3. Restore Job

--------------------

                * It is created in secondary server for every log shipping configuration.

                * It restores the files from Copy folder into secondary server in standby mode.

  4. Alert Job

----------------

                * It is created in Monitor server.

                * If a monitor server is not used, it is created in the primary and secondary servers.

                * Only one instance of Alert Job is created.
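Once configured, the jobs above can be listed from msdb. A quick check, assuming the default LS* name prefix that the log shipping wizard generates:

```sql
-- List log shipping jobs (default names: LSBackup_<db>, LSCopy_<db>, LSRestore_<db>, LSAlert)
SELECT name, enabled
FROM msdb.dbo.sysjobs
WHERE name LIKE 'LS%';
```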

Requirements

-------------------

* Minimum 2 servers are required.

* Database must be in FULL or BULK LOGGED recovery model.

* Any of the editions

                * Enterprise Edition

                * Standard Edition

                * Workgroup Edition

* Both servers should have the same collation settings.
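The collation can be verified on each instance before configuring:

```sql
-- Run on both servers; the results should match
SELECT SERVERPROPERTY('Collation') AS ServerCollation;
```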

Steps

--------

  1. In SSMS, connect to 3 instances
  2. Note down their service accounts

SYS1 (SQL Server 9.0.5000 – WIPRO\administrator)

SYS2 (SQL Server 9.0.5000 – WIPRO\administrator)

SYS3 (SQL Server 9.0.5000 – WIPRO\administrator)

          

               

  3. Create the following folders

                * In Primary Server F:\logshipping_backup

                * Grant read write permissions on this folder to Primary Server account

                * Grant read permission to secondary server account

                * In Secondary server E:\logshipping_copy

                * Grant read write permissions to secondary server account

  4. Test sharing features as

                * In Primary server

                                * Start --> Run --> \\sys1\logshipping_backup

                * In Secondary Server

                                * Start --> Run --> \\sys3\logshipping_copy

  5. In Primary server

                1. Create database with the name: suresh

                                CREATE DATABASE suresh

                2. Create sample table (EMP TABLE)

CREATE TABLE EMP

(

EMPNO NUMERIC(4) ,

ENAME VARCHAR(10),

JOB VARCHAR(9),

MGR NUMERIC(4),

HIREDATE DATETIME,

SAL NUMERIC(7, 2),

COMM NUMERIC(7, 2),

DEPTNO NUMERIC(2)

);

INSERT INTO EMP VALUES(7369,'SMITH','CLERK',7902,'1980-12-17',800, 300, 20);

INSERT INTO EMP VALUES(7499,'ALLEN','SALESMAN',7698,'1981-02-20',1600,300,30);

INSERT INTO EMP VALUES(7521,'WARD','SALESMAN',7698,'1981-02-22',1250,500,30);

INSERT INTO EMP VALUES(7566,'JONES','MANAGER',7839,'1981-04-02',2975,300,20);

                3. Taking Full backup & Log Backup

BACKUP DATABASE suresh TO DISK='F:\SQL_BACKUPS\suresh_full.bak'

BACKUP LOG suresh TO DISK='F:\SQL_BACKUPS\suresh_log.trn'

(Not required to perform manually)

Note:

                We have to configure on existing database.

  6. Go to Secondary Server

* Create folder with the name d:\SureshFiles and grant read write permissions to service account.

                (Not required to perform manually)

               

* R.C on database (suresh) --> Restore -- > database…

                   To Database:   suresh

                   Select From Device:

                   Click on browse button --> Add--> select suresh_full.bak file (\\sys1\SQL_BACKUPS\suresh_full.bak) --> OK

                * Select checkbox under Restore option.

               * Go to options --> select recovery state : with standby

                --> OK
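The GUI restore above can also be done in T-SQL. A sketch using this example's paths; the undo file name is an assumption:

```sql
-- Restore the full backup in standby mode so the database is readable
-- and further transaction log backups can still be applied
RESTORE DATABASE suresh
FROM DISK = '\\sys1\SQL_BACKUPS\suresh_full.bak'
WITH STANDBY = 'D:\SureshFiles\suresh_undo.tuf';
```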

  7. Configuring Log Shipping

               

* Go to Primary Server

                * Right click on Sales db

                * Tasks --> Ship Transaction Log

                * Select checkbox "Enable this as primary database...."

                * Click on "Backup Settings" --> Enter backup folder path

                                \\SYS1\LSBackup_Sales

                * Click on "Schedule" and change the schedule to every 5 minutes

                                Occurs Every: 5 minutes

                * OK

                * Under "Secondary Databases" click on Add button

                * Secondary server instance = Click on "Connect" button --> Select secondary server instance

                * Select the option "No, the secondary database is initialized" (we already restored the full backup manually; otherwise choose "Yes, generate a full backup of the primary database")

                * Click on Copy Files tab --> enter Destination folder as

                                \\SYS3\LSCopy_Sales

                * Click on "Schedule" and change the schedule to every 5 minutes

                                Occurs Every: 5 minutes

                * Click on Restore transaction log tab -->

               

* Select "Standby Mode" option and checkbox "Disconnect users in the database when..."

                * Click on "Schedule" and change the schedule to every 5 minutes

                                Occurs Every: 5 minutes

                * OK

                * OK

                * OK

Observations

------------------

  1. Go to the primary server --> SQL Server Agent --> Jobs --> View the backup job with the name LSBackup_Sales

  2. If no monitor server is used, check that 2 alerts are created in the Alerts folder.

                Log shipping Primary Server Alert

                Log shipping Secondary Server Alert

  3. In the secondary server verify that 2 jobs are created

                * Copy                  (LSCopy_Sales)

                * Restore            (LSRestore_Sales)

  4. All the above (Backup, Copy, Restore) jobs use the "sqllogship.exe" executable.
  5. The Alert job calls the following SP

                sys.sp_check_log_shipping_monitor_alert

  6. sqlmaint.exe is responsible for updating the backup, copy and restore information in the monitor/primary/secondary servers.

  7. Linked Servers are created in the primary and secondary servers, related to the monitor, as

LOGSHIPLINK_<MonitorServerName>

  8. The following SPs are used to configure Log Shipping

                1. master.dbo.sp_add_log_shipping_secondary_primary

                2. master.dbo.sp_add_log_shipping_secondary_database

Configuring Alerts

-------------------------

* Once we configure Log shipping 2 alerts are created automatically.

* We have to configure response for the alerts.

Steps

--------

* Go to Primary server --> SQL Server Agent --> Alerts --> Right Click on Log shipping Primary server alert --> Properties --> Response -->Notify Operators --> Select existing operator and select Email

                * Go to secondary server configure "Log shipping secondary alert…"
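The notification can also be attached in T-SQL. A sketch, assuming an operator named DBA_Team already exists; the alert name must match exactly what appears under Alerts on your server:

```sql
-- Attach an e-mail notification to the log shipping alert
EXEC msdb.dbo.sp_add_notification
     @alert_name = 'Log shipping Primary Server Alert.',
     @operator_name = 'DBA_Team',
     @notification_method = 1;  -- 1 = e-mail
```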

Monitoring Log Shipping

----------------------------------

  1. Using MSDB tables and Views

                * Go to primary server --> MSDB

                                1. Log_shipping_primary_databases

                                                * Consists of information related to

                                                                * Log Backup folder path

                                                                * Last Log backup file name

                2. Log_shipping_primary_secondaries

                                * Consists of details of secondary server name and database name

                * Go to Secondary Server --> MSDB

                                1. log_shipping_secondary

                                                * Details of copy folder and last copied file

                                2. log_shipping_secondary_databases

                                                * Details of last restore file
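These msdb tables can be queried directly, for example:

```sql
-- On the primary server: last log backup taken
SELECT primary_database, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_primary_databases;

-- On the secondary server: last copied and last restored files
SELECT d.secondary_database, s.last_copied_file, d.last_restored_file
FROM msdb.dbo.log_shipping_secondary AS s
JOIN msdb.dbo.log_shipping_secondary_databases AS d
  ON s.secondary_id = d.secondary_id;
```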

Creating Linked Server in Server1 (primary) for Monitoring

--------------------------------------------------------------------------------

  1. Go to secondary server Server2 and create a login which can access msdb tables.

                use master

                go

                create login link_login with password='hyd'

                go

                use msdb

                go

                create user link_login for login link_login

                go

                grant select to link_login

* Go to primary server(server1) --> server Objects -->R.C on Linked Server -->

                Enter server name : SERVER2 -->

                Select "SQL Server"

                --> Select Security   --> Be made using .... Enter login name and password --> OK

                Enter login name: link_login

                password=hyd

               OK

* Go to Server1 write the following script

select t1.primary_database AS DBName, t1.last_backup_file,
       t2.last_copied_file, t3.last_restored_file
from msdb.dbo.log_shipping_primary_databases t1
join [SERVER2].msdb.dbo.log_shipping_secondary t2
  on t1.primary_database = t2.primary_database
join [SERVER2].msdb.dbo.log_shipping_secondary_databases t3
  on t2.secondary_id = t3.secondary_id

  2. Using Log Shipping Status Report

-------------------------------------------------

* Go to Primary Server --> R.C on Server name --> Reports --> Standard Reports --> Transaction Log Shipping Status

  3. Using Agent Job History

------------------------------------

* We can monitor the job history, and if there are any issues we can resolve them.

Points to Remember

-----------------------------

* As part of log shipping only database users are transferred to the secondary server (inside the db).

* Logins are not transferred as part of log shipping. We have to transfer them manually or by creating a job.

                Log backup schedule = 15 min

                Copy job schedule = 15 min

                Restore job schedule = 15 min

If changes are made in the primary server at 8am, the maximum time for them to reach the secondary server is 8:45am (15+15+15).

* If a table is truncated at the primary server, it is also truncated in the secondary server database, because the truncate command is a minimally logged operation.

* Log shipping is stopped if we change the db recovery model from FULL or BULK_LOGGED to SIMPLE.

FAQ: - If we take a FULL backup at the primary server, does log shipping continue?

Ans:

                * A full backup does not break the log backup LSN chain. Hence the log shipping process continues.

Log Shipping Issues

---------------------------

  1. Out of sync

-------------------

* It is raised when the T.Log backups are not applied at the secondary server, with error number 14421. Possible causes:

                * If there is no disk space in the secondary server for restoring backups.

                * If there is no respective path to create the files in secondary server.

                * If an ad-hoc T.Log backup was taken in the primary server, e.g.

                                                9:00        T.Log     1000

                                                9:10        T.Log     1050 (Ad-hoc)

                                                9:15        T.Log 1100
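The gap in the chain can be diagnosed by inspecting each backup file's LSN range; the path below is illustrative:

```sql
-- FirstLSN of each log backup must equal LastLSN of the previous one;
-- a mismatch reveals the missing ad-hoc backup
RESTORE HEADERONLY FROM DISK = 'F:\logshipping_backup\suresh_log.trn';
```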

Solution

------------

* Restore the first ad-hoc log backup (taken at 9:10) manually in the secondary server with NORECOVERY.

                * Then restore the next log backup generated by the log shipping process, i.e. the 9:15 backup.

                * If the Agent service or msdb is not working in the secondary server.

                * If any T.Log backups are missing/corrupted before restoring:

                                * Disable Log shipping jobs.

                                * Take differential/Full backup and apply in secondary server.

                                * Enable jobs

  2. Copy and Restore jobs are not running

--------------------------------------------------------

* Check the secondary server's date and time, i.e. whether the secondary server's date and time are behind the primary server's.

                * Change the start date of the above jobs to match the secondary server's date.

FAQ: - If the secondary server is down, or the Agent service in the secondary server is not running, or msdb is not online, the copy and restore jobs fail. Once the Agent service is started, how are the backup files copied?

Ans:

                * All the pending backup files, from the last copied file onwards, are copied at once by the copy job.

FAQ: - In the secondary server the restore job is failing with the error message "it is too early to apply the logs". What may be the possible reasons?

Ans:

                * Check whether an earlier log backup file has not yet been restored, and restore it first.

Performing fail over

-----------------------------

* Failover is the process of making the secondary database available to the applications or users when the primary server/database has failed.

* Log Shipping Fail over process is not automatic.

Steps

--------

  1. Perform one last T.Log backup in primary server if possible
  2. Disable Log shipping jobs
  3. Restore if any backups are not applied at secondary server with NORECOVERY.
  4. Restore tail log backup with RECOVERY.
  5. Transfer the Logins related to that db.
  6. Make the secondary server available to the users.
  7. Configure Log shipping.
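Steps 1, 3 and 4 above can be sketched in T-SQL; the file names are illustrative, following this document's example:

```sql
-- Step 1 (primary, if still accessible): tail-log backup;
-- WITH NORECOVERY leaves the primary database in a restoring state
BACKUP LOG suresh TO DISK = 'F:\logshipping_backup\suresh_tail.trn' WITH NORECOVERY;

-- Steps 3-4 (secondary): apply any remaining log backups WITH NORECOVERY,
-- then the tail-log backup WITH RECOVERY to bring the database online
RESTORE LOG suresh FROM DISK = '\\sys1\logshipping_backup\suresh_tail.trn' WITH RECOVERY;
```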

TRANSFERRING LOGINS FROM ONE INSTANCE TO ANOTHER INSTANCE

----------------------------------------------------------------------------------------------

--The first 2 steps should be executed in the primary server

--Copy the output of step 2 and run it in the standby server

--Step1--

USE master
GO
IF OBJECT_ID ('sp_hexadecimal') IS NOT NULL
DROP PROCEDURE sp_hexadecimal
GO
CREATE PROCEDURE sp_hexadecimal
@binvalue varbinary(256),
@hexvalue varchar(256) OUTPUT
AS
DECLARE @charvalue varchar(256)
DECLARE @i int
DECLARE @length int
DECLARE @hexstring char(16)
SELECT @charvalue = '0x'
SELECT @i = 1
SELECT @length = DATALENGTH (@binvalue)
SELECT @hexstring = '0123456789ABCDEF'
WHILE (@i <= @length)
BEGIN
DECLARE @tempint int
DECLARE @firstint int
DECLARE @secondint int
SELECT @tempint = CONVERT(int, SUBSTRING(@binvalue,@i,1))
SELECT @firstint = FLOOR(@tempint/16)
SELECT @secondint = @tempint - (@firstint*16)
SELECT @charvalue = @charvalue +
SUBSTRING(@hexstring, @firstint+1, 1) +
SUBSTRING(@hexstring, @secondint+1, 1)
SELECT @i = @i + 1
END
SELECT @hexvalue = @charvalue
GO
IF OBJECT_ID ('sp_help_revlogin') IS NOT NULL
DROP PROCEDURE sp_help_revlogin
GO
CREATE PROCEDURE sp_help_revlogin @login_name sysname = NULL AS
DECLARE @name sysname
DECLARE @xstatus int
DECLARE @binpwd varbinary (256)
DECLARE @txtpwd sysname
DECLARE @tmpstr varchar (256)
DECLARE @SID_varbinary varbinary(85)
DECLARE @SID_string varchar(256)
IF (@login_name IS NULL)
DECLARE login_curs CURSOR FOR
SELECT sid, name, xstatus, password
FROM master..sysxlogins
WHERE srvid IS NULL
AND name <> 'sa'
ELSE
DECLARE login_curs CURSOR FOR
SELECT sid, name, xstatus, password
FROM master..sysxlogins
WHERE srvid IS NULL
AND name = @login_name
OPEN login_curs
FETCH NEXT
FROM login_curs INTO @SID_varbinary, @name, @xstatus, @binpwd
IF (@@fetch_status = -1)
BEGIN
PRINT 'No login(s) found.'
CLOSE login_curs
DEALLOCATE login_curs
RETURN -1
END
SET @tmpstr = '/* sp_help_revlogin script '
PRINT @tmpstr
SET @tmpstr = '** Generated '
+ CONVERT (varchar, GETDATE()) + ' on ' + @@SERVERNAME + ' */'
PRINT @tmpstr
PRINT ''
PRINT 'DECLARE @pwd sysname'
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
PRINT ''
SET @tmpstr = '-- Login: ' + @name
PRINT @tmpstr
IF (@xstatus & 4) = 4
BEGIN -- NT authenticated account/group
IF (@xstatus & 1) = 1
BEGIN -- NT login is denied access
SET @tmpstr = 'EXEC master..sp_denylogin ''' + @name + ''''
PRINT @tmpstr
END
ELSE
BEGIN -- NT login has access
SET @tmpstr = 'EXEC master..sp_grantlogin ''' + @name + ''''
PRINT @tmpstr
END
END
ELSE
BEGIN -- SQL Server authentication
IF (@binpwd IS NOT NULL)
BEGIN -- Non-null password
EXEC sp_hexadecimal @binpwd, @txtpwd OUT
IF (@xstatus & 2048) = 2048
SET @tmpstr = 'SET @pwd = CONVERT (varchar(256), ' + @txtpwd + ')'
ELSE
SET @tmpstr = 'SET @pwd = CONVERT (varbinary(256), ' + @txtpwd + ')'
PRINT @tmpstr
EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT
SET @tmpstr = 'EXEC master..sp_addlogin ''' + @name
+ ''', @pwd, @sid = ' + @SID_string + ', @encryptopt = '
END
ELSE
BEGIN
-- Null password
EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT
SET @tmpstr = 'EXEC master..sp_addlogin ''' + @name
+ ''', NULL, @sid = ' + @SID_string + ', @encryptopt = '
END
IF (@xstatus & 2048) = 2048
-- login upgraded from 6.5
SET @tmpstr = @tmpstr + '''skip_encryption_old'''
ELSE
SET @tmpstr = @tmpstr + '''skip_encryption'''
PRINT @tmpstr
END
END
FETCH NEXT
FROM login_curs INTO @SID_varbinary, @name, @xstatus, @binpwd
END
CLOSE login_curs
DEALLOCATE login_curs
RETURN 0
GO
----- End Script -----

Note: -  This script creates two stored procedures in the master database: sp_hexadecimal and sp_help_revlogin.

--Step2: Run the following statement.

EXEC master..sp_help_revlogin
GO

--Step3: The above SP generates output as follows

/* sp_help_revlogin script

** Generated May 25 2009 9:11PM on ONLINE */

-- Login: BUILTIN\Administrators

CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

-- Login: NT AUTHORITY\SYSTEM

CREATE LOGIN [NT AUTHORITY\SYSTEM] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

Run the above output (for required logins) in Stand by server instance.

--You can download from

--http://support.microsoft.com/kb/918992/

......................................................

Administering and Managing Log Shipping

=======================================

Log shipping is one of four SQL Server 2005 high-availability solutions.

Other SQL Server 2005 high-availability alternatives include

                * Database Mirroring

                * Failover Clustering

                * Peer-to-Peer Replication.

Note: - Database mirroring and Peer-to-Peer Replication are new technologies introduced with SQL Server 2005

Log shipping offers increased database availability and database protection by maintaining a warm standby database on another instance of SQL Server 2005.

Unlike database mirroring, a log shipping failover is not automatic; it must be performed manually.

Factors affecting Log Shipping

=================================

The size of the transaction log backups, speed of the network, and length of time the restoration takes all play a significant role in planning and scheduling the log shipping solution.

Log Shipping Design Considerations

===================================

*             SQL Server Version: SQL Server 2005 Standard, Workgroup and Enterprise Editions can be used for log shipping. All servers must be configured with the same case-sensitivity settings.

*             Recovery Mode: The recovery mode of the source database on the primary server must be configured as full or bulk-logged. Because the transaction log is a key part of log shipping, the simple recovery model cannot be used.

*             Monitor Server: The monitor server is optional. This server is important because it provides a central location for the log shipping status and alert messages.

*             Security: Sysadmin role is required on each SQL Server that will take part in the Log Shipping. The agent account used to run the backup job on the primary server must be able to read and write to the backup location.

*             Backup & Restore T.Log Location: The backup storage location is used to store the transaction log backups created by the backup job. It is highly recommended to host the transaction log backups on a fault-tolerant server, independent of the log shipping primary or secondary servers, with enough disk space. Similarly, enough disk space should be available on the copy/restore side.

Viewing Log Shipping Reports

=============================

* Log shipping reports can be viewed from the primary, secondary, and monitor servers. However, viewing the reports from the monitor server is most effective because the monitor server contains records from both the primary and secondary servers. Viewing the log shipping report from the primary and secondary servers shows only half the data.

Log Shipping Tables

====================

                Table Name                                          Description

                ----------------                                    ---------------

log_shipping_monitor_alert                          Stores the alert job ID.

log_shipping_monitor_error_detail                   Stores error details for log shipping jobs.

log_shipping_monitor_history_detail                 Contains history details for log shipping agents.

log_shipping_monitor_primary                        Stores one monitor record for the primary database in each log shipping configuration, including information about the last backup file and last restored file that is useful for monitoring.

log_shipping_monitor_secondary                      Stores one monitor record for each secondary database, including information about the last backup file and last restored file that is useful for monitoring.

Reasons for Log Shipping Failure

==============================

* Network Failure

* No Disk space on Secondary Server

* Read/Write permission removed from Log folder

* Expired password of the SQL Server Agent service account

* Copy / Restore job is not running

Prerequisites to configure LS

-------------------------------------

* 2 Servers ( Primary & Secondary)

* Recovery model should be FULL or BULK_LOGGED

* SQL 2005 Enterprise/Standard edition

* Same collation on both Servers

* Agent should be in Automatic Mode

* One shared folder on primary server to keep log backups

-------------------------------------------------------------------

Script to check Log shipping Status - SQL2000

------------------------------------------------------------

select p.primary_server, p.primary_database, s.secondary_server,
       p.last_backup_file, s.last_copied_file, s.last_restored_file
from msdb..log_shipping_monitor_primary p, msdb..log_shipping_monitor_secondary s
where p.primary_database = s.primary_database

What is .TUF file in Log Shipping?

===========================

A TUF file is a Microsoft SQL Server transaction undo file. The .TUF file contains information about any modifications that were part of incomplete transactions at the time the backup was performed.

A transaction undo (.TUF) file is required if a database is restored in a read-only (standby) state. In this state, further transaction log backups may still be applied.

TUF File in Log Shipping

====================

The transaction undo file contains modifications that were not committed on the source database but were in progress when the transaction log was backed up, and the restore left the database in a state that allows additional transaction log backups to be restored to it at some point in the future. When another transaction log is restored, SQL Server uses data from the undo file and the transaction log to continue restoring the incomplete transactions (assuming that they are completed in the next transaction log file). Following the restore, the undo file is re-written with any transactions that, at that point, are incomplete.
Hope it's not too geeky.


Question: In my environment there is an issue with the log shipping destination file path. I had to change the file path on the destination; after the change the LS copy job is working fine, but the LS restore job is failing because it is trying to find the .tuf file on the old path, which does not exist on the destination.


I don't want to do a full restore for 30+ databases, so I'm trying to update the .tuf path in msdb on the destination server, but I couldn't find the path details in any of the log shipping system tables. I know the last restored file path details can be found in the dbo.log_shipping_monitor_secondary and dbo.log_shipping_secondary_databases tables, but updating these tables does not resolve my issue.


Where is the .tuf file path details on msdb?
Ans: The .tuf file path is none other than the backup_destination_directory column in log_shipping_secondary on the primary server. It is updated automatically when you change the folder name in the LS setup page, but the .tuf file must still be available in the old directory when the next restore happens.

SELECT backup_destination_directory FROM dbo.log_shipping_secondary

If you change the path of this directory, what SQL Server does is: when the next restore happens, it first tries to copy the .tuf file from the old directory to the new directory, and only then goes ahead with the restore operation. If SQL Server cannot find the .tuf file in the old directory, or the old directory itself is lost, there is no other way than reconfiguring your LS setup from scratch.


What is the Undo File? Why is it required?

The undo file is needed in the standby state because, while restoring a log backup, uncommitted transactions are recorded to the undo file and only committed transactions are written to disk, thereby allowing users to read the database. When you restore the next T.Log backup, SQL Server fetches the uncommitted transactions from the undo file and checks against the new T.Log backup whether each one has committed. If it has committed, the transaction is written to disk; otherwise it stays in the undo file until it is committed or rolled back.
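The behaviour described above corresponds to the STANDBY restore option, which names the undo file explicitly. A sketch with illustrative paths:

```sql
-- Each standby restore parks uncommitted work in the named undo (.tuf) file,
-- leaving the database readable between restores
RESTORE LOG suresh
FROM DISK = 'E:\logshipping_copy\suresh_log.trn'
WITH STANDBY = 'E:\logshipping_copy\suresh.tuf';
```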
