Do you have any specific questions? The basic setup is documented in the following SAP Notes:
From my point of view backup and restore is a topic where just posting some commands is quite dangerous. If you take a backup you want to be 100% sure that you can recover the system. Therefore I strongly recommend reading the mentioned SAP Notes. Note 1588316 contains some SQL scripts that can help you set up the backup/recovery solution best matching your needs.
First off, let me introduce myself – this is my first posting here. My name is Bret Halford, and I'm a Principal Support Engineer for Sybase. I've been supporting ASE for 18 years.
My comments here are purely from the viewpoint of general use of ASE; so far I have had very little exposure to how SAP specifically runs on ASE, aside from some knowledge of the new features in ASE developed for that purpose.
One of the first things you should think about when deciding on a backup strategy for ASE is how much data loss you can accept in the case of a disaster, as well as how much down time you can accept. I expect that SAP has already made these decisions for their own applications. There are also factors beyond the dump strategy that play into this planning – for instance disk mirroring to guard against a hard drive freezing up and bursting into flame, the “high availability” option that uses two hosts sharing a disk system to guard against failure of a host, etc.
An ASE server has multiple databases on it, each database can be on multiple devices (and can share devices). Each database has a transaction log, which is itself a table within the database. At any given time, the contents of tables written to disk may contain a mix of committed and uncommitted data. ASE uses a write-ahead transaction log, meaning that when a client issues a commit transaction all the log records for that transaction are written to disk before success is returned to the client. However, the changes to data pages are not flushed to disk by the commit command (though they are flushed by other events, such as checkpoints or aging out of cache.) If the server goes down abruptly, a recovery process applies the transaction log records to the rest of the data stored on disk to return it to a transactionally consistent state.
As a general overview, there are three main methods for doing backups (in order of common usage): the DUMP and LOAD commands, the QUIESCE DATABASE command coupled with disk copy, and Replication.
The DUMP DATABASE command copies the entire database to an archive device. It does not prune (truncate) the transaction log (which is itself a table in the database). DUMP DATABASE is typically run on a daily or weekly basis. If your loss tolerance is 24 hours, then a single DUMP DATABASE every 24 hours is a reasonable choice for backup strategy and the transaction log can be kept pruned by either setting the database option “truncate log on checkpoint” or by regular use of the DUMP TRANSACTION command with the TRUNCATE_ONLY option.
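A minimal sketch of that daily-dump strategy (the database and file names are illustrative; note that the actual sp_dboption name for the checkpoint-truncation option is 'trunc log on chkpt'):

```sql
-- Daily full dump of the database to an archive file:
dump database mydatabase to '/backups/mydatabase.full.dmp'
go

-- Option 1: let ASE discard the log automatically at each checkpoint.
-- With this set, up-to-the-minute recovery via log dumps is NOT possible:
sp_dboption mydatabase, 'trunc log on chkpt', true
go

-- Option 2: prune the log manually without archiving its contents:
dump transaction mydatabase with truncate_only
go
```

Either option keeps the log from filling between full dumps; the trade-off is that neither preserves the log contents for point-in-time recovery.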
If your loss tolerance is less than 24 hours, down to about 15 minutes, then you would add in transaction dumps. The DUMP TRANSACTION command copies the contents of the transaction log to an archive device and truncates the log from the beginning to the start of the oldest open transaction. DUMP TRAN can be run off a cron (or Job Scheduler) job, or triggered automatically when the log grows to a certain size using thresholds. Note that the more frequently transaction log dumps are taken, the more transaction log dump files there are to LOAD, and the longer it will take to load them all. These dumps can be kept to be reloaded on the host in case of a problem, or used to maintain a "warm" standby server by promptly loading the dumps onto a second host as they are taken. In case of a failure, the warm system can be brought online relatively quickly by issuing the ONLINE DATABASE command to start the recovery process (the time to load the dumps is avoided, but the last phase of recovery still needs to be done).
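The warm-standby flow described above looks roughly like this (file names are illustrative):

```sql
-- On the primary, periodically: archive and prune the log.
dump transaction mydatabase to '/backups/mydatabase.tran1.dmp'
go

-- On the standby host: load the full dump, then apply each log dump
-- as it arrives. The database stays offline between loads so that
-- further transaction dumps can still be applied.
load database mydatabase from '/backups/mydatabase.full.dmp'
go
load transaction mydatabase from '/backups/mydatabase.tran1.dmp'
go

-- At failover time only: run the final recovery phase and open the database.
online database mydatabase
go
```

Until ONLINE DATABASE is issued, the standby database is not usable by clients; that command is deliberately deferred so the standby can keep consuming log dumps.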
DUMPs can be made either natively (directly to a file or tape device) or to an interface written using the Backup Server API. DUMPs made to a file look like “dump database mydatabase to ‘/work/mydatabase.dmp’”; DUMPs made to an API look like “dump database mydatabase to ‘myapimodule::arbitrary_api_parameters’”. Sybase provides one API module with ASE, the compression module, which takes a compression level and file name as parameters: “dump database mydatabase to ‘compress::4::/work/mydatabase.cmp.dmp’”. There are also third-party vendors, such as BMC with SQL BackTrack and various automated tape library manufacturers, that create and provide API modules. ASE DUMPs now also have a native compression parameter; there are two methods for dump compression because the compression API was developed first and is still supported.
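Side by side, the two compression methods look like this (file names are illustrative, and the exact set of accepted native compression levels depends on your ASE release, so check the dump database reference for your version):

```sql
-- Older style: compression via the compress:: API module, level 4.
dump database mydatabase to 'compress::4::/work/mydatabase.cmp.dmp'
go

-- Newer style: native compression as a WITH option on the dump itself.
dump database mydatabase to '/work/mydatabase.dmp'
  with compression = 101
go
```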
The second major backup method is the use of QUIESCE DATABASE coupled with a device copy of the underlying devices of the database, usually using a very fast copying method. QUIESCE prevents ASE from performing any writes to the devices until it is released; this allows a consistent “snapshot” of multiple devices to be copied. Activity can continue on the server in this state until the data caches fill completely with dirty buffers – although any process that has to flush log records will be blocked at that point until the log records can be written. This approach is typically used for very large databases where the time taken to dump or load the full database becomes very long with the traditional DUMP and LOAD commands. This method can also be combined with DUMP TRANSACTION.
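The QUIESCE-plus-device-copy sequence sketched above looks like this (the tag name is arbitrary, and the device copy itself happens outside ASE with whatever fast copy or storage-snapshot mechanism you have):

```sql
-- Suspend writes to the database's devices; the tag names this hold.
quiesce database snap_tag hold mydatabase for external dump
go

-- ... at this point, copy or snapshot the underlying devices
-- using the storage system's tooling (outside ASE) ...

-- Resume normal write activity.
quiesce database snap_tag release
go
```

The window between HOLD and RELEASE should be kept as short as possible, since dirty buffers accumulate in cache for the duration.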
The third method, using Replication, is used when the tolerance for data loss or down time is very low. The method maintains a hot backup system on another host by constantly shipping and applying transaction log records to it using a Replication Server. If the primary fails, the hot backup can quickly be substituted.
Adding to Bret's posting:
As pointed out, it is mandatory that you think about your objectives regarding recoverability of the system. They will probably differ between QA/test systems and your production systems.
For production systems SAP mandates recoverability not only of the SAP database but also of the transaction log.
To split the task of developing a backup and restore concept:
1) Determine which databases you want to backup
In an SAP environment there is only one database that MUST get backed up (the <DBSID> database).
It is highly recommended to also back up the master database frequently. This is of great value when the complete ASE server has been corrupted and has to be restored.
There are two more databases in an SAP system which you might want to consider for a backup: 'saptools' and 'sybmgmtdb'. saptools stores monitoring information for the DBACOCKPIT, and the sybmgmtdb database stores information about jobs you created and scheduled in ASE. However, it is not mandatory to back these up.
2) Determine how you want to backup your database
'dump database' is an online operation (the SAP system may stay online, with users connected). The disadvantage is that you cannot truncate the transaction log while a dump database is in progress, as the log records are needed to provide consistency of the database dump.
So if the database reaches a certain size, it might become impracticable to use DUMP DATABASE, particularly if there is a high transaction load in parallel. If your dump database takes 3 hours and database users generate 20 GB of transaction log data in that time, you need a transaction log of at least 20 GB (better 25 GB). The dump database command may use parallel streams to speed it up, but obviously there are limitations.
If the dump duration is a concern (large databases), you may consider the QUIESCE DATABASE command in conjunction with an external mirror-split solution. This command is semi-online: users may stay connected, but write operations are blocked until the databases are unquiesced.
Regarding the transaction log dump:
Sybase ASE databases have their log inside a database device, as a dedicated part of the database. There are no externally visible log files as in DB2 or Oracle. When you dump the transaction log, you simply save a current snapshot of the log, not a particular distinguished log file.
It is advisable to backup the transaction log frequently in order to avoid loss of data.
Log dumps can be automated by creating scheduled jobs, by defining a threshold action (see the documentation on sp_thresholdaction), or by a combination of both.
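As a sketch of the threshold approach: ASE calls a procedure named sp_thresholdaction in the database when a free-space threshold is crossed, and you supply its body. The parameter list below follows the documented convention; the dump file name is illustrative:

```sql
-- Minimal sp_thresholdaction sketch: dump the log when a threshold
-- on the log segment fires.
create procedure sp_thresholdaction
    @dbname       varchar(30),
    @segment_name varchar(30),
    @space_left   int,
    @status       int
as
    if @segment_name = 'logsegment'
        dump transaction @dbname to '/backups/tranlog.dmp'
    print 'threshold fired: %1! on segment %2!', @dbname, @segment_name
go
```

In practice you would generate a unique archive file name per dump rather than overwrite a single file; see the sp_thresholdaction documentation for complete examples.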
Yes, that's true.
Sybase ASE does not currently support incremental backups in the sense of a dump command that copies only the pages updated since the last full backup. This feature has been requested.
N.B. In the Sybase world you will sometimes hear the term 'incremental dumps', which however refers to dumps of the transaction log.
Just wondering how customers running ASE handle their backups then. If the DB size is already a couple of dozen TB (which is not that uncommon with SAP), should they back up the whole 20 TB every week (plus growth and logs in between)? What about TCO in this case?
Could you give a couple of examples of how this can be done efficiently?
While ASE does not yet have native support for incremental dumps, there is a third-party tool, SQL BackTrack by BMC (a partner of both SAP and Sybase), that does offer this functionality. SQL BackTrack uses a Backup Server API module that performs some form of timestamp or checksum processing on the stream resulting from a DUMP DATABASE command and passes on to the archive only those blocks that have changed. The SQL BackTrack application is then able to generate a full database dump file corresponding to any of the incremental dump points.