
The Migration Roadmap so far ....

1. Planning the migration
2. Designing architecture migration
3. Implementing schema migration
4. Implementing data migration
5. Implementing application migration
> 6. Implementing operation migration
7. Validating the migration

This is the sixth installment in our series of best practices for migrating to the SAP Sybase ASE Server. Regardless of which Database vendor you are moving from, the steps outlined below form the overall guide for our discussion. To continue a theme, we will target our discussion mainly at non-SAP Applications.

The story so far: we have our logins created, the database or databases created, the tables and indexes created, and all data and compiled code on the Target ASE Server. We have converted our Clients and Applications to connect successfully to the SAP Sybase ASE Server.

While the Migration Roadmap steps 1 through 7 are listed in roughly chronological order, implementing operation migration (or, more precisely, its "discovery and conversion" portion) should be started right after the "designing architecture migration" step.

Operations: a definition

Database operations will be defined as special Clients or Applications that take care of the health of the Database. Essentially these are the DBA's helper tasks. Examples of operational tasks would be:

- starting and stopping a Database Server
- data imports and exports
- backing up a Database Server
- monitoring a Database Server

When we undertake a migration project, step one in this subject area is to list all existing operations, estimate the effort required to migrate each one, and deliver any new designs that need to be considered. For this discussion let us divide Operations into:

- Contingency or protecting our data
- Monitoring our data
- Third-party operation tool support
- Data shipping
- Administration

Contingency or protecting our data

Regardless of what you had set up on the original Database vendor platform, a common contingency scenario is provided by either a full or incremental Database or Transaction Log dump and the storing of these dump files away from the source Server. The easiest method for protecting data in a SAP Sybase ASE Server is to use the dump command and to allow either scheduled dumps or event-driven dumps to occur. Scheduled dumps can be done using the Windows scheduler, cron jobs, or the built-in functionality provided by the Sybase Job Scheduler (part of the base Database Server). For a typical contingency plan, one might have a full Database dump once a week followed by Transaction Log dumps once a day or once an hour. The design behind this scheme is that the full Database dump provides a "base starting point in time" from which the Transaction Log dumps are valid and can be applied if we need to do a load of the data.
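
To make this concrete, here is a minimal sketch of such a schedule. The database name mydb, the dump paths, the crontab times and the <...> placeholders are all assumptions to adapt to your environment, and the Backup Server must be running for dumps to work.

# crontab entries: full dump Sunday at 02:00, Transaction Log dump hourly
#   0 2 * * 0 /opt/scripts/dump_full.sh
#   0 * * * * /opt/scripts/dump_log.sh

# dump_full.sh - full Database dump with a date-stamped file name
isql -Usa -P<your sa password> -S<your Server> <<EOF!
dump database mydb to "/dumps/mydb.$(date +%Y%m%d).full.dmp"
go
EOF!

# dump_log.sh - Transaction Log dump, valid against the latest full dump
isql -Usa -P<your sa password> -S<your Server> <<EOF!
dump transaction mydb to "/dumps/mydb.$(date +%Y%m%d%H).log.dmp"
go
EOF!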

When you are designing a dump schedule, there is only one design question to ask: "upon a failure where I need to load the data into the Target Server, what is the maximum amount of time this operation can take?"

Let's assume you have 12 hours of downtime in your Database SLA (Service Level Agreement); that is, 12 hours in which the business has said you can take a full outage and not impact the business. Realistically, the first 1 to 3 hours of any problem are going to be spent quantifying it: what is it, what are my exposures, who is involved, what are my options, what level of business is impacted, and an answer to "is it the type of problem where I have to go to my contingency option of a Database load?" Going back to Database dumps is our last measure of contingency.

The next plan of action is going to be trying to fix the problem. If it is a hardware issue then we have no choice but to rebuild the Database Server on a new platform and reload the data from dumps. If you reach this level of rebuilding, it is going to be critical that some level of scripted automation is in play. If it is an object-level corruption, then a simple drop and recreate of the object might suffice. We would have to not only repair the object but also follow up and check the entire Database using a "dbcc" or "Database consistency check" command. The dbcc commands should be run before dumps, and also used to verify that the dump itself is valid and contains no errors. Some logic errors can stay hidden for weeks and be captured in the dump file before finally being discovered as an issue, at which point the corruption is "reloaded" back in again because the dump file was corrupted weeks earlier.
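
As an illustration, a minimal consistency-check pass run via isql might look like the following (the database name mydb is a placeholder); run it before taking dumps so that only known-good data is captured in the dump file.

dbcc checkdb(mydb)      -- checks every table and its indexes
go
dbcc checkalloc(mydb)   -- checks page and extent allocation
go
dbcc checkcatalog(mydb) -- checks system table consistency
go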

For this scenario let's assume we need to load the Database with the most recent full Database dump and then apply the Transaction Log dumps. Every Transaction Log dump must be loaded in the same order in which it was dumped. A couple of potential time-wasters can be lurking: due to space restrictions, Database dump files are often moved off the current Database Server onto tape. We would need to locate the tape, mount the tape, transfer the file over to a local file and then apply the file. And that assumes this is a concurrent and not a serial exercise. If we assume a medium-size Database of 500 GB and a weekly dump schedule, we are looking at reapplying dumps (including transfer to local storage) of 4 to 10 hours. The slowness of file transfer from network storage to local storage is a far bigger issue than most Clients realize. Next comes a sanity check by a small group of Clients and finally a release to the public. Back online again!
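
For reference, the load sequence itself is short in ASE. Here is a minimal sketch, assuming the hypothetical mydb database and locally staged, date-stamped dump files; the Transaction Log dumps are applied strictly in the order they were taken.

load database mydb from "/dumps/mydb.20130106.full.dmp"
go
-- apply every Transaction Log dump, oldest first
load transaction mydb from "/dumps/mydb.2013010700.log.dmp"
go
load transaction mydb from "/dumps/mydb.2013010701.log.dmp"
go
-- bring the database back online for the user sanity check
online database mydb
go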

Let's recap: so far we are at 3 hours (initial problem determination) + 8 hours (average reloading of dumps) + 2 hours (consistency checking) + 1 hour (user checking); that is 14 hours. It is over our 12-hour SLA even assuming it all comes together in a perfect world. We should also be aware that the Business will need to add the missing records (those that occurred during our 12-hour window) to bring the system back to present-day functioning. I am being a bit pessimistic in this case; however, I do want to raise a couple of points regarding contingency:

- People always underestimate the amount of time and effort required.
- People always plan for the best case scenario.
- Like life insurance, we can never have enough.
- As data grows, SLAs need to be adjusted or alternate strategies designed.
- Contingency strategies are never quite documented 100%.
- Contingency strategies are never rehearsed.

Clearly we have to have a dump strategy, and one should always maintain a dump strategy regardless of any other disaster recovery designs. The Migration project is perhaps the time when we can start to really understand what we have and, more importantly, document and time the steps involved in applying the dump-and-load contingency strategy. A good plan is a well-rehearsed plan. This should be considered a valid "sub-project" under the Migration project. SAP Sybase Global Services will assist in preparing a dump strategy for the SAP Sybase ASE Database Server. It is an important part of completing your migration of the Database operations. This is the ideal time to fully understand the logistics of this contingency scenario.

(As a side note - I have purposely left out any further talk about High-Availability or Disaster Recovery architectures using other SAP Sybase products. I can talk at length about those in later blogs. - jwj)


Monitoring our data

Everybody has it, and depending upon your IT shop, you either ignore it until there is a problem or you watch it constantly. Monitoring your data means many things to many people. If we pare this idea down to some common "must haves" for monitoring, a short sample list might be:

- Display a birds-eye or "at a glance" view of the Server to quickly determine GREEN (OK) | YELLOW (WATCH) | RED (BAD) conditions
- Display which SQL statements are being executed and which are the most resource-consuming
- Display the current CPU and I/O loads
- Indicate which tables are the most accessed
- Indicate which tables have locking and the types of locks
- Ability to monitor or follow a specific user connection
- Ability to determine what the ASE Server is waiting for
- Ability to store historical data for trending analysis
- etc, etc....

If you are thinking I missed some very important monitoring points here, well, it's like I said, monitoring and what's important means many things to many people.

Similar to the Client Applications blog featured previously, we are going to have two monitoring client types: third-party monitoring tools and in-house monitoring tools. For third-party monitoring tools the best and only course of action is to consult the Company that developed the application and ask whether they offer support for linking a different set of APIs to connect to the SAP Sybase Database.

With respect to your in-house monitoring programs, this will require a re-design of the monitoring done presently and an adaptation of its properties to the SAP Sybase ASE Server. The ASE Server has various monitoring schemes available: internal tables (MDA tables), auditing (logins, commands, etc.) and DBA or Systems Administrator commands.

The MDA tables are installed when you install the base ASE Server and require you to "enable monitoring" using a command. When monitoring is enabled, these internal tables gather a multitude of statistics, ranging from why the system is waiting on various activities to exposing the worst and most common tables, SQL statements and users. While the gathering of this information is straightforward, it is recommended you initiate a "sub-project" to have SAP Sybase Global Services come in and design and implement a monitoring strategy to get you started on the right foot. Personally I have developed a "black box" DBA design that not only monitors the Server but is designed to be proactive rather than reactive, working on the configuration of the Server and its objects before they become concerns. With the MDA tables and monitoring - if you can dream it, you can build it.
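
As a minimal sketch, enabling the MDA tables and pulling a first statistic might look like this. The "enable monitoring" option is the master switch; the related options and the exact MDA columns available vary by ASE version, so treat the query below as illustrative only.

-- master switch for the MDA tables (requires sa_role)
exec sp_configure "enable monitoring", 1
go
-- illustrative drill-down: the ten busiest objects by logical reads
select top 10 DBID, ObjectID, LogicalReads
from master..monOpenObjectActivity
order by LogicalReads desc
go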

For those of you who like GUI tools, SAP Sybase has a web-based Client tool that displays and allows event management on a variety of SAP Sybase products. Known as Sybase Control Center (DBA Cockpit is an off-shoot of this), this GUI tool will display up-to-the-second monitoring of the ASE Server as well as capture historical events for trending analysis. Event management is another great aspect of this tool, enabling custom code to be run when an event or trigger happens.

One monitoring program well known to Sybase DBAs is the sysmon (system monitor) report. This is the grandfather report that gives you a general idea of where to look in the ASE Server for general monitoring issues or concerns. This is the first report we run, even before we look at the MDA tables. The sysmon report views the ASE Server as a whole, as opposed to the MDA tables which allow us to drill down into the details. How SAP Sybase IT departments use the sysmon report varies, but one fact remains: all SAP Sybase IT departments run sysmon as a first look in any monitoring investigation. Typically we would run an sp_sysmon command such as:

exec sp_sysmon "00:05:00"

which runs the sysmon report to the console (where we logged in the session) for 5 minutes of active monitoring. The sysmon values are averaged, so in some cases we do not want to run sysmon for more than 5 minutes in case we lose the information on various performance peaks and valleys. It is common to run sp_sysmon in a cron job every 5 to 10 minutes for a minute at a time. Here is an example of a shell script one could use:

# cron job executed
# we are using isql, a command-line client common to all SAP Sybase clients
# change the <...> placeholders to match your environment
isql -Usa -P<your sa password> -S<your Server> -o<output file name> <<EOF!
-- start of our script, run for 1 minute
exec sp_sysmon '00:01:00'
go
EOF!
# the EOF! line above marks the end of the isql input
# look in the output file for the report

You probably realize that if it is available inside the ASE Server, SAP Sybase Global Services can implement this as well, including custom coding to zero in on specific parts of the sysmon report. Personally I have been involved in a redesign of sysmon to allow for historical captures and to turn sysmon into an event-driven report as opposed to a repeating, time-scheduled report. For example, the trigger event to run a sysmon report fired when the average CPU % busy reached 50% on the ASE Server. If you can think of it, we can design it.
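
This is not that redesign itself, but a minimal sketch of the event-driven idea: a cron-run shell script that samples CPU busy from the monEngine MDA table and only launches a full sysmon report when it crosses the 50% threshold. The column names (CPUTime, IdleCPUTime), the threshold and the file names are all assumptions to verify against your environment.

# sample the average CPU busy percentage from the MDA tables
isql -Usa -P<your sa password> -S<your Server> -b -o/tmp/cpu_busy.txt <<EOF!
set nocount on
select convert(int, 100.0 * (sum(CPUTime) - sum(IdleCPUTime)) / sum(CPUTime))
from master..monEngine
go
EOF!
BUSY=`tr -d '[:space:]' < /tmp/cpu_busy.txt`
# only run the full report when the threshold is crossed
if [ "$BUSY" -ge 50 ]; then
isql -Usa -P<your sa password> -S<your Server> -o/tmp/sysmon.out <<EOF!
exec sp_sysmon '00:05:00'
go
EOF!
fi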

Third-party operation tool support

The SAP Sybase ASE Server has a wide variety of Vendor support for monitoring tools. Essentially all tools do the same thing, with some differences. As we are focused on the "migration" of your operations to the SAP Sybase world, we want, first and foremost, to deliver the same functionality that you had before. The easiest way to do this is to mimic the monitoring you have now using the same tool but changing the Database layer support provided by the Tool Vendor. Here is a sample of the monitoring tools supported by the SAP Sybase Server:

Spotlight® on Sybase ASE by Quest Software
DB PowerStudio from Embarcadero
Performance Center from Embarcadero
Surveillance DB from Bradmark
DBXray from BMC
ASEMON from Goran Schwarz (via SourceForge)

To wrap up our monitoring talk, it is important to make a list of what monitoring you require now and in the future, regardless of the tools you use today. Costs and support are also part of the tools equation to be considered. The features you list should not be dependent on a tool. Once the list is completed and rated, the evaluation of existing tools is applied and a tools report card is derived. As part of the migration effort, a monitoring tools evaluation is considered a sub-project and can be done by SAP Sybase Global Services.

Data shipping

In Operations terms, data shipping is a process where you need to move or obtain data from one location to another. While we can argue it is part of Client Application migration (which it is, as everything outside of the Server is a client), it is also an integral part of Operations. We need designs that mimic the current Database Server's data shipping functionality and deliver beyond that. The premise is that we want to move data to and from the Target Database Server, which is the SAP Sybase ASE Server. We will make an assumption here for our discussion: in the original architecture we moved data from our source Database Server to another "like source" Database Server. Assuming we are migrating all Database Servers to be SAP Sybase Servers, for the interim we are not going to do a "big bang" but will migrate a single Database Server as a separate migration package. In this case we would re-design the original data shipping design to mimic the base functionality of moving data from source to Target but also to allow additional functionality. By solving one design we have added to many future design scenarios by using Sybase Replication Server. Continuing with this example, we would remove the former Client data shipping application and replace it with Sybase Replication Server. The Sybase Replication Server product would take records from the SAP Sybase ASE Server and copy them to the Target Database Server, regardless of which Database vendor is present. This is an extremely simple morphing of a data shipping scenario into the SAP Sybase world.
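
To give a flavor of what that replacement involves, here is a heavily simplified sketch of the Replication Server RCL commands. The server, database, table and column names are all hypothetical, and the exact syntax should be verified against your Replication Server version.

-- run against the Replication Server: describe the source table...
create replication definition orders_repdef
with primary at PRIMARY_ASE.sales
with all tables named 'orders'
(order_id int, customer varchar(30), amount money)
primary key (order_id)
go
-- ...then subscribe the Target Database Server to it
create subscription orders_sub
for orders_repdef
with replicate at TARGET_DS.sales
go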

The SAP Sybase Replication design would bring a mechanism for timed or real-time data shipping as well as a proven and integrated feedback mechanism to ensure no records are lost during this process. In planning for a migration, these types of details would be uncovered and "sub-projects" would be assigned to provide a well-understood and contained package to deliver for the eventual migration.

Administration

Administering your new Database Server will involve taking out the source SQL syntax and applying SAP Sybase commands that effect the same administration. Here are some examples:

For making general changes to your scripts, all references to $ORACLE_HOME need to be changed to $SYBASE, and any default databases listed via $ORACLE_SID will be assumed to be covered by $DSQUERY. Any command-line scripts will need to use the SAP Sybase Client isql, which is found in the $SYBASE/OCS-15_0/bin directory. For some common administrative tasks please refer to the following table:

- To move data in flat files to and from the ASE Server: use the bcp command located in $SYBASE/OCS-15_0/bin
- To load data into the database: use the bcp command located in $SYBASE/OCS-15_0/bin
- To start the database server: use the startserver command in $SYBASE/ASE-15_0/install (startserver -f RUN_$DSQUERY)
- To shut down the database server: use isql and issue the "shutdown" SQL command
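
A few hedged examples of those commands in use; the database, table, file and login names are placeholders to adapt to your environment.

# move data out of, then back into, the ASE Server as character-format flat files
$SYBASE/OCS-15_0/bin/bcp mydb..customers out customers.dat -c -Usa -P<your sa password> -S<your Server>
$SYBASE/OCS-15_0/bin/bcp mydb..customers in customers.dat -c -Usa -P<your sa password> -S<your Server>

# start the Database Server from its RUN file
$SYBASE/ASE-15_0/install/startserver -f $SYBASE/ASE-15_0/install/RUN_$DSQUERY

# shut the Database Server down cleanly via isql
isql -Usa -P<your sa password> -S<your Server> <<EOF!
shutdown
go
EOF!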



The key is to recognize where the administrative scripts are located, change the base operations to launch isql, and change the SQL syntax so the commands mimic what the current source Database Vendor code did.

Almost at the end! In the next blog I will focus on the last of our high-level migration steps, validating the migration, or "how do I know I have captured all of my source system nuances in the SAP Sybase world?"

Time for your feedback: if you have any questions or would like me to expand on any designs I have mentioned in my blogs, just drop me a note and I will be more than happy to provide you with an answer or even another blog topic focusing on your question.