
The Migration Roadmap so far ....

1. Planning the migration
2. Designing architecture migration
3. Implementing schema migration
-> 4. Implementing data migration
5. Implementing application migration
6. Implementing operation migration
7. Validating the migration

This is the fourth installment in the best practices series for migrating to SAP Sybase ASE Server. Regardless of which database vendor you are moving from, the steps outlined above form the overall guide for our talk. We are now on the topic of implementing the data movement to the target database server. We will focus mainly on non-SAP applications.

To recap: we have our logins, the database (or databases) and the tables and indexes created on the target ASE Server. Our tables were created using the PowerDesigner tool and all datatypes and column lengths have been changed over to the ASE format automatically.


Moving data from the source to the target database server can be done by several methods. We have choices to make as to the how, what, and when of the data movement, or population, of the target server. The following sections list some of these considerations.

Designing for the data movement implementation

Big Bang Migration


Big Bang data movement (aka materialization) takes all of the data rows from every source table, applies them to the SAP Sybase ASE Server tables, and completes the process by building the indexes and updating the statistics. This is the least complicated method of ensuring all data points have made it over to the new ASE Server environment. It is useful for smaller migrations or for migration efforts that will fit comfortably into the available maintenance window, and it is the first option under consideration for any migration design effort.

Trickle Migration


Trickle migration is data movement from source to target that occurs in real time: when data is manipulated at the source, that record is manipulated at the target ASE Server. The key in this migration scheme is to recognize the event of a data point being changed at the source. Getting the record over to the target ASE Server can be done in a variety of ways (mentioned later in this document).

Combination Migration


This last migration design scheme is not necessarily a catch-all if your migration planning does not fit into either Big Bang or Trickle migration. What this design does is provide a scheme for being somewhat proactive in your migration efforts: you have a limited window, yet Big Bang seems to be the easiest approach. We will make the assumption that we want to touch the least amount of data come migration implementation day. To do this we only need to recognize data that has changed since the last time we did the Big Bang event on the future production (target) server. We need to fully understand which tables or data are static (never changed) and which are dynamic. We can isolate whole tables or schemas that are static and bring these over beforehand. That's the easy part. Dynamic tables will need to be recognized and their data brought over during the migration implementation. Look for datetime stamps or audit columns as a guide. If a table has both static and dynamic data, it might be a perfect candidate for table or data partitioning; for partitioned tables the data build during implementation would be much reduced.
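As an illustration (assuming the source tables carry an audit column such as last_modified, an assumption made only for this sketch), the dynamic rows to bring over on implementation day can be isolated with a simple predicate:

select * from <<table>> where last_modified >= '<<datetime of last Big Bang load>>'

Most unload tools can accept such a query; Microsoft's bcp, for instance, can take it through its queryout mode to unload only the delta to a flat file.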


If a table was partitioned on the source database server, it is natural to assume that partitioning will also be used at the target ASE Server, and that the partitioning will help reduce the overall migration implementation time.
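To sketch the idea (the table, columns, and boundary value below are assumptions), an ASE range-partitioned table lets the static history be loaded well ahead of time, leaving only the current partition to be refreshed during implementation:

create table orders (order_id int, order_date datetime, amount money)
partition by range (order_date)
(p_history values <= ('Dec 31 2012 11:59PM'),
 p_current values <= (MAX))

Only the rows landing in p_current would then need to be rebuilt on implementation day.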

To recap, we are presented with three migration designs to choose from. Which one to use? While there are a number of factors, the most important one in choosing a migration implementation design is to focus on the design that allows implementation to occur within the time allotted. This includes quieting the source server, extracting data from the source, transporting data to the target, loading the target, rebuilding indexes and statistics, testing, and final release. Add to this any failback routines that might need to be done on the server and client side.


These decisions are part of the initial assessment done by SAP Global Services in the Roadmap Service for Database Platform.

Designing for the data movement materialization

Once we have decided upon the steps we are going to use to bring the data over, we need to translate these high-level steps into technical details: which migration tools are we considering for the task of migration implementation? Let's work some real examples into this discussion; based on the data movement designs discussed previously, there are several tool recommendations.


Let's assume we have gone through the assessment process with SAP Sybase Global Services and decided that the Big Bang migration approach suits our company best and seems likely to fit comfortably in our available migration window. Big Bang means bringing all data points over at once from source to target. There are a couple of options for the materialization of data:

Option 1. Flat file unload and load.


Just as it sounds: data is unloaded from the source schema to a flat file (typically CSV format) and the flat file is loaded into the ASE target. After the load, one rebuilds the indexes and updates the statistics.
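As a minimal sketch of that post-load step, run through isql against the target (the index, column, and table names are assumptions):

create index customer_idx on customer (customer_id)
go
update statistics customer
go
sp_recompile customer
go

sp_recompile simply marks stored procedures that reference the table so they pick up the fresh statistics on their next execution.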


The easiest method is to write an automated script that sequentially unloads the table data to produce the flat files. For Microsoft sources we can simply use the bcp program, for example:

bcp <<database>>.<<schema>>.<<table>> out <<file.csv>> -c -t, -U<<user>> -P<<password>> -S<<server>>

and running this will produce a file in a local directory in CSV format. Transporting that file is simply a matter of where we point the command line. If we were to take Oracle as a source, we would likely use the Data Pump Export utility:

expdp <<DBA>>/<<password>>@<<server>> tables=<<table>> directory=<<dir_name>> dumpfile=<<table>>.dmp logfile=<<table>>.log

is an example of the command-line utility one might use. Note that Data Pump writes its output in Oracle's own dump format (read back with impdp) rather than a bcp-loadable flat file; a flat-file alternative is sketched a little further below. Loading data into the SAP Sybase ASE target server is done with bcp:

bcp <<database>>..<<table>> in <<table>>.csv -c -t, -U<<DBA>> -P<<password>> -S<<server>>
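
When a true CSV is needed from Oracle (since the Data Pump dump file above is not readable by bcp), a simple SQL*Plus spool is one common alternative. The following is only a sketch; the column and table names are assumptions:

sqlplus -s <<DBA>>/<<password>>@<<server>> @unload_<<table>>.sql

where unload_<<table>>.sql contains:

set pagesize 0 heading off feedback off trimspool on linesize 32767
spool <<table>>.csv
select <<col1>> || ',' || <<col2>> || ',' || <<col3>> from <<table>>;
spool off
exit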


Sequential unload and load is simple to execute and understand. Certainly one would parallelize these commands so that unloads and loads run concurrently, decreasing the Big Bang "start to finish" time. Automation of scripts and parallelization for concurrency is the key to a successful Big Bang migration.
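A minimal shell sketch of that parallelization (the table list, batch size, and connection details are assumptions):

# load several tables concurrently, one bcp per table running in the background
for T in <<table1>> <<table2>> <<table3>>
do
  bcp <<database>>..$T in $T.csv -c -t, -U<<DBA>> -P<<password>> -S<<server>> -b 10000 &
done
wait    # block until every load has finished

The same pattern applies to the unload side on the source server.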

Option 2. Using a third-party ETL tool.


Among the various tools for "Extract, Transform and Load" we are primarily interested in the "Extract" function: extract to a CSV file and use the SAP Sybase bcp utility to provide parallel loads. Bottom line, if you already have an ETL tool, use that tool. Another option is the SAP Sybase ETL tool. Sybase ETL extracts data from multiple heterogeneous data sources and loads it into one or more data targets using a comprehensive set of transformation functions. SAP Sybase ETL capabilities include:


Data extraction – provides the ability to extract data from a variety of data sources, such as Sybase ASE, Sybase IQ, Microsoft Access, Oracle, DB2, Microsoft SQL Server, flat files, XML files, and ODBC data sources.
Data transformation – provides the ability to convert, cleanse, merge, and split data streams, which can then be used to insert, update, or delete data in a data target.
Data loading – provides the ability to load data into a target database via update, insert or delete statements, or in bulk.

Option 3. Use SAP Sybase's Enterprise Connect Data Access (ECDA)


ECDA is a connectivity product by Sybase that enables direct connections from an ASE database into an Oracle database, making it possible to transfer Oracle data directly into ASE. ECDA hooks into the ASE mechanism of "proxy tables". Considerations: This option can be used when the data volume is such that the data can be transferred in the available migration window. It is unlikely to be suitable for very large data volumes. An advantage is that ECDA takes care of mapping Oracle datatypes to ASE datatypes, and that the migration can be fully performed through SQL. Using this option requires purchasing SAP Sybase's ECDA product.
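A sketch of what this looks like in Transact-SQL, assuming a remote server entry (here called ORA_DC) has already been configured for the ECDA/DirectConnect gateway; the exact location string depends on that configuration and the object names are assumptions:

create proxy_table customer_ora at "ORA_DC..<<owner>>.CUSTOMER"
go
insert into <<database>>..customer select * from customer_ora
go
update statistics customer
go

Because the transfer runs entirely in SQL, it can be filtered, batched, or joined like any other query.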

Option 4. Use SAP Sybase's Replication Server


This tool captures database transactions in Oracle or Microsoft SQL Server (to name a few) and applies them to ASE; it fits the trickle migration option we discussed earlier. The tool can also initially copy the full contents of the Oracle tables into ASE in order to initialize the data replication ("materialization of the replication system"). Using this option requires purchasing SAP Sybase's Replication Server product, as well as training on how to use it. SAP Sybase Global Services will provide the necessary implementation and skills transfer. Letting Replication Server perform the initial data copy from Oracle to ASE may not be realistic for large data volumes; in that case, the initial materialization of the replication system might be better performed with one of the other options mentioned here.
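For orientation only, the trickle setup is driven by RCL commands along these lines; this sketch assumes a Replication Agent connection for the Oracle primary (here ORA_RA) and an ASE target connection (ASE_TARGET) already exist, and the column list is purely illustrative:

create replication definition customer_repdef
with primary at ORA_RA.<<primary_db>>
with all tables named 'CUSTOMER'
(cust_id int, cust_name varchar(50))
primary key (cust_id)
go
create subscription customer_sub
for customer_repdef
with replicate at ASE_TARGET.<<target_db>>
without materialization
go

The "without materialization" clause is what lets the initial data copy be done outside Replication Server, for example with one of the flat-file options above.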

There are lots of options here for data materialization, and more often than not multiple approaches will be combined depending upon data volumes and migration timeframes. These options will be weighed, pros and cons, and an initial design developed as part of the initial assessment in the Roadmap Service offered by SAP Global Services.

In the next blog I will focus on the next step in our migration roadmap, implementing application migration: methods for moving client applications from non-SAP to the SAP Sybase environment.