
The Migration Roadmap so far ....

1. Planning the migration
2. Designing architecture migration
3. Implementing schema migration
4. Implementing data migration
5. Implementing application migration
6. Implementing operation migration
> Validating the migration

This is the last installment in this series on best practices for migrating to SAP Sybase ASE Server. In the future I'll still be talking about migrations, but with a focus on more specific migration topics. Regardless of which Database Vendor you are moving from, the steps outlined below form the overall guide for our talk today. To continue a theme, we will target our talk at non-SAP Applications.

The story so far: our logins have been created, and the database or databases, tables, indexes, data and all compiled code are on the Target ASE Server. We have converted our Clients and Applications, as well as all operations, to connect successfully to the SAP Sybase ASE Server. We are almost ready to start testing and then release this package to be implemented in production.

At first glance, validating the migration seems quite simple: we just test using our existing test scripts as we would for any Application upgrade... that should be enough, right?

Yes and no. 

By all means use your regression tests to exercise all Client functionality, but here is a list of what needs to be tested beyond a simple "if I put A in, I should get B back" scenario:

  • Tests to verify correct functioning of business processes
  • Tests to provide normal and peak load scenarios
  • Tests to verify simple operations: starting up, shutting down (planned and unplanned)
  • Tests to verify external failure scenarios
  • Tests to verify the migration implementation
  • Tests to verify the migration back out strategy

Before I comment on the individual tests and the nuances of each, let's focus on what is important: the migrated system should deliver the same or a better "rate of work". In database terms we equate "rate of work" with "transactions per second" or TPS, where a transaction is a unit of work. From a business perspective a unit of work might be to create an invoice and provide verification to the Customer. This same "Business Transaction" could span more than one Database Transaction. In an ideal world, a Database Transaction would be directly equivalent to a Business Transaction; that, at least, is the theory.

Before we start the migration we need to establish a baseline measurement of the amount of work being done. To keep this simple, there are two methods to calculate the amount (or rate) of work being done by the Database Server. One could simply write a program that performs a variety of inserts and deletes on tables and measure the duration; while this is a valid test, the issue is that it does not reflect your reality. Any test used to verify the migration must be rooted in reality, not theory.

We have two methods of determining performance. The first method is to simply record the start and end time of a user-initiated transaction. Programming hooks to record this event would need to exist prior to the migration and would be implemented by the Application team. The simplicity of this method allows the scheme to be used in a variety of ways; not only can we use it for a single transaction, but we can also identify load issues by referring to the duration of each transaction in the group. Recording date and time stamps on records might suffice for implementing this method; alternate duration calculations can be done by choosing a table that is uniquely touched at the beginning of the transaction and a table that is uniquely touched before the commit point of the transaction. I have had past Clients implement this with much success in Server-side stored procedures.
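
As an illustration only (the names below are hypothetical, not part of any SAP Sybase API), an application-side hook for this first method might look like the following sketch: the business transaction is timed by the Client and the duration written to a timing log that can later be compared between the Source and Target servers.

```python
import time
from datetime import datetime

# Hypothetical in-memory log; in practice this would be a timing table on the
# Database Server or a log file owned by the Application team.
timing_log = []

def timed_business_transaction(name, work):
    """Record the start time and duration of one user-initiated business transaction."""
    started = datetime.now()
    t0 = time.perf_counter()
    work()                                    # the actual database work (one or more DB transactions)
    duration = time.perf_counter() - t0
    timing_log.append({"transaction": name,
                       "started": started.isoformat(),
                       "duration_sec": round(duration, 3)})
    return duration

# Usage: wrap the same business transaction before and after the migration and
# compare the recorded durations.
timed_business_transaction("create_invoice", lambda: time.sleep(0.05))  # stand-in for real DB calls
print(timing_log)
```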

The second method is to use the measurement of Transactions Per Second (TPS). Various vendors have methods for calculating this: in Oracle one uses an AWR or STATSPACK report and the Load Profile Transactions entry, Microsoft SQL Server uses perfmon counters, and SAP Sybase uses sysmon reports and the TPS entry. TPS captures the overall transactions per second, which includes both relevant and non-relevant transactions. Comparing TPS on the Source Database Server to TPS on the Target Database Server might therefore be misleading, but if it is the only tool available then we should at least note the differences.
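
Whichever report is used, the arithmetic behind the TPS figure is simple: sample a committed-transaction counter at two points in time and divide the delta by the elapsed seconds. A minimal sketch (the counter values here are made up for illustration):

```python
def transactions_per_second(count_start, count_end, elapsed_seconds):
    """Compute TPS from a committed-transaction counter sampled at two points in time."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed_seconds must be positive")
    return (count_end - count_start) / elapsed_seconds

# Example: 90,000 transactions committed during a 5-minute sample window -> 300 TPS
print(transactions_per_second(1_250_000, 1_340_000, 300))
```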

The preferred approach is the first method, because it reflects production dynamics as closely as possible. For the remainder of this talk, let's assume we have this valid method of calculating our performance.

Testing scenarios


Tests to verify correct functioning of business processes


Ideally this type of testing can be done in parallel with other Testers, to introduce a measure of concurrency and reduce the testing cycles. Obviously, if we are concerned with record locking issues, then we need to adjust the testing scenarios. Many IT Departments have strict testing procedures and will use this opportunity to run reports for data verification. We assume that all parts of the Application, and therefore all database objects, are touched and therefore verified. We look to repeat this battery of tests at least twice, to ensure any performance issues are captured and contained or to verify that performance is stable over multiple runs. Either way, we do not move off this testing scenario until the results are validated as correct.
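
For the data verification reports mentioned above, one common check is to compare row counts (and, where practical, checksums) for each migrated table between the Source and Target servers. A minimal sketch, assuming a hypothetical run_query() helper that executes SQL against a given server and returns a scalar; the server and table names are illustrative only:

```python
def run_query(server, sql):
    """Hypothetical helper: execute SQL against 'server' and return a scalar result.
    A real implementation would use your database driver of choice (e.g. ODBC)."""
    raise NotImplementedError

def compare_row_counts(tables, source="SOURCE_DB_SERVER", target="TARGET_ASE_SERVER"):
    """Return any table whose row count differs between the Source and Target servers."""
    mismatches = []
    for table in tables:
        src = run_query(source, f"select count(*) from {table}")
        tgt = run_query(target, f"select count(*) from {table}")
        if src != tgt:
            mismatches.append((table, src, tgt))
    return mismatches

# Usage: compare_row_counts(["customers", "invoices", "invoice_lines"])
```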

 

Tests to provide normal and peak load scenarios


Understanding and setting up this test can be tricky. We first have to determine what constitutes a normal day and a peak day: is it the number of concurrent database connections, or is it the amount of work being done on the original server? It is important to establish not only when the peak occurs but also whether we can duplicate these peak scenarios successfully in our test environment. Factors that must be taken into consideration are:


  • duplicating the number of concurrent users and the lag times of their real-time responses. This is needed to reflect blocking issues correctly; while we could set up 100 concurrent users, having them hit the same table in the same order in our stress test is not going to reflect any production system (a simple load-driver sketch follows below).
  • duplicating the transaction rates. What we are really after is a measure of the overall transactions from start to finish. How are the transactions affected by one user versus many users?
  • duplicating the data points. Performance often depends upon data skew, and we need to know this. For example, test data for a month-end run needs to be gathered near the end of the month rather than at the beginning, where we might have far fewer data points.

The recommendation here is to provide a test environment whose data points come from immediately before the peak period, whose transactions have been traced, and whose test Clients reflect actual production. Easier said than done. There are several tools on the market to help in this endeavor, and we can determine what is right for your environment during the assessment period.
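
To make the concurrency factor above concrete, here is a minimal sketch of a load driver that runs many simulated users in parallel, each with a randomized think time so they do not hammer the same tables in lockstep. The worker body is a stand-in; in a real test it would execute the traced production transactions against the Target server.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_user(user_id, transactions_per_user=10):
    """One simulated user: run a series of transactions with randomized think time."""
    durations = []
    for _ in range(transactions_per_user):
        time.sleep(random.uniform(0.1, 0.5))    # think time, so users are not in lockstep
        t0 = time.perf_counter()
        # ... execute one traced production transaction against the Target server here ...
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for the real database call
        durations.append(time.perf_counter() - t0)
    return user_id, durations

# Run 100 concurrent simulated users and collect per-transaction durations.
with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(simulated_user, range(100)))

all_durations = [d for _, ds in results for d in ds]
print(f"transactions: {len(all_durations)}, average duration: {sum(all_durations)/len(all_durations):.3f}s")
```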

Tests to verify simple operations: starting up, shutting down (planned and unplanned)


These tests verify hardware and software compatibility. The purpose is to give the IT DBA team a measure of comfort that, upon an unplanned failure where the Server has crashed, an immediate recovery is likely. We assume first that the ASE Server will recover from this unplanned failure (depending upon the root cause of the failure!), and this test allows us to time and practice cold boots and database consistency checks.

Tests to verify external failure scenarios


Because the ASE Server is part of a larger environment, we need to test how it will react when pressured by failures in related environments. This would include Client failures, hardware failures and data shipping failure scenarios. If a Disaster Recovery environment is being built, the secondary site would also be tested and verified.

Tests to verify the migration implementation


This is assumed to be on the critical testing path. We will have duplicated these steps many times, but we need the opportunity to test, time and document each migration implementation step from start to finish, at least twice. Part of the milestone for this critical test is to ensure that the two complete test runs behave as expected, that no documentation or steps need updating (as in, they are finished), and that no step's timing deviates from the other test run by more than 5%. If we do not have this, then we repeat the test again and again until we get it right. It is important to get this right.
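
The 5% timing criterion is easy to automate. A minimal sketch, assuming each rehearsal produces a dictionary of step names to elapsed minutes (the step names and figures are illustrative only):

```python
def steps_over_tolerance(run1, run2, tolerance=0.05):
    """Return the implementation steps whose timings deviate by more than 'tolerance' between two rehearsals."""
    offenders = []
    for step, t1 in run1.items():
        t2 = run2[step]
        deviation = abs(t1 - t2) / max(t1, t2)
        if deviation > tolerance:
            offenders.append((step, t1, t2, round(deviation * 100, 1)))
    return offenders

# Timings in minutes from two rehearsal runs (illustrative figures only).
run1 = {"export data": 42.0, "load data": 95.0, "rebuild indexes": 30.0}
run2 = {"export data": 43.5, "load data": 103.0, "rebuild indexes": 30.5}
print(steps_over_tolerance(run1, run2))  # 'load data' deviates by ~7.8% and forces another rehearsal
```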

Tests to verify the migration back out strategy


Equally important is to verify what the back out strategy is and at what point the chosen back out strategy becomes obsolete. While designing and verifying a back out strategy is usually a paper-based exercise, this area needs to be defined and presented to your Management before the migration implementation.


Validating the migration environment involves unit testing, regression testing, load testing, and rehearsing the implementation and back out steps. In the Migration Project Plan these validation steps occur after the build phase and may be considered the last steps before proceeding to implement the migration in production. Perhaps the biggest unknown for many people is the load testing, and consequently how the SAP Sybase ASE Server will behave in production after we "pull the switch." We need to mitigate this risk. We reduce it by:

  • Configuring  the SAP Sybase ASE Server to allow continuous peak functioning.  This will be done in the initial build steps  and further refined as testing is done.
  • On-site active monitoring, for a period to be determined, after go-live.
  • Skills transfer to be done via a buddy system where the SAP Sybase Consultants will act as Mentors to your staff.
  • Ensuring that any test being done is an adequate reflection of production under load conditions.
  • Ensuring SAP Sybase Global Services is involved in your planning, testing and implementation phases.


While we have come to the end of this migration blog, we are by no means done with this topic. Future plans are to introduce a simple list of migration activities, to talk about various HA and DR strategies, and to expand upon the design idea I put forward in the last blog: the "black box DBA", or being a "hands off" DBA. As always, if you have any comments or thoughts on future designs, send them my way and we can certainly come up with a strategy.