


SAP HANA XS offers the possibility to schedule applications as jobs. Since database backups are triggered by executing SQL commands, an application can be built that calls these SQL commands. This blog provides an overview of how to write a simple XS application that creates a database backup, and how to schedule it as an XS job.

The first step is to install the SAP HANA database server. In this example SAP HANA SP 9 is used.

After the installation has finished successfully, the service for the XS engine is already running. Its status can be checked within SAP HANA Studio or on the command line using the command "HDB info" as user gtiadm.



To make a first call to the XS engine, use your favorite browser and type in the URL below.
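The screenshot with the URL is lost here. As a rule of thumb (an assumption about a default installation, so verify against your system), the XS engine answers on port 80&lt;instance number&gt;, so for instance 00 the URL has this shape (the host name is a placeholder):

```
http://<your_hana_host>:8000/
```

If the XS engine is up, the browser should return a response rather than a connection error.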


So, the database is successfully installed, up and running, and the XS engine is working as well.
The next step is to create an SQL user whose permissions are limited to creating backups and working with the XS engine.

In this example the name of this user is BACKUP_OPERATOR.
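The screenshots with the SQL statements are missing here; what follows is a hedged sketch of what they would contain. The password is a placeholder, and the XS admin role name is an assumption based on SPS 9 — check the documentation for your revision:

```sql
-- Sketch only: create the backup user (replace the placeholder password)
CREATE USER BACKUP_OPERATOR PASSWORD "Initial1234";

-- System privilege that allows creating backups, and not much else
GRANT BACKUP OPERATOR TO BACKUP_OPERATOR;

-- Assumed role for working with the XS Admin tool / XS Job Dashboard
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"(
    'sap.hana.xs.admin.roles::JobAdministrator', 'BACKUP_OPERATOR');
```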




To verify that the user is able to create backups, log on to the database using SAP HANA Studio and create the first backup.


So the user is able to create backups. The next step is to log on to the XS Admin tool as BACKUP_OPERATOR to make sure this is working fine too.




As everything is working fine, we can start to build the small SAP HANA XS application that executes the backup command.

We also build a test application to be able to check the final backup statement and to create a backup manually.

To keep things separated, I create a new workspace first.


After the studio has restarted choose the development perspective by clicking "Open Development".


The project data will be stored in the database GTI, so we have to add this database instance to the current workspace.




Finally we can create the actual project.






After clicking through all the popups, the project should be created successfully. It has no files yet.


For this project we need five files:

  1. DataBackup.xsjob
    It contains the job definition.
  2. DataBackup.xsjs
    This file is called during the job execution. It only forwards the call to the file DataBackup.xsjslib.
  3. DataBackup.xsjslib
    This file contains the actual command to create the database backup.
  4. index.html
    This is a simple HTML file which allows us to run the backup manually.
  5. DataBackupTest.xsjs
    This is a JavaScript file which is used by the file index.html.


The content of the files is as follows; you can use copy and paste:



DataBackup.xsjslib:

  function getSql(prefix) {
      return "BACKUP DATA USING FILE ('" + prefix + "_BATCH_JOB')";
  }

  function getTimestamp() {
      var date = new Date();
      var year = date.getFullYear();
      var month = date.getMonth() + 1;
      var day = date.getDate();
      var hour = date.getHours();
      var minute = date.getMinutes();
      var second = date.getSeconds();
      return year + "_" + month + "_" + day + "_" + hour + "_" + minute + "_" + second;
  }

  function createBackupStatement(prefix) {
      return getSql(prefix + "_" + getTimestamp());
  }

  function createBackup(prefix) {
      var sql = createBackupStatement(prefix);
      var conn = $.db.getConnection();
      var stmt = conn.prepareStatement(sql);
      stmt.execute();
      stmt.close();
      conn.close();
  }
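The statement-building part of this library does not touch the XS "$" APIs, so it can be sanity-checked outside the XS engine. Below is a standalone copy of those functions for plain JavaScript (Node.js, for example) — an illustration only, mirroring the library code above:

```javascript
// Standalone copy of the statement builder (no XS "$" APIs needed),
// so it can be tested with any JavaScript engine.
function getSql(prefix) {
    return "BACKUP DATA USING FILE ('" + prefix + "_BATCH_JOB')";
}

function getTimestamp() {
    var date = new Date();
    return date.getFullYear() + "_" + (date.getMonth() + 1) + "_" +
           date.getDate() + "_" + date.getHours() + "_" +
           date.getMinutes() + "_" + date.getSeconds();
}

function createBackupStatement(prefix) {
    return getSql(prefix + "_" + getTimestamp());
}

console.log(createBackupStatement("MyScheduledBackup"));
```

This prints a statement of the form BACKUP DATA USING FILE ('MyScheduledBackup_2014_12_19_10_30_0_BATCH_JOB'), with the timestamp part varying.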







DataBackupTest.xsjs:

  var dataBackup = $.import("DataBackup.xsjslib");
  var prefix = $.request.parameters.get("prefix");
  var action = $.request.parameters.get("action");

  if ("Create backup command" === action) {
      var sql = dataBackup.createBackupStatement(prefix);
      $.response.setBody("<p>SQL command for data backup:</p>" + sql);
  }

  if ("Create backup" === action) {
      dataBackup.createBackup(prefix);
      $.response.setBody("Backup created successfully. Check backup directory.");
  }



DataBackup.xsjs:

  function createBackup(input) {
      var dataBackup = $.import("DataBackup.xsjslib");
      dataBackup.createBackup(input.prefix);
  }






      "description": "Create data backup of database GTI",

      "action": "Automatic_Backup:DataBackup.xsjs::createBackup",

      "schedules": [ {

        "description": "Create data backup of database GTI",

                "xscron": "* * * * * * 59",

                "parameter": { "prefix": "MyScheduledBackup" }

        } ]
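For reference, the xscron expression has seven fields; as documented for SAP HANA XS they are, from left to right:

```
<year> <month> <day> <day_of_week> <hour> <minute> <second>
```

So "* * * * * * 59" fires each time the seconds value reaches 59, i.e. once per minute.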





index.html:

  <html>
    <body>
      <p>Project to create a database backup.</p>
      <form action="DataBackupTest.xsjs">
        <p>Prefix for backup:<br>
          <input name="prefix" type="text" size="30" maxlength="30"></p>
        <p><input type="submit" name="action" value="Create backup command">
          <input type="submit" name="action" value="Create backup">
          <input type="reset" value="Reset"></p>
      </form>
    </body>
  </html>



To save and activate the application, press the "forward" button. After the project has been activated successfully, we can start a test.
The application creates a backup using a custom prefix followed by the current timestamp.





The backup location is the same as it was when the backup was created within the Studio. Different paths or the usage of backint can be achieved with different backup commands.
The last step is to use this application as an XS job. For this we have to activate the scheduler inside the XS engine.



You might have to create the parameter "scheduler" if it does not exist.

The scheduling is active once the parameter is set. It is not necessary to restart the database or the XS engine.
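The parameter can also be set with plain SQL instead of the Studio's configuration editor (section and parameter name as documented for SPS 9):

```sql
ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM')
    SET ('scheduler', 'enabled') = 'true' WITH RECONFIGURE;
```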

To schedule the application as a job, we have to log on to the XS Admin.


After clicking on the field "XS Job Dashboard", the job is shown with status INACTIVE. Clicking on the job takes you to its details.



By clicking the button 'Edit Schedule' you get a pop up where you can set the job active.




After pressing the button OK, the job has the new status ACTIVE.

But the job is not scheduled yet. To schedule the job, the field 'Active' in the upper part of the browser needs to be checked. You also have to set the password for user BACKUP_OPERATOR and a start and end time.


After pressing the button "Save Job" at the bottom of your browser, the status of the job will change to "SCHEDULED" or "RUNNING".



Using the configuration from above, the backup runs at second 59 of every minute, which is very often of course.

More details about the XS job itself can be found here.

Technology is rapidly changing, and with the world moving from physical bits to digital bytes there is an ever-growing chasm between those who take analytics to the next level and those who are stuck with management reporting. This growing separation amongst businesses results in what we call an analytical divide. For those who are stuck with management reporting, new tools, strategies, skills and competencies, as well as new roles, organizational models and governance mechanisms, are required to cross this analytical divide and take your business to the next level.

Register now for the Gartner Business Intelligence & Analytics Summit 2015 to learn how to re-master your skills and how to deliver the analytic advantage that your organization needs to succeed in the digital age.

At the summit, you will learn how to:

  • Communicate the business value of BI and analytics
  • Develop the BI skills necessary for success
  • Modernize core technologies for data integration
  • Get more out of mobile, social, cloud and in-memory
  • Understand the business impact of advanced analytics
  • Craft a strategy to launch or reboot a BI initiative
  • Achieve BI innovation, learning from others who already have



Date: March 30 – April 1, 2015

Location: Las Vegas, NV

Duration: 2 Days

REGISTER HERE

Big data analytics is going mainstream, driven by growing demand from business and IT leaders who want to utilize analytics to improve both strategic and operational decisions. The old, slow and expensive process of having to collect data on one system and analyse it on another is a thing of the past: with product offerings such as SAP HANA, organizations are now able to transform their businesses with real-time big data analytics. This will not only improve decision making but also save time and money, allowing companies to gain new levels of insight into their customers' behaviour, operations, financial performance, social media trends and more.

The TDWI Solution Summit is a two-day event geared toward senior business and technology leaders who approve or recommend BI and analytical systems and solutions that run against large and complex data sets, and are planning a project in the next 12 months.

Join an exclusive, hosted gathering of experienced professionals, industry thought leaders and top solution providers for real-world tips and best practices.


Attend the TDWI Solution Summit to learn:

  • Innovative technologies and practices for enterprise big data analytics
  • How to transform your business with big data analytics
  • Real-world tips and best practices to help you harness the power of big data analytics

Date: March 15-17

Location: Savannah, Georgia

Duration: Two days


Register Here

Rob Verschoor

Exodus Migration Magic

Posted by Rob Verschoor Dec 19, 2014

When you're trying to migrate SQL stored procedures from Oracle or Microsoft SQL Server to SAP ASE, IQ or SQL Anywhere, inevitably you'll be facing the problem that the source DBMS has particular features or syntax constructs that the target database doesn't have.

That may sound like a hopeless situation. However, the SAP Exodus DBMS migration tool manages to convert SQL code in a number of cases that might seem too difficult to migrate successfully. Let's look at some selected examples below.




Sequences

Oracle PL/SQL applications often use sequences to generate unique numbers like primary keys, for example:

     /* PL/SQL syntax */

     INSERT INTO MyTable (keycol, attrib1)

     VALUES (MyKeySequence.nextval, 'Hello!');


SAP ASE does not support sequences, but Exodus migrates this to identical functionality anyway:

     /* ASE T-SQL syntax */

     INSERT INTO MyTable (keycol, attrib1)

     VALUES (dbo.sp_f_dbmtk_sequence_nextval('MIGRATED_DB','dbo.MyKeySequence'), 'Hello!')

What happens here is that the Exodus run-time component sp_f_dbmtk_sequence_nextval() delivers the same functionality as the Oracle nextval function. This component is an ASE SQL function that uses the reserve_identity() built-in function, as well as a few other tricks to effectively get a full sequence implementation in ASE (the arguments 'MIGRATED_DB','dbo.MyKeySequence' are the name of the target ASE database and the name of the ASE table posing as a sequence, respectively).
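To illustrate the general idea (a hypothetical sketch only — the table and function names are invented, and the real Exodus run-time component differs in detail), a sequence can be approximated in ASE with an identity column and the reserve_identity() built-in:

```sql
-- Hypothetical sketch of sequence emulation in ASE T-SQL
-- Table posing as a sequence; its identity column supplies the numbers
CREATE TABLE MyKeySequence (seq_val BIGINT IDENTITY)
go

-- Invented helper function (the actual Exodus component is more elaborate)
CREATE FUNCTION dbo.my_sequence_nextval ()
RETURNS BIGINT
AS
BEGIN
    -- reserve_identity() hands out the next identity value
    -- without inserting a row into the table
    RETURN reserve_identity('MyKeySequence', 1)
END
go
```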

When migrating to SQL Anywhere, there is no migration problem since SQL Anywhere supports sequences natively.

When migrating to IQ, you'd think that those same SQL Anywhere sequences could be used - but unfortunately sequences are not supported when inserting into an IQ table. Fortunately, Exodus provides a run-time component similar to the ASE example above (but slightly different) thus allowing sequences to be used directly with IQ tables.




Datatypes

Oracle PL/SQL has a number of datatypes that do not exist in some of the SAP target databases.

For example, PL/SQL has an INTERVAL datatype (2 flavours) and a date/time-with-timezone datatype. ASE and IQ do not support such datatypes natively, and SQL Anywhere supports only date/time-with-timezone.

A more challenging example is PL/SQL's ANYDATA datatype, and MS SQL Server's SQL_VARIANT datatype. Both of these are 'type-less' and can represent any datatype.

To allow these datatypes to be migrated anyway, Exodus provides its own predefined user-defined datatypes, plus a number of stored procedures and functions that operate on these datatypes. This allows most of the functionality around these datatypes to be retained after conversion. Some additional manual editing may be needed to complete fully equivalent SQL code after conversion.



Try-catch Exception Handling

Oracle PL/SQL uses a try-catch method for handling exceptions, where control transfers to a declared exception handler when an exception occurs. IQ and SQL Anywhere provide a similar exception handling method, but ASE does not. In ASE, the error status must be checked explicitly after every DML statement, and action taken depending on the error status found. Oracle exception handlers, by contrast, are entered automatically; moreover, if a statement block or a stored procedure does not handle an exception, it is passed on to the next outermost block or to the caller.

As a result, there are big differences between Oracle and ASE when it comes to exception handling. The resulting control flow, as well as the transactional aspects of implicitly rolling back in case of unhandled PL/SQL exceptions, are very different.


At first (and second) glance, it would seem impossible to convert Oracle's exception handling to ASE since the error handling approaches are so fundamentally different. Yet, that is exactly what the just-released latest version 2.5 of Exodus achieves. By generating additional SQL statements and inserting these at strategic locations in the ASE T-SQL code, we have managed to achieve nearly-identical run-time behaviour when it comes to handling exceptions.

(Please appreciate that this stuff is too complex to show as an example here)



PL/SQL Packages

A "package" is a very commonly used construct in PL/SQL.This is a named collection of stored procedures and functions, plus a number of session-specific variables that are global to the package. Even though the SAP databases do not provide a package mechanism, Exodus makes creative use of available SQL features and manages to convert PL/SQL packages to a functionally identical series of SQL objects, including the semantics around the package-global variables. Nearly all semantics are fully retained, except for some of the more rarely used features (like explicitly resetting a package).



Buffered PL/SQL output

For Sybase users, one of the first things that meets the eye in Oracle is how output is generated. In PL/SQL this is typically done by calling DBMS_OUTPUT.PUT_LINE(), which generates a single line of output text. It may be tempting to convert this to a PRINT statement in ASE (or MESSAGE...TO CLIENT in IQ/SA). However, this would, strictly speaking, not be correct, since DBMS_OUTPUT.PUT_LINE() does not actually send anything to the client. It does indeed generate a line of output, but this is stored in an internal buffer which can be read later by the client application, and displayed to the user, once control passes back to the client from the server. If the client chooses not to retrieve and display this output, it is discarded.


By default, Exodus converts such calls to an identical approach where the generated output is buffered for the session until it is read by the client. Optionally, Exodus can be configured to convert calls to DBMS_OUTPUT.PUT_LINE() to a direct PRINT statement.

(I'm leaving the Oracle commands DBMS_OUTPUT.ENABLE() and SET SERVEROUTPUT ON aside for now, but full details are included in the Exodus documentation).




%TYPE and %ROWTYPE

It is very common in PL/SQL, and indeed highly convenient, to declare variables with the datatype of a particular table column. This is done with the %TYPE qualifier:

     MyVar MyTable.MyColumn%TYPE;

Exodus will look up the datatype of MyTable.MyColumn, and substitute this in the variable declaration (so that you don't have to).


A similar mechanism allows defining a record whose fields resemble a table's columns. This uses the %ROWTYPE qualifier:

     MyVar2 MyTable%ROWTYPE;

However, none of the SAP databases supports the concept of a record data structure. Exodus will expand such record variables to individual variables with the correct name and datatype. For example, if the above table has 3 columns with different datatypes, the declaration above looks like this after conversion to ASE T-SQL:

     DECLARE @MyVar2@MyColumn BIGINT

     DECLARE @MyVar2@MyColumn2 DATE

     DECLARE @MyVar2@MyColumn3 VARCHAR(30)

This expansion is performed in all places where the original record variable occurs.

This same approach is used for variables that are declared as RECORD datatypes.



In summary...

We do not claim that Exodus fully automates the migration of all possible SQL constructs that may be found in a real-life Oracle or MS SQL Server-based application. But the Exodus DBMS Migration tool has certainly tackled a number of particularly challenging aspects that would otherwise have required large amounts of effort and manual work.


Bottom line: with Exodus, migrating custom applications to SAP databases has become very significantly less risky and complex.


Please contact ExodusHelp@sap.com to discuss migration opportunities in your organization.

Also contact this same address if you are an SAP Partner and you want to get your own copy of the Exodus migration tool.

It's been longer than I wanted before publishing this blog post, but I've been busy getting the new version of Exodus ready (that's Exodus v.2.5; I will cover that in a subsequent blog).


In the previous post in this series about the Exodus migration tool, we looked at some simple examples of how Exodus converts SQL code from a non-SAP SQL dialect to the SQL flavour of one of the SAP databases. In many cases, syntax differences can be compensated for by the Exodus migration tool.


It gets more interesting when we consider semantic differences between the SQL dialects. Let's look at two examples that are highly likely to occur in practice.


Numeric division

Consider the following PL/SQL statement, and see if you can predict what the correct result will be:

     SELECT 1/10 INTO my_var FROM DUAL;


In Oracle, the result of this division will be 0.1. But in SAP ASE, SAP IQ and SAP SQL Anywhere, the result will be 0. This is because these databases use different ways of determining the datatype of the result of an expression.

Clearly, this is a problem when migrating from Oracle since the same SQL expression can easily produce a different result. For this reason, Exodus identifies such divisions and ensures the result in the SAP databases is 0.1, like in Oracle. It does this by adding terms to the expression that force the datatype to be a number with decimals rather than an integer:


          /* ASE T-SQL syntax */

     SELECT @my_var = (1*1.0)/(1.0*10) 


Each occurrence of such a division is also flagged separately by Exodus, and it is recommended to verify the correctness of the resulting expressions, as well as the datatypes of variables such an expression is assigned to.



The empty string

Another example of something that looks innocent, but can be nasty, is the infamous 'empty string'.

In most databases, '' (=two single quotes) denotes a zero-length empty string. However, ASE is special in that the empty string is actually not empty, but evaluates to a single space. Exactly why ASE behaves this way will probably remain a mystery -- it's just always been this way.

Regardless, this is a fact that needs to be taken into consideration when migrating: also here, converted SQL statements can easily produce different results due to this semantic difference between source and target database.

When migrating to ASE, Exodus handles this migration issue by replacing all empty strings by NULL in the converted ASE SQL code. This will almost always produce the same result as in the original PL/SQL  code that specified an empty string.

For example, consider these PL/SQL statements:


     /* PL/SQL syntax */

     my_var2 := '';

     my_var3 := my_var2 || 'abcde';

     my_var4 := substr(my_var3,4,1);


Please observe that the resulting value in variable my_var4 is a one-character string 'd'.

Now, Exodus converts this code as follows to ASE T-SQL syntax:


     /* ASE T-SQL syntax */    

     SET @my_var2 = NULL  /* ORIGSQL: '' */

     SET @my_var3 = @my_var2 || 'abcde'

     SET @my_var4 = SUBSTRING(@my_var3,4,1)


These ASE T-SQL statements produce the same result as the original PL/SQL code. If the empty string had not been replaced by NULL, then the result would have been 'c' instead of 'd' (this is left as an exercise to the reader to verify).
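For those who don't want to do the exercise by hand, here is a small JavaScript sketch (illustration only) that mimics the two behaviours. Note that SQL SUBSTRING(s,4,1) is 1-based while JavaScript's substr() is 0-based, and that ASE treats NULL as an empty string in string concatenation:

```javascript
// Without the Exodus replacement: ASE evaluates '' as a single space
var my_var3_space = ' ' + 'abcde';               // " abcde"
var my_var4_space = my_var3_space.substr(3, 1);  // like SUBSTRING(...,4,1)

// With the replacement: NULL behaves as an empty string in ASE concatenation
var my_var3_null = '' + 'abcde';                 // "abcde"
var my_var4_null = my_var3_null.substr(3, 1);

console.log(my_var4_space);  // c  (the wrong answer)
console.log(my_var4_null);   // d  (matches the original PL/SQL)
```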


(BTW: IQ and SQL Anywhere have the common semantics for an empty string, i.e. equivalent to NULL, so this adjustment does not apply when migrating to those databases).


In summary, semantic differences between SQL dialects need to be taken into account when migrating existing SQL code.

The Exodus DBMS migration tool tries to help reduce the complexity of the migration by minimizing the risk of ending up with SQL code that produces different results than in the original application.

In the previous episode, we discussed conversion between SQL dialects from a more philosophical angle.

I would now like to look at some more concrete examples of what the Exodus DBMS migration tool can do.


When converting from one SQL dialect to another, the necessary first step is to compensate for syntactic differences.

So let's start with a pretty simple Oracle PL/SQL statement:


     DBMS_OUTPUT.PUT_LINE('hello world!');  

Exodus will convert this statement to the following T-SQL code in ASE:

     /* ORIGSQL: DBMS_OUTPUT.PUT_LINE('hello world!'); */

     PRINT 'hello world!'


In case we'd be migrating to SAP IQ or to SQL Anywhere, the resulting Watcom SQL would be as follows:

     /* ORIGSQL: DBMS_OUTPUT.PUT_LINE('hello world!'); */

     MESSAGE 'hello world!' TO CLIENT;

Note how the original code is added as a comment - so in case something went wrong in the conversion, it will be easier for you to figure out what the correct result should have been.

Since not everyone likes their code cluttered with these ORIGSQL comments, they can be switched off through an Exodus configuration setting. For brevity I'll be omitting these comments in the examples from now.

Let's try something more interesting. The following PL/SQL prints the number of rows in a table:




     select Count(*) into cnt from mytable;

     DBMS_OUTPUT.PUT_LINE('#rows in table: '||cnt);

This is converted to the following ASE T-SQL code:

     DECLARE @cnt INT
     DECLARE @DBMTK_TMPVAR_STRING_1 VARCHAR(100)

     SELECT @cnt = COUNT(*)
     FROM mytable

     SET @DBMTK_TMPVAR_STRING_1 = '#rows in table: '||CONVERT(VARCHAR(100),@cnt)
     PRINT @DBMTK_TMPVAR_STRING_1


There are various things worth noting here:

  • First, note how the PL/SQL SELECT-INTO statement is converted to the corresponding ASE syntax which selects a value into a variable (ASE's own SELECT-INTO has entirely different semantics).
    In case we'd convert to IQ or SQL Anywhere, the SELECT-INTO would be retained, as Watcom SQL supports this syntax too.
  • Second, the original expression in the PUT_LINE() statement is actually an expression where a string is concatenated with an integer. In ASE, it is not allowed to specify expressions as arguments to the PRINT statement, so Exodus takes the expression out and assigns it to a help variable, which is then specified as the argument to PRINT.
    (NB: for converting stored procedure calls to ASE, the same approach is used.)
  • Third, Exodus knows that the cnt variable is an integer and converts it to a string before concatenating it. Unlike PL/SQL, ASE's T-SQL does not support automatic conversion to string datatypes in such expressions, so leaving this expression unchanged would have caused a syntax error in ASE.
    And just in case you'd ask: if the concatenated variable had been declared as a string itself, Exodus would not generate the CONVERT() call.
  • Lastly, as the converted SELECT-INTO statement shows, Exodus tries to format the generated SQL code nicely and in a standardized manner. If the formatting of the input SQL code is messy, the converted result will actually look better.




Finally, the following could well occur in a PL/SQL application:


     DBMS_OUTPUT.PUT_LINE(INITCAP('hello world!'));


For those not familiar with PL/SQL, the INITCAP() built-in function capitalizes the first letter of every word in a string. Since none of the SAP databases provide such a built-in function, Exodus comes with a library of so-called "run-time components" which are SQL stored procedures or SQL functions that implement such functionality.

Exodus converts the above statement to ASE as follows:


     SET @DBMTK_TMPVAR_STRING_1 = dbo.sp_f_dbmtk_capitalize_word('hello world!')



The SQL function sp_f_dbmtk_capitalize_word() gives the same result as PL/SQL's INITCAP().

When executed, the result is as expected:


     Hello World!
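As an aside, INITCAP's behaviour is easy to sketch; the snippet below is a JavaScript illustration only (the actual Exodus run-time component is a SQL function, as described above):

```javascript
// Sketch of Oracle INITCAP() semantics: first letter of each word
// uppercased, the rest lowercased; words are runs of alphanumerics
function initcap(s) {
    return s.replace(/[A-Za-z0-9]+/g, function (word) {
        return word.charAt(0).toUpperCase() + word.slice(1).toLowerCase();
    });
}

console.log(initcap('hello world!'));  // Hello World!
```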

Obviously, much of this is pretty basic stuff. In fact, Exodus goes far beyond these elementary conversion requirements.

More examples in my next blog. Watch this space!

Rob Verschoor

Sixty Shades of SQL

Posted by Rob Verschoor Sep 4, 2014

Converting SQL code from one SQL dialect to another, like Oracle's PL/SQL to SAP ASE's Transact-SQL probably sounds like a boring, nerdy whaddever to most people. But since you, dear reader, are reading this blog, you are likely not "most people" but a member of the tech crowd. Admittedly though, even in those circles SQL code conversion may not seem terribly exciting to many folks, since databases, and especially geeky topics like SQL syntax and semantics, are kinda specialized, developer-oriented stuff. So why bother? (keep reading for the answer)


Personally, I find SQL conversion to be one of the more exciting things that life has to offer (we can discuss the other ones over a drink at the bar). That I've been working with SQL since 1989 may have something to do with that. And yes, I should get out more.


Even for SQL-infected readers, converting a PL/SQL statement to its equivalent in Transact-SQL or Watcom-SQL may not sound like a terribly complex, or even interesting, problem. After all, all those SQLs are pretty similar, right? And there is even an ANSI SQL standard all SQL dialects pledge adherence to.



Right. What could possibly go wrong?



Back down here in reality, converting between SQL dialects actually appears to be surprisingly hard -- as anyone who has tried this will know.


So what about that supposedly universal ANSI SQL standard?

Indeed, most SQL dialects claim to be ANSI SQL-compliant. But when you look closer, those claims often boil down to something more narrow like "ANSI SQL-92 entry-level" compliance.

To understand what that means, consider that the ANSI SQL-92 standard dates back to -could you guess?- 1992. In those days the world of SQL was much simpler than it is today. For example, stored procedures were not even defined by ANSI until the SQL:1999 standard appeared (to the ANSI standard fanatics who disagree: yes, you're formally correct, but SQL-92/PSM wasn't there until years later and is generally considered to be part of SQL:1999; and it's not part of SQL-92 entry level anyway).

Despite all that ANSI compliance, in practice the SQL implementations by most database vendors are chock-full with vendor-specific "extensions" to the ANSI standard - which is a polite way of stating that aspects of a SQL feature are not ANSI SQL-compliant at all. And thus, also likely incompatible with other SQL dialects.


Not fully complying with the ANSI SQL standard may sound like a Bad Thing. But let's keep things in perspective: standards will always lag the natural progression of a technology.

It starts when some vendors pioneer a concept, like SAP Sybase ASE did with stored procedures and triggers in the 1980's. Other vendors then also adopt those concepts but in the absence of a standard, everyone implements their own variant. Years later, a standards body like ANSI then tries to define a "standard" even though the existing products have already done their own thing. So it is pretty much unavoidable there will always be discrepancies between the standard and the actual products. That's life.


Bottom line: while there is indeed a lot of similarity across SQL dialects, the number of aspects that are not ANSI-compliant typically far exceeds the parts that do conform to the ANSI SQL standard.

It's pretty safe to say that no SQL dialect is fully ANSI SQL-compliant or fully implements the ANSI SQL standard (the one exception perhaps being "Ocelot SQL", which claimed full implementation of a particular ANSI SQL standard at some point; but then, Ocelot didn't quite win the RDBMS race, so you shouldn't feel bad about not knowing them).

And BTW, which ANSI SQL standard are we talking about anyway? We haven't even discussed the more recent incarnations like ANSI SQL:2003, SQL:2008 or SQL:2011 (I know you've heard it before but indeed: the good thing about standards is that there are so many of them).



If you're still reading this article at this point, it must mean that you don't find this a boring topic after all (if you were attracted by the blog title and you're still hoping for some E.L.James-style raunchy prose, well, just keep reading).



Why should we bother discussing cross-dialect SQL conversion in the first place?

As I pointed out in earlier blog posts, SAP wants to enable customers to migrate their custom applications from non-SAP databases to a SAP DBMS. One of the biggest challenges in such migrations is converting the SQL code, especially the server-side SQL in stored procedures/functions etc.: such code can contain many complexities that may not always be easy to find. Consequently, converting server-side SQL code is an area where migration projects often overrun or fail.


As it happens, converting stored procedures is one of the main functions of SAP's Exodus DBMS migration tool. Not only will Exodus quickly analyze all server-side SQL code and report precisely which features are being used; it will also highlight those features which do not convert easily to the target SAP database of choice. This allows for running a quick complexity assessment before starting the migration project.


As for all those vendor-specific extensions to the ANSI SQL standard, Exodus takes these into account as much as possible. When the difference between the source and target SQL dialect is merely different syntax, then Exodus can often compensate by generating the syntax as required by the target SQL dialect.

It gets more difficult when there is a difference in semantics (i.e. functionality of a SQL feature). In such cases, Exodus may also be able to compensate, but human intervention may also be required. In case Exodus spots any constructs which it cannot convert automatically, it will alert the user to the construct in question, and often suggests a possible solution direction.

In my next blog post we will look at some actual examples and how Exodus handles these.


Incidentally, database vendors usually don't see their non-ANSI-compliance as a problem. On the contrary: if it makes it hard for customers to migrate away to a competitor's database, then that is good for the vendor's future business prospects. Customers often see this differently however, and words like "lock-in", "stranglehold" and "help!" often appear in related conversations.


With the Exodus DBMS migration tool, customers no longer need to feel handcuffed to a particular database vendor just because migrating to a SAP database seems too hard to even consider. So if the relationship with your DBMS vendor has turned into a painful affair, contact ExodusHelp@sap.com to discuss how SAP can provide a fresh perspective.




So, you may wonder, what would an Exodus engagement be like? Well, it may go something like this...




She had been waiting for more than an hour. Outside, it was already getting dark.

The chair had become uncomfortable by now, but the instructions had been clear. She had to wait.


Suddenly, the door opened.


A middle-aged woman stepped into the waiting room.

"She must be his secretary", it flashed through her mind.

The secretary looked around, but there was nobody else in the room.

She could only be coming for her.


"Miss Outer Join?"


When she heard her name, a shiver ran down her spine.

She opened her mouth to answer, but her breath faltered with excitement.

For a brief moment she closed her eyes.

This was what she had been waiting for, she had prepared herself for.

She took a breath and opened her eyes.


"People call me O.J."


The secretary looked at her slightly longer than would have been necessary.

Her tone was more determined.


"As you wish.

O.J., please come in.

Mr. Exodus will see you now."

When discussing Exodus, SAP's DBMS migration tool for migrating custom (non-SAP) applications, invariably this question is asked:


      "where can I download Exodus?"


The answer may be somewhat disappointing: you cannot download Exodus anywhere. Even SAP employees can't.

Not surprisingly, the next question always is: "so how do I get a copy?"


First, it should be noted that Exodus is not an SAP product. Instead, it is a tool. One of the implications is that Exodus cannot be purchased; instead SAP makes it available at no cost.

Now, for SAP-corporate reasons, Exodus is only available to two specific target groups, namely (a) SAP employees and (b) SAP Partner companies who have joined the Exodus Partner Program.

For all users of Exodus, each copy of the migration tool is personalized and registered to the name of the individual or organization. SAP will generate such a personalized copy for those entitled to use Exodus, and make it available to the user whenever requested or required. This is why Exodus cannot be downloaded from a central location.


SAP Partners (i.e. members of the SAP Partner Edge program) can join the Exodus Partner Program. This is a no-cost program, but it does require some paperwork. For example, a license agreement for Exodus needs to be signed. Once the formalities are completed, the Partner receives its copy of Exodus, which can then be used by all employees of the partner organization for commercial opportunities with their customers or prospects.

One thing that the Partner cannot do, is charge the customer for Exodus specifically; however, the Partner can charge for their services, which can use Exodus (similarly, SAP itself does not charge for Exodus alone).


If you are an SAP Partner and you are interested in joining the Exodus partner program, contact your SAP Partner Manager (if you are not sure who that is, contact ExodusHelp@sap.com).


As may be clear from the above, currently Exodus is not available to customers or to the wider community (unless the customer also happens to be an SAP Partner). Customers who are interested in performing migrations should therefore work with either SAP or with an Exodus-equipped SAP Partner in order to benefit from the Exodus tool. In many cases, such customers would probably do that anyway since, as we've seen in earlier blog posts, custom-app migrations can be challenging and may require specific expertise.


If a customer is unable to work with an SAP Partner, please contact ExodusHelp@sap.com. At SAP we will try to find a way to ensure that such customers can still get the benefits of the Exodus migration tool.

(for an overview of all Exodus-related blog posts, see here)

I would like to share just a simple tip to get a list with the biggest tables in SAP.


Here it is:


Go to tcode DB02OLD, click on "Detailed Analysis", and fill the fields as follows: "Object type: TABLE" and "Size / kbyte: 1000000".






Recently I described the SAP Exodus DBMS migration tool as a new offering by SAP to help migrate custom (non-SAP) applications from a non-SAP to an SAP database.


In this blog post, let's take a closer look at one of the most important Exodus features, namely: the pre-migration complexity assessment.


In any migration project, the first question that needs to be answered is: how complex will this migration be?

More precisely, you'll need to understand which technical difficulties should be anticipated. For example, are there any DBMS features used in the application-to-be-migrated which do not have a direct equivalent in the target DBMS?


It is pretty clear that this information is needed, but obtaining it is easier said than done: how can you determine exactly which SQL constructs are used in the application's stored procedures? There may be hundreds of these, comprising tens of thousands of lines of SQL code.


In fact, many discussions about possible migration opportunities are terminated early since there are simply too many unknowns, making it too risky to proceed. Indeed, when migration projects fail or overrun the planned schedule, this is often caused by unexpected complexities being discovered too late in the project. Had these complexities been identified earlier, then a different migration strategy might have been chosen, or it might have been decided it was best not to start the migration project at all.


Exodus comes to the rescue here, with its feature for performing a pre-migration complexity assessment.

This works as follows: you point Exodus at the DBMS server hosting the application to be migrated, and Exodus will discover what's in that DBMS and provide a detailed report on the SQL constructs found there. This is divided into two parts: one assessment is about the database schema, the other about the server-side SQL code found in stored procedures, functions, etc.


In the output of the pre-migration complexity assessment, Exodus will highlight SQL aspects that cannot be fully migrated automatically to the selected target DBMS and therefore represent additional migration complexity. For example, if an Oracle-based application uses before-row triggers as well as after-statement triggers, and we're interested in migrating to SAP ASE, Exodus will highlight the fact that ASE does not support before-row triggers (but only after-statement triggers), meaning that migrating those before-row-triggers needs additional manual work (for example, the functionality in the before-row triggers will need to be worked into the ASE-supported after-statement triggers, or implemented elsewhere in the migrated application).

In contrast, when migrating to SAP SQL Anywhere, Exodus would not highlight any issues here since SQL Anywhere supports both of these trigger types. But when migrating to SAP IQ, which does not support triggers on IQ tables at all, Exodus will report both trigger types as cases where migration complexities should be expected.


Based on the results of the Exodus pre-migration complexity assessment, we're in a much better position to assess the areas of complexity to be expected, and consequently, the level of risk of a particular migration.


But Exodus goes one step further. It also tries to quantify the amount of effort required (in person-days) to migrate the application to the target DBMS. It does this by defining a "migration cost" (as a unit of time) for each particular SQL construct found, and multiplying this cost by the number of cases found for that SQL construct. This effort estimate is about migrating to functionally equivalent SQL code, but does not include things such as testing and performance tuning (more on that in later blog posts).

For example, let's assume our application contains 7 before-row triggers and 3 after-statement triggers.

If we're migrating to ASE, Exodus will estimate 1 hour of manual work for migrating each before-row trigger to ASE (so 7*1 = 7 hours); if we're migrating to IQ, it would estimate 2 hours per trigger, irrespective of the trigger type (so 10*2 = 20 hours). Note that Exodus uses a higher migration cost for migrating triggers to IQ than to ASE, reflecting that migrating trigger functionality to IQ is more difficult since IQ does not support any triggers on IQ tables.

Also, when migrating to SQL Anywhere, Exodus will not estimate additional time since SQL Anywhere supports both trigger types.

Lastly, Exodus budgets 15 minutes for every trigger, irrespective of its type or target DBMS. This is to reflect the fact that some amount of manual migration work (like functional verification, syntax changes or debugging) is likely to be needed anyway.


Now, the question has to be: How reliable are the migration effort estimates by Exodus?

It is important to point out that these effort estimates should be seen as an order-of-magnitude indicator, and not as a precise statement of work that can be put directly into a contract.

For example, when Exodus estimates that the functional migration of a particular application will take 40 days, that should primarily be interpreted as meaning: this won't be possible to complete in two weeks -- but it's also not likely to take half a year.

Obviously, if sufficient time can be spent on analyzing the Exodus estimates and the application's SQL code in greater detail, a more realistic effort estimate may be reached.


In practice however, a large factor in a migration project will be the SQL skills and migration experience of the team performing the migration. An automated tool like Exodus can handle a large part of the work, but ultimately every migration remains a manual effort where humans need to put it all together and address those parts that Exodus cannot handle automatically. The quality and experience of that team may have a big impact on the actual amount of effort that needs to be spent. To reflect this, users of Exodus can redefine the migration cost definitions as they think is best.


(for an overview of all Exodus-related blog posts, see here)

Here's today's quiz question: Have you heard of a company called Delphix?

If your first association is about ancient Greek temple priestesses, then this blog post will be useful for you, so keep reading.

(in my case, my first thought was actually about Borland's Pascal programming suite - and I guess that just says something about me. But I digress).


Delphix, to keep you guessing no longer, is a company from California that makes nifty software for database storage virtualization. The reason for mentioning them here is that they have just released support for SAP ASE (y'know... the database formerly known as Sybase ASE first and SAP Sybase ASE later. But I digress).


The main attraction of Delphix is that it helps reduce database storage costs.

Think of the scenario where many copies of a particular database are hanging around in your application development department. For example, your development teams all have their own copies of a particular production database. And additional copies of that database are also present in the various test environments. At the end of the day, there could easily be tens of database copies around, which, ultimately, are largely identical since they are all based on the same original.

Consequently, there is a lot of duplicate storage of identical disk blocks going on. Simply put, the Delphix product optimizes storage space by avoiding storing identical blocks twice. How does this work?


Basically, Delphix keeps a single copy of a particular database in its own storage server. Copies of that storage (which will look like ASE database device files from an ASE server perspective) can be provisioned to multiple 'users' (e.g. ASE servers that need to have a copy of that database). If the original database is 1 TB in size, and 35 copies are in use around the various departments, Delphix will -in essence- only store that 1 TB, despite the fact that the users see 35 copies of that 1 TB database.


Now, the important thing here is that these ASE databases are read/write: there is no functionality restriction. When a modification is made in one of those databases, Delphix will ingest the modification into its centralized/virtualized copy, in the most storage-efficient manner. All of this is fully functionally transparent to the end users who experience nothing special: they have their own copy of the database and they can do with it whatever they like.

In the meantime, you're using significantly less storage space than if all 35 copies existed independently.
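Delphix's implementation is proprietary, but the core idea of storing identical blocks only once can be pictured with a content-addressed block store plus copy-on-write clones. This is a toy model, not how Delphix actually works:

```python
import hashlib

class BlockStore:
    """Toy content-addressed store: each unique block is kept exactly once,
    keyed by its hash; a 'database' is just a list of block references."""
    def __init__(self):
        self.blocks = {}  # hash -> block bytes (stored once)
        self.dbs = {}     # db name -> ordered list of block hashes

    def ingest(self, name, blocks):
        self.dbs[name] = [self._put(b) for b in blocks]

    def clone(self, src, dst):
        # Provisioning a copy costs almost nothing: only the list of
        # references is duplicated, never the blocks themselves.
        self.dbs[dst] = list(self.dbs[src])

    def write(self, name, index, data):
        # Copy-on-write: a modified block becomes a new unique block;
        # all other clones keep referencing the old one.
        self.dbs[name][index] = self._put(data)

    def _put(self, data):
        key = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(key, data)
        return key

    def unique_blocks(self):
        return len(self.blocks)

store = BlockStore()
store.ingest("prod", [b"block0", b"block1", b"block2"])
for i in range(35):
    store.clone("prod", f"dev{i}")
print(store.unique_blocks())  # still 3: the 35 clones share every block
store.write("dev0", 1, b"patched")
print(store.unique_blocks())  # 4: only the changed block costs extra space
```

The point of the sketch is the asymmetry: cloning is nearly free, and each write only adds the delta, which is why dozens of read/write copies need little more storage than the original.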


Some additional points worth noting:

  • An ASE server doesn't see any difference between a 'regular' ASE database that is created in the classic way on local storage, and a Delphix-based ASE database where the ASE device files are actually served up by the Delphix engine. To the ASE server, it's just accessing database device files, regardless of where they originate from. This means there can be an arbitrary number of Delphix-based databases in an ASE server, and these co-exist seamlessly with 'normal' ASE databases.
  • Given how Delphix stores its data, the overhead for provisioning an additional copy of a database is very low. So there is little reason for each developer NOT to have their own copy of the development database.
  • Delphix can keep its virtualized ASE database in sync with the original ASE production database (by detecting database dumps being made, and then gobbling them up, thus updating the Delphix copy). This makes it easy to refresh the copies that were provisioned out to developers or testers.


Delphix already supported certain other database brands (whose names shall remain unmentioned but which, I just realized, are located in similarly named cities that can be described as the following regular expression: /Red.o.d/. But I digress again).

Anyway, over the past year I had the pleasure of providing technical assistance to Delphix during their effort to develop support for SAP ASE. This was released in July 2014 in Delphix version 4.1.

I quite like the concept of how Delphix virtualizes an actual database. I have to say I am impressed by the way Delphix have designed and engineered their product -- there is some above-average complex stuff going on in there (had you asked me earlier if this approach was a good idea, I would probably have dismissed it as too complex and too little gain. I guess I would have been wrong). Yet, Delphix looks simple and easy to use from the outside, and I guess that is proof of a well-designed product.


There is a lot more to say about Delphix -- more, in fact, than I will claim to understand. Fortunately, the Delphix web site has all the information you'd want: http://docs.delphix.com/display/DOCS41/Delphix+Engine+4.1+Documentation.

Happy reading.

This is the first post in a series of blogs on the topic of migration custom applications to SAP databases.


Update: these additional blog posts were published in the meantime:


First, some history.


When Sybase was acquired by SAP in 2010, the general perception about the long-term viability of the Sybase database products changed quite dramatically.

Previously, discussions with customers were often centered around justifying why investing in Sybase technology was not a risky proposition with a doubtful future - often inspired by active spreading of FUD by Sybase competitors.

But ever since Sybase became part of SAP, those perceptions have pretty much disappeared as it was now clear that Sybase's future was not in doubt.

At the same time, a new element started to appear in those customer conversations. Namely, we started receiving inquiries whether SAP could assist in migrating some applications from a non-SAP database to SAP Sybase ASE.

Such requests had upsides as well as downsides. Upsides, because it underlined how ASE was increasingly being seen as a viable alternative to certain other well-known DBMS brands (BTW, at Sybase, we knew that all along). At the same time however, SAP did not actually provide migration tools to support such migrations. Unfortunately, what this meant was that the best help SAP/Sybase could offer to such customers was, basically, to wish them good luck.


Time passed...


But we did not sit idle...


Since we were unable to find existing migration tools that met SAP's requirements, we decided to build our own.

Therefore, let me now please introduce (drumroll):


    Exodus, the SAP database migration tool for migrating custom applications to SAP databases.


This is great news! Today, with Exodus, SAP is in a position to provide substantially better support to customers interested in database migrations.


In a nutshell, Exodus supports migration of customer applications between the following databases:

  • Supported source databases: Oracle (v.9 and later) and Microsoft SQL Server (v.2000 and later)
  • Supported target databases: SAP ASE, SAP IQ and SAP SQL Anywhere


There is much to say about this topic (and indeed I will -- keep watching this space). But first, here are some key points I need to get straight right away. Experience has shown that, otherwise, confusion may quickly take hold.


Key point #1: Exodus is about migrating 'custom applications'

With Exodus, SAP aims at migration of custom applications, which means: non-SAP applications. For SAP apps such as Business Suite, well-established migration practices are already available and Exodus would not contribute much.

  • A 'custom application' is typically a one-off application operated by a particular customer. Such a custom app was usually either built by a customer itself, or built specifically for the customer by a third party.
  • Custom applications are often transaction-oriented, meaning their basic function is to retrieve, insert or modify individual data rows. To contrast, consider analytics-oriented applications which are typically read-only (apart from bulk-loading the data), and access large numbers of data rows in a single operation.
  • Exodus aims primarily at migration of custom OLTP applications that are based on server-side SQL, commonly referred to as "stored procedures".
  • Applications which the customer purchased or licensed from a software vendor (like SAP, but I am sure you can think of others) are not 'custom applications' in the sense as meant above; if a customer wants to migrate such an application to an SAP database, the software vendor itself is typically driving this. Exodus does not apply in such cases (although we are certainly interested to work directly with those software vendors to help them port their application to a SAP/Sybase database).


Key point #2: Exodus is free - though not freely available

The SAP Exodus migration tool is not charged for. At the same time, you will search in vain for a location where Exodus can be downloaded, since it cannot.

The Exodus migration tool is available to SAP employees as well as to qualified SAP Partners participating in the Exodus Partner Program. The partner program offers SAP partners access to Exodus at no cost, but does require some administrative steps (more about that in a later blog).

For customers interested in migration, this means they should engage with SAP directly, or with an SAP Partner that can use Exodus.

Please be assured that there is no evil scheme behind the decision not to make Exodus freely available to customers, but some rather more practical reasons. Regardless, we will make an effort to ensure that customers get the benefits of Exodus.


Key point #3: Exodus is not an SAP product

Exodus is a tool, not a product. That may sound like splitting hairs, but it is actually an important distinction. For example, did I mention Exodus is not charged for? Also, support for projects using Exodus is not provided through the regular SAP product support channels, but directly by the Migration Solutions team in SAP's Database & Technology group.

Another aspect is that migration tools are typically unable to provide 100% automatic and functionally correct migration results. Exactly how well Exodus performs in practice really depends on the application being migrated -- and in the world of custom applications, no two applications are the same.


Key point #4: Migrations can be tricky

I'd be lying if I said that with Exodus, you can now migrate every custom application with one click of the mouse on Friday afternoon, and then switch production to the migrated system by Monday morning.

Folks who got their hands dirty in database migrations know that these can be challenging on multiple levels. While the Exodus tool will provide support in crucial areas such as schema migration and automatic conversion between SQL dialects, there will probably be some bits and pieces that need to be migrated manually. The good news is that Exodus helps to identify where those bits and pieces are, and in many cases also suggests possible solutions.



Let me stop here for now.


Bottom line:

With Exodus, SAP is serious about helping customers migrate their custom applications from non-SAP databases to SAP.


More information coming soon -- watch this space!

(if you have questions that cannot wait until the next blog post, contact your local SAP representative or ExodusHelp@sap.com)



Rob Verschoor

Global DBMS Migration Lead

Migration Solutions, SAP Database & Technology


Date: January 29, 2014

Time: 1:00 p.m. EST/10:00 a.m. PST


Featured Speakers:

Paul Medaille

Director, Solutions Management, Enterprise Information Management, SAP


Ina Felsheim

Director, Solutions Management, Enterprise Information Management, SAP


Many companies have invested heavily in mission-critical business software initiatives. Yet too often these worthy but expensive initiatives fail to deliver anticipated benefits because of poor data. What’s needed is an integrated software solution that improves collaboration between data analysts and data stewards with the tools for understanding and analyzing the trustworthiness of enterprise information.


SAP Information Steward software provides continuous insight into the quality of your data, giving you the power to improve the effectiveness of your operational, analytical, and governance initiatives.


Join us on Wednesday, January 29, 2014, for an insightful Webinar, Gain Data Quality Insights with SAP Information Steward, to learn how this SAP solution can help your organization:

  • Analyze, monitor, and report on the quality of your data
  • Adopt one solution for data stewardship
  • Create a collaborative environment for your IT and business users
  • Turn data quality into a competitive advantage


Don't miss this webinar event!




TECHCAST  — SAP Sybase Replication Server: Future and Roadmap

December 11, 2013  1pm EDT / 10am PDT

Register now!


Join our next ISUG-TECHcast as guest Speaker Chris Brown joins us for an informative discussion about the future and roadmap of SAP Sybase Replication Server.


We’ll discuss how the latest version of SAP Sybase Replication Server replicates transactional data for non-SAP applications in real time directly into SAP HANA – without slowing or disrupting the systems that are running the business – to create a real-time analytics solution. And you’ll learn about new high availability and disaster recovery functionality for customers running SAP Business Suite on SAP Sybase ASE to ensure continuous availability during planned and unplanned downtime.


Learn about these new capabilities and functionalities, including:

  • Storage optimization
  • Operational scalability capabilities
  • Performance enhancements for handling very large data volumes
  • Latency monitoring and alerting capabilities as part of an essential, value-added solution

Going to extremes

Posted by Emma Capron Nov 4, 2013

In today's ultracompetitive business environments, it is critical to have the ability to collect, store, manage, protect, query, and generate reports from larger and larger volumes of complex data. From retail stores to hospital emergency rooms, immediate access to accurate, relevant data is a basic business requirement.


To create and sustain a competitive advantage in the face of exponential data growth and increasing customer expectations for superior service, your data management system must deliver extremely high performance, unconstrained scalability, rich functionality, bulletproof security, and cost-effectiveness.


For more than 30,000 customers around the world, the solution is SAP Sybase Adaptive Server Enterprise (ASE).1


For example, Globe Telecom in the Philippines turned their business upside down with the help of SAP Sybase ASE. They replaced industry-standard calling cards with air-loading, enabling them to build a data management infrastructure that lets them market aggressively and closely manage costs. Rodell Garcia, CIO, explains why: “In terms of ROI, our SAP Sybase ASE system is a critical enabler of revenue generation; the payback period is certainly very short.”2


Meanwhile, electronic medical records specialist, MIQS, used SAP Sybase ASE to present doctors with the critical combination of information they need when treating kidney dialysis patients. The result was a 40% reduction in mortality rates.2


Finally, India is famous for its huge railway network, but also for its congested stations, as passengers queue to buy tickets for trains departing the same day. Working with SAP technology, Indian Railways and the Center for Railway Information Systems built a ticketing system so passengers without reservations can buy tickets at any station, at any time, from dedicated terminals and automatic machines. The result is no more long queues – and lots of valuable data for the company to analyze and act on.2


You can learn more at www.sap.com/realtime_data/transactional_data. Or join the conversation #redefinedata


1Video: Top 5 Reasons to Choose SAP Sybase Adaptive Server Enterprise

2Customer ebook: Going to Extremes

