Mohamed Rafi

graphdb for business

Posted by Mohamed Rafi Feb 2, 2016

The world we live in is defined not just by the things around us but by how they are connected and influence each other. The same goes for data. The business value of the data we have at hand is defined not just by the numbers that represent it but by the way those numbers are connected to each other and influence each other. Traditional database models like the RDBMS are very stable and have stood the test of time. But like all human inventions they have their inherent defects. The way data is stored in an RDBMS is not really modeled after the real world, meaning there is a limit to the amount of complexity these systems can handle. Beyond that limit it becomes really hard, and a lot of energy is spent just running the system rather than solving the problem at hand. There are many new solutions out there that promise to solve this problem, and graphdb is one of the most promising.


What's graphdb?

To put it in simple words, the data in a graph database is stored as nodes and the relationships between them rather than as tables, which is a more natural way of representing data. To represent complex relations you don't need many tables connected in complex ways, which improves performance. Such systems are called GDBMSs (graph database management systems).
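As a rough illustration (a hypothetical sketch in plain JavaScript, not tied to any particular GDBMS), a "customer purchased product" fact can be modelled directly as two nodes and one typed relationship, and queried by simply following edges:

```javascript
// Minimal sketch of a property graph: nodes plus typed relationships.
// All names and values here are invented for illustration only.
const nodes = [
  { id: 1, label: "Customer", name: "Acme Ltd" },
  { id: 2, label: "Product", name: "Widget" },
];

const relationships = [
  { from: 1, to: 2, type: "PURCHASED", quantity: 5 },
];

// Traversal is just following edges - no join tables needed.
function purchasesOf(customerId) {
  return relationships
    .filter((r) => r.from === customerId && r.type === "PURCHASED")
    .map((r) => nodes.find((n) => n.id === r.to).name);
}
```

Here `purchasesOf(1)` walks the `PURCHASED` edges of customer 1 and returns `["Widget"]`; in an RDBMS the same query would typically need a join across a link table.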


Why would you want to have your business data in a graphdb?

An RDBMS is generally good for transactional data, since in typical ERP systems a transaction deals with only a particular part of the data and mostly doesn't care about complex relations. Business intelligence, on the other hand, is all about finding existing relations and predicting new ones, which makes a graphdb arguably the best way to store data in a business warehouse.


Can an existing RDBMS be used for graphdb storage?

There are many solutions on the market that implement a graphdb on top of an existing RDBMS, reducing the need for extra investment in such systems: they simply provide an abstraction layer over the existing RDBMS so that the end user gets the best of both worlds, the graphdb and the RDBMS. Systems like HANA, which boasts in-memory processing capability, could be used much more efficiently if such an abstraction layer were implemented on top, so that users who need graph features can use the abstraction layer while others can choose to ignore it.

Completely new graphdb systems have some problems that hybrid systems avoid, so hybrid systems are well placed to succeed; this may be the path that business intelligence pursues in the future to drive business forward.

Cloudera Navigator was released almost two years ago under the name Cloudera Navigator 1.0 to manage and handle large volumes of data for enterprises and organizations. A year later, Cloudera Navigator 2.0 was released with significant improvements and technology advancements. The major highlights of this version were metadata management and Hadoop lineage.

Leading companies are also offering Hadoop consulting services to explain its benefits to users. Believe it or not, today more than two hundred customers are using Cloudera Navigator to maximize trust and visibility in their Hadoop deployments.


What’s hidden in store for Cloudera Navigator in the near future – let us have a quick tour…

The exciting news about Cloudera Navigator is that it has just joined a partner program that will accelerate the deployment of partner applications. The program includes many governance partners, which will advance overall data management, Hadoop lineage and Hadoop consulting activities. This collaboration will not only improve Hadoop deployments but also significantly enhance interoperability.


In this blog post, we will focus on Cloudera Navigator versions and their capabilities, and how they can help enterprises with data management and full-fledged governance solutions. It will be not only interesting but also easy for Hadoop consulting firms to notice the major changes that will be made to the Hadoop architecture in the coming years.


Cloudera Navigator Roadmap

     1. Cloudera Navigator 2.0


  • Metadata management for enterprises and high-quality governance solutions.
  • Lineage for various Hadoop components like HDFS, Hive, MapReduce, etc.


     2. Cloudera Navigator 2.1


  • In version 2.1, Cloudera Navigator's metadata management and lineage capabilities were improved dramatically.
  • In this version, users can easily work with metadata and search for the right attribute, but they are not allowed to audit history.
  • There is an improved auditing interface with enhanced governance capabilities to deliver powerful data management solutions.
  • There is a comprehensive auditing facility to expand coverage.

    3. Cloudera Navigator 2.2


  • Version 2.2 includes a set of capabilities for data curation and data stewardship.
  • It features a powerful engine to automate data curation and data stewardship activities.
  • Using this version, data can be transformed into a meaningful structure that can later be presented to partner firms.


    4. Cloudera Navigator 2.3


  • In this version, two capabilities are highlighted: self-service discovery and expanded component coverage.
  • Users can now find, analyze or build new data sets themselves for more powerful business solutions.
  • Component coverage has also been revised using different schemas and design definitions.


What will be coming in 2016?


We have already discussed what may be included in future Cloudera Navigator versions, such as expanded lifecycle management, expanded auditing and expanded coverage. Now let us take a quick look at what will be coming in the year 2016.


  • Experts will leverage the technology with the Hadoop architecture to optimize data management and lineage activities.
  • Users will be able to discover new data sets themselves to maximize the data transformation rate.
  • The data management service can be customized to handle sensitive and non-sensitive data separately.
  • Components can be studied in detail from every angle for unified and improved Hadoop deployments.


There could be lots of surprises related to the Hadoop architecture and developers other than those discussed earlier. We will keep updating you so you can leverage the best of all.

Of all the challenges to overcome when migrating SQL code from, say, Oracle, to, say, SAP ASE, you would probably not think that it's important whether the converted SQL statements are in uppercase or in lowercase. But as you'll see below, in fact it is important, and the SAP Exodus DBMS migration tool takes care of this.


SQL statements want to look pretty too!

But first, one of the noticeable features of the Exodus-generated SQL code is that it is quite nicely formatted. Indeed, Exodus reformats and re-indents the SQL code to make it look better and more standardized. In most cases, the output is well formatted even if the SQL code used as input looked like a mess.


Here's an example of some Oracle PL/SQL:

p_total := 0; p_Error_Code := -1; p_error_text := 'n/a';

if p_flag = 'Y' then

select c_name, C_ID into v_NAME, v_id from CUST_TB where c_id is not null order by C_date;

end if;


And this is what Exodus generates after conversion to ASE T-SQL. Also note how all identifiers are formatted uniformly in lowercase (more on that below).

SET @p_total = 0

SET @p_error_code = -1

SET @p_error_text = 'n/a'

IF @p_flag = 'Y'
BEGIN
    SELECT
        @v_name = c_name ,
        @v_id = c_id
    FROM
        cust_tb
    WHERE
        c_id IS NOT NULL
    ORDER BY
        c_date
END

Why uppercase/lowercase matters

While consistently formatted and indented SQL is certainly nice to have, it has no impact on the functionality or correctness of the generated SQL code.

There is however another seemingly innocent cosmetic aspect that is in fact quite important for correctness. This concerns the uppercase/lowercase spelling of identifiers.


The issue of uppercase/lowercase matters because some databases, like Oracle or DB2, are case-insensitive for identifiers. This means that mytable, MyTable and MYTABLE all refer to one and the same object.

However, other databases such as SAP ASE and SAP IQ are case-sensitive for identifiers by default (and their case-sensitivity for identifiers is coupled to case-sensitivity for data comparison, so you can't simply configure ASE or IQ as case-insensitive). This means that mytable, MyTable and MYTABLE refer to three different objects as far as ASE is concerned.

Consequently, when converting SQL code from Oracle, syntax errors will result ('object mytable not found') if the original developers were not sufficiently disciplined in following a consistent spelling in their Oracle coding (having seen lots of Oracle PL/SQL code from a wide range of customer applications, I can tell you that those developers were usually not so disciplined).
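To make the difference concrete, here is a small sketch (plain JavaScript, purely illustrative, with an invented one-entry catalog) of case-insensitive versus case-sensitive identifier lookup:

```javascript
// Hypothetical database catalog containing a single table name,
// stored in uppercase as Oracle would store it.
const catalog = ["MYTABLE"];

// Oracle/DB2-style: identifiers are matched case-insensitively.
function resolveCaseInsensitive(name) {
  return catalog.find((t) => t.toUpperCase() === name.toUpperCase()) || null;
}

// ASE/IQ-style (default): identifiers must match exactly as spelled.
function resolveCaseSensitive(name) {
  return catalog.find((t) => t === name) || null;
}
```

With this sketch, `resolveCaseInsensitive("MyTable")` finds `"MYTABLE"`, while `resolveCaseSensitive("MyTable")` returns `null`, the moral equivalent of the 'object mytable not found' error described above.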

The Exodus migration tool aims to compensate for this, and ensure that all identifiers are uniformly spelled. For this, Exodus can format identifiers according to user-configurable preferences, thus eliminating syntax errors due to uppercase/lowercase mismatches in the converted SQL code (keep reading for details).


Exodus will also perform a spelling consistency analysis and report how consistently identifiers have been spelled, and whether the overall tendency is towards uppercase or lowercase. The Exodus user can use this information to pick the identifier case mapping option that is closest to the original SQL code.


Why uppercase/lowercase matters even when it isn't necessary

It is worth noting that identifier case mapping is still relevant even when converting to a target DBMS which is case-insensitive for identifiers (like SQL Anywhere). This is because identifiers are stored in the Oracle database catalogs in uppercase, independently of how they were originally specified (for brevity, I'm ignoring delimited identifiers for now).

In such a situation, the target DBMS will not raise any syntax errors due to uppercase/lowercase mismatches, so you could conclude there is no need for Exodus to perform any case mapping on the identifiers.

However, the Exodus-generated DDL statements to recreate the schema in the target database are based on information extracted from the source DBMS's catalogs. Since these identifiers are stored in uppercase in the Oracle catalogs, the converted table and column names will therefore be in uppercase too:


User's original Oracle DDL:

CREATE TABLE MyTable (MyKey NUMBER, MyAttributes VARCHAR2(100))


  • Identifiers are stored in Oracle catalogs as MYTABLE, MYKEY, MYATTRIBUTES


Exodus-generated DDL statement:

CREATE TABLE MYTABLE (MYKEY INT, MYATTRIBUTES VARCHAR(100))
Compared with the DDL specified originally by the user in Oracle, some information about the identifier spelling has now been lost. Other applications that expect the original identifier spelling, like a client app which performs SELECT * from MyTable, will have a problem now. Also, it is quite possible that certain corporate coding standards were followed in the original Oracle system which are no longer adhered to in this converted code.

To address this challenge, Exodus allows the user to specify a list of (partial) words which will be used to format the identifiers.

In the example above, by specifying a list of the words My, Table, Key and Attributes, Exodus will generate the original identifiers again:


Exodus-generated DDL statement with 'dictionary' case mapping:

CREATE TABLE MyTable (MyKey INT, MyAttributes VARCHAR(100))


The only downside of using this formatting option is that the user needs to specify this list of words, which may or may not be easy to do. Also, it will cause the overall conversion to be somewhat slower.


Exodus case mapping options

In summary, Exodus lets you specify how to perform case mapping for 3 types of identifiers, namely:

  • user names
  • table/column names (and names of indexes, sequences, cursors)
  • names of stored procedures/functions.

For each of these categories, one of the following case mapping options can be specified by the Exodus user:

  • no mapping at all (the default)
  • uppercase
  • lowercase
  • 'sentence case' (=lowercase with initial capital, i.e. Mytable)
  • 'word case' (=lowercase with initial capital after underscore, i.e. My_Big_Table)
  • 'dictionary case' (=using a user-specified list of words, see the example above).


Note that names of SQL variables and parameters will not be mapped: the spelling of the first declaration of a variable/parameter will be applied to all subsequent occurrences.
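As a rough sketch of what these mapping options do (hypothetical helper names; Exodus's actual implementation is internal and may differ), the 'sentence case', 'word case' and 'dictionary case' options could look like this in JavaScript:

```javascript
// Illustrative implementations of the case mapping options described above.

// 'sentence case': MYTABLE -> Mytable
function sentenceCase(id) {
  return id.charAt(0).toUpperCase() + id.slice(1).toLowerCase();
}

// 'word case': MY_BIG_TABLE -> My_Big_Table
function wordCase(id) {
  return id.split("_").map(sentenceCase).join("_");
}

// 'dictionary case': MYTABLE + ["My", "Table"] -> MyTable
// (greedy match, trying longer dictionary words first)
function dictionaryCase(id, words) {
  const sorted = [...words].sort((a, b) => b.length - a.length);
  let result = "";
  let rest = id;
  while (rest.length > 0) {
    const hit = sorted.find((w) =>
      rest.toUpperCase().startsWith(w.toUpperCase())
    );
    if (hit) {
      result += hit;
      rest = rest.slice(hit.length);
    } else {
      // No dictionary word matches: pass the character through unchanged.
      result += rest.charAt(0);
      rest = rest.slice(1);
    }
  }
  return result;
}
```

With the word list My, Table, Key and Attributes from the earlier example, `dictionaryCase("MYTABLE", …)` yields `MyTable` and `dictionaryCase("MYKEY", …)` yields `MyKey`, reconstructing the user's original spelling.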


In conclusion...

Migrating SQL code for custom applications between different DBMSs is not an easy task, and there are many hurdles to overcome. The SAP Exodus DBMS migration tool for custom applications goes a few extra miles to make the migration job easier.


For more information about SAP Exodus, see the main Exodus blog post here

One of my professional frustrations is that most non-IT people don't seem to care about things that I find interesting.

For example, rarely do I meet interesting people at a party who don't walk away when I start talking about the fascinations of converting one SQL dialect to another. I mean, are there really that many topics that are more fascinating? (okay, I admit I'll bow to string theory, nuclear fusion, and why it's so much harder to lose weight when you're over 45 years old).


For the SAP Exodus project, we needed to find a way to demonstrate what it was that we were trying to do. That ain't easy: I mean, when you migrate an application from one DBMS to another, then what do you get? Well, initially you had an application that was working fine.... and after it was migrated it is still working fine, except on a different database. For some reason, many people find that boring (as I mentioned earlier, this SQL migration stuff gets me really excited -- but I guess that just says a lot about me). So how can we make this more visible?


Finally, I think we found a way to show what we're trying to do.


Below is a link to a video recording of a demonstration of the SAP Exodus DBMS migration tool.

The video shows a 2-tier, stored-procedure-based, transactional client-server application. The application is for a publications database, with entities like authors, titles and publishers (for those with a Sybase background: it's loosely modeled on the 'pubs2' database). There's some 2000 lines of PL/SQL involved.

The following steps are demonstrated:

  1. The video first demonstrates how the application works on an Oracle database, and how for example Oracle PL/SQL exceptions are used to handle error conditions.
  2. Then, Exodus is run against the Oracle application. Exodus extracts the schema and all stored procedures, functions and packages from the Oracle server, and converts these to the Transact-SQL dialect for SAP ASE.
  3. The generated SQL code for creating the schema is executed against an SAP ASE server, and so are the converted SQL objects (e.g. stored procedures, packages, etc.)
  4. The data is copied from Oracle to ASE (using the former Sybase DirectConnect product)
  5. The client application is disconnected from Oracle and reconnected to ASE, and, magically, the application works exactly the same, including the handling of error conditions (despite ASE not having an exception mechanism like Oracle -- see here for more examples of things that may seem impossible).


The really important point in this demo is that no changes had to be made to either the generated SQL code or the client application: Exodus performed a 100% correct and automatic conversion from Oracle to ASE (admittedly, things may not be 100% correct and automatic when we run Exodus against a real-life customer application, but hey, that's why it is a demo: to show what is potentially possible).


Now, watch the demo here:



If you're a customer and you're interested in exploring how to migrate custom applications away from Oracle, MS SQL Server or DB2, contact ExodusHelp@sap.com to discuss how SAP can help.

If you're an SAP Partner Edge partner and you would like to get a copy of Exodus to use in migration projects with your customers, contact ExodusHelp@sap.com as well.

Rob Verschoor

Exodus v.2.6 released

Posted by Rob Verschoor May 27, 2015

Yes, I've been quiet for a while in this space.

But I assure you, dear reader, that was not due to laziness.


We were, in fact, working heads-down on a bunch of things. For example: Exodus version 2.6, which has been available for a while now; a demo application to show how Exodus migrates applications (see next blog post, coming soon); a video of that demo (see same upcoming blog post); and a bunch of customer engagements with Exodus.


To focus on Exodus version 2.6: the main new features are these:

  • Support for IBM DB2 as source DBMS (LUW platforms only). This means that the SAP Exodus migration tool now supports conversion from Oracle, Microsoft SQL Server and DB2.
  • Conversion of MS SQL Server TRY-CATCH exception handling to ASE
  • A new GUI, solving a number of issues with the old GUI
  • Configurable transaction mode (chained/unchained) for converted SQL code
  • Improved handling of arbitrary data encodings
  • Conversion of synonyms (incl. Oracle public synonyms) for tables & views
  • Various performance improvements and bug fixes


Stay tuned for more!

I spend quite a considerable amount of my spare time reading. Mostly, I read fiction, science-fiction and fantasy, but sometimes I read crime novels and non-fiction. I have in the past swallowed whole books in a single sitting and, when done, I find it hard to part with these literary beasts. I really enjoy the smell of a new book and how, as you move through the tale, each page shapes your imagination with a bend or crease of the book's spine. For me it is one of life's simple pleasures, and while I am heavily involved in technology, I just don't see how an eBook reader could possibly compare. As you can imagine, this has resulted in a great many books lying around the differing nooks and crannies of my home.

With this in mind and under a slight amount of pressure from my better half, I decided to buy a book shelf and get to organising my mass of published pulp. Strolling through the display aisles of Ikea’s storage department, I pondered. How much space do I need to store my current book collection? How much space will I need in the future? How will I organise these books to best make them easily accessible? What if I add more books or run out of space? What if I am not the only one reading from my collection? I suppose these questions amongst others need consideration when storing and organising any information. I remember my Uncle describing a similar issue he faced when starting his first business.

During the early 80’s my Uncle bought a milk float. With no intent of just giving milk away, he spent a few days canvassing for customers. With 21 names and addresses jotted down on a list, one cold morning he began delivering his first batch of Cleopatra’s bubble bath. As you can well imagine, finding 1 of 21 names on a piece of paper was quite a simple task. Problems began to occur over the next 3 months, when my Uncle became the most popular milkman in his local area. His single page of 21 names grew to 415 and simply scanning down the page was no longer an option. One evening my Uncle set himself a challenge: organise.

My Uncle bought an address book and his initial approach was to organise his customers by name. It was in some way a solution to his problem, but left him driving from one side of the area to the other: although John Smith and Paul Taylor lived next to each other in my Uncle’s book, they lived nowhere near each other in the real world. A different approach was needed. My Uncle bought a new address book and at the top of each page wrote the street name. He then listed each street number down the page. For his customers, he populated the row with their order details; unpopulated rows were addresses that could be canvassed. Hey presto, my Uncle had indexed his data, and in turn sped up his delivery time.

Over the next few months my Uncle faced another problem, his service was too good. As new customers flocked in, indexing was no longer enough, he would need to buy another milk float and hire another milk man.

Initially this worked out quite well. They split the area geographically in two, my Uncle would work from 4am to 10am and his associate Fred, worked from 10am to 4pm. At 10am my Uncle would hand Fred the address book so he could do his area, there was only one, so they could not use it concurrently. One day my uncle suggested that Fred bring the address book home and make a copy. It would mean they had an address book each, overcoming the concurrency problem, and could take turns with shifts, vacations and cover periods of illness. They were in simple terms creating a replicated, distributed version of the address book.

Fred, like any new employee, was working avidly to demonstrate his value. Every day when his shift ended, Fred would pick a street from his area of the address book and knock on each door. He would canvass new customers and offer existing customers a chance to change their daily deliveries. This was a major success and when Fred brought it to my Uncle’s attention, he gave him the next day off.

Getting up at 4am, my Uncle began his milk round; he hurried through his deliveries, realizing he also had Fred’s area to do. At 3pm he was finished. He was both delighted and surprised at how quickly he had finished. The day was a Friday, which normally was quite slow, as customers got a double weekend delivery. When Monday morning came, Fred faced many angry faces. Some existing customers got their old deliveries and newer customers got no deliveries at all. By only updating his own address book, Fred had left my Uncle’s address book in an old, outdated state. The two books were no longer consistent. How would they overcome this new problem?

Fred’s first suggestion was that at the end of every week they would meet and make changes to their respective address books. The problem here would be that any changed deliveries or new orders would not be updated until the end of the week. So they were still always slightly out of date and inconsistent, a major problem in the event of absenteeism due to illness. My Uncle suggested that Fred should take both books at the end of his shift and update them. But if for any reason Fred was unable to drop the book back on time, my Uncle’s book would be unavailable. A final suggestion was for my Uncle and Fred to log their changes on a separate piece of paper. Every morning they would check these logs; although it added an administrative overhead, on balance it was an acceptable compromise.

My Uncle eventually sold his milk round, but had his business kept growing, I assume these minor inconsistencies would have been the least of his worries. In modern databases we suffer a lot of the same issues experienced by my Uncle, however we have now scaled from hundreds of records to hundreds of millions. We have systems that are being read and updated by thousands of users every few minutes. Now more than ever is the trade-off of consistency, concurrency and availability paramount to enterprise success. These hugely scaled databases can never be perfect, but as long as the user is unaware of these imperfections, job done.

Now what colour bookshelf should I get?

SAP HANA XS offers the possibility to schedule applications as jobs. As database backups are triggered by executing SQL commands, an application can be built which calls these SQL commands. This blog will provide an overview of how to write a simple XS application which creates a database backup and how to schedule it as an XS job.

The first step is to install the SAP HANA database server. In this example SAP HANA SP 9 is used.

After the installation has finished successfully the Service for the XS engine is already running. The status can be checked within SAP HANA Studio or on the command line using the command "HDB info" as user gtiadm.



To make a first call to the XS engine, use your favorite browser and type in the URL below.


So, the database is successfully installed, up and running. The XS engine is also working.
The next step is to create an SQL user whose permissions are limited to creating backups and working with the XS engine.

In this example the name of this user is BACKUP_OPERATOR.




To verify that the user is able to create backups, logon to the database using SAP HANA Studio and create the first backup.


So the user is able to create backups. The next step is to logon to the XS Admin tool using BACKUP_OPERATOR to make sure this is working fine too.




As everything is working fine we can start to build the small SAP HANA XS application which is executing the backup command.

We also build a test application to be able to check the final backup statement and to create a backup manually.

To keep things separated I create a new workspace first.


After the studio has restarted choose the development perspective by clicking "Open Development".


The project data will be stored in the database GTI. Due to this we have to add the database instance to the current workspace.




Finally we can create the actual project.






After clicking through all the popups the project should be created successfully. It has no files yet.


For this project we need five files:

  1. DataBackup.xsjob
    It contains the job definition.
  2. DataBackup.xsjs
    This file is called during the job execution. It only forwards the call to the file DataBackup.xsjslib
  3. DataBackup.xsjslib
    This file contains the actual command to create the database backup.
  4. index.html
    This is a simple html file which allows us to run the backup manually.
  5. DataBackupTest.xsjs
    This is a JavaScript file which is used by file index.html


The content of the files is as follows; you can use copy and paste:



DataBackup.xsjslib:

function getSql(prefix) {
    return "BACKUP DATA USING FILE ('" + prefix + "_BATCH_JOB')";
}

function getTimestamp() {
    var date = new Date();
    var year = date.getFullYear();
    var month = date.getMonth() + 1;
    var day = date.getDate();
    var hour = date.getHours();
    var minute = date.getMinutes();
    var second = date.getSeconds();
    return year + "_" + month + "_" + day + "_" + hour + "_" + minute + "_" + second;
}

function createBackupStatement(prefix) {
    return getSql(prefix + "_" + getTimestamp());
}

function createBackup(prefix) {
    var sql = createBackupStatement(prefix);
    var conn = $.db.getConnection();
    var stmt = conn.prepareStatement(sql);
    stmt.execute();
    stmt.close();
    conn.close();
}

DataBackupTest.xsjs:

var dataBackup = $.import("DataBackup.xsjslib");
var prefix = $.request.parameters.get("prefix");
var action = $.request.parameters.get("action");

if ("Create backup command" === action) {
    var sql = dataBackup.createBackupStatement(prefix);
    $.response.setBody("<p>SQL command for data backup:</p>" + sql);
}

if ("Create backup" === action) {
    dataBackup.createBackup(prefix);
    $.response.setBody("Backup created successfully. Check backup directory.");
}

DataBackup.xsjs:

function createBackup(input) {
    var dataBackup = $.import("DataBackup.xsjslib");
    dataBackup.createBackup(input.prefix);
}



DataBackup.xsjob:

{
    "description": "Create data backup of database GTI",
    "action": "Automatic_Backup:DataBackup.xsjs::createBackup",
    "schedules": [ {
        "description": "Create data backup of database GTI",
        "xscron": "* * * * * * 59",
        "parameter": { "prefix": "MyScheduledBackup" }
    } ]
}





index.html:

<html>
<body>
    <p>Project to create a database backup.</p>
    <form action="DataBackupTest.xsjs">
        <p>Prefix for backup:<br>
            <input name="prefix" type="text" size="30" maxlength="30"></p>
        <p><input type="submit" name="action" value="Create backup command">
            <input type="submit" name="action" value="Create backup">
            <input type="reset" value="Reset"></p>
    </form>
</body>
</html>



To save and activate the application, press the "forward" button. After the project has been activated successfully we can start a test.
The application creates a backup using a custom prefix followed by the current timestamp.





The backup location is the same as it was when the backup was created within the Studio. Different paths or the usage of backint can be achieved with different backup commands.
The last step is to use this application as an XS job. For this we have to activate the scheduler inside the XS engine.



You might have to create the parameter "scheduler" if it does not exist.

The scheduling is active once the parameter is set. It is not necessary to restart the database or the XS engine.
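For reference, the scheduler parameter can also typically be set from the SQL console rather than through the Studio UI; the statement below is a sketch using the documented section and parameter names for xsengine.ini (verify against your HANA revision before using it):

```sql
-- Enable the XS job scheduler in xsengine.ini (system layer);
-- WITH RECONFIGURE applies the change without a restart.
ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM')
    SET ('scheduler', 'enabled') = 'true' WITH RECONFIGURE;
```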

To schedule the application as a job we have to logon to the XS Admin.


After clicking on the field "XS Job Dashboard" the job is shown with status INACTIVE. By clicking on the job you get into the details of the job.



By clicking the button 'Edit Schedule' you get a pop up where you can set the job active.




After pressing the button OK, the job has the new status ACTIVE.

But the job is not scheduled yet. To schedule the job the field 'Active' in the upper part of the browser needs to be checked. You also have to set the password for user BACKUP_OPERATOR and a Start- and End Time.


After pressing the button "Save Job" at the bottom of your browser, the status of the job will change to "SCHEDULED" or "RUNNING".



Using the configuration from above, the backup is scheduled every 59 seconds, which is of course very frequent.

More details about the XS job itself can be found here.

Technology is rapidly changing and, with the world moving from physical bits to digital bytes, there is an ever-growing chasm between those who take analytics to the next level and those who are stuck with management reporting. This growing separation amongst businesses results in what we call an analytical divide. For those who are stuck with management reporting, new tools, strategies, skills and competencies, as well as new roles, organizational models and governance mechanisms, are required to cross this analytical divide and take your business to the next level.

Register now for the Gartner Business Intelligence & Analytics Summit 2015 and you’ll learn how to re-master your skills and how to deliver the analytic advantage that your organization needs to succeed in the digital age.

At the summit, you will learn how to:

  • Communicate the business value of BI and analytics
  • Develop the BI skills necessary for success
  • Modernize core technologies for data integration
  • Get more out of mobile, social, cloud and in-memory
  • Understand the business impact of advanced analytics
  • Craft a strategy to launch or reboot a BI initiative
  • See how others are achieving BI innovation



Date: March 30 – April 1

Location: Las Vegas, NV

Duration: 2 Days

REGISTER HERE

Big Data analytics is going mainstream with the growing demand from business and IT leaders who want to utilize analytics to improve both strategic and operational decisions. The old, slow and expensive process of having to collect data on one system and analyse it on another is in the past; with product offerings such as SAP HANA, organizations are now able to transform their businesses with real-time Big Data analytics. This will not only improve decision making but also save time and money, allowing companies to gain new levels of insight into their customers’ behaviour, operations, financial performance, social media trends and more.

The TDWI Solution Summit is a two-day event geared toward senior business and technology leaders who approve or recommend BI and analytical systems and solutions that run against large and complex data sets, and are planning a project in the next 12 months.

Join an exclusive, hosted gathering of experienced professionals, industry thought leaders and top solution providers for real-world tips and best practices.


Attend the TDWI Solution Summit to learn:

  • Innovative technologies and practices for enterprise big data analytics
  • How to transform your business with big data analytics
  • Real-world tips and best practices to help you harness the power of big data analytics

Date: March 15-17

Location: Savannah, Georgia

Duration: Two days


Register Here

Rob Verschoor

Exodus Migration Magic

Posted by Rob Verschoor Dec 19, 2014

When you're trying to migrate SQL stored procedures from Oracle or Microsoft SQL Server to SAP ASE, IQ or SQL Anywhere, you'll inevitably face the problem that the source DBMS has particular features or syntax constructs that the target database doesn't have.

That may sound like a hopeless situation. However, the SAP Exodus DBMS migration tool manages to convert SQL code in a number of cases that might seem too difficult to migrate successfully. Let's look at some selected examples below.




Sequences

Oracle PL/SQL applications often use sequences to generate unique numbers such as primary keys, for example:

     /* PL/SQL syntax */

     INSERT INTO MyTable (keycol, attrib1)

     VALUES (MyKeySequence.nextval, 'Hello!');


SAP ASE does not support sequences, but Exodus migrates this to identical functionality anyway:

     /* ASE T-SQL syntax */

     INSERT INTO MyTable (keycol, attrib1)

     VALUES (dbo.sp_f_dbmtk_sequence_nextval('MIGRATED_DB','dbo.MyKeySequence'), 'Hello!')

What happens here is that the Exodus run-time component sp_f_dbmtk_sequence_nextval() delivers the same functionality as the Oracle nextval function. This component is an ASE SQL function that uses the reserve_identity() built-in function, as well as a few other tricks to effectively get a full sequence implementation in ASE (the arguments 'MIGRATED_DB','dbo.MyKeySequence' are the name of the target ASE database and the name of the ASE table posing as a sequence, respectively).
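To sketch the idea behind this run-time component (all names below are invented for illustration; the actual Exodus implementation differs in its details): a migrated sequence can be represented by a table with an IDENTITY column, and ASE's reserve_identity() built-in hands out the next value without actually inserting a row.

```sql
-- Illustrative sketch only; not the actual Exodus run-time code.
-- A migrated sequence becomes a table with an IDENTITY column:
CREATE TABLE MyKeySequence (seq_val BIGINT IDENTITY)
go

-- reserve_identity() reserves the next IDENTITY value for this session
-- without inserting a row, effectively acting as the sequence's NEXTVAL:
SELECT reserve_identity('MyKeySequence', 1)
go
```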

When migrating to SQL Anywhere, there is no migration problem since SQL Anywhere supports sequences natively.

When migrating to IQ, you'd think that those same SQL Anywhere sequences could be used - but unfortunately sequences are not supported when inserting into an IQ table. Fortunately, Exodus provides a run-time component similar to the ASE example above (but slightly different) thus allowing sequences to be used directly with IQ tables.




Datatypes

Oracle PL/SQL has a number of datatypes that do not exist in some of the SAP target databases.

For example, PL/SQL has an INTERVAL datatype (in two flavours) and a date/time-with-timezone datatype. ASE and IQ do not support such datatypes natively, and SQL Anywhere supports only date/time-with-timezone.

A more challenging example is PL/SQL's ANYDATA datatype, and MS SQL Server's SQL_VARIANT datatype. Both of these are 'type-less' and can represent any datatype.

To allow these datatypes to be migrated anyway, Exodus provides its own predefined user-defined datatypes, plus a number of stored procedures and functions that operate on these datatypes. This allows most of the functionality around these datatypes to be retained after conversion. Some additional manual editing may be needed to arrive at fully equivalent SQL code.



Try-catch Exception Handling

Oracle PL/SQL uses a try-catch method for handling exceptions, where control transfers to a declared exception handler when an exception occurs. IQ and SQL Anywhere provide a similar exception handling method, but ASE does not. In ASE, the error status should be checked explicitly after every DML statement, and action should be taken depending on the error status found. Oracle exception handlers, in contrast, are jumped to automatically; moreover, if a statement block or a stored procedure does not handle an exception, it is passed on to the next outermost block or to the caller.
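To illustrate the underlying difference (a schematic sketch, not Exodus-generated code; table and column names are invented): in PL/SQL the handler catches the error wherever it occurs, while in hand-written ASE T-SQL each statement's status must be tested explicitly.

```sql
/* PL/SQL: control jumps to the handler automatically */
BEGIN
   INSERT INTO MyTable VALUES (1, 'Hello!');
   UPDATE MyTable SET attrib1 = 'Bye' WHERE keycol = 1;
EXCEPTION
   WHEN OTHERS THEN
      ROLLBACK;
END;

/* Native ASE T-SQL: @@error must be checked after every DML statement */
INSERT INTO MyTable VALUES (1, 'Hello!')
IF @@error != 0
BEGIN
    ROLLBACK TRAN
    RETURN
END
UPDATE MyTable SET attrib1 = 'Bye' WHERE keycol = 1
IF @@error != 0
BEGIN
    ROLLBACK TRAN
    RETURN
END
```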

As a result, there are big differences between Oracle and ASE when it comes to exception handling. The resulting control flow, as well as the transactional aspects of implicitly rolling back in case of unhandled PL/SQL exceptions, are very different.


At first (and second) glance, it would seem impossible to convert Oracle's exception handling to ASE since the error handling approaches are so fundamentally different. Yet, that is exactly what the just-released latest version 2.5 of Exodus achieves. By generating additional SQL statements and inserting these at strategic locations in the ASE T-SQL code, we have managed to achieve nearly-identical run-time behaviour when it comes to handling exceptions.

(Please appreciate that this stuff is too complex to show as an example here)



PL/SQL Packages

A "package" is a very commonly used construct in PL/SQL. This is a named collection of stored procedures and functions, plus a number of session-specific variables that are global to the package. Even though the SAP databases do not provide a package mechanism, Exodus makes creative use of available SQL features and manages to convert PL/SQL packages to a functionally identical series of SQL objects, including the semantics around the package-global variables. Nearly all semantics are fully retained, except for some of the more rarely used features (like explicitly resetting a package).
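One plausible flattening, purely to illustrate the principle (all names below are invented; Exodus's actual object naming and mechanics differ): each packaged procedure becomes a standalone procedure with a prefixed name, and the package-global variables move into a per-session table keyed on the session id.

```sql
/* PL/SQL package (schematic):
     CREATE PACKAGE MyPkg IS
        g_counter NUMBER := 0;
        PROCEDURE do_work;
     END MyPkg;
*/

-- Package globals: one row per session, keyed on the session id
CREATE TABLE MyPkg_globals (spid INT, g_counter BIGINT)
go

-- The packaged procedure becomes a standalone, prefixed procedure
CREATE PROCEDURE MyPkg_do_work AS
BEGIN
    -- initialize this session's globals on first use
    IF NOT EXISTS (SELECT 1 FROM MyPkg_globals WHERE spid = @@spid)
        INSERT INTO MyPkg_globals VALUES (@@spid, 0)

    UPDATE MyPkg_globals SET g_counter = g_counter + 1 WHERE spid = @@spid
END
go
```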



Buffered PL/SQL output

For Sybase users, one of the first things that meets the eye in Oracle is how output is generated. In PL/SQL this is typically done by calling DBMS_OUTPUT.PUT_LINE(), which generates a single line of output text. It may be tempting to convert this to a PRINT statement in ASE (or MESSAGE...TO CLIENT in IQ/SA). However, strictly speaking, this would not be correct, since DBMS_OUTPUT.PUT_LINE() does not actually send anything to the client. It does indeed generate a line of output, but this is stored in an internal buffer which can be read later by the client application, and displayed to the user, once control passes back to the client from the server. If the client chooses not to retrieve and display this output, it is discarded.


By default, Exodus converts such calls to an identical approach where the generated output is buffered for the session until it is read by the client. Optionally, Exodus can be configured to convert calls to DBMS_OUTPUT.PUT_LINE() to a direct PRINT statement.
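The buffering approach can be pictured roughly as follows (names invented for illustration; the actual Exodus run-time component works differently in its details): converted PUT_LINE() calls append to a per-session buffer table, which the client reads (and clears) once control returns.

```sql
-- Illustrative sketch of the buffering idea, not actual Exodus code.
CREATE TABLE output_buffer (spid INT, seq INT, line VARCHAR(255))
go

-- A converted DBMS_OUTPUT.PUT_LINE('hello world!') appends to the buffer:
INSERT INTO output_buffer VALUES (@@spid, 1, 'hello world!')
go

-- Once control passes back to the client, it can retrieve and display
-- the buffered lines, then discard them:
SELECT line FROM output_buffer WHERE spid = @@spid ORDER BY seq
DELETE FROM output_buffer WHERE spid = @@spid
go
```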

(I'm leaving the Oracle commands DBMS_OUTPUT.ENABLE() and SET SERVEROUTPUT ON aside for now, but full details are included in the Exodus documentation).




%TYPE and %ROWTYPE

It is very common in PL/SQL, and indeed highly convenient, to declare variables with the datatype of a particular table column. This is done with the %TYPE qualifier:

     MyVar MyTable.MyColumn%TYPE;

Exodus will look up the datatype of MyTable.MyColumn, and substitute this in the variable declaration (so that you don't have to).
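For instance, assuming MyColumn is declared as BIGINT in MyTable (an assumption made purely for illustration), the declaration above would come out in ASE T-SQL as:

```sql
DECLARE @MyVar BIGINT
```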


A similar mechanism allows defining a record whose fields resemble a table's columns. This uses the %ROWTYPE qualifier:

     MyVar2 MyTable%ROWTYPE;

However, none of the SAP databases supports the concept of a record data structure. Exodus will expand such record variables into individual variables with the correct name and datatype. For example, if the above table has 3 columns with different datatypes, the declaration above looks like this after conversion to ASE T-SQL:

     DECLARE @MyVar2@MyColumn BIGINT

     DECLARE @MyVar2@MyColumn2 DATE

     DECLARE @MyVar2@MyColumn3 VARCHAR(30)

This expansion is performed in all places where the original record variable occurs.

This same approach is used for variables that are declared as RECORD datatypes.



In summary...

We do not claim that Exodus fully automates the migration of all possible SQL constructs that may be found in a real-life Oracle or MS SQL Server-based application. But the Exodus DBMS Migration tool has certainly tackled a number of particularly challenging aspects that would otherwise have required large amounts of effort and manual work.


Bottom line: with Exodus, migrating custom applications to SAP databases has become very significantly less risky and complex.


Please contact ExodusHelp@sap.com to discuss migration opportunities in your organization.

Also contact this same address if you are an SAP Partner and you want to get your own copy of the Exodus migration tool.

It's been longer than I wanted before publishing this blog post, but I've been busy getting the new version of Exodus ready (that's Exodus v2.5; I'll cover that in a subsequent blog).


In the previous post in this series about the Exodus migration tool, we looked at some simple examples of how Exodus converts SQL code from a non-SAP SQL dialect to the SQL flavour of one of the SAP databases. In many cases, syntax differences can be compensated for by the Exodus migration tool.


It gets more interesting when we consider semantic differences between the SQL dialects. Let's look at two examples that are highly likely to occur in practice.


Numeric division

Consider the following PL/SQL statement, and see if you can predict what the correct result will be:

     SELECT 1/10 INTO my_var FROM DUAL;


In Oracle, the result of this division will be 0.1. But in SAP ASE, SAP IQ and SAP SQL Anywhere, the result will be 0. This is because these databases use different ways of determining the datatype of the result of an expression.

Clearly, this is a problem when migrating from Oracle since the same SQL expression can easily produce a different result. For this reason, Exodus identifies such divisions and ensures the result in the SAP databases is 0.1, like in Oracle. It does this by adding terms to the expression that force the datatype to be a number with decimals rather than an integer:


          /* ASE T-SQL syntax */

     SELECT @my_var = (1*1.0)/(1.0*10) 


Each occurrence of such a division is also flagged separately by Exodus, and it is recommended to verify the correctness of the resulting expressions, as well as the datatypes of variables such an expression is assigned to.



The empty string

Another example of something that looks innocent, but can be nasty, is the infamous 'empty string'.

In most databases, '' (=two single quotes) denotes a zero-length empty string. However, ASE is special in that the empty string is actually not empty, but evaluates to a single space. Exactly why ASE behaves this way will probably remain a mystery -- it's just always been this way.
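A quick way to see this behaviour in ASE (an illustrative sketch; since '' evaluates to a single space, as described above, the results follow directly):

```sql
-- In ASE, '' is stored as a single space:
SELECT char_length('')        -- 1, not 0
SELECT 'a' + '' + 'b'         -- 'a b', not 'ab'
```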

Regardless, this is a fact that needs to be taken into consideration when migrating: also here, converted SQL statements can easily produce different results due to this semantic difference between source and target database.

When migrating to ASE, Exodus handles this migration issue by replacing all empty strings by NULL in the converted ASE SQL code. This will almost always produce the same result as in the original PL/SQL  code that specified an empty string.

For example, consider these PL/SQL statements:


     /* PL/SQL syntax */

     my_var2 := '';

     my_var3 := my_var2 || 'abcde';

     my_var4 := substr(my_var3,4,1);


Please observe that the resulting value in variable my_var4 is a one-character string 'd'.

Now, Exodus converts this code as follows to ASE T-SQL syntax:


     /* ASE T-SQL syntax */    

     SET @my_var2 = NULL  /* ORIGSQL: '' */

     SET @my_var3 = @my_var2 || 'abcde'

     SET @my_var4 = SUBSTRING(@my_var3,4,1)


These ASE T-SQL statements produce the same result as the original PL/SQL code. If the empty string had not been replaced by NULL, then the result would have been 'c' instead of 'd' (this is left as an exercise to the reader to verify).


(BTW: IQ and SQL Anywhere have the common semantics for an empty string, i.e. equivalent to NULL, so this adjustment does not apply when migrating to those databases).


In summary, semantic differences between SQL dialects need to be taken into account when migrating existing SQL code.

The Exodus DBMS migration tool tries to help reduce the complexity of the migration by minimizing the risk of ending up with SQL code that produces different results than in the original application.

In the previous episode, we discussed conversion between SQL dialects from a more philosophical angle.

I would now like to look at some more concrete examples of what the Exodus DBMS migration tool can do.


When converting from one SQL dialect to another, the necessary first step is to compensate for syntactic differences.

So let's start with a pretty simple Oracle PL/SQL statement:


     DBMS_OUTPUT.PUT_LINE('hello world!');  

Exodus will convert this statement to the following T-SQL code in ASE:

     /* ORIGSQL: DBMS_OUTPUT.PUT_LINE('hello world!'); */

     PRINT 'hello world!'


In case we'd be migrating to SAP IQ or to SQL Anywhere, the resulting Watcom SQL would be as follows:

     /* ORIGSQL: DBMS_OUTPUT.PUT_LINE('hello world!'); */

     MESSAGE 'hello world!' TO CLIENT;

Note how the original code is added as a comment - so in case something went wrong in the conversion, it will be easier for you to figure out what the correct result should have been.

Since not everyone likes their code cluttered with these ORIGSQL comments, they can be switched off through an Exodus configuration setting. For brevity, I'll be omitting these comments in the examples from now on.

Let's try something more interesting. The following PL/SQL prints the number of rows in a table:




     select Count(*) into cnt from mytable;

     DBMS_OUTPUT.PUT_LINE('#rows in table: '||cnt);

This is converted to the following ASE T-SQL code:

     DECLARE @cnt INT
     DECLARE @DBMTK_TMPVAR_STRING_1 VARCHAR(100)

     SELECT @cnt = COUNT(*)
     FROM mytable

     SET @DBMTK_TMPVAR_STRING_1 = '#rows in table: '||CONVERT(VARCHAR(100),@cnt)
     PRINT @DBMTK_TMPVAR_STRING_1


There are various things worth noting here:

  • First, note how the PL/SQL SELECT-INTO statement is converted to the corresponding ASE syntax, which selects a value into a variable (ASE's own SELECT-INTO has entirely different semantics). In case we'd convert to IQ or SQL Anywhere, the SELECT-INTO would be retained, since Watcom SQL supports this syntax too.
  • Second, the original expression in the PUT_LINE() statement is actually an expression where a string is concatenated with an integer. In ASE, it is not allowed to specify expressions as arguments to the PRINT statement, so Exodus takes the expression out and assigns it to a help variable which is then specified as the argument to PRINT.
    (NB: for converting stored procedure calls to ASE, the same approach is used)
  • Third, Exodus knows that the cnt variable is an integer and converts it to a string before concatenating it. Unlike PL/SQL, ASE's T-SQL does not support automatic conversion to string datatypes in such expressions, so leaving this expression unchanged would have caused a syntax error in ASE.
    And just in case you'd ask: if the concatenated variable had been declared as a string itself, Exodus would not generate the CONVERT() call.
  • Lastly, as the converted SELECT-INTO statement shows, Exodus tries to format the generated SQL code nicely and in a standardized manner. Even if the formatting of the input SQL code is messy, the result will look better.




Finally, the following could well occur in a PL/SQL application:


     DBMS_OUTPUT.PUT_LINE(INITCAP('hello world!'));


For those not familiar with PL/SQL, the INITCAP() built-in function capitalizes the first letter of every word in a string. Since none of the SAP databases provide such a built-in function, Exodus comes with a library of so-called "run-time components" which are SQL stored procedures or SQL functions that implement such functionality.

Exodus converts the above statement to ASE as follows:


     SET @DBMTK_TMPVAR_STRING_1 = dbo.sp_f_dbmtk_capitalize_word('hello world!')



The SQL function sp_f_dbmtk_capitalize_word() gives the same result as PL/SQL's INITCAP().

When executed, the result is as expected:


     Hello World!

Obviously, much of this is pretty basic stuff. In fact, Exodus goes far beyond these elementary conversion requirements.

More examples in my next blog. Watch this space!

Rob Verschoor

Sixty Shades of SQL

Posted by Rob Verschoor Sep 4, 2014

Converting SQL code from one SQL dialect to another, like Oracle's PL/SQL to SAP ASE's Transact-SQL probably sounds like a boring, nerdy whaddever to most people. But since you, dear reader, are reading this blog, you are likely not "most people" but a member of the tech crowd. Admittedly though, even in those circles SQL code conversion may not seem terribly exciting to many folks, since databases, and especially geeky topics like SQL syntax and semantics, are kinda specialized, developer-oriented stuff. So why bother? (keep reading for the answer)


Personally, I find SQL conversion to be one of the more exciting things that life has to offer (we can discuss the other ones over a drink at the bar). That I've been working with SQL since 1989 may have something to do with that. And yes, I should get out more.


Even for SQL-infected readers, converting a PL/SQL statement to its equivalent in Transact-SQL or Watcom-SQL may not sound like a terribly complex, or even interesting, problem. After all, all those SQLs are pretty similar, right? And there is even an ANSI SQL standard all SQL dialects pledge adherence to.



Right. What could possibly go wrong?



Back down here in reality, converting between SQL dialects actually appears to be surprisingly hard -- as anyone who has tried this will know.


So what about that supposedly universal ANSI SQL standard?

Indeed, most SQL dialects claim to be ANSI SQL-compliant. But when you look closer, those claims often boil down to something more narrow like "ANSI SQL-92 entry-level" compliance.

To understand what that means, consider that the ANSI SQL-92 standard dates back to -could you guess?- 1992. In those days the world of SQL was much simpler than it is today. For example, stored procedures were not even defined by ANSI until the SQL:1999 standard appeared (to the ANSI standard fanatics who disagree: yes, you're formally correct, but SQL-92/PSM wasn't there until years later and is generally considered to be part of SQL:1999; and it's not part of SQL-92 entry level anyway).

Despite all that ANSI compliance, in practice the SQL implementations by most database vendors are chock-full with vendor-specific "extensions" to the ANSI standard - which is a polite way of stating that aspects of a SQL feature are not ANSI SQL-compliant at all. And thus, also likely incompatible with other SQL dialects.


Not fully complying with the ANSI SQL standard may sound like a Bad Thing. But let's keep things in perspective: standards will always lag the natural progression of a technology.

It starts when some vendors pioneer a concept, like SAP Sybase ASE did with stored procedures and triggers in the 1980's. Other vendors then also adopt those concepts but in the absence of a standard, everyone implements their own variant. Years later, a standards body like ANSI then tries to define a "standard" even though the existing products have already done their own thing. So it is pretty much unavoidable there will always be discrepancies between the standard and the actual products. That's life.


Bottom line: while there is indeed a lot of similarity across SQL dialects, the number of aspects that are not ANSI-compliant typically far exceeds the parts that do conform to the ANSI SQL standard.

It's pretty safe to say that no SQL dialect is fully ANSI SQL-compliant or fully implements the ANSI SQL standard (the one exception perhaps being "Ocelot SQL", which claimed full implementation of a particular ANSI SQL standard at some point; but then, Ocelot didn't quite win the RDBMS race, so you shouldn't feel bad about not knowing about them).

And BTW, which ANSI SQL standard are we talking about anyway? We haven't even discussed the more recent incarnations like ANSI SQL:2003, SQL:2008 or SQL:2011 (I know you've heard it before but indeed: the good thing about standards is that there are so many of them).



If you're still reading this article at this point, it must mean that you don't find this a boring topic after all (if you were attracted by the blog title and you're still hoping for some E.L.James-style raunchy prose, well, just keep reading).



Why should we bother discussing cross-dialect SQL conversion in the first place?

As I pointed out in earlier blog posts, SAP wants to enable customers to migrate their custom applications from non-SAP databases to a SAP DBMS. One of the biggest challenges in such migrations is converting the SQL code, especially the server-side SQL in stored procedures/functions etc.: such code can contain many complexities that may not always be easy to find. Consequently, converting server-side SQL code is an area where migration projects often overrun or fail.


As it happens, converting stored procedures is one of the main functions of SAP's Exodus DBMS migration tool. Not only will Exodus quickly analyze all server-side SQL code and report precisely which features are being used; it will also highlight those features which do not convert easily to the target SAP database of choice. This allows for running a quick complexity assessment before starting the migration project.


As for all those vendor-specific extensions to the ANSI SQL standard, Exodus takes these into account as much as possible. When the difference between the source and target SQL dialect is merely different syntax, then Exodus can often compensate by generating the syntax as required by the target SQL dialect.

It gets more difficult when there is a difference in semantics (i.e. functionality of a SQL feature). In such cases, Exodus may also be able to compensate, but human intervention may also be required. In case Exodus spots any constructs which it cannot convert automatically, it will alert the user to the construct in question, and often suggests a possible solution direction.

In my next blog post we will look at some actual examples and how Exodus handles these.


Incidentally, database vendors usually don't see their non-ANSI-compliance as a problem. On the contrary: if it makes it hard for customers to migrate away to a competitor's database, then that is good for the vendor's future business prospects. Customers often see this differently however, and words like "lock-in", "stranglehold" and "help!" often appear in related conversations.


With the Exodus DBMS migration tool, customers no longer need to feel handcuffed to a particular database vendor just because migrating to a SAP database seems too hard to even consider. So if the relationship with your DBMS vendor has turned into a painful affair, contact ExodusHelp@sap.com to discuss how SAP can provide a fresh perspective.




So, you may wonder, what would an Exodus engagement be like? Well, it may go something like this...




She had been waiting for more than an hour. Outside, it was already getting dark.

The chair had become uncomfortable by now, but the instructions had been clear. She had to wait.


Suddenly, the door opened.


A middle-aged woman stepped into the waiting room.

"She must be his secretary", it flashed through her mind.

The secretary looked around, but there was nobody else in the room.

She could only be coming for her.


"Miss Outer Join?"


When she heard her name, a shiver ran down her spine.

She opened her mouth to answer, but her breath faltered with excitement.

For a brief moment she closed her eyes.

This was what she had been waiting for, she had prepared herself for.

She took a breath and opened her eyes.


"People call me O.J."


The secretary looked at her slightly longer than would have been necessary.

Her tone was more determined.


"As you wish.

O.J., please come in.

Mr. Exodus will see you now."

When discussing Exodus, SAP's DBMS migration tool for migrating custom (non-SAP) applications, invariably this question is asked:


      "where can I download Exodus?"


The answer may be somewhat disappointing: you cannot download Exodus anywhere. Even SAP employees can't.

Not surprisingly, the next question always is: "so how do I get a copy?"


First, it should be noted that Exodus is not an SAP product. Instead, it is a tool. One of the implications is that Exodus cannot be purchased; instead SAP makes it available at no cost.

Now, for SAP-corporate reasons, Exodus is only available to two specific target groups, namely (a) SAP employees and (b) SAP Partner companies who have joined the Exodus Partner Program.

For all users of Exodus, each copy of the migration tool is personalized and registered to the name of the individual or organization. SAP will generate such a personalized copy for those entitled to use Exodus, and make it available to the user whenever requested or required. This is why Exodus cannot be downloaded from a central location.


SAP Partners (i.e. members of the SAP Partner Edge program) can join the Exodus Partner Program. This is a no-cost program, but it does require some paperwork. For example, a license agreement for Exodus needs to be signed. Once the formalities are completed, the Partner receives its copy of Exodus, which can then be used by all employees of the partner organization for commercial opportunities with their customers or prospects.

One thing that the Partner cannot do is charge the customer for Exodus specifically; however, the Partner can charge for its services, which can use Exodus (similarly, SAP itself does not charge for Exodus alone).


If you are an SAP Partner and you are interested in joining the Exodus partner program, contact your SAP Partner Manager (if you are not sure who that is,  contact ExodusHelp@sap.com).


As may be clear from the above, currently Exodus is not available to customers or to the wider community (unless the customer also happens to be an SAP Partner). Customers who are interested in performing migrations should therefore work with either SAP or with an Exodus-equipped SAP Partner in order to benefit from the Exodus tool. In many cases, such customers would probably do that anyway since, as we've seen in earlier blog posts, custom-app migrations can be challenging and may require specific expertise.


If a customer is unable to work with an SAP Partner, please contact ExodusHelp@sap.com. At SAP we will try to find a way to ensure that such customers can still get the benefits of the Exodus migration tool.

(for an overview of all Exodus-related blog posts, see here)

I would like to share just a simple tip to get a list with the biggest tables in SAP.


Here it is:


Go to tcode DB02OLD, click on "Detailed Analysis", and fill the fields as follows: "Object type: TABLE" and "Size / kbyte: 1000000".






