Rob Verschoor

Sixty Shades of SQL

Posted by Rob Verschoor Sep 4, 2014

Converting SQL code from one SQL dialect to another, like Oracle's PL/SQL to SAP ASE's Transact-SQL, probably sounds like a boring, nerdy whaddever to most people. But since you, dear reader, are reading this blog, you are likely not "most people" but a member of the tech crowd. Admittedly though, even in those circles SQL code conversion may not seem terribly exciting to many folks, since databases, and especially geeky topics like SQL syntax and semantics, are kinda specialized, developer-oriented stuff. So why bother? (keep reading for the answer)

 

Personally, I find SQL conversion to be one of the more exciting things that life has to offer (we can discuss the other ones over a drink at the bar). That I've been working with SQL since 1989 may have something to do with that. And yes, I should get out more.

 

Even for SQL-infected readers, converting a PL/SQL statement to its equivalent in Transact-SQL or Watcom-SQL may not sound like a terribly complex, or even interesting, problem. After all, all those SQLs are pretty similar, right? And there is even an ANSI SQL standard all SQL dialects pledge adherence to.

 

 

Right. What could possibly go wrong?

 

 

Back down here in reality, converting between SQL dialects actually appears to be surprisingly hard -- as anyone who has tried this will know.

 

So what about that supposedly universal ANSI SQL standard?

Indeed, most SQL dialects claim to be ANSI SQL-compliant. But when you look closer, those claims often boil down to something more narrow like "ANSI SQL-92 entry-level" compliance.

To understand what that means, consider that the ANSI SQL-92 standard dates back to -could you guess?- 1992. In those days the world of SQL was much simpler than it is today. For example, stored procedures were not even defined by ANSI until the SQL:1999 standard appeared (to the ANSI standard fanatics who disagree: yes, you're formally correct, but SQL/PSM wasn't published until years later and is generally considered to be part of SQL:1999; and it's not part of SQL-92 entry level anyway).

Despite all that ANSI compliance, in practice the SQL implementations of most database vendors are chock-full of vendor-specific "extensions" to the ANSI standard - which is a polite way of saying that aspects of a SQL feature are not ANSI SQL-compliant at all, and thus likely incompatible with other SQL dialects.
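For a flavor of what such vendor extensions look like in practice, here is a small, hypothetical query (the dept and emp tables and their columns are invented for illustration). The Oracle flavor uses the legacy (+) outer-join notation plus NVL() and SYSDATE; in Transact-SQL the same logic would typically be written with an ANSI outer join, isnull() and getdate():

    -- Oracle flavor: legacy (+) outer join, NVL(), SYSDATE
    SELECT d.dept_name, NVL(e.salary, 0) AS salary, SYSDATE AS report_date
    FROM   dept d, emp e
    WHERE  d.dept_id = e.dept_id(+);

    -- The same query in SAP ASE Transact-SQL: ANSI join, isnull(), getdate()
    SELECT d.dept_name, isnull(e.salary, 0) AS salary, getdate() AS report_date
    FROM   dept d LEFT OUTER JOIN emp e ON d.dept_id = e.dept_id

Both statements are perfectly valid in their own dialect, yet neither runs unchanged on the other DBMS - which is exactly the kind of difference a conversion has to deal with.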

 

Not fully complying with the ANSI SQL standard may sound like a Bad Thing. But let's keep things in perspective: standards will always lag the natural progression of a technology.

It starts when some vendors pioneer a concept, like SAP Sybase ASE did with stored procedures and triggers in the 1980's. Other vendors then also adopt those concepts but in the absence of a standard, everyone implements their own variant. Years later, a standards body like ANSI then tries to define a "standard" even though the existing products have already done their own thing. So it is pretty much unavoidable there will always be discrepancies between the standard and the actual products. That's life.

 

Bottom line: while there is indeed a lot of similarity across SQL dialects, the number of aspects that are not ANSI-compliant typically far exceeds the number that do conform to the ANSI SQL standard.

It's pretty safe to say that no SQL dialect is fully ANSI SQL-compliant or fully implements the ANSI SQL standard (the one exception perhaps being "Ocelot SQL", which claimed full implementation of a particular ANSI SQL standard at some point; but then, Ocelot didn't quite win the RDBMS race, so you shouldn't feel bad about not knowing them).

And BTW, which ANSI SQL standard are we talking about anyway? We haven't even discussed the more recent incarnations like ANSI SQL:2003, SQL:2008 or SQL:2011 (I know you've heard it before but indeed: the good thing about standards is that there are so many of them).

 

 

If you're still reading this article at this point, it must mean that you don't find this a boring topic after all (if you were attracted by the blog title and you're still hoping for some E.L.James-style raunchy prose, well, just keep reading).

 

 

Why should we bother discussing cross-dialect SQL conversion in the first place?

As I pointed out in earlier blog posts, SAP wants to enable customers to migrate their custom applications from non-SAP databases to a SAP DBMS. One of the biggest challenges in such migrations is converting the SQL code, especially the server-side SQL in stored procedures/functions etc.: such code can contain many complexities that may not always be easy to find. Consequently, converting server-side SQL code is an area where migration projects often overrun or fail.

 

As it happens, converting stored procedures is one of the main functions of SAP's Exodus DBMS migration tool. Not only will Exodus quickly analyze all server-side SQL code and report precisely which features are being used; it will also highlight those features which do not convert easily to the target SAP database of choice. This allows for running a quick complexity assessment before starting the migration project.

 

As for all those vendor-specific extensions to the ANSI SQL standard, Exodus takes these into account as much as possible. When the difference between the source and target SQL dialect is merely different syntax, then Exodus can often compensate by generating the syntax as required by the target SQL dialect.

It gets more difficult when there is a difference in semantics (i.e. the functionality of a SQL feature). In such cases, Exodus may also be able to compensate, but human intervention may be required. When Exodus spots a construct it cannot convert automatically, it alerts the user to the construct in question, and often suggests a possible solution direction.
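To make the syntax-versus-semantics distinction concrete, consider the empty-string literal (a hypothetical illustration, not Exodus output): Oracle treats a zero-length string as NULL, whereas SAP ASE treats it as a single space. Code that tests for empty strings therefore needs to be rethought rather than merely re-spelled:

    -- Oracle: a zero-length string literal is NULL, so this predicate is true:
    SELECT 'empty string is null' FROM dual WHERE '' IS NULL;

    -- SAP ASE: the empty-string literal evaluates to a single space:
    SELECT datalength('')   -- returns 1 (one blank), not 0

A purely mechanical rewrite of the syntax would silently change the behavior here; this is where human judgment comes in.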

In my next blog post we will look at some actual examples and how Exodus handles these.

 

Incidentally, database vendors usually don't see their non-ANSI-compliance as a problem. On the contrary: if it makes it hard for customers to migrate away to a competitor's database, then that is good for the vendor's future business prospects. Customers often see this differently however, and words like "lock-in", "stranglehold" and "help!" often appear in related conversations.

 

With the Exodus DBMS migration tool, customers no longer need to feel handcuffed to a particular database vendor just because migrating to a SAP database seems too hard to even consider. So if the relationship with your DBMS vendor has turned into a painful affair, contact ExodusHelp@sap.com to discuss how SAP can provide a fresh perspective.

 

 

 

So, you may wonder, what would an Exodus engagement be like? Well, it may go something like this...

 

 

 

She had been waiting for more than an hour. Outside, it was already getting dark.

The chair had become uncomfortable by now, but the instructions had been clear. She had to wait.

 

Suddenly, the door opened.

 

A middle-aged woman stepped into the waiting room.

"She must be his secretary", it flashed through her mind.

The secretary looked around, but there was nobody else in the room.

She could only be coming for her.

 

"Miss Outer Join?"

 

When she heard her name, a shiver ran down her spine.

She opened her mouth to answer, but her breath faltered with excitement.

For a brief moment she closed her eyes.

This was what she had been waiting for, she had prepared herself for.

She took a breath and opened her eyes.

 

"People call me O.J."

 

The secretary looked at her slightly longer than would have been necessary.

Her tone was more determined.

 

"As you wish.

O.J., please come in.

Mr. Exodus will see you now."




When discussing Exodus, SAP's DBMS migration tool for migrating custom (non-SAP) applications, invariably this question is asked:

 

      "where can I download Exodus?"

 

The answer may be somewhat disappointing: you cannot download Exodus anywhere. Even SAP employees can't.

Not surprisingly, the next question always is: "so how do I get a copy?"

 

First, it should be noted that Exodus is not an SAP product. Instead, it is a tool. One of the implications is that Exodus cannot be purchased; instead SAP makes it available at no cost.

Now, for SAP-corporate reasons, Exodus is only available to two specific target groups, namely (a) SAP employees and (b) SAP Partner companies who have joined the Exodus Partner Program.

For all users of Exodus, each copy of the migration tool is personalized and registered to the name of the individual or organization. SAP will generate such a personalized copy for those entitled to use Exodus, and make it available to the user whenever requested or required. This is why Exodus cannot be downloaded from a central location.

 

SAP Partners (i.e. members of the SAP Partner Edge program) can join the Exodus Partner Program. This is a no-cost program, but it does require some paperwork. For example, a license agreement for Exodus needs to be signed. Once the formalities are completed, the Partner receives its copy of Exodus, which can then be used by all employees of the partner organization for commercial opportunities with their customers or prospects.

One thing the Partner cannot do is charge the customer for Exodus specifically; however, the Partner can charge for its services, which may use Exodus (similarly, SAP itself does not charge for Exodus alone).

 

If you are an SAP Partner and you are interested in joining the Exodus partner program, contact your SAP Partner Manager (if you are not sure who that is,  contact ExodusHelp@sap.com).

 

As may be clear from the above, currently Exodus is not available to customers or to the wider community (unless the customer also happens to be an SAP Partner). Customers who are interested in performing migrations should therefore work with either SAP or with an Exodus-equipped SAP Partner in order to benefit from the Exodus tool. In many cases, such customers would probably do that anyway since, as we've seen in earlier blog posts, custom-app migrations can be challenging and may require specific expertise.

 

If a customer is unable to work with an SAP Partner, please contact ExodusHelp@sap.com. At SAP we will try to find a way to ensure that such customers can still get the benefits of the Exodus migration tool.


(for an overview of all Exodus-related blog posts, see here)

I would like to share just a simple tip to get a list of the biggest tables in SAP.

 

Here it is:

 

Go to tcode DB02OLD, click on "Detailed Analysis", and fill in the fields as follows: "Object type: TABLE" and "Size / kbyte: 1000000".

 

img1.jpg

Regards,

Richard

Recently I described the SAP Exodus DBMS migration tool as a new offering by SAP to help migrate custom (non-SAP) applications from a non-SAP to an SAP database.

 

In this blog post, let's take a closer look at one of the most important Exodus features, namely: the pre-migration complexity assessment.

 

In any migration project, the first question that needs to be answered is: how complex will this migration be?

More precisely, you'll need to understand which technical difficulties should be anticipated. For example, are there any DBMS features used in the application-to-be-migrated which do not have a direct equivalent in the target DBMS?

 

It is pretty clear that this information is needed, but obtaining it is easier said than done: how can you determine exactly which SQL constructs are used in the application's stored procedures? There may be hundreds of these, consisting of tens of thousands of lines of SQL code.

 

In fact, many discussions about possible migration opportunities are terminated early since there are simply too many unknowns, making it too risky to proceed. Indeed, when migration projects fail or overrun the planned schedule, this is often caused by unexpected complexities being discovered too late in the project. Had these complexities been identified earlier, a different migration strategy might have been chosen, or it might have been decided not to start the migration project at all.

 

Exodus comes to the rescue here, with its feature for performing a pre-migration complexity assessment.

This works as follows: you point Exodus at the DBMS server hosting the application to be migrated, and Exodus will discover what's in that DBMS and provide a detailed report on the SQL constructs found there. This is divided into two parts: one assessment covers the database schema, the other the server-side SQL code found in stored procedures, functions, etc.

 

In the output of the pre-migration complexity assessment, Exodus will highlight SQL aspects that cannot be fully migrated automatically to the selected target DBMS and therefore represent additional migration complexity. For example, if an Oracle-based application uses before-row triggers as well as after-statement triggers, and we're interested in migrating to SAP ASE, Exodus will highlight the fact that ASE does not support before-row triggers (but only after-statement triggers), meaning that migrating those before-row-triggers needs additional manual work (for example, the functionality in the before-row triggers will need to be worked into the ASE-supported after-statement triggers, or implemented elsewhere in the migrated application).

In contrast, when migrating to SAP SQL Anywhere, Exodus would not highlight any issues here since SQL Anywhere supports both of these trigger types. But when migrating to SAP IQ, which does not support triggers on IQ tables at all, Exodus will report both trigger types as cases where migration complexities should be expected.
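As a hypothetical illustration of the before-row trigger case (the orders table and its columns are invented for this example), an Oracle before-row trigger that stamps each new row might be reworked into an ASE after-statement trigger that updates the rows just inserted:

    -- Oracle: a before-row trigger stamping each new row
    CREATE OR REPLACE TRIGGER trg_stamp_order
    BEFORE INSERT ON orders
    FOR EACH ROW
    BEGIN
      :NEW.created_at := SYSDATE;
    END;
    /

    -- One possible SAP ASE rework: an after-statement trigger that updates
    -- the just-inserted rows, which ASE exposes through the 'inserted' table
    CREATE TRIGGER trg_stamp_order ON orders FOR INSERT AS
      UPDATE orders
      SET    created_at = getdate()
      FROM   orders, inserted
      WHERE  orders.order_id = inserted.order_id

Whether such a rework is acceptable (it fires once per statement, after the insert, rather than once per row, before it) is precisely the kind of judgment call that gets flagged for human review.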

 

Based on the results of the Exodus pre-migration complexity assessment, we're in a much better position to assess the areas of complexity to be expected, and consequently, the level of risk of a particular migration.

 

But Exodus goes one step further. It also tries to quantify the amount of effort required (in person-days) to migrate the application to the target DBMS. It does this by defining a "migration cost" (as a unit of time) for each particular SQL construct found, and multiplying this cost by the number of cases found for that SQL construct. This effort estimate is about migrating to functionally equivalent SQL code, but does not include things such as testing and performance tuning (more on that in later blog posts).

For example, let's assume our application contains 7 before-row triggers and 3 after-statement triggers.

If we're migrating to ASE, Exodus will estimate 1 hour of manual work for migrating every before-row trigger to ASE (so 7*1 = 7 hours); if we're migrating to IQ, it would estimate 2 hours per trigger, irrespective of the trigger type (so 10*2 = 20 hours). Note that Exodus uses a higher migration cost for migrating triggers to IQ than to ASE, to reflect that migrating trigger functionality to IQ is more difficult due to IQ not supporting any triggers on IQ tables.

Also, when migrating to SQL Anywhere, Exodus will not estimate additional time since SQL Anywhere supports both trigger types.

Lastly, Exodus budgets 15 minutes for every trigger, irrespective of its type or target DBMS. This is to reflect the fact that some amount of manual migration work (like functional verification, syntax changes or debugging) is likely to be needed anyway.
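As a back-of-envelope sketch of that arithmetic (the temporary tables and cost figures below simply restate the hypothetical example above; they are not Exodus's actual cost definitions), the estimate boils down to a sum of construct counts times per-construct costs, plus the flat 15-minute baseline:

    -- Construct counts found in the source application
    CREATE TABLE #trigger_counts (trigger_type varchar(20), cnt int)
    INSERT #trigger_counts VALUES ('before-row', 7)
    INSERT #trigger_counts VALUES ('after-statement', 3)

    -- Illustrative migration cost per trigger, in hours, per target DBMS
    CREATE TABLE #migration_cost (target_dbms varchar(15), trigger_type varchar(20), hours numeric(4,2))
    INSERT #migration_cost VALUES ('ASE', 'before-row', 1.0)
    INSERT #migration_cost VALUES ('ASE', 'after-statement', 0.0)
    INSERT #migration_cost VALUES ('IQ',  'before-row', 2.0)
    INSERT #migration_cost VALUES ('IQ',  'after-statement', 2.0)

    -- Estimate = sum(count * cost) + 0.25h (15 minutes) baseline per trigger
    SELECT c.target_dbms,
           sum(t.cnt * c.hours) + sum(t.cnt) * 0.25 AS estimated_hours
    FROM   #trigger_counts t, #migration_cost c
    WHERE  t.trigger_type = c.trigger_type
    GROUP  BY c.target_dbms

For the numbers above this yields roughly 9.5 hours for ASE (7 + 2.5) and 22.5 hours for IQ (20 + 2.5); for SQL Anywhere only the 2.5-hour baseline would remain.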

 

Now, the question has to be: How reliable are the migration effort estimates by Exodus?

It is important to point out these effort estimates should be seen as an order-of-magnitude indicator, and not as a precise statement of work that can be put directly into a contract.

For example, when Exodus estimates that the functional migration of a particular application will take 40 days, that should primarily be interpreted as meaning: this won't be possible to complete in two weeks -- but it's also not likely to take half a year.

Obviously, if sufficient time can be spent on analyzing the Exodus estimates and the application's SQL code in greater detail, a more realistic effort estimate may be reached.

 

In practice however, a large factor in a migration project will be the SQL skills and migration experience of the team performing the migration. An automated tool like Exodus can handle a large part of the work, but ultimately every migration remains a manual effort where humans need to put it all together and address those parts that Exodus cannot handle automatically. The quality and experience of that team may have a big impact on the actual amount of effort that needs to be spent. To reflect this, users of Exodus can adjust the migration cost definitions as they see fit.

 

(for an overview of all Exodus-related blog posts, see here)

Here's today's quiz question: Have you heard of a company called Delphix?

If your first association is about ancient Greek temple priestesses, then this blog post will be useful for you, so keep reading.

(in my case, my first thought was actually about Borland's Pascal programming suite - and I guess that just says something about me. But I digress).

 

Delphix, to keep you guessing no longer, is a company from California that makes nifty software for database storage virtualization. The reason for mentioning them here is that they have just released support for SAP ASE (y'know... the database formerly known as Sybase ASE first and SAP Sybase ASE later. But I digress).

 

The main attraction of Delphix is that it helps reduce database storage costs.

Think of the scenario where many copies of a particular database are hanging around in your application development department. For example, your development teams all have their own copies of a particular production database. And additional copies of that database are also present in the various test environments. At the end of the day, there could easily be tens of database copies around, which, ultimately, are largely identical since they are all based on the same original.

Consequently, there is a lot of duplicate storage of identical disk blocks going on. Simply put, the Delphix product optimizes storage space by not storing identical blocks twice. How does this work?

 

Basically, Delphix keeps a single copy of a particular database in its own storage server. Copies of that storage (which will look like ASE database device files from an ASE server perspective ) can be provisioned to multiple 'users' (e.g. ASE servers that need to have a copy of that database). If the original database is 1 TB in size, and 35 copies are in use around the various departments, Delphix will -in essence- only store that 1 TB, despite the fact that the users see 35 copies of that 1 TB database.

 

Now, the important thing here is that these ASE databases are read/write: there is no functionality restriction. When a modification is made in one of those databases, Delphix will ingest the modification into its centralized/virtualized copy, in the most storage-efficient manner. All of this is fully functionally transparent to the end users who experience nothing special: they have their own copy of the database and they can do with it whatever they like.

Meanwhile, you're using significantly less storage space than if all 35 copies existed on their own.

 

Some additional points worth noting:

  • An ASE server doesn't see any difference between a 'regular' ASE database that is created in the classic way on local storage, and a Delphix-based ASE database where the ASE device files are actually served up by the Delphix engine. To the ASE server, it's just accessing database device files, regardless of where they originate from. This means there can be an arbitrary number of Delphix-based databases in an ASE server, and these co-exist seamlessly with 'normal' ASE databases.
  • Given how Delphix stores its data, the overhead for provisioning an additional copy of a database is very low. So there is little reason for each developer NOT to have their own copy of the development database.
  • Delphix can keep its virtualized ASE database in sync with the original ASE production database (by detecting database dumps being made, and then  gobbling them up, thus updating the Delphix copy). This makes it easy to refresh the copies that were provisioned out to developers or testers.

 

Delphix already supported certain other database brands (whose names shall remain unmentioned but which, I just realized, are  located in similarly named cities that can be described as the following regular expression: /Red.o.d/. But I digress again).

Anyway, over the past year I had the pleasure of providing technical assistance to Delphix during their effort to develop support for SAP ASE. This was released in July 2014 in Delphix version 4.1.

I quite like the concept of how Delphix virtualizes an actual database. I have to say I am impressed by the way Delphix have designed and engineered their product -- there is some above-average complex stuff going on in there (had you asked me earlier if this approach was a good idea, I would probably have dismissed it as too complex and too little gain. I guess I would have been wrong). Yet, Delphix looks simple and easy to use from the outside, and I guess that is proof of a well-designed product.

 

There is a lot more to say about Delphix -- more, in fact, than I will claim to understand. Fortunately, the Delphix web site has all the information you'd want: http://docs.delphix.com/display/DOCS41/Delphix+Engine+4.1+Documentation.

Happy reading.

This is the first post in a series of blogs on the topic of migrating custom applications to SAP databases.

 

Update: these additional blog posts were published in the meantime:

 

First, some history.

 

When Sybase was acquired by SAP in 2010, the general perception about the long-term viability of the Sybase database products changed quite dramatically.

Previously, discussions with customers were often centered around justifying why investing in Sybase technology was not a risky proposition with a doubtful future - often inspired by active spreading of FUD by Sybase competitors.

But ever since Sybase became part of SAP, those perceptions have pretty much disappeared as it was now clear that Sybase's future was not in doubt.

At the same time, a new element started to appear in those customer conversations. Namely, we started receiving inquiries about whether SAP could assist in migrating some applications from a non-SAP database to SAP Sybase ASE.

Such requests had upsides as well as downsides. Upsides, because it underlined how ASE was increasingly being seen as a viable alternative to certain other well-known DBMS brands (BTW, at Sybase, we knew that all along). At the same time however, SAP did not actually provide migration tools to support such migrations. Unfortunately, what this meant was that the best help SAP/Sybase could offer to such customers was, basically, to wish them good luck.

 

Time passed...

 

But we did not sit idle...

 

Since we were unable to find existing migration tools that met SAP's requirements, we decided to build our own.

Therefore, let me now please introduce (drumroll):

 

     Exodus, the SAP database migration tool for migrating custom applications to SAP databases.

 

This is great news! Today, with Exodus, SAP is in a position to provide substantially better support to customers interested in database migrations.

 

In a nutshell, Exodus supports migration of customer applications between the following databases:

  • Supported source databases: Oracle (v.9 and later) and Microsoft SQL Server (v.2000 and later)
  • Supported target databases: SAP ASE, SAP IQ and SAP SQL Anywhere

 

There is much to say about this topic (and indeed I will, keep watching this space). But first, here are some key points I need to get straight right away. Experience has shown that, otherwise, confusion may quickly take hold.

 

Key point #1: Exodus is about migrating 'custom applications'

With Exodus, SAP aims at migration of custom applications, which means: non-SAP applications. For SAP apps such as Business Suite, well-established migration practices are already available and Exodus would not contribute much.

  • A 'custom application' is typically a one-off application operated by a particular customer. Such a custom app was usually either built by a customer itself, or built specifically for the customer by a third party.
  • Custom applications are often transaction-oriented, meaning their basic function is to retrieve, insert or modify individual data rows. To contrast, consider analytics-oriented applications which are typically read-only (apart from bulk-loading the data), and access large numbers of data rows in a single operation.
  • Exodus aims primarily at migration of custom OLTP applications which are based on server-side SQL, commonly referred to as "stored procedures".
  • Applications which the customer purchased or licensed from a software vendor (like SAP, but I am sure you can think of others) are not 'custom applications' in the sense as meant above; if a customer wants to migrate such an application to an SAP database, the software vendor itself is typically driving this. Exodus does not apply in such cases (although we are certainly interested to work directly with those software vendors to help them port their application to a SAP/Sybase database).

 

Key point #2: Exodus is free - though not freely available

The SAP Exodus migration tool is not charged for. At the same time, you will search in vain for a location where Exodus can be downloaded, since it cannot.

The Exodus migration tool is available to SAP employees as well as to qualified SAP Partners participating in the Exodus Partner Program. The partner program offers SAP partners access to Exodus at no cost, but does require some administrative steps (more about that in a later blog).

For customers interested in migration, this means they should engage with SAP directly, or with an SAP Partner that can use Exodus.

Please be assured that there is no evil scheme behind the decision not to make Exodus freely available to customers, but some rather more practical reasons. Regardless, we will make an effort to ensure that customers get the benefits of Exodus.

 

Key point #3: Exodus is not an SAP product

Exodus is a tool, not a product. That may sound like splitting hairs, but it is actually an important distinction. For example, did I mention Exodus is not charged for? Also, support for projects using Exodus is not provided through the regular SAP product support channels, but directly by the Migration Solutions team in SAP's Database & Technology group.

Another aspect is that migration tools are typically unable to provide 100% automatic and functionally correct migration results. Exactly how well Exodus performs in practice really depends on the application being migrated -- and in the world of custom applications, no two applications are the same.

 

Key point #4: Migrations can be tricky

I'd be lying if I said that with Exodus, you can now migrate every custom application with one click of the mouse on Friday afternoon, and then switch production to the migrated system by Monday morning.

Folks who got their hands dirty in database migrations know that these can be challenging on multiple levels. While the Exodus tool will provide support in crucial areas such as schema migration and automatic conversion between SQL dialects, there will probably be some bits and pieces that need to be migrated manually. The good news is that Exodus helps to identify where those bits and pieces are, and in many cases also suggests possible solutions.

 

 

Let me stop here for now.

 

Bottom line:

With Exodus, SAP is serious about helping customers migrate their custom applications from non-SAP databases to SAP.

 

More information coming soon -- watch this space!

(if you have questions that cannot wait until the next blog post, contact your local SAP representative or ExodusHelp@sap.com)

 

 

Rob Verschoor

Global DBMS Migration Lead

Migration Solutions, SAP Database & Technology


Date: January 29, 2014

Time: 1:00 p.m. EST/10:00 a.m. PST

 

Featured Speakers:

Paul Medaille

Director, Solutions Management, Enterprise Information Management, SAP

 

Ina Felsheim

Director, Solutions Management, Enterprise Information Management, SAP

 

Many companies have invested heavily in mission-critical business software initiatives. Yet too often these worthy but expensive initiatives fail to deliver anticipated benefits because of poor data. What’s needed is an integrated software solution that improves collaboration between data analysts and data stewards with the tools for understanding and analyzing the trustworthiness of enterprise information.

 

SAP Information Steward software provides continuous insight into the quality of your data, giving you the power to improve the effectiveness of your operational, analytical, and governance initiatives.

 

Join us on Wednesday, January 29, 2014, for an insightful Webinar, Gain Data Quality Insights with SAP Information Steward, to learn how this SAP solution can help your organization:

  • Analyze, monitor, and report on the quality of your data
  • Adopt one solution for data stewardship
  • Create a collaborative environment for your IT and business users
  • Turn data quality into a competitive advantage

 

Don't miss this webinar event!

 

REGISTER TODAY



TECHCAST  — SAP Sybase Replication Server: Future and Roadmap

December 11, 2013  1pm EST / 10am PST


Register now!

 

Join our next ISUG-TECHcast as guest Speaker Chris Brown joins us for an informative discussion about the future and roadmap of SAP Sybase Replication Server.

 

We’ll discuss how the latest version of SAP Sybase Replication Server replicates transactional data for non-SAP applications in real time directly into SAP HANA – without slowing or disrupting the systems that are running the business – to create a real-time analytics solution. And you’ll learn about new high availability and disaster recovery functionality for customers running SAP Business Suite on SAP Sybase ASE to ensure continuous availability during planned and unplanned downtime.

 

Learn about these new capabilities and functionalities, including:

  • Storage optimization
  • Operational scalability capabilities
  • Performance enhancements for handling very large data volumes
  • Latency monitoring and alerting capabilities as part of an essential, value-added solution
Emma Capron

Going to extremes

Posted by Emma Capron Nov 4, 2013

In today's ultracompetitive business environments, it is critical to have the ability to collect, store, manage, protect, query, and generate reports from larger and larger volumes of complex data. From retail stores to hospital emergency rooms, immediate access to accurate, relevant data is a basic business requirement.

 

To create and sustain a competitive advantage in the face of exponential data growth and increasing customer expectations for superior service, your data management system must deliver extremely high performance, unconstrained scalability, rich functionality, bulletproof security, and cost-effectiveness.

 

For more than 30,000 customers around the world, the solution is SAP Sybase Adaptive Server Enterprise (ASE).1

 

For example, Globe Telecom in the Philippines turned their business upside down with the help of SAP Sybase ASE. They replaced industry-standard calling cards with air-loading, enabling them to build a data management infrastructure that lets them market aggressively and closely manage costs. Rodell Garcia, CIO, explains why: “In terms of ROI, our SAP Sybase ASE system is a critical enabler of revenue generation; the payback period is certainly very short.”2

 

Meanwhile, electronic medical records specialist, MIQS, used SAP Sybase ASE to present doctors with the critical combination of information they need when treating kidney dialysis patients. The result was a 40% reduction in mortality rates.2

 

Finally, India is famous for its huge railway network, but also for its congested stations, as passengers queue to buy tickets for trains departing the same day. Working with SAP technology, Indian Railways and the Center for Railway Information Systems built a ticketing system so passengers without reservations can buy tickets at any station, at any time, from dedicated terminals and automatic machines. The result is no more long queues – and lots of valuable data for the company to analyze and act on.2

 

You can learn more at www.sap.com/realtime_data/transactional_data. Or join the conversation #redefinedata

 

1Video: Top 5 Reasons to Choose SAP Sybase Adaptive Server Enterprise

2Customer ebook: Going to Extremes

In its first Magic Quadrant for Operational Database Management Systems[1], Gartner has positioned SAP operational databases – SAP HANA, SAP Sybase ASE, and SAP Sybase SQL Anywhere – in the Leaders quadrant!

According to Gartner, the Leaders quadrant contains the vendors that demonstrate the greatest degree of support for a broad range of operational applications based on a broad range of data types and large numbers of concurrent users. These vendors have the greatest longevity in the market and have built a wide partner ecosystem for their products. Consequently, the Leaders generally have the lowest risk for their customers in the areas of performance, scalability, reliability and support. And as market demands change, the Leaders generally demonstrate strong vision in support of not only the needs of the current market but also of new and emerging trends. Indeed, in completeness of vision, SAP was rated higher than Microsoft and IBM for its ability to understand the functional requirements needed to support transactional environments, develop a product strategy that meets the market’s requirements, comprehend overall market trends and influence or lead the market when necessary.

Gartner cited the following strengths for the SAP portfolio of products:

  • Vision leadership — Moving into DBMS technology, SAP has introduced SAP HANA as an in-memory platform for hybrid transaction/analytical processing (HTAP) and acquired Sybase to add to the DBMS product line.
  • Strong DBMS offerings — In addition to SAP HANA, SAP Sybase ASE continues to support global-scale applications and was first to introduce an in-memory DBMS (IMDBMS) version.
  • Performance — References cited performance (scalability and reliability) as a major strength (one of the highest scores reported), mostly for SAP Sybase ASE.

 

To learn about the evaluation criteria and how other vendors fared, read the full Gartner MQ here: http://www.gartner.com/technology/reprints.do?id=1-1MNA5V2&ct=131105&st=sb


Figure 1: Magic Quadrant for Operational Database Management Systems*

 

* This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available at http://www.gartner.com/technology/reprints.do?id=1-1MNA5V2&ct=131105&st=sb Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

 


[1] Gartner, Magic Quadrant for Operational Database Management Systems by Donald Feinberg, Merv Adrian and Nick Heudecker, October 21, 2013.

During TechEd Las Vegas last week, two SAP partners with expertise in SAP Sybase ASE and SAP HANA discussed the value of running SAP databases. Oxya has completed several Oracle to SAP Sybase ASE on SAP Business Suite migrations over the last 18 months and described the migration process as straightforward, even easy. Migrations were typically accomplished over a 2-3 month project and Oxya’s customers have benefited from much lower licensing, maintenance, and administrative costs, sometimes achieving ROI much earlier than expected.

 

Beyond Technologies, a partner and SAP Mentor, described how business transformations can be achieved with SAP Business Suite on SAP HANA. It expects that many innovations in the functionality of SAP Business Suite will result by leveraging SAP HANA technology.

 

Watch the full discussion here: http://events.sap.com/teched/en/session/8465

Provide Continuous, Real-Time Data to SAP Data Services for Smarter, More Accurate Business Intelligence

Join us for a Webcast on November 7, 2013, to learn how SAP Sybase Replication Server, change data capture edition, now supports SAP Data Services for real-time data updates to your data warehouse.

 

To seize opportunities and gain a competitive edge in today’s 24/7 world, you need to know what’s going on in your business in the moment—not two hours ago.

 

With SAP Sybase Replication Server, change data capture edition, now providing continuous, up-to-the-second data combined with complex data extractions, transformation, and data quality capabilities from SAP Data Services, you can create a data warehouse that supports more accurate and actionable business intelligence than ever before.


Learn how real-time change data capture from SAP Sybase Replication Server, supports smarter, timelier decision making during our Webcast on Tuesday, November 7, 2013 at 2:00 p.m. EDT.

 

We’ll show you how SAP Sybase Replication Server, change data capture edition, delivers a 360-degree view of your business in real time:

  • Any data – Get real-time data from all your databases (Oracle, SAP Sybase Adaptive Server Enterprise, IBM DB2, and Microsoft SQL Server), including unstructured data, through SAP Data Services into your choice of data warehouse platform.
  • Anywhere – SAP Sybase Replication Server delivers data from anywhere – regardless of distance – over existing networks.
  • Anytime – Data changes are continuously captured from all databases and loaded in real time to SAP Data Services, significantly lowering the total cost of ownership (TCO) by reducing manual IT resource requirements.


Take advantage of this opportunity to learn how your organization can build a better data warehouse solution to provide better business intelligence than ever before. We hope to see you online!

 

REGISTER HERE


Is your SAP application environment built on an inflexible legacy architecture that is complex to manage and expensive to maintain? Would you like to free up IT resources to focus on driving new business objectives instead?

 

Join us for this special webinar to learn how deploying a database optimized for SAP applications on a high-performance converged infrastructure platform can simplify management and maintenance, increase agility and resource utilization, and scale seamlessly as your needs grow. And you can test your new solution before you migrate to make sure performance and functionality requirements are met, and then migrate without disrupting business operations.

 

Transform your SAP application environment with a modern approach - we’ll show you how.

 

REGISTER HERE

Date: Wednesday, October 30, 2013 at 2pm EDT

 

To seize opportunities and gain a competitive edge in today’s 24/7 world, you need to know what’s going on in your business in the moment – not two hours ago.


With SAP Sybase Replication Server now providing up-to-the-second data from any transactional system to SAP HANA for enterprise-wide analysis, you gain the data management power necessary to glean insights when they matter most.


Learn how SAP Sybase Replication Server provides up-to-the-second, continuous data for smarter, timelier decision making during our Webcast on Wednesday, October 16, 2013.


We’ll show you how SAP Sybase Replication Server delivers:

  • Any data – Get real-time replication from all your databases into SAP HANA, including Oracle, SAP Sybase Adaptive Server Enterprise, IBM DB2, Microsoft SQL Server, and unstructured data.
  • Anywhere - Real-time transactional replication delivers data from anywhere – regardless of distance – over existing networks.
  • Anytime – Data is continuously loaded in real time.

 

Take advantage of this opportunity to learn how your organization can build a better real-time analytics solution. Register Today for our upcoming Webcast for an inside look.

 

REGISTER HERE

Winnie Li

EDW Webinar Series

Posted by Winnie Li Sep 10, 2013

Big data represents a significant paradigm shift in today’s enterprise technology. Big data has fundamentally changed the nature of data management, introducing new challenges with the volume, velocity and variety of corporate data. This change is driving organizations to adjust their enterprise data warehousing technologies and strategies in order to turn the massive amounts of data into valuable and actionable information. Big data enables companies to gain new insight into business opportunities, transforming enterprises for the new real-time world.

Join us in this webinar series as we discuss how to implement data management strategies for a Big Data-enabled Enterprise Data Warehouse.

  • Session One: Enabling a Big Data-enabled EDW with SAP’s Data Management
    October 15th 12:00PM -1:00 PM (ET)
    Presented by: Tom Traubitz, Director Product Marketing

SAP understands the importance of Big Data, but we also understand that you can’t take advantage of it without a data-management platform that helps find relevancies within your data and turn them into business processes - essentially turning your big data into a key enterprise asset. See how, by tying together your organization’s data assets – from operational data to external feeds and Big Data – SAP dramatically simplifies data management landscapes for both current and next-generation business applications, delivering information at unprecedented speeds and empowering a Big Data-enabled Enterprise Data Warehouse.

Register HERE

  • Session Two: Integrating Hadoop with SAP HANA and SAP Sybase IQ
    November 12th 12:00PM -1:00 PM (ET)
    Presented by: Courtney Claussen, Director Product Management

SAP recognizes that not all data in your enterprise will exist in your SAP data warehouse, and that there are also different processing environments for handling big data that the SAP data-management platform needs to interact with. In this session learn how to utilize Big Data for interesting insights into your business, using the SAP data platform and Hadoop.  We will show how these solutions have been engineered to work together through data federation and data integration methods for a Big Data-enabled enterprise data warehouse.

Register HERE

  • Session Three: Integrating Data Streams into your Big Data Architecture
    December 12th 12:00PM -1:00 PM (ET)
    Presented by Jeff Wooton, Product Management

Increasingly, data sources are able to deliver data in real-time. How do you collect data that is arriving continuously at very high speeds, and in a way that it is most useful? Even more importantly, how can you extract insight from that data and respond as things happen, rather than only being able to respond much later, once you’ve had a chance to analyze the historical data?  See how event stream processing adds critical capabilities to your big data architecture.

Register HERE
