When I was young, my dad lugged home a Radio Shack TRS-80 with dual 8” floppy drives.  That machine captured my imagination (and ultimately defined my career), but it had very little to work with in terms of processor power and memory.  In high school I tinkered with newer and faster computers, and finally got my very own in college – a Zenith 80286-based PC with Windows 1.0, a black-and-white screen, 1MB of RAM and a 20MB hard disk.

It was 1987.

As a computer science student, I remember clearly the gymnastics required to program machines with so few resources.  It’s fair to say that programmers were obsessed with doing more with less – and that obsession led to some of the most amazing feats of creativity and genius you can imagine.  But it also created complexity, brought on by all the tricks used to skirt the constraints, and that complexity sometimes led to instability.  Remember the “Blue Screen of Death”?  It was a direct result of this phenomenon.

By contrast, developers today face an embarrassment of riches.  CPU power, RAM and storage continue to plummet in cost.  More and more capability is packed into smaller and smaller form factors.  The available development tools and the capabilities of enterprise applications and middleware are phenomenal.  Gone is the necessity of dealing firsthand with severe resource limitations.

This is an opportunity for every SAP customer.

As we see at clients every day, complexity is the dark force that inevitably ratchets up costs in SAP landscapes, leading to bloated cost structures and operational instability.  World-class architectures rely on the innate capabilities of the platforms, use the appropriate tools suited for the task, and minimize expensive (and potentially destabilizing) customization.  A truly great SAP-based architecture should banish complexity to the extent possible in order to minimize TCO and lay down a reliable backbone for an organization.

This is why every SAP customer should be paying close attention to SAP HANA.  In a very real way, HANA has the potential to dramatically simplify enterprise architecture, reinforce stability and generate big savings as a result.

In the early ‘90s I helped manage large-scale SAP systems as a Basis consultant, Unix administrator and Oracle DBA.  We, of course, were still obsessed with resource scarcity.   We dealt with enterprise-class systems and enterprise-class volumes, but we were still dogged by persistent constraints. So we developed workarounds – and in the process created enterprise-class complexity to boot. As just one example (among many), an inordinate amount of our time, effort and angst went into managing disk space at the database and operating system level, because disk was expensive. We were obsessed with conserving every last byte through whatever means necessary.  Today, despite a massive drop in prices, many of the same tools and processes are in place to manage a cost that is approaching insignificance.

This industry continues to change rapidly and falling resource prices have enabled new technology and new approaches.  These dynamics dictate that every SAP site should re-examine architectural assumptions on a regular basis, because the justification for a certain roadmap or configuration may no longer be valid.  This exercise can be lucrative.  Looking ahead, every SAP shop should identify and evaluate where complexity was introduced in order to deal with constraints that may no longer exist or can be removed economically.

In our view, there is one component of SAP-based architectures that is fast becoming a prime target: the relational database.

In the “old” architectures, a trade-off is made between memory, disk, and offline storage.  Somewhat arbitrary lines are drawn between the three.  On one end, data expected to be needed soon is cached in memory, ready to be accessed instantly.  In the middle is disk, significantly slower to access, but durable and less expensive than memory – a good choice for data that needs to be accessed less often but is still reasonably expected to be needed.  And of course there is offline storage, the cheapest option, where old data is archived and disaster-recovery copies are kept in case something bad happens to the data in the other buckets.
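
To make the trade-off concrete, here is a minimal Python sketch of the kind of tiering rule administrators lived by.  The latency and cost figures are invented purely for illustration – the point is simply that data placement was an explicit policy someone had to design, tune and maintain.

```python
# Illustrative sketch only: made-up latency/cost figures for the three classic
# storage tiers described above. None of these numbers come from SAP or any
# vendor -- they just show the shape of the trade-off architects had to manage.

TIERS = {
    #            approx. access latency   approx. cost per GB (hypothetical)
    "memory":  {"latency_us": 0.1,        "cost_per_gb": 5.00},
    "disk":    {"latency_us": 5_000,      "cost_per_gb": 0.05},
    "archive": {"latency_us": 60_000_000, "cost_per_gb": 0.01},
}

def choose_tier(days_since_last_access: int) -> str:
    """Toy placement rule: hot data in memory, warm data on disk, cold data archived."""
    if days_since_last_access <= 1:
        return "memory"
    if days_since_last_access <= 365:
        return "disk"
    return "archive"

for age in (0, 30, 2000):
    tier = choose_tier(age)
    print(f"last touched {age:>4} days ago -> {tier:7} "
          f"(~{TIERS[tier]['latency_us']} us, ${TIERS[tier]['cost_per_gb']}/GB)")
```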

Today, the squeeze is on.  As memory becomes more plentiful and cheaper, it’s feasible to simply load everything you’ve got into an in-memory database.  Bam!  There goes the complexity of dealing with caching and disks, conserving disk space, and making sure what is changed in memory is accurately recorded on disk.  These complexities, among many others, are uniquely the domain of the relational database.  So, does an SAP system really need one?
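
To get a feel for what actually goes away, compare these two toy key-value stores.  This is a sketch under simplifying assumptions, not HANA’s implementation: the first has to juggle a cache in front of “disk” (read-through on misses, write-back on eviction), while the second, with everything in memory, needs none of that machinery.

```python
# Hypothetical sketch contrasting the bookkeeping a disk-backed store needs
# (caching, eviction, write-back) with a purely in-memory store. Not HANA's
# design -- just an illustration of the complexity that disappears when
# everything fits in RAM.

from collections import OrderedDict

class DiskBackedStore:
    """Toy read-through / write-back cache in front of a 'disk' dict."""
    def __init__(self, cache_slots: int = 2):
        self.disk = {}                      # stands in for slow, durable storage
        self.cache = OrderedDict()          # hot entries kept in memory
        self.cache_slots = cache_slots

    def get(self, key):
        if key in self.cache:               # cache hit: fast path
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.disk[key]              # cache miss: slow path to disk
        self._admit(key, value)
        return value

    def put(self, key, value):
        self._admit(key, value)             # changed data stays in cache ...

    def _admit(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.cache_slots:
            old_key, old_value = self.cache.popitem(last=False)
            self.disk[old_key] = old_value  # ... and must be written back on eviction

class InMemoryStore:
    """With enough RAM, the same interface collapses to a plain dictionary."""
    def __init__(self):
        self.data = {}
    def get(self, key):
        return self.data[key]
    def put(self, key, value):
        self.data[key] = value
```

With enough RAM, the second class is all you need – and every line of cache-and-eviction logic that disappears is one less thing that can break at 2 a.m.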

Increasingly, the answer to this question is “probably not”.  The relational database was built for a certain period in time, when constraints dominated and a general-purpose data management platform was really useful.  It served its purpose well and continues to be a reliable and important building block for many applications.  But it has limitations that are a reflection of the era in which it was initially designed.  It’s time to rethink what is needed for the future, where transactions (the domain of the relational database) certainly play a part, but where analysis of the seriously massive amounts of structured and unstructured data inundating organizations is crying out for a new approach.  Oh, and by the way, did we mention that this analysis should be in real time, and have predictive and even prescriptive capabilities?  Don’t even think about it.  Relational databases underlying SAP systems can’t realistically handle this; they are traditionally very expensive to scale, and their utility and cost/benefit should now be officially up for debate.
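
As a rough illustration of why in-memory, column-oriented layouts suit this kind of real-time analysis, here is a toy aggregation over invented data.  It is not HANA’s engine – just the underlying idea that an analytical query scans only the columns it needs, all of them already resident in RAM, with no indexes or disk I/O to plan around.

```python
# Minimal sketch of a column-oriented, in-memory aggregation. The table and
# figures are invented for illustration; the idea is that the query touches
# only the columns it needs, already sitting in memory.

sales = {                                   # one list per column, all in memory
    "region": ["EMEA", "EMEA", "APJ", "AMER", "APJ"],
    "amount": [1200.0, 530.0, 980.0, 2100.0, 310.0],
    "year":   [2012, 2013, 2013, 2013, 2012],
}

def revenue_by_region(table, year):
    """Scan just the 'region', 'amount' and 'year' columns -- nothing else."""
    totals = {}
    for region, amount, yr in zip(table["region"], table["amount"], table["year"]):
        if yr == year:
            totals[region] = totals.get(region, 0.0) + amount
    return totals

print(revenue_by_region(sales, 2013))       # {'EMEA': 530.0, 'APJ': 980.0, 'AMER': 2100.0}
```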

We at Optimal strongly believe it is economical and rational to now employ an SAP-based architecture that uses only an in-memory database and a logical equivalent to offline/“near-line” storage.  Sure, a relational database can still chip in within this architecture, but its mission-criticality will take a massive hit.  And if it’s much less important, it should be much less expensive.

This is the inexorable trend happening today – and you stand to directly benefit because, chances are, you’re wasting money on an architecture that is going the way of the Dodo bird. Every SAP customer should be looking at how HANA can simplify their lives.  We acknowledge that the relational database is here to stay – the point is that its role is due for an update.  HANA has the potential to supplant it or, at the very least, relegate it to a supporting role.

For its part, SAP offers a relational database option, Sybase Adaptive Server Enterprise (ASE), in addition to the columnar-store Sybase IQ and HANA platforms.  Available since 1987, ASE is mature, enterprise-class and supremely capable.  SAP has publicly stated that it intends to be the #2 database provider by 2015; it is currently #4 and aggressively pursuing the database agenda.  It behooves every SAP shop to have a conversation with their SAP rep about the options here.  You might be surprised by what you learn.

++++++++++

For those of you who are so inclined, there is a fascinating article called “The End of an Architectural Era”, written by Michael Stonebraker, Samuel Madden, Daniel Abadi, and Stavros Harizopoulos (all from MIT) as well as Nabil Hachem and Pat Helland.  This article represents some of the intellectual inspiration for HANA. Of course, Vishal Sikka, Hasso Plattner, the Hasso Plattner Institute and Stanford should also get plenty of credit.   Check it out and let us know what you think.
