
In my last blog entry, I started to dive into SAP Lumira Server and how it uses SAP HANA. To refresh your memory, recall that Lumira Server uses SAP HANA as an execution engine and application platform rather than as a straight-up in-memory database. That provides significant efficiencies and enables new capabilities and workflows that would otherwise take a very long time to execute, or at least longer than the typical BI user is willing to wait. I highly recommend reading that blog entry before continuing, because in this installment I’m going to go down to the next level of detail:


How SAP Lumira Server Runs on SAP HANA

Anatomy of an SAP Lumira Storyboard

Think about how your existing BI system works today: reports are run, data is applied to the report definition, and an instance of the output is saved in the repository for recall by the user. All BI architectures based on traditional database designs require this replication because generating reports on the fly is simply not realistic.

SAP Lumira Storyboards are fundamentally different from traditional BI reports in that they do not inherently have “saved data” like a Crystal Report or Web Intelligence document. This decoupling allows a many-to-many relationship between storyboard definitions and the datasets underlying them: a storyboard can be based on multiple datasets, but perhaps more importantly, multiple storyboards by multiple users can be based on a single dataset. Fundamentally, it looks like this:
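As a rough stand-in for the picture, here is a minimal sketch of that many-to-many model (the class and field names are hypothetical, purely for illustration):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the storyboard/dataset relationship described
# above. None of these names come from Lumira itself.

@dataclass
class Dataset:
    name: str

@dataclass
class Storyboard:
    title: str
    owner: str
    datasets: list[Dataset] = field(default_factory=list)  # one storyboard, many datasets

# One shared dataset...
sales = Dataset("Q3 Sales")

# ...underlying storyboards owned by different users:
board_a = Storyboard("Regional Performance", "alice", [sales])
board_b = Storyboard("Margin Analysis", "bob", [sales, Dataset("Cost Centers")])
```

The key point is the direction of the references: storyboards point at datasets, and no dataset is ever copied into a storyboard.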

BTW - the “technical purists” (of which I claim to be one) will point out that this is exactly what a Lumira document (or .LUMS file) is. In reality, though, the .LUMS file is just a serialization mechanism: when a LUMS file is opened in Lumira Desktop, it is in fact one or more storyboards based on one or more datasets loaded into Lumira’s embedded data engine.

So will we see the end of “LUMS” files? Probably not. A single artifact that represents a logical grouping of storyboard components is valuable for humans to comprehend; however, that is not to say that from a technical perspective it must be implemented that way.
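To make the “serialization mechanism” point concrete, you can picture a .LUMS file as nothing more than a serialized bundle of storyboard definitions plus the datasets they reference. This is a purely hypothetical sketch, not the actual file format:

```python
import json

# Hypothetical sketch only: the real .LUMS format differs, but the idea
# is the same - storyboards and their datasets travel together and are
# deserialized into the embedded data engine on open.
lums_bundle = {
    "storyboards": [
        {"title": "Regional Performance", "datasets": ["Q3 Sales"]},
    ],
    "datasets": [
        {"name": "Q3 Sales", "rows": [["EMEA", 120], ["APAC", 95]]},
    ],
}

serialized = json.dumps(lums_bundle)   # what gets written to disk
restored = json.loads(serialized)      # what Lumira Desktop works with
```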

Showtime: What happens when a user accesses a Lumira Storyboard


The trick with decoupling the storyboard (or “report definition”) from its underlying data so that views are never stale is that the two must be put together on the fly. This means that every time a user opens a storyboard, the system must run all the calculations against the dataset at runtime to create the metadata required to render the visualizations. As you learned from the previous blog article, Lumira only calculates enough to render the visualizations that need to appear on the screen immediately. That’s pretty efficient, isn’t it?

The same flow applies to interaction: when the user performs an action that changes the state of the current view (such as moving to another board in a story or selecting a filter), the process is repeated – all the calculations are run again to render the result of the user’s action. Exploring a dataset requires something to render it, and viewing a storyboard requires applying the story to the dataset. So to spell it out: even simply viewing a storyboard requires a computation engine, because there is no concept of a “report” – there is only the dynamically created view of what the user wants to see right now.
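Here is a rough sketch of that request loop. None of these functions are Lumira APIs; they only illustrate the flow just described (compute on open, recompute on every interaction, and only for what is on screen):

```python
# Hypothetical sketch of the on-demand view pipeline described above.

def visible_charts(storyboard, view_state):
    """Only the charts on the board the user is currently looking at."""
    return storyboard["boards"][view_state["board_index"]]["charts"]

def compute_chart(chart, dataset, filters):
    """Run the chart's aggregation against the *current* dataset."""
    rows = [r for r in dataset if all(r[k] == v for k, v in filters.items())]
    totals = {}
    for row in rows:
        key = row[chart["group_by"]]
        totals[key] = totals.get(key, 0) + row[chart["measure"]]
    return totals

def render_view(storyboard, dataset, view_state):
    # No saved instance exists: every view is computed right now,
    # and only for the charts that are actually on screen.
    return {c["title"]: compute_chart(c, dataset, view_state["filters"])
            for c in visible_charts(storyboard, view_state)}

dataset = [
    {"region": "EMEA", "year": 2014, "revenue": 120},
    {"region": "APAC", "year": 2014, "revenue": 95},
    {"region": "EMEA", "year": 2013, "revenue": 100},
]
storyboard = {"boards": [{"charts": [
    {"title": "Revenue by Region", "group_by": "region", "measure": "revenue"},
]}]}

# Opening the storyboard and every later interaction (switching boards,
# changing a filter) go through the same call:
print(render_view(storyboard, dataset, {"board_index": 0, "filters": {"year": 2014}}))
# -> {'Revenue by Region': {'EMEA': 120, 'APAC': 95}}
```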


Stories Always Represent Near-Real-Time Data


Since everything the user sees is dynamically generated on-the-fly, any change in the underlying dataset is immediately reflected the next time the user triggers a change in the view.  At first this sounds a bit alarming – the data in my visualization could change simply by moving to another board in a story and moving back.  Some customers have argued that this creates an inconsistent representation of the data.

But let’s think about this: if I am looking at historical data in my story, there is obviously no issue, since there isn’t any new data to be represented anyway. If I am looking at a current view where new data could change the visualization, why wouldn’t I want the latest information added to my analysis? Would it really be better to make decisions on yesterday’s data that is already invalid?

A Whole New Way of Doing BI


Traditional BI customers get very nervous because SAP Lumira appears not to have certain features that their other BI content does. But let’s take a quick look at a few of them:

Saved Data and saved instances: It is great to keep a historical record of the data over the course of days, weeks, or months – but Lumira is not really a reporting tool; it is meant for data discovery. If you are looking to save a record of the past, you really should be using a reporting tool like Crystal Reports or Web Intelligence.

Scheduling: Since there is no concept of a report and views are dynamically generated, what is there to schedule? If you’ve been following along with the above explanation of how Lumira works, you will see that it is the data, not the storyboard, that needs to be refreshed. There are a number of ways to do this depending on the source of the data (see the sketch after this list).

Publishing and distribution: Many customers are used to distributing reports to file shares or having them emailed after execution. A Lumira Storyboard has a URL, and now you know that any time a user goes to that URL, the view is dynamically generated – so if we were to implement “email distribution”, the recipient would get the same URL link every day! If you want an email every time the data is refreshed, you need the refresh process to trigger the email, because that has nothing to do with Lumira.
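To tie the last two points together, here is a hypothetical sketch of what “scheduling” looks like in this world: an external job refreshes the data and then mails out the storyboard’s stable URL. Nothing here is a Lumira API; the refresh logic and addresses are placeholders:

```python
from email.message import EmailMessage

# Hypothetical sketch: the thing you schedule is the *data* refresh, and
# the notification just points at the storyboard's stable URL.

STORYBOARD_URL = "https://lumira.example.com/storyboards/regional-performance"

def refresh_dataset(name):
    """Reload the dataset from its source system (placeholder)."""
    print(f"Refreshing dataset {name!r} from its source...")
    # e.g. re-run the extract, reload the flat file, or let a HANA view
    # pick up new rows automatically, depending on the data source.

def notify_subscribers(recipients):
    """Build the notification; the URL is the same every time."""
    msg = EmailMessage()
    msg["Subject"] = "Your data has been refreshed"
    msg["From"] = "bi-admin@example.com"
    msg["To"] = ", ".join(recipients)
    msg.set_content(f"Fresh data is available. View it here: {STORYBOARD_URL}")
    # A real job would hand msg to a mail server, e.g.:
    #   with smtplib.SMTP("mail.example.com") as smtp:
    #       smtp.send_message(msg)
    print(msg)

# A scheduler (cron, a job server, an ETL tool) drives this, not Lumira:
refresh_dataset("Q3 Sales")
notify_subscribers(["alice@example.com", "bob@example.com"])
```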

Things Keep Changing So Stay Tuned


The cool part of new ways of doing things is that what is true today may not be true tomorrow. We see new and better ways of providing BI to our users, and requirements will continue to change. For example, once we implement PDF export in Lumira, all of a sudden scheduling and publishing become relevant again – although they might be implemented differently than what we do with Crystal today (then again, maybe they won’t).

In the next installment of this blog series, I’m going to go into much more detail about our upcoming BI 4 integration and the most frequently asked question from BI4 customers about SAP Lumira: “Why do I need a HANA just to view a storyboard?” I bet you have a pretty good idea of WHY now, but we’re not quite done with this story yet 😉


(Update: The next article that covers BI4 integration is here: Why Do I Need Lumira Server (on HANA) To View Storyboards in SAP BI 4?).
