
BI Platform


Most BI landscapes in industry use a content-driven BI approach rather than a user-focused one. While the content-centric approach works well for the IT or IS organization, it poses challenges for the business: users have to juggle a lot of content (dashboards, reports, Explorer information spaces, or other BI artifacts) to complete an analysis. This leads to frustration and confusion and wastes a lot of the business's time in gathering all the relevant information for a specific analysis. And when a new business user wants to do the same analysis, the path he takes can be just as time-consuming, because he first needs to understand which BI content is available, what information each piece holds, and then switch between those pieces to reach an answer.




To overcome this problem we came up with a novel way to build user-focused BI using custom websites with embedded BI content. Before going there, you might argue: why would anyone need one more website when we already have BI launchpad in BusinessObjects as the default portal? The answer is quite simple: BI Launchpad can hold multiple types of content, such as reports (Webi/Crystal), dashboards, and Explorer information spaces, and most of the time they just sit in different folders and sub-folders with no logical way to tie them to a specific type of activity or user, so the process can be very cumbersome. Sometimes the contents are not linked together either; for example, there could be a sales dashboard and a sales detail report, but the user has to go to the sales dashboard, find the scenario he wants to analyze, and then go to the report and select all the prompts and filters to get to the details for that scenario.




How this solution works from a bird's eye view: The most critical features that make the solution work are OpenDocument URLs for specific BI content and single sign-on for BusinessObjects. The solution leverages the opendoc links of BusinessObjects content and combines them with iframes in a custom portal. The portal is rendered via an IIS website that has a user-friendly DNS alias. Say a user can access all the relevant sales information by typing http://sales instead of http://businessobjects-dev (followed by a bunch of clicks to get to the desired folder); which one makes more sense and is easier to remember when you are looking for all BI content related to sales? We created the sites and named them like http://Sales.yourcompanydomain.com: short, meaningful, and easy for users to remember. The IIS websites use iframes within which the OpenDocument links for dashboards, Explorer information spaces, and Webi reports are called. We also made sure the website code loads the dashboard content in parallel while the page loads, so no user time is wasted, and once loaded the dashboards are not refreshed automatically.
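As a rough illustration, each tab of such a portal page boils down to an iframe whose src attribute is the OpenDocument URL of the BI document. The sketch below is a minimal, hypothetical example; the server name and CUID are placeholders, not values from our landscape:

     <!-- Hypothetical sketch: one portal tab embedding a dashboard through OpenDocument -->
     <iframe src="http://boserver.yourcompanydomain.com:8080/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&amp;iDocID=AUabcd1234"
             width="100%" height="800" frameborder="0" title="Sales Dashboard">
     </iframe>

With single sign-on enabled, the iframe content renders without an additional logon prompt, which is what makes the page feel like a single application to the user.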

 

Let's take an example:

Let's take a fictitious scenario: assume you are a product manager in a large organization selling products to consumers across the globe, and you are assigned to a product line in the company. Your job requires you to ensure you have enough inventory for next week for last quarter's top-selling products in the North American region, and to ensure that the plants which supplied those products are going to produce enough of them for the next quarter.

 

In a traditional content-driven BI scenario you would have to go to the sales folder and find out which report or dashboard gives you the top customers for last quarter by region. Then find your top-selling product for North America by filtering on your product lines and regions. After you find the product, you would need to go to the inventory folder, find out which report or dashboard shows current inventory by product, and look up the current inventory level for the top product you got from the sales report. Then go to the forecast report, find the forecast for that product for the next quarter, and compare that number with the current inventory to understand how much of the product needs to be produced during the next quarter. This whole process can take many hours to get an answer.

 

Now let's take the same scenario in the new approach, where there is a dedicated web link like http://PM-Analytics that has the sales dashboard, the inventory dashboard, and the forecasting report at the same web link as different tabs. The user simply goes to the sales tab and finds the top-selling product, then moves to the next tab, inventory, while his sales analysis is still preserved. He finds the inventory numbers, goes to the next tab, the forecast report, filters on the product, and compares it against the additional inventory that will be needed based on the forecast. Sounds simple! This process also saves the user the headache of finding the right content and using it correctly, because everything needed is in one place, his sales analysis is not lost, and he can easily repeat the same analysis for the South America region since his earlier analysis does not automatically reset to defaults and the session is still active. The whole process should take no more than a few minutes.

 

 

How does it Look:

In traditional content-focused BI, users have to go to the BI Launch Pad, open the public folders, and then find all the content needed for an analysis.

Bi Launchpad 2.png

 

In the new process, users just need to type a URL in a browser, which can be as simple as http://Sales. This lets them directly view the landing dashboard, without the hassle of finding it in a folder, along with all the additional BI content that supports the analysis. They do not see anything except what they need.



Geo1.jpg

Inventory.jpg


The application can host reports that support the analysis, as well as Explorer information spaces for data exploration.

When users want another set of related data, they just click on another tab, which takes them to the additional analysis.


report.png


Solution Architecture:

Here is how the solution looks. The user types a custom URL like http://sales, which is hosted on an IIS web server as a web application. The request is then redirected through a load balancer to the BusinessObjects web server and subsequently to the BOBJ application server, which serves the requested BI content.

Architecture_v2.jpg



Creating a Web Application to Deploy BusinessObjects Dashboards with a Custom IIS Website Name

I am going to discuss how to build a custom application URL to host BI content so that a user group gets all of its BI content in one place, rather than having to go through the launchpad and a bunch of folders. The solution below is meant for the IIS web server, so all the screenshots are specific to IIS.

Prerequisites

Two items need to be installed/configured on the server in order to prepare it to serve IIS websites:

  • IIS services should be configured on the server
  • .Net Framework 4.5 should be installed

Configure IIS Services on the server

Go to the Server Manager console on the server and select the option Add Roles -
  image017.png

Select the web server IIS role and click next -
image018.png

 

Once the installation is over, you will be able to see the role and services installed -
  image020.png

Install .Net Framework 4.5


Download the .Net 4.5 setup from Microsoft site.
Double click on the downloaded .exe file to start the setup.
Follow the on screen instructions to complete the setup.

How to Setup a Custom IIS Website for housing opendoc links

1. Content Home Folder for Site

Create Directory Folder

Create a folder that will serve as the home folder for the website; it is required when creating the website.

Apply Access Levels to Site Folder

Go to the properties of the home folder that was created for the web site and add the ‘Everyone’ group with execute access –
  image005.png
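If you prefer scripting this step, the equivalent grant can be applied with icacls from an elevated command prompt; the folder path below is just an example, not the path used in our landscape:

     icacls "D:\Websites\Sales" /grant "Everyone:(OI)(CI)RX"
     REM (OI)(CI) propagates the grant to files and subfolders; RX = read and execute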

2. Create the Website In IIS

Add the Web Site

Open up the Windows Server IIS manager console in one of two ways:

Start > Run > inetmgr > hit enter

or …

Start > Administrative Tools > Internet Information Services Manager

Right click on ‘Sites’ and select the option – Add Web Site.
  image006.png

Fill in the detail fields corresponding to the application area for which we are creating the site. These are…

Site Name: This name should match that of the Application Area established in the BO Launchpad

Physical Path: This is the path to the home content folder for the site that you created in an earlier step

Host Name: This equates to the web URL that users will enter to visit the web page (see example, below, for the “Inventory” application).
  image007.png

Application Pool Settings

In the IIS left pane, click on Application Pools to see all application pools for your sites. For your new site, make sure that the application pool is set to use the latest version of the .Net Framework. If it is not, double-click the application pool and, in the dialog window, select the latest .Net Framework version.

 

image008.pngimage009.png
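The same setting can also be applied from the command line with appcmd if you are scripting the setup; the application pool name "Sales" below is hypothetical:

     %windir%\system32\inetsrv\appcmd.exe set apppool "Sales" /managedRuntimeVersion:v4.0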


Bindings

In the IIS left pane, right-click on your new application site and select Edit Bindings… Make sure that both bindings are present on the website – the short name and the fully qualified name.

  image010.pngimage011.png

 

3. Finalize Web Content Customization

Populate the Home Directory with Sample Web Content

Once the website is created, the code needs to be put in the home folder we created.

 

 

Modify Customized Content Files

There are a couple of things that we need to modify in the site for each application area that we are rolling it out for.

The following three files need to be modified to adapt the site to the new application.
  image012.png

Default.aspx

The timeout popup setting is in this file, in the Init() function; it can be changed if required. We are currently using a standard timer value of 7140000 milliseconds (just under two hours).

  image013.png
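For reference, a minimal version of such a timer inside the page script could look like the sketch below; the function body is hypothetical, only the 7140000 ms value comes from our standard configuration:

     // Hypothetical sketch of the idle-timeout warning set up in Init()
     function Init() {
         // 7140000 ms = 119 minutes; warn the user shortly before the session expires
         setTimeout(function () { alert("Your session is about to time out."); }, 7140000);
     }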

Web.config

The title of the website and the working environment are present in this file –
  image014.png

The Workingenv parameter decides which links will be used from the links.xml file.
The Title parameter decides the title of the webpage.
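A minimal appSettings section along these lines might look like the sketch below; the key names follow the description above, but the exact casing and values in your copy of the code may differ:

     <appSettings>
         <!-- Selects which set of links is read from links.xml -->
         <add key="Workingenv" value="DEV" />
         <!-- Title shown for this application area's webpage -->
         <add key="Title" value="Sales Analytics" />
     </appSettings>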

Links.xml

The opendoc links, the titles of the different tabs, and the tooltip help text are present in this file –
 
image015.png

Based on the working environment we set in the web.config, the opendoc links will be picked from the links.xml file.
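A simplified links.xml along these lines could look like the sketch below; the element and attribute names are illustrative rather than the exact schema of the sample code, and the hosts and CUIDs are placeholders:

     <links>
         <environment name="DEV">
             <!-- One tab per BI document: label, tooltip and OpenDocument URL -->
             <tab title="Sales" tooltip="Regional sales dashboard"
                  url="http://bodev.yourcompanydomain.com:8080/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&amp;iDocID=AUabcd1234" />
             <tab title="Inventory" tooltip="Current inventory by product"
                  url="http://bodev.yourcompanydomain.com:8080/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&amp;iDocID=AUefgh5678" />
         </environment>
         <environment name="PROD">
             <!-- Same tabs pointing at the production servers -->
         </environment>
     </links>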

 

While inserting the links, we need to modify them a bit –

image016.png
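For reference, a BI 4.x OpenDocument link generally has the form below (web server, port, and document CUID are placeholders); the modification shown in the screenshot typically amounts to adjusting the host name or parameters so the link matches the target environment behind the load balancer:

     http://<webserver>:<port>/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&iDocID=<document CUID>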

4. Request a DNS Alias for the server/load balancer


Once the website is created, make sure to create a simple alias for users to access the site – for example http://sales, http://quality, etc.
The alias names requested should be the SAME as the bindings that have been provided for the website.

Once the alias has been created, access the web site using any browser and confirm that it is working as expected.

 

Finally, once you are done with these steps, you will have a website where you can embed BI content for a personalized experience for your end-user community.

 

Please keep in mind that, as there is no logoff button in the custom website, the session remains active on the server even after the user closes the browser, until it times out. However, if you are on BI 4.1 SP6, BOBJ drops the session within 2 minutes of the user closing the browser.

I have received multiple queries from various forums about how we plan to support UDT and MSU connectivity in the future. I have compiled below our approach going forward, from the perspective of both the BI 4.1 SPs and the upcoming BI 4.2 release.


Universe Designer Tool (UDT):

As you all know, UDT is used for creating new UNVs based on various supported data sources. Starting with SAP BI 4.x, we additionally ship the Information Design Tool (IDT) as part of the SAP BI product suite, which helps users create multi-dimensional universes, namely UNXs. The IDT and UNX combination is forward-looking and has advanced features and enhancements.

 

While users can open UNVs from IDT and convert existing UNVs to UNX format, users can continue to use UDT for creating new UNVs on the supported data sources.

 

However, going forward in BI 4.x releases:

  • We will continue to support UDT for DBs/sources that are supported in BI 3.1, to make sure there is no regression in the upgrade scenario.
  • Newer versions of these DBs (if introduced by the vendor) will be tested and certified for UDT.
  • UDT will not be certified for new data sources that are being introduced in the BI 4.x releases.
  • The current/latest status of UDT support is kept up to date in the Product Availability Matrix (PAM), under the UNV column.

 

Example:

A customer is using Oracle 10g as the database for a UNV created using UDT as part of BI 3.1/BI 4.0.

  • Future Oracle versions will continue to be supported in UDT (as part of BI 4.1 / BI 4.2) – so that customer can seamlessly migrate.
  • Data sources which are/will be new in BI 4.1 / BI 4.2 (like Hadoop Hive, Amazon Redshift etc.) – will not have UDT support.

 

Multi Source Universe (MSU):

Going forward, MSU will be tested and certified against the top 6 databases/data sources only, including SAP HANA and SAP BW (Teradata 15, Oracle, MSSQL, Progress, SAP HANA, SAP BW).

For other data sources, MSU support will only be considered based on a business case or customer request. We will add the support for a justified request – through FPs or SPs, based on priority.

The current status of MSU support for various data sources is kept up to date in the Product Availability Matrix (PAM).

Dear All,

 

We are pleased to announce that SAP BusinessObjects BI4.1 SP06 has been released and is available for download on http://support.sap.com

 

Additional resources on SAP BusinessObjects BI4.1 SP06:

 

 

* requires logon to SAP Support Page with a valid account

 

Regards

Merlijn

SAP BusinessObjects Business Intelligence Support Pack 6 is Here!

 

Today, SAP released SAP BusinessObjects Business Intelligence 4.1 Support Pack 6 to the SAP Support Portal, both as a full build and as a patch to previous versions. Support Pack 6 has something we haven't seen from a support pack in a long time: new features! Christian Ah-Soon, SAP Product Expert, has written a great summary here on the SAP Community Network (see related article, SAP BI 4.1 SP6 - What's New in Web Intelligence and Semantic Layer). Web Intelligence users will no doubt put document-level input controls to great use. There are small yet significant usability improvements. For example, Export Data functionality has been added to the Reading mode (previously, you had to remember to go to Design mode for that feature). There are improvements to Microsoft Excel data providers. And while I'm not a huge fan of Free-Hand SQL (see related article on my personal blog, Free-Hand SQL Isn't Free), I'm thankful that SAP has closed yet another Web Intelligence feature gap with Desktop Intelligence. And if you're a Live Office fan (don't be ashamed), you'll be glad to know that Live Office has not only been given UNX universe access in BI 4.1 SP6, but the product also has a road map and a future (see related SCN article SAP BusinessObjects BI 4.1 SP06 What's New by Merlijn Ekkel for a comprehensive overview of what's coming to the entire platform). I've barely scratched the surface here, so please read Christian's and Merlijn's much more detailed articles.

 

BI 4.1 SP6 is the last support pack released in 2015. Read that sentence again, I'll wait... Support for XI 3.1 and BI 4.0 ends on December 31, 2015 and it is unlikely that BI 4.2 will be generally available by that time (although it might be in ramp-up, cross your fingers). This means that BI 4.1 SP6 is going to be the go-to release of BI 4.1 for the foreseeable future. And with just a bit of nostalgia, the article that you're reading now will likely be the last "State of the BusinessObjects BI4 Upgrade" I'll write this year (check out the State of the BusinessObjects BI 4 Upgrade archive on the EV Technologies web site). Tomorrow morning- before the first cup of coffee is finished- I'll begin helping a customer download the 4.1 SP6 full build for their XI 3.1 migration kickoff. And I've already downloaded the SP6 patch to apply to one of our internal sandbox servers tonight.

 

You are no doubt wondering if BI 4.1 SP6 is a stable release. And I am, too. I'd be lying if I said that BI 4.1 and its first five support packs were completely pain free. Let's hope that the product quality is just as impressive as the new features.

 

SAP Lumira v1.25 for the BI Platform - Now with Free Sizing Guide!


The big deal at last month's SAP SAPPHIRE was the release of SAP Lumira v1.25, which brought the first iteration of integration with the BI 4.1 platform. I've been lucky to follow Lumira v1.25  from a special SAP Partner training program to its Customer Validation program and finally to its general availability. Release 1.25 brings SAP Lumira from the desktop to the BI 4.1 platform without the requirement for SAP HANA, a stumbling block for a significant number of BI platform customers. But until today, Lumira for the BI platform was missing a critical component- sizing guidelines. SAP has published an updated SAP Lumira Sizing Guide to the SAP Community Network that includes sizing for the BI 4.1 add-on. The add-on brings the same in-memory database engine to the BI 4.1 platform that SAP introduced to the Lumira Desktop in version 1.23 a few weeks ago.


Time to Start Migrating!


The software and documentation released today, combined with the SAP Lumira v1.25 and Design Studio 1.5 software that was released last month (see related article, State of the SAP BusinessObjects BI 4.1 Upgrade - May 2015 (SAPPHIRE Edition)), bring all of the pieces together to take your BI landscape into the future. I hope that these pieces and their installation will be more tightly integrated in BI 4.2. But for me, as well as many of you, the adventure begins tomorrow. Just as soon as all of the software is downloaded.

 

More to come...

This is Part 3 of my notes from yesterday's webcast. Part 1 is askSAP Analytics Innovations Community Call Notes Part 1 and Part 2 is askSAP Analytics Innovations Call Notes Part 2 SAP Lumira Roadmap.

 

Please note the usual legal disclaimer applies that things in the future are subject to change.  What I liked particularly about this call was the time spent on question & answer (see below).

1fig.jpg

Figure 1: Source: SAP

 

SAP said they value customers’ feedback

2fig.jpg

Figure 2: Source: SAP

 

Planned items for Design Studio include increasing the number of rows that universes can bring back (today it is 5K), mobile offline support, and more, as shown in Figure 2.

3fig.jpg

Figure 3: Source: SAP

 

Figure 3 covers Analysis Office with a converged Excel client to include EPM, and a new formula editor for 2.1

 

4fig.jpg

Figure 4: Source: SAP

 

Figure 4 covers future plans (subject to change) for Analysis Office, with improved PowerPoint integration and publishing workbooks to the HANA platform

5fig.jpg

Figure 5: Source: SAP

 

Figure 5 covers plans for the future for Web Intelligence (past BI4.1 SP06)

 

Next release for Web Intelligence includes shared objects and annotations

6fig.jpg

Figure 6: Source: SAP

 

Figure 6 covers plans for Mobile BI; SAP is seeing increasing demand for Android

7fig.jpg

Figure 7: Source: SAP

 

Figure 7 shows plans for a faster installer

 

Report comparison tool to save time during the upgrade

 

Linked universes – many projects require universes

 

“Biggest and best partner ecosystems” to extend BI Platform

 

Question & Answer

Q: Universe on BEx query – will it replace anything?

A: Makes it more business friendly for end users for consumption in Web Intelligence

 

Q: In which versions will these Web Intelligence features be available?

A: SP06 – next week

Future plans – BI4.2 – late this year early next year (forward looking statement)

 

Q: Any future plans for commenting solution for all BI tools

A: Commenting for Web Intelligence is at the platform – WebI is the first to use, looking at other tools

 

Q: Is the performance of WebI on BICS universes similar to BEx queries?

A: no performance numbers to verify

 

Q: Lumira isn’t supported on Crystal Server? What do those customers do?

A: Technologically speaking can do this but now focused on Lumira server for teams – you should be able to connect to universes from Lumira teams on Crystal Server

 

Licensing – you can purchase Lumira Edge – team server & BI Platform

 

Q: When can we view Mobile Dashboards without going through the BI app?

A: working on, no timeframe

 

Q: is broadcasting of Design Studio reports available?

A: not available today

Ability to schedule using the BI platform is on the to do list

 

Q: SAP’s UX strategy says it will converge to Fiori – how reflect in BI platform & client tools?

A: BI platform / client – looking to integrate with Fiori

Lumira & Design Studio started this with a Fiori tile into a Lumira story – working on adding OpenDoc capabilities

More adherence to Fiori design type when working on further solutions including Cloud for Planning

 

Q: What is the future for SAP Infinite Insight?

A: brought together InfiniteInsight with SAP Predictive into SAP Predictive Analytics

 

SAP also announced SAP IT Operations Analytics - see an overview in this PC World article: SAP previews new analytics tools for IT, business users | PCWorld

 

Additionally ASUG has a webcast on this in August - Data Center Intelligence

 

ASUG also has a webcast in September titled "What is coming in BI4.2" - register here

 

Finally, if you have questions about moving from BEx tools to Analysis and Design Studio join ASUG today  - register here

Problem statement: Hard dependency with SAP HANA SPSs and BI 4.1 SPs


Currently, from the SAP BI 4.1 side, there is a 'one to one' mapping between SAP BI 4.1 SPs and SAP HANA SPSs, i.e. SAP BI SP releases are not forward compatible with SAP HANA SPSs, as per the Product Availability Matrix (PAM).


Each time a customer upgrades SAP HANA to a newer SPS version, it mandates an SAP BI 4.1 upgrade as well, to the supported SP. This constitutes a significant burden for our customers; sometimes it is a showstopper.


Proposed guideline and solution:

 

Teams internally did additional testing on the newer, previously unclaimed version combinations, to make sure SAP BI + SAP HANA customers will not have to go through this problem in the future.


With this, there is a commitment from both the SAP HANA and SAP BI teams to the compatibility between them.

For example, all existing features on the BI side will continue to work with a new SAP HANA SPS version in this combination. Customers will get support from SAP's respective teams to resolve any issues with the latest SAP HANA version while they continue with the existing SAP BI SP version in their landscape.


The SAP BI PAM documents have been updated with this new proposal, i.e. all active SAP BI 4.1 SP lines will work with the latest SAP HANA SPS release. Customers need not update their SAP BI 4.1 landscape to consume the latest SAP HANA SPS version.

 

The following is the model in which we are looking at supporting this combination. Please note that SAP HANA SPS10 and NEXT are not released as of now, so please use this as a guideline only (refer to the PAM for actuals).

 

HANACompatibility1.PNG

 

Summary: In general, we would like to assure customers that ALL ACTIVE SAP BI 4.1 SPs will connect to the latest SAP HANA SPSs. However, we advise you to continue using the PAM document as THE reference for support, to get the latest updates on the versions supported and on any workaround that may be needed.


The Promotion Manager tool does not bring instances and the UMT refuses to move content from one 4.x system to another. I am currently testing this on BI 4.1 SP5 Patch 5.

 

Can anyone suggest a better way to move content from 4.1 SP1 to our test box with 4.1 SP5 Patch 5?

We have over 100,000 reports and need to move several thousand for testing. Why? Because updates to BOE often fail with critical issues, so we can't just apply them to our system and hope it works.

I read in a separate post that SAP will eventually create a thick client for customers who need to move larger amounts of content. I already tried the UMT on this SP5 and it refuses. Does anyone know if and when this new thick client might be coming?

 

CONSTANT ISSUES INCLUDE:

- Fails most of the time

- Some failures only say "Failure" and tell you to go look at the logs. How about a clue for us inside PM?

- One error on a connection said this, "Relationship would not be a tree after update so bailing". I guess bailing is a strategy for this poorly designed tool. It appears to me that you must bring every universe that uses that connection before it will actually bring it. That's just plain wrong. I may not want those other universes over-writing previous work.

- Duplicate name. This and any other tool needs to allow us to overwrite ANY existing content if we so choose. Someone changed the CUID using Save-As and kept the same name. I need to replace that file -- why not let me? The only solution here is to delete that content and rerun the job. With users and groups, this is at best a large nightmare.

- No instances in scheduled reports come over. In fact, even the report won't come over if the destination report has instances. What kind of choice is that?

Most of our dashboards depend on scheduled reports, so what's the point in not bringing the instances with that content?

 

What else might help?

1. It would be EXCELLENT if SAP designed an Easy Button for mirroring content to another server. It would have to ensure nothing points back to the source system and create new cluster keys. We have tried this manually; it wasn't fun and the result still has artifacts of the original system.

2. If they are working on a tool to move larger amounts of content, it would be SPLENDID if they also made a way to mirror the security across all folders without having to move all content and all users at the same time. We could move the Groups in batches, then hit the easy button and it magically assigns the groups to the folders, universes, etc. 

Dear SCN user,

 

We are happy to inform you about the availability of the updated SAP Analytics Roadmap slides.

 

 

The slides include updated features and benefits of solutions released since the last roadmap, such as:

  • SAP BusinessObjects Business Intelligence platform 4.1, SP5
  • SAP BusinessObjects Mobile 6.1
  • SAP Lumira 1.25
  • SAP Lumira Server
  • SAP BusinessObjects Analysis, edition for Microsoft Office, version 2.0
  • SAP BusinessObjects Design Studio 1.5
  • SAP Predictive Analytics 2.0
  • Updated Planned Innovations for all Solutions

 

You can download the updated roadmaps via the links:

Overall Analytics Roadmap*

Analytics BW Roadmap*

Analytics Agnostic Roadmap*


* User Account required for SAP Support Page



Kind Regards

SAP GTM BI

This was an ASUG webcast the other week, with a focus on BI (not Predictive or HANA).

 

On a different webcast I became aware of this related document about licensing - see here

1fig.jpg

Figure 1: Source: SAP

 

Everyone's contract is different

 

2fig.jpg

 

Figure 2: Source: SAP

 

There have been multiple BI license models over time

 

3fig.jpg

Figure 3: Source: SAP

 

Figure 3 shows the context of BI license models; SAP has previously had add-on models

 

SAP has moved to suite style licensing

 

Differences in BI suite license over on the right including Desktop and Lumira Server

4fig.jpg

Figure 4: Source: SAP

 

Figure 4 shows the core licensing principles

 

There is no obligation or requirement to convert licensing

 

SAP wants to be transparent in license models

 

License models are non-version specific

5fig.jpg

Figure 5: Source: SAP

 

SAP no longer sells CPU licenses to new customers, but still does to existing customers

 

NUL stands for named user license – for managers, power users, most desktop tools

 

CSBL stands for concurrent session-based license – for casual users that don't require guaranteed access

 

In the CMC configure NUL or Concurrent

6fig.jpg

Figure 6: Source: SAP

 

Figure 6 shows 1 logon = 1 session

7fig.jpg

Figure 7: Source: SAP

 

It is still one session if navigating between sessions

8fig.jpg

Figure 8: Source: SAP

 

Figure 8 shows SAP is moving away from CPU-based licenses because they wanted to remove constraints

 

Part 2 is coming when time allows

 

Reference:

 

Upcoming ASUG BI Webcast Listing

Carsten Mönning and Waldemar Schiller


Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins), http://bit.ly/1dqm8yO
Part 2 - Hive on Hadoop (~40 mins), http://bit.ly/1Biq7Ta

Part 3 - Hive access with SAP Lumira (~30mins)
Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45mins)


Part 3 - Hive access with SAP Lumira (~30 mins)


In the first two parts of this blog series, we installed Apache Hadoop 2.6.0 and Apache Hive 1.1.0 on a Raspberry Pi 2 Model B, i.e. a single node Hadoop 'cluster'. This proved perhaps surprisingly nice and easy with the Hadoop principle allowing for all sorts of commodity hardware and HDFS, MapReduce and Hive running just fine on top of the Raspbian operating system. We demonstrated some basic HDFS and MapReduce processing capabilities by word counting the Apache Hadoop license file with the help of the word count programme, a standard element of the Hadoop jar file. By uploading the result file into Hive's managed data store, we also managed to experiment a little with HiveQL via the Hive command line interface and queried the word count result file contents.


In this Part 3 of the blog series, we will pick up things at exactly this point by replacing the HiveQL command line interaction with a standard SQL layer over Hive/Hadoop in the form of the Apache Hive connector of the SAP Lumira desktop trial edition. We will be interacting with our single node Hadoop/Hive setup just like any other SAP Lumira data source and will be able to observe the actual SAP Lumira-Hive server interaction on our Raspberry Pi in the background. This will be illustrated using the word count result file example produced in Parts 1 and 2.

 

HiveServices5.jpg


Preliminaries

Apart from having worked your way through the first two parts of this blog series, you will need to get hold of the latest SAP Lumira desktop trial edition at http://saplumira.com/download/ and operate the application on a dedicated (Windows) machine locally networked with your Raspberry Pi.


If interested in details regarding SAP Lumira, you may want to have a look at [1] or the SAP Lumira tutorials at http://saplumira.com/learn/tutorials.php.


Hadoop & Hive server daemons

Our SAP Lumira queries of the word count result table created in Part 2 will interact with the Hive server operating on top of the Hadoop daemons. So, to kick off things, we need to launch those Hadoop and Hive daemon services first.


Launch the Hadoop server daemons from your Hadoop sbin directory. Note that I chose to rename the standard Hadoop directory to "hadoop" in Part 1, so you may have to replace the directory path below with whatever Hadoop directory name you chose to set (or chose to keep).


          /opt/hadoop/sbin/start-dfs.sh

          /opt/hadoop/sbin/start-yarn.sh


Similarly, launch the Hive server daemon from your Hive bin directory, again paying close attention to the actual Hive directory name set in your particular case.

 

     /opt/hive/bin/hiveserver2


The Hadoop and Hive servers should be up and running now and ready for serving client requests. We will submit these (standard SQL) client requests with the help of the SAP Lumira Apache Hive connector.
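Before moving to Lumira, you can optionally check that HiveServer2 is accepting JDBC connections, for example with the beeline client that ships with Hive; the IP address below is an example, use your Raspberry Pi's address:

     /opt/hive/bin/beeline -u jdbc:hive2://192.168.1.30:10000 -n hduser
     # A "Connected to: Apache Hive" banner indicates the server is reachable on port 10000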

 

SAP Lumira installation & configuration

Launch the SAP Lumira installer downloaded earlier on your dedicated Windows machine. Make sure the machine is sharing a local network with the Raspberry Pi device with no prohibitive firewall or port settings activated in between.

 

The Lumira Installation Manager should go smoothly through its motions as illustrated by the screenshots below.

LumiraInstall1.pngLumiraInstall2.png

 

On the SAP Lumira start screen, activate the trial edition by clicking the launch button in the bottom right-hand corner. When done, your home screen should show the number of trial days left, see also the screenshot below. Advanced Lumira features such as the Apache Hive connector will not be available to you if you do not activate the trial edition by starting the 30-day trial period.


LumiraTrialActivation.png

 

With the Hadoop and Hive services running on the Raspberry Pi and the SAP Lumira client running on a dedicated Windows machine within the same local network, we are all set to put a standard SQL layer on top of Hadoop in the form of the Lumira Apache Hive connector.

 

Create a new file and select "Query with SQL" as the source for the new data set.

LumiraAddNewDataset.png

Select the "Apache Hadoop Hive 0.13 Simba JDBC HiveServer2  - JDBC Drivers" in the subsequent configuration sreen.

 

LumiraApacheHiveServer2Driver.png

Enter both your Hadoop user (here: "hduser") and password combination as chosen in Part 1 of this blog series, as well as the IP address of your Raspberry Pi in your local network. Add the Hive server port number 10000 to the IP address (see Part 2 for details on some of the most relevant Hive port numbers).

LumiraApacheHiveServer2Driver3.png

If everything is in working order, you should be shown the catalog view of your local Hive server running on Raspberry Pi upon pressing "Connect".

LumiraCatalogView2.png

In other words, connectivity to the Hive server has been established and Lumira is awaiting your free-hand standard SQL query against the Hive database. A simple 'select all' against the word count result Hive table created in Part 2, for example, means that the full result data set will be uploaded into Lumira for further local processing.

LumiraSelect1.png
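In Lumira's free-hand SQL editor this is just standard SQL against the table created in Part 2, for example (depending on the driver, a trailing semicolon can be omitted):

     select * from wcount_t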

Although this might not seem all that mightily impressive to the undiscerning, remind yourself of what Parts 1 and 2 taught us about the things actually happening behind the scenes. More specifically, rather than launching a MapReduce job directly within our Raspberry Pi Hadoop/Hive environment to process the word count data set on Hadoop, we launched a HiveQL query and its subsequent MapReduce job using standard SQL pushed down to the single node Hadoop 'cluster' with the help of the SAP Lumira Hive connector.

 

Since the Hive server pushes its return statements to standard out, we can actually observe the MapReduce job processing of our SQL query on the Raspberry Pi.

Hive_MapReduce3.png


An example (continued)

We already followed up on the word count example built up over the course of the first two blog posts by showing how to upload the word count result table sitting in Hive into the SAP Lumira client environment. With the word count data set fully available within Lumira now, the entire data processing and visualisation capabilities of the Lumira trial edition are available to you to visualise the word count results.

 

By way of inspiration, you may, for example, want to cleanse the license file data in the Lumira data preparation stage first by removing any punctuation data from the Lumira data set so as to allow for a proper word count visualisation in the next step.

LumiraCleansedWordColumn.png

 

With the word count data properly cleansed, the powerful Lumira visualisation capabilities can be applied freely to the data set to produce, for example, a word count aggregate measure as shown immediately below.

 

LumiraVisualisation1_2.png

Let's conclude this part with some Lumira visualisation examples.

LumiraVisualisation1_1.png

 

LumiraVisualisation3_1.png

 

LumiraVisualisation2_1.png

 

In the next and final blog post, we will complete our journey from a non-assembled Raspberry Pi 2 Model B bundle kit via a single node Hadoop/Hive installation to a 'fully-fledged' Raspberry Pi Hadoop cluster. (Though it will be a two-node cluster only, it will do just fine to showcase the principle.)

 

Links

SAP Lumira desktop trial edition - http://saplumira.com/download/

SAP Lumira tutorials - http://saplumira.com/learn/tutorials.php
A Hadoop data lab project on Raspberry Pi - Part 1/4 - http://bit.ly/1dqm8yO
A Hadoop data lab project on Raspberry Pi - Part 2/4 - http://bit.ly/1Biq7Ta

References

[1] C. Ah-Soon and P. Snowdon, "Getting Started with SAP Lumira", SAP Press, 2015

Continuing with the security topics, I will cover the topic of staying up to date with security patches for BI.

While SAP practices a complete security development lifecycle, the security landscape continues to evolve, and through both internal and external security testing we become aware of new security issues in our products.  Every effort is then made to provide a timely fix to keep our customers secure. 

 

This is part 4 of my security blog series of securing your BI deployment. 

 

Secure Your BI Platform Part 1

Secure Your BI Platform Part 2 - Web Tier

Securing your BI Platform part 3 - Servers

 

Regular patching:

You're probably familiar with running monthly patches for Windows updates, "Patch Tuesday" on the second Tuesday of every month.

SAP happens to follow a similar pattern, where we release information about security patches available for our customers, for the full suite of SAP products.

 

BI security fixes are shipped as part of fixpacks and service packs. 

I will here walk you through signing up for notifications.

 

 

Begin by navigating to https://support.sap.com/securitynotes

 

Click on "My Security Notes*"

 

This will take you to another link, where you can "sign up to receive notifications"

https://websmp230.sap-ag.de/sap/bc/bsp/spn/notif_center/notif_center.htm?smpsrv=http%3a%2f%2fservice%2esap%2ecom

 

Click on "Define Filter" , where you can filter for the BI product suite.

 

Sign up for email notifications:

 

Defining the filter: Search for SBOP BI Platform (Enterprise)

And select the version:

 

Note that currently the search does not appear to filter on version unfortunately, so you will likely see all issues listed.

 

Your resulting filter should look something like this:

 

 

The security note listing will look something like this:

 

 

Understanding the security notes:

Older security notes have a verbal description of the versions affected and the patches that contain the fix.

For example, the note will say "Customers should install fix pack 3.7 or 4.3"...

 

Newer notes will also have the table describing the versions affected and where the fixes shipped:

Interpreting the above, the issue affects XIr3.1, 4.0 and 4.1.  

Fixes are provided in XIr3.1 Fixpacks 6.5 & 7.2, in 4.0 SP10, and in 4.1 SP4.

 

The forward fit policy is the same as "normal" fixes, meaning a higher version of the support patch line will also include the fixes.

 

The security note details will also contain a CVSS score.  CVSS = Common Vulnerability Scoring System.

It is basically a 0 - 10 scoring system to give you an idea of how quickly you should apply the patch.

More info on the scoring system https://nvd.nist.gov/cvss.cfm

 

1. Vulnerabilities are labeled "Low" severity if they have a CVSS base score of 0.0-3.9.

2. Vulnerabilities will be labeled "Medium" severity if they have a base CVSS score of 4.0-6.9.

3. Vulnerabilities will be labeled "High" severity if they have a CVSS base score of 7.0-10.0.

 

In short, if you see a 10.0, you better patch quickly!

 

Not applying the latest security fixes can cause you to fail things like PCI compliance, so after you have locked down and secured your environment, please make sure you apply the latest fixes and keep the bad guys out!

Share your insights for the future of BI; Complete the BARC BI Survey 2015

 

Until the end of the month, the BI Survey 2015 of BARC Research is open for everyone willing to share his/her insights on the direction of BI.

Do you want to share your insights and make your voice heard?


  • The Survey is scheduled to run until the end of May
  • It should take you about 20 minutes to complete
  • Business and technical users, as well as consultants, are all welcome to participate
  • Answers will be used anonymously
  • Participants will:
    • Receive a summary of the results from the survey when it is published
    • Be entered into a draw to win one of ten $50 Amazon vouchers
    • Ensure that your experiences are included in the final analyses

 

You can take the survey via : https://digiumenterprise.com/answer/?link=2319-HZXG9J6B

 

Thanks in advance

Merlijn

This was an SAP user group webcast today. I was late, but towards the end the SAP speaker said the SAP Safe Harbor statement applies:

 

"This blog, or any related presentation and SAP's strategy and possible future developments, products and or platforms directions and functionality presented herewith are all subject to change and may be changed by SAP at any time for any reason without notice. The information on this blog is not a commitment, promise or legal obligation to deliver any material, code or functionality..."

 

This means anything in the future is subject to change and these are my notes as I heard them.

 

Enterprise BI 4.2

1abi42.jpg

Figure 1: Source: SAP

 

Figure 1 shows the themes of BI4.2, overall being simplified, enhanced and extended

2fig.jpg

Figure 2: Source: SAP

 

Design Studio 1.5 has offline click-through applications, the ability to reduce the design time it takes to create charts, and Lumira interoperability (import Lumira into Design Studio). Version 1.5 includes commentary/create use cases, and export of data to PDF.

 

Analysis Office/EPM will consolidate into one plug-in, with one Analysis Office app for the BI suite. The right side shows features for BI4.1 SP06 planned for next month.

 

Enterprise BI 4.2

3afig.jpg

Figure 3: Source: SAP

 

Figure 3 shows what is planned for BI4.2, including commentary for Web Intelligence, design features for mobile devices, and HANA direct access to the universe.

 

BI4.2 Web Intelligence includes support for big numbers and set consumption. With set analysis, SAP is re-introducing sets and their consumption in Web Intelligence.

 

BI Platform features include commentary, a recycle bin in the CMC, and enhancements to the UMT and promotion tool to speed up promotions and upgrades.

 

Packaged audit feature is in the suite

 

Semantic Layer: linked UNX universes are back.

 

Authored universes on BEx queries were disabled in BI4.0/4.1 and are now back.

 

Set Analysis is back

 

Installation improvements include a one-step update and faster upgrades, as the current installation patching hasn't been the best.

 

There is a utility to remove unused binaries

 

Enhance DSL bridge, enhance BICS bridge, HANA enhancements for Web Intelligence on HANA; committed to enhancing Web Intelligence & BW experience.

SAP Lumira Roadmap

4asaplumiraroadmap.jpg

Figure 4: Source: SAP

 

Plans for SAP Lumira include convergence and search.

 

Question and Answer

Q: What happened with Dr. Who version WebI without microcube

A: Project cancelled; wanted to put support for Lumira for HANA-based integration

However, enhanced HANA based support for JDBC connection

 

Q: When will SP06 be available?

A: planned for 3rd week of June

Codeline finished yesterday – subject to safe harbour

 

Q: Recycle bin for Infoview?

A: It is just for CMC; submit for Idea Place

 

Q: Any plan to provide the option of linking data providers which was available in XI versions?

A: Enter in Idea Place

 

Q: We are about to upgrade from 4.0 SP7 P5 to 4.1 SP4; should we upgrade to 4.1 SP6 instead?

A: Difficult question to answer; may be better to delay

 

Q: when will the PAM for 4.1.6 be available?

A: Third week in June (planned)

 

Q:  Specifically what offline capabilities are planned for Design studio (in context for mobile bi for iOS)?

A: Cache-based setting when consumed on the device; will find out for sure

 

Q: Are there enhancements to the RESTful Web services API? Specifically can we now create and manage users using the API so we can get away from the .NET SDK?

A: Convergence to RESTful web services is the strategy; there are nuances needing a white paper

 

Q: Will there be full support for 'selection option' variables in Web Intelligence i.e. same functionality as in BEx?

A: put on Idea Place

 

Q: Is there provision for sensor and similar type data sources - IoT

A: Roadmap for IoT is within HANA – datasources for HANA

 

Q: Will BEx conditions be supported?

A: Look at  Idea Place

 

Q: Can we make Web Intelligence prompts hidden so that once a prompt value is set the prompt box will not appear?

A: Idea Place

 

Q: any enhancements (fixes) to integrity check in IDT tool?

A: Don’t know of anything new that have been added

 

Q: Will there be support for variables in defaults area of BEx queries?

A: Currently not supported in Web Intelligence; how much of BEx queries should surface to Web Intelligence

 

Q: Are there any plans to enhance Publications, specifically making Delivery rules available to Web Intelligence documents

A: Add to Idea Place; publications not enhanced between 3.x to 4.x

 

Q: Can you say more about differentiation of Lumira from competitors?  It looks to me that despite frequent releases you are still playing catch up.

A: This is why roadmap is substantial

 

Reference

SAP BI Suite Roadmap Strategy Update from ASUG SAPPHIRENOW

ASUG Webinars - May 2015

SAP SAPPHIRE and the ASUG Annual Conference were held last week at the Orange County Convention Center in Orlando, Florida. While most of the keynote action centered on S4/HANA and Hasso Plattner's Boardroom of the Future (see related Fortune article), there were three key messages in the analytics booths on the show floor.

 

All Roads (Still) Lead to SAP BusinessObjects BI 4.1

 

First, just in case you weren't paying attention, all roads (still) lead to SAP BusinessObjects BI 4.1 (see my previous State of the SAP BusinessObjects BI 4.1 Upgrade from December 2014). With mainstream support for SAP BusinessObjects Enterprise XI 3.1 and SAP BusinessObjects BI 4.0 ending on December 31, 2015, the race is on to get as many SAP customers as possible to the BI 4.1 platform. With the end of year quickly approaching, the time is now to get started on your BI 4.1 upgrade. SAP BusinessObjects BI 4.1 Support Pack 5 (SP5) is currently available (along with 5 patches) and Support Pack 6 (SP6) is still on track for mid-year. You couldn't see SP6 on the show room floor, but it started showing up in "coming soon" slide decks from SAP presenters. I'm curious to see free-hand SQL support in Web Intelligence and UNX support in Live Office, among other minor enhancements. SAP is also starting to talk about SAP BusinessObjects BI 4.2 (see Tammy Powlas' blog entitled  SAP BI Suite Roadmap Strategy Update from ASUG SAPPHIRENOW), but it most likely won't be ready in time for the impending support deadline. Instead, you should think of BI 4.2 as a small upgrade project once your organization is solidly using BI 4.1.

 

SAP Design Studio 1.5

 

SAP's second analytics message was about SAP Design Studio. I attended Eric Schemer's World Premier of Design Studio 1.5 session (see Tammy Powlas' blog entitled World Premiere SAP Design Studio 1.5 ASUG Annual Conference - Part 1). SAP Design Studio is the go-forward tool to replace both SAP Dashboards (formerly Xcelsius) and SAP Web Application Designer (WAD). Version 1.5 adds several new built-in UI capabilities, OpenStreetMap integration, and parallel query, just to name a few innovations. If your organization is not yet ready to start using Design Studio, remember that a new version arrives roughly every 6 months. Depending on your organization's own time table to begin using Design Studio, it might make sense to wait until the end of the year for Design Studio 1.6.

 

SAP Lumira on BI 4.1


SAP's third key message to analytics customers was about SAP Lumira. SAP Lumira v1.25 is a really big deal. The Lumira Desktop (starting with v1.23) includes a brand-new in-memory database engine that replaces the IQ-derived engine. Starting with v1.25, this engine is also available for the SAP BI 4.1 platform as an add-on, bringing SAP Lumira documents to the BI 4.1 platform (see Sharon Om's blog entitled What's New in SAP Lumira 1.25). No matter if you're currently on XI 3.1, BI 4.0 or BI 4.1, you'll want to plan for increasing the hardware footprint of your BI 4.1 landscape to accommodate the new in-memory engine, which runs best on a dedicated node (or nodes, depending on sizing) in your BI 4.1 landscape.

 

Conclusion


With BI 4.1 SP5, Design Studio 1.5, and Lumira 1.25, there are lots of new capabilities available for the BI platform starting today. And many more are planned for BI 4.1 SP6 and BI 4.2 over the next six to nine months. If you weren't able to attend SAP SAPPHIRE in person, you'll no doubt be hearing more on SAP webcasts and at the upcoming 2015 ASUG SAP Analytics and BusinessObjects User Conference, August 31 through September 2 in Austin, Texas.

Carsten Mönning and Waldemar Schiller


Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins), http://bit.ly/1dqm8yO

Part 2 - Hive on Hadoop (~40 mins)

Part 3 - Hive access with SAP Lumira (~30mins)

Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45mins)

 

Part 2 - Hive on Hadoop (~40 mins)


Following on from the Hadoop core installation on a Raspberry Pi 2 Model B in Part 1 of this blog series, in this Part 2, we will proceed with installing Apache Hive on top of HDFS and show its basic principles with the help of last part's word count Hadoop processing example.


Hive represents a distributed relational data warehouse featuring a SQL-like query language, HiveQL, inspired by the MySQL SQL dialect. A high-level comparison of HiveQL and SQL is provided in [1]. For a HiveQL command reference, see: https://cwiki.apache.org/confluence/display/Hive/LanguageManual.


The Hive data sits in HDFS with HiveQL queries getting translated into MapReduce jobs by the Hadoop run-time environment. Whilst traditional relational data warehouses enforce a pre-defined meta data schema when writing data to the warehouse, Hive performs schema on read, i.e., the data is checked when a query is launched against it. Hive alongside the NoSQL data warehouse HBase represent frequently used components of the Hadoop data processing layer for external applications to push query workloads towards data in Hadoop. This is exactly what we are going to do in Part 3 of this series when connecting to the Hive environment via the SAP Lumira Apache Hive standard connector and pushing queries through this connection against the word count output file.

 

HiveServices5.jpg
First, let us get Hive up and running on top of HDFS.

 

Hive installation
The latest stable Hive release will operate alongside the latest stable Hadoop release and can be obtained from Apache Software Foundation mirror download sites. Initiate the download, for example, from spacedump.net and unpack the latest stable Hive release as follows. You may also want to rename the binary directory to something a little more convenient.


cd ~/
wget http://apache.mirrors.spacedump.net/hive/stable/apache-hive-1.1.0-bin.tar.gz
tar -xzvf apache-hive-1.1.0-bin.tar.gz
mv apache-hive-1.1.0-bin hive-1.1.0


Add the paths to the Hive installation and the binary directory, respectively, to your user environment.


cd hive-1.1.0
export HIVE_HOME=$(pwd)
export PATH=$HIVE_HOME/bin:$PATH
export HADOOP_USER_CLASSPATH_FIRST=true


Make sure your Hadoop user chosen in Part 1 (here: hduser) has ownership rights to your Hive directory.


chown -R hduser:hadoop hive-1.1.0


To be able to generate tables within Hive, run the Hadoop start scripts start-dfs.sh and start-yarn.sh (see also Part 1). You may also want to create the following directories and access settings.


hadoop fs -mkdir -p /tmp
hadoop fs -chmod g+w /tmp
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -chmod g+w /user/hive/warehouse


Strictly speaking, these directory and access settings assume that you are intending to have more than one Hive user sharing the Hadoop cluster and are not required for our current single Hive user scenario.


By typing in hive, you should now be able to launch the Hive command line interface. By default, Hive issues information to standard error in both interactive and noninteractive mode. We will see this effect in action in Part 3 when connecting to Hive via SAP Lumira. The -S parameter of the hive statement will suppress any feedback statements.
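For example, a query can be run noninteractively with all feedback messages suppressed:

     hive -S -e 'show tables;'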


Typing in hive --service help will provide you with a list of all available services [1]:

  • cli – Command-line interface to Hive. The default service.
  • hiveserver – Hive operating as a server for programmatic client access via, for example, JDBC and ODBC. HTTP, port 10000. Port configuration parameter HIVE_PORT.
  • hwi – Hive web interface for exploring the Hive schemas. HTTP, port 9999. Port configuration parameter hive.hwi.listen.port.
  • jar – Hive equivalent to hadoop jar. Will run Java applications in both the Hadoop and Hive classpath.
  • metastore – Central repository of Hive meta data.


If you are curious about the Hive web interface, launch hive --service hwi, enter http://localhost:9999/hwi in your browser and you will be shown something along the lines of the screenshot below.

HWI.png


If you run into any issues, check out the Hive error log at /tmp/$USER/hive.log. Similarly, the Hadoop error logs presented in Part 1 can prove useful for Hive debugging purposes.


An example (continued)

Following on from our word count example in Part 1 of this blog series, let us upload the word count output file into Hive's local managed data store. You need to generate the Hive target table first. Launch the Hive command line interface and proceed as follows.


create table wcount_t(word string, count int) row format delimited fields terminated by '\t' stored as textfile;


In other words, we just created a two-column table consisting of a string and an integer field delimited by tabs and featuring newlines for each new row. Note that HiveQL expects a command line to be finished with a semicolon.

 

The word count output file can now be loaded into this target table.


load data local inpath '~/license-out.txt/part-r-00000' overwrite into table wcount_t;


Effectively, the local file part-r-00000 is stored in the Hive warehouse directory, which is set to /user/hive/warehouse by default. More specifically, part-r-00000 can be found in the Hive directory /user/hive/warehouse/wcount_t, and you may query the table contents.


show tables;

select * from wcount_t;


If everything went according to plan, your screen should show a result similar to the screenshot extract below.

 

ShowTables2.png


If so, it means you managed to both install Hive on top of Hadoop on Raspberry Pi 2 Model B and load the word count output file generated in Part 1 into the Hive data warehouse environment. In the process, you should have developed a basic understanding of the Hive processing environment, its SQL-like query language and its interoperability with the underlying Hadoop environment.

 

In the next part of this series, we will bring the implementation and configuration effort of Parts 1 & 2 to fruition by running SAP Lumira as a client against the Hive server and will submit queries against the word count result file in Hive using standard SQL with the Raspberry Pi doing all the MapReduce work. Lumira's Hive connector will translate these standard SQL queries into HiveQL so that things appear pretty standard from the outside. Having worked your way through the first two parts of this blog series, however, you will be very much aware of what is actually going on behind the scene.

 

Links

Apache Software Foundation Hive Distribution - Index of /hive

Apache Hive wiki - https://cwiki.apache.org/confluence/display/Hive/GettingStarted

Apache Hive command reference - https://cwiki.apache.org/confluence/display/Hive/LanguageManual

A Hadoop data lab project Part 1 - http://bit.ly/1dqm8yO

Configuring Hive ports - http://docs.hortonworks.com/HDP2Alpha/index.htm#Appendix/Ports_Appendix/Hive_Ports.htm

References

[1] T. White, "Hadoop: The Definitive Guide", 3rd edition, O'Reilly, USA, 2012
