
SFTP Destination support is one of the more interesting new features introduced with the recently released SAP BusinessObjects BI Platform 4.1 Support Pack 6.


Quite a lot of customer requests for this one, and it's finally here!


When you send or schedule a document to an SFTP destination, you will be asked to enter a fingerprint value.


  • What is a fingerprint?
  • Why is it important?
  • How do you determine the fingerprint?


I'll answer these questions in this blog. Additionally, I'll describe how I set up a simple environment that I've used for internal testing and teaching purposes for the SFTP feature.


SSH File Transfer Protocol (SFTP) Fingerprint


SFTP uses Secure Shell (SSH) to send files securely over the network. It's a full-fledged transfer and file management protocol that uses public-private key cryptography so that any client can send a file to a server securely.

Sometimes it's confused with FTP Secure (FTPS) or Simple FTP, but they're not compatible: FTPS is FTP over SSL, and Simple FTP has no security features built in.


Why the need for secure file transfer?


I'll give the most often cited analogy: snail mail. Say your company needs to send a letter to a bank. You put the letter in an envelope, address the envelope, and drop it off at your company's mailroom. The clerk hands it over to the postman for delivery to the bank.


But let's say the clerk happens to be not-above-board. He steams open the envelope and reads the contents, and uses the information found within for private gain. Your letter is compromised. The clerk puts the letter back in the envelope, seals it, and sends it on its way, no-one the wiser.


To prevent that, the bank mails you special envelopes. Anyone can put contents into the envelope, but only the bank can open the envelope without destroying the contents. The shady clerk's now thwarted and would no longer be able to read the contents and steal the information.


But say the clerk's pretty crafty. He knows that the bank envelopes are delivered through his mailroom, so he waylays the package when it comes in. Instead, he has a set of those special envelopes made for himself, ones that only he can open, and forwards those envelopes to you. You can't tell the difference between the clerk's envelopes and the bank's, and so you put the letter in the clerk's envelope and drop it off at the mailroom. The clerk opens the envelope, reads the letter, steals the information, then puts the letter in one of the bank's envelopes and gives it to the postman. Neither you nor the bank is aware that the letter has been compromised.


The clerk is called the man-in-the-middle, and the scheme he plays is called the man-in-the-middle attack.


To thwart a man-in-the-middle, the bank places a unique symbol on its envelopes. This symbol would be extremely difficult for others to duplicate. The bank then publicly publishes what this symbol looks like, allowing you to verify that the special envelopes you have are actually from the bank and not from the man-in-the-middle.


This symbol is a fingerprint.


Fingerprints are extremely difficult to duplicate, since they're computed by hashing the public key, the key used for cryptography.


Discover the SFTP Fingerprint that BI Platform Expects


Now that you know the importance of a fingerprint, how do you discover the fingerprint needed when sending or scheduling a document to SFTP?


If you use an SFTP client tool such as WinSCP or PuTTY, you'll see that it presents a fingerprint value for every SFTP server that you connect to. But those fingerprint values won't work with BI Platform, because the hashing algorithm used is different.


Typical client tools use an MD5 hash. BI Platform uses the more secure SHA-1 hash. Because of that, you'll need some other means to get the fingerprint.


One way is to let BI Platform tell you. When it connects to a SFTP server, it retrieves the public key and computes the SHA-1 fingerprint from it. If that expected fingerprint does not match the fingerprint you've entered for the SFTP destination parameters, then an error is entered in the trace files. That error line records both the expected and entered fingerprint values. You can use this to get the expected fingerprint. The steps are described in SAP Note 2183131, but I'll describe the steps here as well.


Log onto the Central Management Console and enable tracing for the Adaptive Job Server. Log onto BI launch pad, navigate to the public "Web Intelligence Samples" folder, right-click on a WebI document and select from the menu Send->SFTP Location:




Fill out the SFTP Server information, including hostname, port, user name and password. For the fingerprint, just enter a keyword that'll be easy to remember and search for, say FINDTHEFINGERHERE:




Click Send. Nothing appears to happen (not even an error dialog box pops up), but the document will not have been sent to the SFTP server.


Go to the machine where the BI Platform Adaptive Job Server is running, and navigate to the logging folder for the BI Platform deployment. Find the trace file associated with the Adaptive Job Server Destination Service child process. Open the glf file associated with that Service, and search for the fingerprint keyword you entered above:




Here's the line:


destination_sftp: exception caught while connecting to sftp server [<hostname>]. Details: [83:89:8c:dd:e8:00:a2:e3:26:63:83:24:47:71:ec:8c:1b:ce:de:25 is admin input.Mis match in fingerprint. i.e hashing server fingerPrint obtained  from serverFINDTHEFINGERHERE]


The long sequence of 20 two-digit hex numbers separated by colons is the SHA-1 hash of the public key as received by BI Platform. Enter that value into the FingerPrint box of the Send dialog box:




and you'll see the document sent successfully to the SFTP server.


Are we done?


What if I were to ask you whether the fingerprint above is the one for the SFTP server or a man-in-the-middle between your BI Platform deployment and the SFTP server?


You can't tell by looking at the fingerprint value itself; you need some other, independent way to validate it. A good way is to contact the SFTP server maintainer and ask: "Would you provide us, securely, the SHA-1 fingerprint for your SFTP server?" That's actually the best way.


But sometimes you encounter Administrators who don't know how to do that. What then?


Given the public key, one you've obtained from the SFTP server by secure means, you can compute the fingerprint yourself. I'll give instructions for doing that.


First, let's set up a simple trial SFTP server, so we can see things from the SFTP server side.



Generating the Cryptographic Public Key and Private Key


First, generate the public and private keys that the SFTP server will use for cryptography. There are various ways to do this; some SFTP server products have their own.


What I'll use is the popular and common PuTTY tools.


Download the PuTTYgen RSA key generation utility from here.


It's a fairly easy tool to use. In the "Parameters" section, specify the type and length of key, and click the "Generate" button:




You'll see the public key in "OpenSSH format" displayed in the text area titled "Public key for pasting into OpenSSH authorized_keys file:". Copy and paste the key into a text file using a text editor, such as Notepad or Notepad++, and save the contents to a file named public_key_openssh.pub. By the way, you'll see the "Key fingerprint:" value in the above screenshot. Ignore it. That's an MD5 hash fingerprint, not the SHA-1 fingerprint we want.


Next, go to the menu selection Conversions -> "Export OpenSSH key" to export the private key to a file, which I name private_key.key:




Why an OpenSSH key? Because I'm going to use an SFTP implementation that expects private keys to be in OpenSSH format. There are other formats, and you'll need to refer to your SFTP server documentation to find out which one applies if you're using something different from me.


Now that we have the keys, let's set up the SFTP server.



Setting up the freeFTPd SFTP Server


For simplicity, I'll use the open-source freeFTPd implementation of an SFTP server. There are others, but freeFTPd is the one I find easiest to set up and use.


Download and run. First go to the SFTP -> Hostkey page, and specify the private_key.key RSA key you generated previously:




Then go to the Users page and create a test user. Call it testuser:




Now go to the SFTP page and start up the SFTP server, making sure you first set where the SFTP server is to store incoming files via the "SFTP root directory" setting:




And finally, check the Status page to ensure the SFTP server is running:




That's it!


Now connect to this SFTP server using the instructions given above, and get the fingerprint value that BI Platform expects. Next, we want to compute the fingerprint directly from the public key file public_key_openssh.pub and verify that the value is correct.



Use OpenSSL tools to Compute the SHA-1 Fingerprint


Let's have a look at the public key file contents (in OpenSSH format):


ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAnx3a1iYFDX4HY8Ysf2hOE1UJwha+rLD0iq82gn3+Lgla3ZzPOTuU4R39yQ5cgtzfvQrUq+NIEVEKrw1Vm3CuYVs/UrCUEhDhYOc4AfzszDGaLPnIIJjrZt9i2TnZ+9OeLakno4bgNntVglr8GbL2tryg+FWTzPGcq9O6O5gnavE=



Now the first field, 'ssh-rsa', specifies that the type of key is RSA, and the final field, e.g. 'rsa-key-20150626', is merely an optional comment (I just had PuTTY denote the type and date when I generated it).


In between, the gibberish is the Base64-encoded string value of the public key binary. What we need to do is extract this value from the file, Base64-decode it to get the binary value back, then generate the SHA-1 digest of that value (in colon-separated two-digit hex format).


Now, the last step can be done using the OpenSSL command-line tools. And to make life much easier, you can use standard command-line text tools to accomplish the two preceding steps.


The easiest route, if you're not on a Unix machine, is to download the Cygwin toolset. The Cygwin command-line tools contain the text file manipulation and base64 utilities needed to automate the other steps. Go to the Cygwin site and install the tools (the default install won't include the OpenSSL toolset, so make sure you manually select it as well during the installation of Cygwin packages).


Now, the way to compute the fingerprint is a single (albeit longish) command-line:
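Reconstructed from the three commands broken down below, the full pipeline can be sketched as follows. The key file written in the first step is a dummy for illustration only ("aGVsbG8=" is Base64 for "hello", not a real RSA key); in practice, run the pipeline against the real public_key_openssh.pub you saved from PuTTYgen:

```shell
# Illustration only: write a dummy one-line OpenSSH public key file.
# Substitute the real public_key_openssh.pub exported from PuTTYgen.
printf 'ssh-rsa aGVsbG8= rsa-key-20150626\n' > public_key_openssh.pub

# Extract the Base64 field, decode it to binary, and compute the
# colon-separated SHA-1 digest, i.e. the fingerprint BI Platform expects.
cut -d ' ' -f 2 < public_key_openssh.pub | base64 -d | openssl dgst -c -sha1
```

The output line ends with the colon-separated fingerprint; compare that value with the one reported in the Adaptive Job Server trace.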




Breaking down the individual commands on the pipe, the command:


    cut -d ' ' -f 2 < public_key_openssh.pub


reads the file public_key_openssh.pub, splits the contents on spaces, and streams out the second field. Essentially, it's extracting the Base64-encoded public key from the public key file. The command:


    base64 -d


merely reads the input pipe, Base64-decodes it, and streams out the binary value. And finally, the command:


    openssl dgst -c -sha1


uses the OpenSSL tool to compute the SHA-1 digest of the binary value.


As you can see, the fingerprint we compute directly from the public key corresponds to the one BI Platform says it got from the SFTP server. So the public key BI Platform is using is the one from the SFTP server, and not from a man-in-the-middle.




If you require ways to send or schedule BI Platform documents across the network securely, the recommended solution is to upgrade your deployment to BI 4.1 SP6 or higher, and use the new SFTP destination functionality.


One quirk is the fingerprint value. This blog describes how to determine the fingerprint value to use, and how to validate the fingerprint for correctness.


Hope you find this information useful, and you're able to integrate this new functionality into your BI architecture!



Ted Ueda has supported SAP BusinessObjects BI Platform and its predecessors for almost 10 years. He still finds fun stuff to play with.

An understanding of how your BI Platform is used will enable you, as a BI Platform Administrator, to take the necessary steps to improve its reliability, performance and adoption within your organisation.


The Auditing database, coupled with a new comprehensive Universe and a set of Web Intelligence documents that I have developed, will help give you the insight you need, and this is what I'd like to share with you now.


My Universe and documents have been in development, on and off, for some time but they have now reached a maturity level where I’m happy to share them with a wider community.


I’m overall pretty happy with the Universe and the documents, however they need a little performance testing on large data sets. This is where you can help me, help you!


Please download my latest ‘build’ (available for a limited time) and give them a blast. They are provided ‘as is’. I’m looking for feedback on any defects, performance issues and also additional reporting/business requirements. If you can get back to me with your feedback, I can improve the content for everyone else’s benefit. I may occasionally publish a newer ‘build’ in the same container, so check every now and then for an update.


Once I’m happy with the amount of feedback and testing I will make the Universe and documents more widely and permanently available.


I have ported the universe to various databases, and it is currently available for:

  • Microsoft SQL Server
  • Oracle
  • SQL Anywhere

Feedback on which database I should next port to would be helpful too!


There’s a large set of documents, each with a number of ‘reports’. The number of reports ranges from 1 to over 50 within a single document. So you can see I’ve been busy! It will take you some time to go through them all.


Here’s a list of documents:

1.     STA1 - Start here - Events over time.wid

2.     FRA1 - Fraud Detection - 1 machine more than 1 user.wid

3.     FRA2 - Fraud Detection - 1 machine more with multiple logon failures.wid

4.     LIC1 - License - 1 user more than 1 machine.wid

5.     LIC2 - License - Periods when sessions exceeded X.wid

6.     LIC3 - License - Users no longer using the system.wid

7.     SYS1 - System - Event Log.wid

8.     SYS2 - System - Delay in Recording of events to Audit Database.wid

9.     SYS3 x1 - System - Overall System Load Analysis (without Mode).wid

          SYS3 x2 mi - System - Overall System Load Analysis (Mode is Interactive Only).wid

          SYS3 x2 ms - System - Overall System Load Analysis (Mode is Scheduled Only).wid

          SYS3 x4 - System - Overall System Load Analysis inc Mode.wid

10.     SYS4 x1 - System -  Refresh Analysis (Mode is Interactive).wid

          SYS4 x1 - System - Refresh Analysis (Mode is Scheduled).wid

          SYS4 x2 - System - Refresh Analysis (inc Mode).wid

11.     USA1 x1 - Usage - Session Analysis.wid

          USA1 x15 u - Usage - Session Analysis (With Users, Without Mode).wid

          USA1 x2 mI - Usage - Session Analysis (Mode is Interactive Only).wid

          USA1 x2 mS - Usage - Session Analysis (Mode is Scheduled Only).wid

          USA1 x30 umI - Usage - Session Analysis (With Users) (Mode is Interactive Only).wid

          USA1 x30 umS - Usage - Session Analysis (With Users) (Mode is Scheduled Only).wid

          USA1 x4 m - Usage - Session Analysis (With Mode).wid

12.     USA2 - Usage - Large number of Data Providers.wid

13.     USA3 - Usage - Documents no longer used in the system.wid

14.     USA4 - Usage - Universe Objects usage. Identify infrequent used objects.wid

15.     USA5 - Usage - Universes no longer used.wid


Each document has an ‘About’ page that provides a few more details on its purpose.

The Universe is, of course, documented within itself. Every description box has a description! However I’ve not yet written supporting documentation for either the universe or the Web Intelligence documents. Feedback from you on what I should explain would be great!


Requirements: BI Platform BI 4.1 Support Pack 5 or greater.



  1. Download the content.
  2. Import one of the four 'Universe' LCMBIAR files into your system using Promotion Management (it will go into "BI Platform Auditing" folder)
  3. Import the Web Intelligence LCMBIAR file (it will go into "BI Platform Auditing" folder)
  4. Edit the connection that is imported (in "BI Platform Auditing" folder) with the correct login credentials.
  5. Open the Web Intelligence document ‘STA1 - Start here - Events over time.wid’ as your starting point!


Please post your feedback here and I will do my best to comment back as soon as possible. (I’m on annual leave 24th July until 17th August 2015 so I won’t be able to reply during this time)


Matthew Shaw


Hi Folks,



This post is a reference for those who want to install BI 4.1 SP4 (full build) on Linux and have never seen a similar installation.

The purpose of this post is to walk through a very simple, i.e. basic, standalone installation: no add-ons, no custom database, no cluster, etc.



A working RHEL host, with hosts file entries for itself.

Important packages preinstalled according to the installation guide.

Consult the PAM (supported platforms document) before installation.



Quick check: I created the directory bi42sp4p1, which will be used as the installation directory.










More Reference:

SAP Business Intelligence Platform Pattern Books - Business Intelligence (BusinessObjects) - SCN Wiki



That's all folks.





Carsten Mönning and Waldemar Schiller

Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins), http://bit.ly/1dqm8yO

Part 2 - Hive on Hadoop (~40 mins), http://bit.ly/1Biq7Ta

Part 3 - Hive access with SAP Lumira (~30mins), http://bit.ly/1cbPz68
Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45mins)


Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45mins)


In Parts 1-3 of this blog series, we worked our way towards a single node Hadoop and Hive implementation on a Raspberry Pi 2 Model B. We showcased a simple word count processing example with the help of HiveQL, both on the Hive command line and via a standard SQL layer over Hive/Hadoop in the form of the Apache Hive connector of the SAP Lumira desktop trial edition. The single node Hadoop/Hive setup represented just another SAP Lumira data source, allowing us to observe the actual SAP Lumira-Hive server interaction in the background.

This final part of the series comes full circle by showing how to move from the single node to a multi-node Raspberry Pi Hadoop setup. We will restrict ourselves to introducing a second node only, the principle naturally extending to three or more nodes.


Master node configuration


Within our two node cluster setup, "node1" will be set up as the master node with "node2" representing a slave node 'only'. Set the hostname of the master node, as required, in file /etc/hostname.

To keep things nice and easy, we will 'hard-code' the nodes' IP settings in the local hosts file instead of setting up a proper DNS service. That is, using, for example, the leafpad text editor, sudo leafpad /etc/hosts, modify the master node hosts file as follows, substituting each node's static IP address:

     <IP-of-node1>     node1
     <IP-of-node2>     node2

Remember in this context that we edited the /etc/network/interfaces text file of node1 in Part 1 of this blog in such a way that the local ethernet settings for eth0 were set to a static IP address. The master node IP address in the hosts file above needs to reflect that specific IP address setting.

Similarly, edit the file /opt/hadoop/etc/hadoop/masters to indicate which host will be operating as "master node" (here: node1) by simply adding a single line consisting of the entry node1. Note that in the case of older Hadoop versions, you need to set up the masters file in /opt/hadoop/conf. The "masters" file only really indicates to Hadoop which machine(s) should operate a secondary namenode. Similarly, the "slaves" file provides a list of machines which should run as datanodes in the cluster. Modify the file /opt/hadoop/etc/hadoop/slaves by simply adding the list of host IDs, for example:
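For this two-node setup, the slaves file would, for example, read as follows (node1 appears too, since the master also runs a datanode here):

```
node1
node2
```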



You may remember from Part 1 of the series that the Hadoop configuration files are not held globally, i.e. each node in a Hadoop cluster holds its own set of configuration files, which need to be kept in sync by the administrator using, for example, rsync. Keeping the configuration of the nodes of a cluster of significant size in sync represents one of the key challenges when operating a Hadoop environment. A discussion of the various means available for managing a cluster configuration is beyond the scope of this blog; you will find useful pointers in [1].


In Part 1, we configured the Hadoop system for operation in pseudodistributed mode. This time round we need to modify the relevant configuration files for operation in truly distributed mode by referring to the master node determined in the hosts file above (here: node1). Note that under YARN there is only a single resource manager for the cluster operating on the master node.



The settings to be modified fall into three groups:

  • Common configuration settings for Hadoop Core.
  • Configuration settings for the HDFS daemons: the namenode, the secondary namenode and the datanodes.
  • General configuration settings for MapReduce. Since we are running MapReduce using YARN, the MapReduce jobtracker and tasktrackers are replaced with a single resource manager running on the namenode.


File: core-site.xml - Change the host name from localhost to node1
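As a sketch only, the relevant property ends up looking something like this; the port shown is an assumption, so keep whatever port you configured in Part 1:

```xml
<configuration>
  <property>
    <!-- Default file system URI: host changes from localhost to node1.
         The port (9000) is an assumption; reuse the port from Part 1. -->
    <name>fs.default.name</name>
    <value>hdfs://node1:9000</value>
  </property>
</configuration>
```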











File: hdfs-site.xml - Update the replication factor from 1 to 2
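A minimal sketch of the change; with two datanodes in the cluster, each HDFS block can now be held on both:

```xml
<configuration>
  <property>
    <!-- Replication factor raised from 1 (pseudodistributed) to 2. -->
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```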









File: mapred-site.xml.template ( “mapred-site.xml”, if dealing with older Hadoop versions) - Change the host name from localhost to node1
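Again as a sketch only; the property name and port below are those of classic MapReduce setups and are assumptions, so adapt them to the values used in your Part 1 configuration:

```xml
<configuration>
  <property>
    <!-- Jobtracker host changes from localhost to node1.
         Property name and port are assumptions based on classic setups. -->
    <name>mapred.job.tracker</name>
    <value>node1:54311</value>
  </property>
</configuration>
```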







Assuming that you worked your way through Parts 1-3 with the specific Raspberry Pi device that you are now turning into the master node, you need to delete its HDFS storage, i.e.: sudo rm -rf /hdfs/tmp/*

This already completes the master node configuration.


Slave node configuration

When planning to set up a proper Hadoop cluster consisting of considerably more than two Raspberry Pis, you may want to use an SD card cloning programme such as Win32 Disk Imager to copy the node1 configuration above onto the future slave nodes. See, for example, http://bit.ly/1imyCXv for a step-by-step guide to cloning a Raspberry Pi SD card.


For each of these clones, modify the /etc/network/interfaces and /etc/hostname file, as described above, by replacing the node1 entries with the corresponding clone host name.

Alternatively, and assuming that the Java environment, i.e. both the Java run-time environment and the JAVA_HOME environment variable, is already set up on the relevant node as described in Part 1, use rsync for distributing the node1 configuration to the other nodes in your local Hadoop network. More specifically, run the following command on the master node to push the configuration to the slave node (here: node2):

     sudo rsync -avxP /usr/local/hadoop/ hduser@node2:/usr/local/hadoop/

This way the files in the hadoop directory of the master node are distributed automatically to the hadoop folder of the slave node. When dealing with a two-node setup as described here, however, you may simply want to work your way through Part 1 for node2. Having already done so in the case of node1, you are likely to find this pretty easy-going.


The public SSH key generated in Part 1 of this blog series and stored in id_rsa.pub (and then appended to the list of SSH authorised keys in the file authorized_keys) on the master node needs to be shared with all slave nodes to allow for seamless, password-less node communication between master and slaves. Therefore, switch to the hduser on the master node via su hduser and add ~/.ssh/id_rsa.pub from node1 to ~/.ssh/authorized_keys on slave node node2 via:

          ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@node2

You should now have password-less access to the slave node and vice versa.

Cluster launch

Format the Hadoop file system, then launch both the file system services, i.e. namenode, datanodes and secondary namenode, and the YARN resource manager on node1, i.e.:

     hadoop namenode -format

     /opt/hadoop/sbin/start-dfs.sh

     /opt/hadoop/sbin/start-yarn.sh

(The sbin start scripts above assume the Hadoop 2.x layout used in this series.) When dealing with an older Hadoop version using the original MapReduce service, the start scripts to be used read /opt/hadoop/bin/start-dfs.sh and /opt/hadoop/bin/start-mapred.sh, respectively.

To verify that the Hadoop cluster daemons are running OK, launch the jps command on the master node. You should be presented with a list of services, such as the namenode, secondary namenode and datanode on the master node, and a datanode on the slave nodes. For example, in the case of the master node, the list of services should look something like this, i.e., amongst other things, both the single YARN resource manager and the secondary namenode are operational:
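An illustrative jps listing for the master node might look like this (the process IDs are, of course, arbitrary, and the exact set of daemons depends on your configuration):

```
1536 NameNode
1641 DataNode
1723 SecondaryNameNode
1868 ResourceManager
1984 NodeManager
2270 Jps
```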


If you find yourself in need of issue diagnostics at any point, consult the log4j.log file in the logs subdirectory of the Hadoop installation directory first. If preferred, you can separate the log files from the Hadoop installation directory by setting a new log directory in HADOOP_LOG_DIR in the script hadoop-env.sh.


The picture shows what a two node cluster setup may look like. In this specific case, the nodes are powered by the powerbank on the right-hand side of the picture.


And this is really pretty much all there is to it. We hope that this four-part blog series helped to take some of the mystery out of the Hadoop world for you, and that this lab project demonstrated how easily and cheaply an, admittedly simple, "Big Data" setup can be implemented on truly commodity hardware such as Raspberry Pis. We shall have a look at combining this setup with the world of Data Virtualization and, possibly, Open Data in the not-too-distant future.



A Hadoop data lab project on Raspberry Pi - Part 1/4 - http://bit.ly/1dqm8yO
A Hadoop data lab project on Raspberry Pi - Part 2/4 - http://bit.ly/1Biq7Ta

A Hadoop data lab project on Raspberry Pi - Part 3/4 - http://bit.ly/1cbPz68

Jonas Widriksson blog - http://www.widriksson.com/raspberry-pi-hadoop-cluster/

How to clone your Raspberry Pi SD card for super easy reinstallations - http://bit.ly/1imyCXv



[1] T. White, "Hadoop: The Definitive Guide", 3rd edition, O'Reilly, USA, 2012

Most BI landscapes in industry utilize a content-driven BI approach rather than a user-focused BI approach. While the content-centric approach is great for an IT or IS organization, it poses challenges to the business, which has to juggle a lot of content, be it dashboards, reports, explorer information spaces or any other BI content, to do its analysis. This can lead to a lot of frustration and confusion, and in the process it also wastes a lot of the business's time in gathering all the relevant information for a specific analysis. Also, when a new business user wants to do the same analysis, the path he might take can be time consuming, as he might need to understand which BI content is available for an analysis and what type of information it holds, and then switch between those contents to reach an answer.

To overcome this problem we came up with a novel way to build user-focused BI utilizing custom websites with embedded BI content. Now, before going there, you might argue: why would anyone need one more website when we already have BI launch pad in BusinessObjects as the default portal? The answer is quite simple: BI launch pad can hold multiple types of content, like reports (Webi/Crystal), dashboards and data exploration information spaces, and most of the time they are just sitting in different folders and subfolders, with no logical way to tie them to a specific type of activity or user, so the process can be very cumbersome. Also, sometimes the contents are not linked together. For example, there could be a sales dashboard and a sales detail report, but the user has to go to the sales dashboard, find the scenario he wants to analyze, and then go to the report and select all the prompts and filters to get to the details for that scenario.

How this solution works from a bird’s eye view: the most critical features that make the solution work are OpenDocument URLs for specific BI contents and enabling single sign-on for BusinessObjects. The solution leverages the opendoc links of BusinessObjects contents, combining them with iframes in a custom portal. The portal is rendered via an IIS website which has a user-friendly DNS alias. Let’s say a user can access all the relevant sales information by typing http://sales vs. http://businessobjects-dev (followed by a bunch of clicks to get to the desired folder); which one makes more sense and is easier to remember when you are looking for all BI content related to sales? We created the sites and named them like http://Sales.yourcompanydomain.com: short, meaningful and easy for users to remember. The IIS websites make use of iframes within which the OpenDocument links for dashboards, explorer information spaces and Webi reports are called. We also make sure to write the website code so that it loads the dashboard contents in parallel while the page loads, without wasting the user's time, and once loaded the dashboard is not refreshed automatically.
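A minimal sketch of the embedding itself: each tab of the portal page wraps an OpenDocument URL in an iframe. The host name, port and the document's CUID below are placeholders, not values from this solution:

```html
<!-- One tab of the portal: a BusinessObjects document served via its
     OpenDocument URL. Replace host, port and iDocID with your own values. -->
<iframe src="http://yourbobjserver:8080/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&iDocID=AbCdEfGh12345"
        width="100%" height="800" frameborder="0">
</iframe>
```

With single sign-on enabled, the iframe renders the document directly, so the user never sees the launch pad folder structure.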


Let's take an example:

Let’s take a fictitious scenario; assume you are a product manager in a large organization selling products to consumers across the globe, and you are assigned to a product line in the company. Your job requires you to ensure you have enough inventory for next week for your top-selling products from last quarter in the North American region, and to ensure the plants which supply the product are going to produce enough of them for the next quarter.


In a traditional content-driven BI scenario you would have to go to the sales folder and find out which report or dashboard gives you the top products for last quarter by region. Then find out which is your top-selling product for North America by filtering your product lines and regions. After you find the product, you would need to go to the inventory folder and find out which report or dashboard shows the current inventory by product, then find the current inventory levels for the top product you got from the sales report. Then go to the forecast report, find the forecast of the product for the next quarter, and compare that number with the current inventory to understand how much of the product you would need to produce during the next quarter. This whole process can take many hours to get an answer.


Now let’s take the same scenario in the new approach, where there is a dedicated web link like http://PM-Analytics which has the sales dashboard, the inventory dashboard and the forecasting report at the same web link as different tabs. The user just goes into the sales tab and finds the top-selling product, then moves to the next tab, inventory, while still preserving his sales analysis. He then finds the inventory numbers and goes to the next tab, the forecast report, filters on the product and compares the additional inventory that will be needed based on the forecast. Sounds simple! This process also saves the user a lot of headache in finding the right content and using it correctly, as everything needed is in one place: his sales analysis is not lost, and he can do the same analysis for the South America region quite easily, as his old analysis does not automatically reset to defaults and the session should still be active. This process should take no more than a few minutes.



How does it Look:

In traditional content-focused BI, users have to go to the launch pad, then the public folder, and then find all the contents that are needed for an analysis.



In the new process, users just need to type a URL in a browser, which can be as simple as http://Sales. This allows the user to directly view the landing dashboard, without the hassle of finding it in a folder, along with all the additional BI contents that support an analysis. They do not see anything else except what they need.



The application can contain reports that support analysis, as well as Explorer information spaces for data exploration.

When users want another set of related data, they just click on another tab for additional analysis.


Solution Architecture:

Here is how the solution looks. The user types in a custom URL like http://sales, which is hosted on an IIS web server as a web application. The request is then redirected through a load balancer to the BusinessObjects web server and subsequently to the BOBJ application server, which serves the requested BI content.


Creating a Web Application: Deploying a BusinessObjects Dashboard with a Custom IIS Website Name

I am going to discuss how to build a custom application URL to host BI content so that a user group has its BI content available in just one place, rather than having to go through the Launch Pad and a bunch of folders. The solution below is meant for the IIS web server, so all the screenshots are specific to IIS.


The following items need to be installed and configured on the server to prepare it to serve IIS websites:

  • IIS services should be configured on the server
  • .Net Framework 4.5 should be installed

Configure IIS Services on the server

Go to the Server Manager console on the server and select the option Add Roles -

Select the Web Server (IIS) role and click Next -


Once the installation is over, you will be able to see the role and services installed -

Install .Net Framework 4.5

Download the .Net 4.5 setup from Microsoft site.
Double click on the downloaded .exe file to start the setup.
Follow the on screen instructions to complete the setup.

How to Set Up a Custom IIS Website for Housing Opendoc Links

1. Content Home Folder for Site

Create Directory Folder

Create a folder that will serve as the home folder for the website; this is required when creating the website.

Apply Access Levels to Site Folder

Go to the properties of the home folder that was created for the web site and add the ‘Everyone’ group with execute access –

2. Create the Website In IIS

Add the Web Site

Open up the Windows Server IIS manager console in one of two ways:

Start > Run > inetmgr > hit enter

or …

Start > Administrative Tools > Internet Information Services Manager

Right click on ‘Sites’ and select the option – Add Web Site.

Fill in the detail fields corresponding to the application area for which we are creating the site. These are…

Site Name: This name should match that of the Application Area established in the BO Launchpad

Physical Path: This is the path to the home content folder for the site that you created in an earlier step

Host Name: This equates to the web URL that users will enter to visit the web page (see example, below, for the “Inventory” application).

Application Pool Settings

In the IIS left pane, click on Application Pools to see all application pools for your sites. For your new site, make sure that the application pool is set to use the latest version of the .Net Framework. If it is not, double-click the application pool and, in the dialog window, select the latest .Net Framework version.




In the IIS left pane, right-click on your new application site and select Edit Bindings… Make sure that both bindings are present on the website – the short name and the fully qualified name.



3. Finalize Web Content Customization

Populate the Home Directory with Sample Web Content

Once the website is created, the code needs to be put in the home folder we created.



Modify Customized Content Files

There are a couple of things we need to modify in the site for each application area we are rolling it out for.

The following three files need to be modified to adapt the site to the new application.


The timeout popup setting is in this file, in the Init() function; it can be changed if required. We are currently using a standard timer value of 7140000.
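For context on what that number means, the timer value is in milliseconds; a quick conversion (Python used purely for illustration) shows it is just short of two hours:

```python
# The timeout value from the Init() function, in milliseconds.
TIMEOUT_MS = 7140000

# Convert to minutes: ms -> seconds -> minutes.
timeout_minutes = TIMEOUT_MS / 1000 / 60

print(timeout_minutes)  # 119.0 minutes, i.e. one minute short of two hours
```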



The title of the website and the working environment are present in this file –

The Workingenv parameter decides which links will be used from the links.xml file.
The Title parameter decides the title of the web page.
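As an illustration only (the post does not show the actual file layout), the two parameters could live in a standard ASP.NET web.config appSettings section like this; the key names come from the post, the values are hypothetical:

```xml
<configuration>
  <appSettings>
    <!-- Workingenv selects which set of links is read from links.xml -->
    <add key="Workingenv" value="Production" />
    <!-- Title sets the title of the web page -->
    <add key="Title" value="Sales Analytics" />
  </appSettings>
</configuration>
```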


The opendoc links, the titles of the different tabs, and the tooltip help are present in this file –

Based on the working environment we set in the web.config, the opendoc links will be picked from the links.xml file.
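Purely as a hypothetical sketch (the post does not show the real links.xml schema), a file keyed by working environment might look like the fragment below. Only the OpenDocument URL pattern (openDocument.jsp with sIDType and iDocID parameters) is a standard BI platform convention; the element names, server name, and CUID are placeholders:

```xml
<links>
  <environment name="Production">
    <!-- One tab per BI document; the CUID below is a placeholder -->
    <tab title="Sales" tooltip="Top customers by region"
         url="http://boeserver:8080/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&amp;iDocID=AbCdEf123456" />
  </environment>
</links>
```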


While inserting the links, we need to modify them a bit –


4. Request DNS Alias for the server/loadbalancer

Once the website is created, make sure to create a simple alias for users to access the site – for example http://sales, http://quality, etc.
The alias names being requested should be the SAME as the bindings that have been provided for the website.

Once the alias has been created, access the web site using any browser and confirm that it is working as expected.


Finally, once you are done with these steps, you will have a website where you can embed BI content for a personalized experience for your end-user community.


Please keep in mind that since there is no logoff button in the custom website, sessions remain active on the server until they time out, even after the user closes the browser. However, if you are on BI 4.1 SP6, BOBJ drops the session within 2 minutes of the user closing the browser.

I have received multiple queries from various forums about how we plan to support UDT and MSU connectivity in the future. Below, I have compiled our approach going forward, from the perspective of both the BI 4.1 SPs and the upcoming BI 4.2 release.

Universe Designer Tool (UDT):

As you all know, UDT is used for creating new UNVs based on various supported data sources. Starting with SAP BI 4.x, we additionally ship the Information Design Tool (IDT) as part of the SAP BI product suite, which helps users create multidimensional universes, namely UNXs. The IDT/UNX combination is forward-looking and has advanced features and enhancements.


While users can open UNVs in IDT and convert existing UNVs to the UNX format, users can continue to use UDT for creating new UNVs on the supported data sources.


However, going forward in BI 4.x releases:

  • We will continue to support UDT for DBs/sources that are supported in BI 3.1, to make sure there is no regression in the upgrade scenario.
  • Newer versions of these DBs (if introduced by the vendor) will be tested and certified for UDT.
  • UDT will not be certified for new data sources introduced in the BI 4.x releases.
  • The current status of UDT support is kept up to date in the Product Availability Matrix (PAM), under the UNV column.



For example, suppose a customer is using Oracle 10g as the database for a UNV created using UDT in BI 3.1/BI 4.0:

  • Future Oracle versions will continue to be supported in UDT (as part of BI 4.1 / BI 4.2) – so that the customer can migrate seamlessly.
  • Data sources which are/will be new in BI 4.1 / BI 4.2 (like Hadoop Hive, Amazon Redshift, etc.) will not have UDT support.


Multi Source Universe (MSU):

Going forward, MSU will be tested and certified against the top six DBs/data sources only, including SAP HANA and SAP BW (Teradata 15, Oracle, MSSQL, Progress, SAP HANA, SAP BW).

For other data sources, MSU support will only be considered based on a business case or customer request. We will add the support for a justified request – through FPs or SPs, based on priority.

The current status of MSU support for various data sources is kept up to date in the Product Availability Matrix (PAM).

Dear All,


We are pleased to announce that SAP BusinessObjects BI 4.1 SP06 has been released and is available for download from http://support.sap.com


Additional resources on SAP BusinessObjects BI 4.1 SP06:



* requires logon to SAP Support Page with a valid account




SAP BusinessObjects Business Intelligence Support Pack 6 is Here!


Today, SAP released SAP BusinessObjects Business Intelligence 4.1 Support Pack 6 to the SAP Support Portal, both as a full build and as a patch to previous versions. Support Pack 6 has something we haven't seen from a support pack in a long time: new features! Christian Ah-Soon, SAP Product Expert, has written a great summary here on the SAP Community Network (see related article, SAP BI 4.1 SP6 - What's New in Web Intelligence and Semantic Layer). Web Intelligence users will no doubt put document-level input controls to great use. There are small yet significant usability improvements. For example, Export Data functionality has been added to the Reading mode (previously, you had to remember to go to Design mode for that feature). There are improvements to Microsoft Excel data providers. And while I'm not a huge fan of Free-Hand SQL (see related article on my personal blog, Free-Hand SQL Isn't Free), I'm thankful that SAP has closed yet another Web Intelligence feature gap with Desktop Intelligence. And if you're a Live Office fan (don't be ashamed), you'll be glad to know that Live Office has not only been given UNX universe access in BI 4.1 SP6, but the product also has a road map and a future (see related SCN article SAP BusinessObjects BI 4.1 SP06 What's New by Merlijn Ekkel for a comprehensive overview of what's coming to the entire platform). I've barely scratched the surface here, so please read Christian's and Merlijn's much more detailed articles.


BI 4.1 SP6 is the last support pack released in 2015. Read that sentence again; I'll wait... Support for XI 3.1 and BI 4.0 ends on December 31, 2015, and it is unlikely that BI 4.2 will be generally available by that time (although it might be in ramp-up, cross your fingers). This means that BI 4.1 SP6 is going to be the go-to release of BI 4.1 for the foreseeable future. And with just a bit of nostalgia, the article that you're reading now will likely be the last "State of the BusinessObjects BI4 Upgrade" I'll write this year (check out the State of the BusinessObjects BI 4 Upgrade archive on the EV Technologies web site). Tomorrow morning, before the first cup of coffee is finished, I'll begin helping a customer download the 4.1 SP6 full build for their XI 3.1 migration kickoff. And I've already downloaded the SP6 patch to apply to one of our internal sandbox servers tonight.


You are no doubt wondering if BI 4.1 SP6 is a stable release. And I am, too. I'd be lying if I said that BI 4.1 and its first five support packs were completely pain free. Let's hope that the product quality is just as impressive as the new features.


SAP Lumira v1.25 for the BI Platform - Now with Free Sizing Guide!

The big deal at last month's SAP SAPPHIRE was the release of SAP Lumira v1.25, which brought the first iteration of integration with the BI 4.1 platform. I've been lucky to follow Lumira v1.25 from a special SAP Partner training program to its Customer Validation program and finally to its general availability. Release 1.25 brings SAP Lumira from the desktop to the BI 4.1 platform without the requirement for SAP HANA, a stumbling block for a significant number of BI platform customers. But until today, Lumira for the BI platform was missing a critical component: sizing guidelines. SAP has published an updated SAP Lumira Sizing Guide to the SAP Community Network that includes sizing for the BI 4.1 add-on. The add-on brings the same in-memory database engine to the BI 4.1 platform that SAP introduced to the Lumira Desktop in version 1.23 a few weeks ago.

Time to Start Migrating!

The software and documentation released today, combined with the SAP Lumira v1.25 and Design Studio 1.5 software that was released last month (see related article, State of the SAP BusinessObjects BI 4.1 Upgrade - May 2015 (SAPPHIRE Edition)), bring all of the pieces together to take your BI landscape into the future. I hope that these pieces and their installation will be more tightly integrated in BI 4.2. But for me, as well as many of you, the adventure begins tomorrow. Just as soon as all of the software is downloaded.


More to come...

This is part 3 of my notes from yesterday's webcast. Part 1 is askSAP Analytics Innovations Community Call Notes Part 1 and Part 2 is askSAP Analytics Innovations Call Notes Part 2 SAP Lumira Roadmap


Please note the usual legal disclaimer applies: things in the future are subject to change. What I particularly liked about this call was the time spent on question & answer (see below).


Figure 1: Source: SAP


SAP said they value customers’ feedback


Figure 2: Source: SAP


Upcoming features for Design Studio include increasing the number of rows that universes can bring back (today it is 5K), mobile offline support, and more, as shown in Figure 2


Figure 3: Source: SAP


Figure 3 covers Analysis Office, with a converged Excel client that includes EPM, and a new formula editor for 2.1



Figure 4: Source: SAP


Figure 4 covers future plans (subject to change) for Analysis Office, with improved PowerPoint integration and publishing workbooks to the HANA platform


Figure 5: Source: SAP


Figure 5 covers plans for the future for Web Intelligence (past BI4.1 SP06)


Next release for Web Intelligence includes shared objects and annotations


Figure 6: Source: SAP


Figure 6 covers plans for Mobile BI; SAP is seeing increasing demand for Android


Figure 7: Source: SAP


Figure 7 shows plans for a faster installer


Report comparison tool to save time during the upgrade


Linked universes – many projects require universes


“Biggest and best partner ecosystems” to extend BI Platform


Question & Answer

Q: Universe on BEx query – will it replace anything?

A: Makes it more business friendly for end users for consumption in Web Intelligence


Q: In which BI versions will the new Web Intelligence features be available?

A: SP06 – next week

Future plans – BI 4.2 – late this year / early next year (forward-looking statement)


Q: Are there any future plans for a commenting solution across all BI tools?

A: Commenting for Web Intelligence is at the platform level – WebI is the first tool to use it; SAP is looking at other tools


Q: Is the performance of WebI on BICS universes similar to BEx queries?

A: no performance numbers to verify


Q: Lumira isn’t supported on Crystal Server? What do those customers do?

A: Technologically speaking this can be done, but the focus now is Lumira Server for Teams – you should be able to connect to universes from Lumira Teams on Crystal Server


Licensing – you can purchase Lumira Edge – team server & BI Platform


Q: When can we view Mobile Dashboards without going through the BI app?

A: working on, no timeframe


Q: is broadcasting of Design Studio reports available?

A: not available today

Ability to schedule using the BI platform is on the to do list


Q: SAP's UX strategy says it will converge on Fiori – how is this reflected in the BI platform & client tools?

A: BI platform / client – looking to integrate with Fiori

Lumira & Design Studio started this with a Fiori tile into a Lumira story – working on adding OpenDoc capabilities

More adherence to Fiori design type when working on further solutions including Cloud for Planning


Q: What is the future for SAP Infinite Insight?

A: SAP brought together InfiniteInsight and SAP Predictive Analysis into SAP Predictive Analytics


SAP also announced SAP IT Operations Analytics - see an overview in this PC World article: SAP previews new analytics tools for IT, business users | PCWorld


Additionally ASUG has a webcast on this in August - Data Center Intelligence


ASUG also has a webcast in September titled "What is coming in BI4.2" - register here


Finally, if you have questions about moving from BEx tools to Analysis and Design Studio join ASUG today  - register here

Problem statement: Hard dependency between SAP HANA SPSs and BI 4.1 SPs

Currently, on the SAP BI 4.1 side, there is a one-to-one mapping between SAP BI 4.1 SPs and SAP HANA SPSs; i.e., SAP BI SP releases are not forward compatible with SAP HANA SPSs, as per the Product Availability Matrix (PAM).

Each time a customer upgrades SAP HANA to a newer SPS version, it mandates an SAP BI 4.1 upgrade to the supported SP as well. This is a significant burden for our customers, and sometimes a showstopper.

Proposed guideline and solution:


Our teams did additional internal testing on the newer / previously unclaimed version combinations, to make sure SAP BI + SAP HANA customers will not face this problem in the future.

With this, there is a commitment from both the SAP HANA and SAP BI teams to compatibility between the two products.

For example, all existing features on the BI side will continue to work with a new SAP HANA SPS version in this combination. Customers will get support from SAP's respective teams to resolve any issues with the latest SAP HANA version, if there are any, while they continue with the existing SAP BI SP version in their landscape.

The SAP BI PAM documents have been updated with this new proposal; i.e., all active SAP BI 4.1 SP lines will work with the latest SAP HANA SPS release. Customers need not update their SAP BI 4.1 landscape to consume the latest SAP HANA SPS version.


The following is the model in which we are looking at supporting this combination. Please note that SAP HANA SPS10 and NEXT are not released as of now, so please use this as a guideline only (refer to the PAM for actuals).




Summary: In general, we would like to assure customers that ALL ACTIVE SAP BI 4.1 SPs will connect to the latest SAP HANA SPSs. However, we advise you to continue using the PAM document as THE reference for support, to get the latest update on the versions supported and on any workarounds needed.

The Promotion Management tool does not bring instances, and the UMT refuses to move content from one 4.x system to another. I am currently testing this on BI 4.1 SP5 Patch 5.


Can anyone suggest a better way to move content from 4.1 SP1 to our test box with 4.1 SP5 Patch 5?

We have over 100,000 reports and need to move several thousand for testing. Why? Because updates to BOE often fail with critical issues, so we can't just apply an update to our system and hope it works.

I read in a separate post that SAP will eventually create a thick client for customers who need to move larger amounts of content. I already tried the UMT on SP5 and it refuses. Does anyone know if and when this new thick client might be coming?



- Fails most of the time

- Some failures only say "Failure": go look at the logs. How about a clue for us inside PM?

- One error on a connection said this: "Relationship would not be a tree after update so bailing". I guess bailing is a strategy for this poorly designed tool. It appears that you must bring every universe that uses a connection before it will actually bring the connection. That's just plain wrong. I may not want those other universes overwriting previous work.

- Duplicate name. This and any other tool needs to allow us to overwrite ANY existing content if we so choose. Someone changed the CUID using Save As and kept the same name. I need to replace that file -- why not let me? The only solution here is to delete that content and rerun the job. With users and groups, this is at best a large nightmare.

- No instances in scheduled reports come over. In fact, even the report won't come over if the destination report has instances. What kind of choice is that?

Most of our dashboards depend on scheduled reports, so what's the point in not bringing the instances with that content?


What else might help?

1. It would be EXCELLENT if SAP designed an Easy Button for mirroring content to another server. It would have to ensure nothing points back to the source system and create new cluster keys. We have tried this manually; it wasn't fun and still left artifacts of the original system.

2. If they are working on a tool to move larger amounts of content, it would be SPLENDID if they also made a way to mirror the security across all folders without having to move all content and all users at the same time. We could move the groups in batches, then hit the easy button and it would magically assign the groups to the folders, universes, etc.

Dear SCN user,


We are happy to inform you about the availability of the updated SAP Analytics Roadmap slides.



The updated slides cover features and benefits of solutions released since the last roadmap, such as:

  • SAP BusinessObjects Business Intelligence platform 4.1, SP5
  • SAP BusinessObjects Mobile 6.1
  • SAP Lumira 1.25
  • SAP Lumira Server
  • SAP BusinessObjects Analysis, edition for Microsoft Office, version 2.0
  • SAP BusinessObjects Design Studio 1.5
  • SAP Predictive Analytics 2.0
  • Updated Planned Innovations for all Solutions


You can download the updated roadmaps via the links:

Overall Analytics Roadmap*

Analytics BW Roadmap*

Analytics Agnostic Roadmap*

* User Account required for SAP Support Page

Kind Regards


This was an ASUG webcast the other week, with a focus on BI (not predictive, HANA)


On a different webcast I became aware of this related document about licensing - see here


Figure 1: Source: SAP


Everyone's contract is different




Figure 2: Source: SAP


There have been multiple BI license models over time



Figure 3: Source: SAP


Figure 3 shows the context of BI license models; SAP has previously had add-on models


SAP has moved to suite style licensing


Differences in BI suite license over on the right including Desktop and Lumira Server


Figure 4: Source: SAP


Figure 4 shows the core licensing principles


There is no obligation or requirement to convert licensing


SAP wants to be transparent in license models


License models are non-version specific


Figure 5: Source: SAP


SAP no longer sells CPU licenses to new customers, but continues to sell them to existing customers


NUL stands for named user license – for managers, power users, most desktop tools


CSBL stands for concurrent session based license – for casual users that don't require guaranteed access


In the CMC, you configure NUL or Concurrent licenses


Figure 6: Source: SAP


Figure 6 shows 1 logon = 1 session


Figure 7: Source: SAP


It is still one session if navigating between applications


Figure 8: Source: SAP


Figure 8 shows that SAP is moving away from CPU-based licenses because it wanted to remove such constraints


Part 2 is coming when time allows




Upcoming ASUG BI Webcast Listing

Carsten Mönning and Waldemar Schiller

Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins), http://bit.ly/1dqm8yO
Part 2 - Hive on Hadoop (~40 mins), http://bit.ly/1Biq7Ta

Part 3 - Hive access with SAP Lumira (~30mins)
Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45mins), http://bit.ly/1eO766g


Part 3 - Hive access with SAP Lumira (~30 mins)

In the first two parts of this blog series, we installed Apache Hadoop 2.6.0 and Apache Hive 1.1.0 on a Raspberry Pi 2 Model B, i.e. a single node Hadoop 'cluster'. This proved perhaps surprisingly nice and easy with the Hadoop principle allowing for all sorts of commodity hardware and HDFS, MapReduce and Hive running just fine on top of the Raspbian operating system. We demonstrated some basic HDFS and MapReduce processing capabilities by word counting the Apache Hadoop license file with the help of the word count programme, a standard element of the Hadoop jar file. By uploading the result file into Hive's managed data store, we also managed to experiment a little with HiveQL via the Hive command line interface and queried the word count result file contents.

In this Part 3 of the blog series, we will pick up things at exactly this point by replacing the HiveQL command line interaction with a standard SQL layer over Hive/Hadoop in the form of the Apache Hive connector of the SAP Lumira desktop trial edition. We will be interacting with our single node Hadoop/Hive setup just like any other SAP Lumira data source and will be able to observe the actual SAP Lumira-Hive server interaction on our Raspberry Pi in the background. This will be illustrated using the word count result file example produced in Parts 1 and 2.




Apart from having worked your way through the first two parts of this blog series, you will need to get hold of the latest SAP Lumira desktop trial edition at http://saplumira.com/download/ and operate the application on a dedicated (Windows) machine locally networked with your Raspberry Pi.

If interested in details regarding SAP Lumira, you may want to have a look at [1] or the SAP Lumira tutorials at http://saplumira.com/learn/tutorials.php.

Hadoop & Hive server daemons

Our SAP Lumira queries of the word count result table created in Part 2 will interact with the Hive server operating on top of the Hadoop daemons. So, to kick off things, we need to launch those Hadoop and Hive daemon services first.

Launch the Hadoop server daemons from your Hadoop sbin directory. Note that I chose to rename the standard Hadoop directory to "hadoop" in Part 1, so you may have to replace the directory path below with whatever Hadoop directory name you chose to set (or chose to keep).



Similarly, launch the Hive server daemon from your Hive bin directory, again paying close attention to the actual Hive directory name set in your particular case.



The Hadoop and Hive servers should be up and running now and ready for serving client requests. We will submit these (standard SQL) client requests with the help of the SAP Lumira Apache Hive connector.


SAP Lumira installation & configuration

Launch the SAP Lumira installer downloaded earlier on your dedicated Windows machine. Make sure the machine is sharing a local network with the Raspberry Pi device with no prohibitive firewall or port settings activated in between.


The Lumira Installation Manager should go smoothly through its motions as illustrated by the screenshots below.



On the SAP Lumira start screen, activate the trial edition by clicking the launch button in the bottom right-hand corner. When done, your home screen should show the number of trial days left, see also the screenshot below. Advanced Lumira features such as the Apache Hive connector will not be available to you if you do not activate the trial edition by starting the 30-day trial period.



With the Hadoop and Hive services running on the Raspberry Pi and the SAP Lumira client running on a dedicated Windows machine within the same local network, we are all set to put a standard SQL layer on top of Hadoop in the form of the Lumira Apache Hive connector.


Create a new file and select "Query with SQL" as the source for the new data set.


Select the "Apache Hadoop Hive 0.13 Simba JDBC HiveServer2  - JDBC Drivers" in the subsequent configuration sreen.



Enter both the Hadoop user (here: "hduser") and password combination chosen in Part 1 of this blog series, as well as the IP address of your Raspberry Pi in your local network. Add the Hive server port number 10000 to the IP address (see Part 2 for details on some of the most relevant Hive port numbers).
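Behind the scenes, the connector reaches HiveServer2 through a standard JDBC URL built from exactly these inputs. A minimal sketch of that URL construction (the host value is an example; 10000 is the default HiveServer2 port):

```python
def hive_jdbc_url(host: str, port: int = 10000) -> str:
    """Build the JDBC URL used to reach HiveServer2 (jdbc:hive2 scheme)."""
    return f"jdbc:hive2://{host}:{port}"

# Example: Raspberry Pi at 192.168.0.10 with the default Hive port
print(hive_jdbc_url("192.168.0.10"))  # jdbc:hive2://192.168.0.10:10000
```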


If everything is in working order, you should be shown the catalog view of your local Hive server running on Raspberry Pi upon pressing "Connect".


In other words, connectivity to the Hive server has been established and Lumira is awaiting your free-hand standard SQL query against the Hive database. A simple 'select all' against the word count result Hive table created in Part 2, for example, means that the full result data set will be uploaded into Lumira for further local processing.


Although this might not seem all that mightily impressive to the undiscerning, remind yourself of what Parts 1 and 2 taught us about the things actually happening behind the scenes. More specifically, rather than launching a MapReduce job directly within our Raspberry Pi Hadoop/Hive environment to process the word count data set on Hadoop, we launched a HiveQL query and its subsequent MapReduce job using standard SQL pushed down to the single node Hadoop 'cluster' with the help of the SAP Lumira Hive connector.


Since the Hive server pushes its return statements to standard out, we can actually observe the MapReduce job processing of our SQL query on the Raspberry Pi.


An example (continued)

We already followed up on the word count example built up over the course of the first two blog posts by showing how to upload the word count result table sitting in Hive into the SAP Lumira client environment. With the word count data set fully available within Lumira now, the entire data processing and visualisation capabilities of the Lumira trial edition are available to you to visualise the word count results.


By way of inspiration, you may, for example, want to cleanse the license file data in the Lumira data preparation stage first by removing any punctuation data from the Lumira data set so as to allow for a proper word count visualisation in the next step.
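The same cleansing idea can be sketched outside Lumira. The following few lines of Python are illustrative only (not part of the Lumira workflow): they strip punctuation and case before counting words, mirroring the preparation step described above:

```python
import re
from collections import Counter

def word_counts(text: str) -> Counter:
    # Lower-case the text and keep only alphabetic runs, dropping punctuation
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(words)

sample = "the License, THE license."
print(word_counts(sample).most_common(2))  # [('the', 2), ('license', 2)]
```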



With the word count data properly cleansed, the powerful Lumira visualisation capabilities can be applied freely to the data set; for example, a word count aggregate measure can be created as shown immediately below.



Let's conclude this part with some Lumira visualisation examples.







In the next and final blog post, we will complete our journey from a non-assembled Raspberry Pi 2 Model B bundle kit via a single node Hadoop/Hive installation to a 'fully-fledged' Raspberry Pi Hadoop cluster. (It will be a two-node cluster only, but that will do just fine to showcase the principle.)



SAP Lumira desktop trial edition - http://saplumira.com/download/

SAP Lumira tutorials - http://saplumira.com/learn/tutorials.php
A Hadoop data lab project on Raspberry Pi - Part 1/4 - http://bit.ly/1dqm8yO
A Hadoop data lab project on Raspberry Pi - Part 2/4 - http://bit.ly/1Biq7Ta

A Hadoop data lab project on Raspberry Pi - Part 4/4 - http://bit.ly/1eO766g


[1] C. Ah-Soon and P. Snowdon, "Getting Started with SAP Lumira", SAP Press, 2015

Continuing with the security series, I will cover staying up to date with security patches for BI.

While SAP practices a complete security development lifecycle, the security landscape continues to evolve, and through both internal and external security testing we become aware of new security issues in our products.  Every effort is then made to provide a timely fix to keep our customers secure. 


This is part 4 of my security blog series of securing your BI deployment. 


Secure Your BI Platform Part 1

Secure Your BI Platform Part 2 - Web Tier

Securing your BI Platform part 3 - Servers


Regular patching:

You're probably familiar with running monthly patches for Windows updates, "Patch Tuesday," on the second Tuesday of every month.

SAP happens to follow a similar pattern, where we release information about security patches available for our customers, for the full suite of SAP products.


BI security fixes are shipped as part of fixpacks and service packs. 

I will here walk you through signing up for notifications.



Begin by navigating to https://support.sap.com/securitynotes


Click on "My Security Notes*"


This will take you to another link, where you can "sign up to receive notifications"



Click on "Define Filter" , where you can filter for the BI product suite.


Sign up for email notifications:


Defining the filter: Search for SBOP BI Platform (Enterprise)

And select the version:


Note that, unfortunately, the search does not currently appear to filter on version, so you will likely see all issues listed.


Your resulting filter should look something like this:



The security note listing will look something like this:



Understanding the security notes:

Older security notes have a verbal description of the versions affected and the patches that contain the fix.

For example, the note will say "Customers should install fix pack 3.7 or 4.3"...


Newer notes will also have the table describing the versions affected and where the fixes shipped:

Interpreting the above, the issue affects XIr3.1, 4.0 and 4.1.  

Fixes are provided in XI 3.1 Fixpacks 6.5 & 7.2, in 4.0 SP10, and in 4.1 SP4.


The forward fit policy is the same as "normal" fixes, meaning a higher version of the support patch line will also include the fixes.


The security note details will also contain a CVSS score.  CVSS = Common Vulnerability Scoring System.

It is basically a 0 - 10 scoring system to give you an idea of how quickly you should apply the patch.

More info on the scoring system https://nvd.nist.gov/cvss.cfm


1. Vulnerabilities are labeled "Low" severity if they have a CVSS base score of 0.0-3.9.

2. Vulnerabilities are labeled "Medium" severity if they have a CVSS base score of 4.0-6.9.

3. Vulnerabilities are labeled "High" severity if they have a CVSS base score of 7.0-10.0.


In short, if you see a 10.0, you better patch quickly!
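The three bands above can be captured in a tiny helper; this is just a sketch, with the labels and cut-offs taken directly from the list above:

```python
def cvss_severity(base_score: float) -> str:
    """Map a CVSS base score to the Low/Medium/High bands listed above."""
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    return "High"

print(cvss_severity(10.0))  # High -- patch quickly!
```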


Not applying the latest security fixes can cause you to fail things like PCI compliance, so after you have locked down and secured your environment, please make sure you apply the latest fixes and keep the bad guys out!

