

I am sharing my experience from an impact analysis I performed for a BI 4.1 assessment of a BusinessObjects environment running exclusively SAP BusinessObjects Design Studio and SAP Lumira applications. Here are a few of the queries that could be handy for anyone doing a similar type of activity. Here you go:


     SAP Lumira


     To get the list of Lumira Dashboards published into the BI Platform
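
The exact SI_KIND value for Lumira content can vary with the Lumira add-on version; on our system the published documents were returned by a query along the following lines (treat the kind value as an assumption and verify it in your own deployment):

     SELECT SI_ID, SI_NAME, SI_CUID, SI_OWNER FROM CI_INFOOBJECTS, CI_APPOBJECTS WHERE SI_KIND = 'LumiraDocument'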



     For Scheduled instances of the Lumira documents
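
Scheduled instances carry the SI_INSTANCE flag, so a minimal sketch (again treating the kind value as an assumption to verify) is:

     SELECT SI_ID, SI_NAME, SI_SCHEDULE_STATUS, SI_NEXTRUNTIME FROM CI_INFOOBJECTS, CI_APPOBJECTS WHERE SI_KIND = 'LumiraDocument' AND SI_INSTANCE = 1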



     To get the Universes and Managed connections used by Lumira documents
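
Documents expose the universes and connections they use through the SI_UNIVERSE and SI_DATACONNECTION properties, so a sketch of this query reads:

     SELECT SI_ID, SI_NAME, SI_UNIVERSE, SI_DATACONNECTION FROM CI_INFOOBJECTS, CI_APPOBJECTS WHERE SI_KIND = 'LumiraDocument'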




     SAP Design Studio


     To get the list of Design Studio dashboards
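
Design Studio applications share the AAD prefix with the bookmark kind used below, so the list can be pulled with:

     SELECT SI_ID, SI_NAME, SI_CUID, SI_OWNER FROM CI_APPOBJECTS WHERE SI_KIND = 'AAD.AnalysisApplication'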




     To get the Bookmarks defined in Design Studio applications


     SELECT * FROM CI_APPOBJECTS WHERE SI_KIND ='AAD.AnalysisApplication_Bookmark'


     To get the list of Design Studio dashboards with at least one Bookmark
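
Query Builder does not support joins or subqueries, so one workable two-step approach is to list every bookmark together with its parent application ID:

     SELECT SI_ID, SI_NAME, SI_PARENTID FROM CI_APPOBJECTS WHERE SI_KIND = 'AAD.AnalysisApplication_Bookmark'

Every distinct SI_PARENTID in the result identifies a Design Studio application with at least one bookmark; each one can then be fetched via SELECT SI_ID, SI_NAME FROM CI_APPOBJECTS WHERE SI_ID = <parent id>.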




     To get the bookmark metrics of an individual bookmark along with the associated Analysis Application




     SELECT * FROM CI_APPOBJECTS WHERE SI_KIND ='AAD.AnalysisApplication_Bookmark'
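
On the system I assessed, each bookmark's SI_PARENTID property pointed back to the Analysis Application it belongs to, which is how the metric properties in this result set can be tied to their applications.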


Analysis Office


     To get the list of Analysis for Office documents
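
I have seen Analysis for Office workbooks stored with the kind value AOOfficeDoc, but treat this as an assumption and confirm the kind on your own system:

     SELECT SI_ID, SI_NAME, SI_CUID, SI_OWNER FROM CI_INFOOBJECTS, CI_APPOBJECTS WHERE SI_KIND = 'AOOfficeDoc'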




Hope this is interesting. If you have any specific requirement, please don't hesitate to comment here. I will try my best to bring the required information to you. Keep reading & happy blogging!

Query Builder Blog series



  • BusinessObjects Query builder - Basics
  • BusinessObjects Query builder – Best practices & Usability

Sample Queries

  • BusinessObjects Query builder queries
  • BusinessObjects Query builder queries - Part II
  • BusinessObjects Query builder queries - Part III
  • BusinessObjects Query builder queries - Part IV
  • BusinessObjects Query builder – Exploring Visualization Objects

Use cases

  • BusinessObjects Environment assessment using Query builder
  • BusinessObjects Environment Cleanup using Query builder
  • BusinessObjects Query builder – What's New in BI 4.0

If you are a German speaking customer planning to upgrade from BOE XIr2 or XI3, to BI 4.1 - then you could hugely benefit from a *free* 2-day hands-on workshop.


RIG Analytics can help steer you through the key planning considerations & upgrade tasks that are required for this kind of migration project.

This course will be held in Walldorf, Germany, on 1st and 2nd September 2015. Registration is simple.


Customer feedback from the 20 previous workshops is overwhelmingly positive! Examples from our recent United Kingdom, Spain & South Africa workshops include:

  • "Upgrading is a lot less intimidating now"
  • "Amazing how much knowledge was shared in 2 days"
  • "The best session I ever received from SAP"

For more information please contact your local SAP representative.

Are you looking for visualization to strengthen your decision making?


Are you looking for predictive analytics to build strong economic growth for your organization?


Are you concerned about the budget required to convert Crystal Reports to BO?


Are the timeline, duration & resource costs of migrating Crystal Reports to BO bothering you?


Brief Summary

Reporting today is something every industry needs. Every industry demands analytical reports to make decisions on the fly, drill-down reports to get the details of cost factors, comparison reports to know your competitors, and Google Maps with embedded graphs to see sales geographically.


Most important of all, reports must be available on the web, mobile devices & tablets.

Today is the world of Business Intelligence, where every industry wants to increase sales, reduce costs & improve operations. Accessing heaps of information in less time and presenting it intelligently with visual modelling is what a BI tool does.

There are many BI tools in the market; one of the most successful is Business Objects, a tool which does great magic with visualizations that are beyond imagination.

BusinessObjects is the company, now owned by SAP.

BusinessObjects is a next-generation toolset which provides many intelligent ways of presenting information on the web, tablets & mobile phones, at your fingertips.

BI is not just limited to Crystal Reports; it has been extended using BO with powerful & intelligent analytics.




Almost all industries use Crystal Reports for reporting, but Crystal Reports has very limited analytics to present data and is limited to desktop users only.

There are many Crystal Reports customers worldwide; some have 100-1,000 reports already developed. Developing a Crystal Report with all its formulae and rules is exhausting work, and redoing it for visualization and extensibility with another tool is a pain in the neck.

Crystal Reports to BO conversion is a project implementation in its own right; redeveloping in BO all the reports built in Crystal is like building another Rome. But, as I said, we can make things simple.


Learn about Crystal Reports to BO migration in my next follow-up blog.





Carsten Mönning and Waldemar Schiller

In this blog post, we present a pretty straightforward way of setting up an automatically refreshing BusinessObjects Business Intelligence document dashboard with the help of a conventional Raspberry Pi 2 Model B unit. You may take this as an inspiration for a lab project along the lines of "A Hadoop Data Lab Project on Raspberry Pi", http://bit.ly/1dqm8yO. However, the setup is robust and simple enough to maintain that you can use it, for example, to operate a war room terminal showing an automatically refreshing Web Intelligence key figure report within a BusinessObjects Business Intelligence production environment.


The basic idea is to force a database refresh of a BusinessObjects Business Intelligence document referenced via a standard SAP OpenDocument URL which is reloaded automatically with the help of an auto reload add-on to the Debian "Iceweasel" web browser. The OpenDocument URL will represent the browser's landing page. For this to be more than a totally meaningless exercise, it is assumed that the data source for the Business Intelligence document is updated at least as frequently as the web browser's landing page. We have been using this setup in the context of the SAP CRM embedded Business Warehouse out-of-the-box "real-time" info providers, configured to be updated at 15-minute intervals, thereby ending up with a report dashboard of 15-minute accuracy using standard SAP technology (and a Raspberry Pi). With the browser and its landing page set to launch automatically upon Raspberry Pi boot up, this setup can be turned into something like a 'plug-and-play' solution for straightforward BusinessObjects document dashboard implementations.


We are assuming a basic knowledge of Linux commands. The installation and configuration process should take no more than 45 minutes in total.



The following Raspberry Pi 2 Model B bits and pieces are required to get things off the ground:

  • A Raspberry Pi 2 Model B (quadcore CPU, 1 GB RAM).
  • 8 GB microSD with NOOBS ("New Out-of-the-Box Software") installer/boot loader pre-installed.
  • Wireless LAN USB card.
  • Mini USB power supply, heat sinks and HDMI display cable.
  • Optional, but recommended: A case to hold the Raspberry circuit board.


Rather than purchasing all of these items individually, you may want to go for a Raspberry Pi accessory bundle at approximately € 60-70, as shown in the picture below.


Setup overview

The installation and configuration process consists of the following three main steps:


  1. Raspberry Pi software configuration
  2. Web browser installation and configuration (auto reload plugin and OpenDocument URL landing page)
  3. Autostart and display configuration


Raspberry Pi software configuration

Launch your Raspberry Pi device. If not triggered automatically, enter sudo raspi-config on the command line to start the standard Raspberry Pi software configuration programme and make the following selections:


  1. Enable booting into the Raspberry desktop environment.
  2. Overclock the device to the "Pi 2" setting, i.e. "1000 Mhz ARM, 500 Mhz core, 500 Mhz SDRAM, 2 overvolt".




With the help of setting (1), we will be able to configure the device in such a way that it launches a web browser immediately following the completion of its boot up sequence, whilst setting (2) simply makes full use of the remarkably powerful processing capabilities of the Raspberry Pi 2 Model B.

Web browser installation and configuration

Establish a LAN or wireless internet connection for your Raspberry device and download and install the "Iceweasel" web browser, the Debian distribution's fork from the Mozilla Firefox browser, https://wiki.debian.org/Iceweasel (and not to be confused with the GNU browser "IceCat", formerly known as "IceWeasel"):


     sudo apt-get install iceweasel

Following successful "Iceweasel" installation, install any auto reload plugin for this web browser, for example, "Reload Every" at http://reloadevery.mozdev.org.


With both the web browser and the auto reload add-on in place, it remains to set the browser landing page to the SAP BusinessObjects Business Intelligence (BI) document you want to display and have database-refreshed automatically at regular intervals within the browser. This is where the SAP OpenDocument URL functionality comes in handy, https://help.sap.com/businessobject/product_guides/sbo41/en/sbo41_opendocument_en.pdf.

OpenDocument comes with your BusinessObjects BI platform installation in the form of a deployed web application. (The web bundle is part of the BOE.war file.) It processes incoming URL requests for BusinessObjects BI documents in the BusinessObjects Central Management Server and delivers the documents to the end user. The supported document types include, amongst others, Web Intelligence documents, Analysis workspaces, Dashboard objects and Crystal reports.

The OpenDocument default URL syntax reads as follows:
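
     http://<servername>:<port>/BOE/OpenDocument/opendoc/<platformSpecific>?sIDType=CUID&iDocID=<document CUID>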


where <platformSpecific> is to be replaced with openDocument.jsp in the case of a Java deployment or with openDocument.aspx in the case of a .NET SAP BusinessObjects BI deployment. Note that there must be no spaces around the ampersands joining the parameters.

Refer to the SAP help document referenced above for the various OpenDocument parameters available. For our purpose of automatic database refreshing, the sRefresh parameter is the parameter of choice and is used, for example, as follows:
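
For example, for a Java deployment:

     http://<servername>:<port>/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&iDocID=Aa6GrrM79cRAmaOSMGoadKI&sRefresh=Y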




In other words, the BusinessObjects document with the CUID, i.e., cluster unique ID, Aa6GrrM79cRAmaOSMGoadKI will undergo a database refresh each time it is opened via this OpenDocument URL.


Set the browser landing page to whatever document of your BusinessObjects BI deployment you would like to refer to using the above OpenDocument syntax. (This is assuming, of course, that your Raspberry Pi device is granted the necessary network access privileges to resolve the OpenDocument URL.) You may want to finish this configuration step by setting the browser to full screen mode by pressing the F11 key.

Autostart and display configuration

Launch a Raspberry Pi terminal session and navigate to the autostart folder via

     cd ~/.config/autostart

Create the new file iceweasel.desktop with the contents as shown below.

That is, a minimal entry along the following lines (Type and Exec are the essential keys; Exec launches the browser, which then loads the landing page configured above):

     [Desktop Entry]
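     Type=Application
     Name=Iceweasel
     Exec=iceweasel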







Finally, prevent your display from ending up in power save mode by adding the line xserver-command=X -s 0 -dpms to the file /etc/lightdm/lightdm.conf.


And that's pretty much it. Plug your Raspberry Pi into the dashboard display of choice, restart the Raspberry device, and the boot up process should result in your BusinessObjects document automatically being shown and refreshed within the "Iceweasel" web browser. (Unless you have got a single-sign-on scenario for your BusinessObjects BI environment up and running, you will need to enter the relevant BusinessObjects BI user and password credentials once, and once only, immediately following the launch of the web browser.) Set the required refresh frequency with the help of the web browser's reload tab and that's it.

A Web Intelligence ticketing KPI dashboard example featuring a 15 minute automatic report update frequency is shown below.




A Hadoop Data Lab Project on Raspberry Pi - http://bit.ly/1dqm8yO

Cooking up an Office Dashboard Pi - https://gocardless.com/blog/raspberry-pi-metric-dashboards

Iceweasel - https://wiki.debian.org/Iceweasel

"Reload Every" auto reload add-on - http://reloadevery.mozdev.org

Viewing Documents Using OpenDocument - https://help.sap.com/businessobject/product_guides/sbo41/en/sbo41_opendocument_en.pdf

I hope everyone is having a nice, relaxing summer.  The Vancouver summer so far has been full of great weather, and I've been enjoying every moment of it.

A question I am seeing frequently from our customers is "how do I secure my BI deployment?" - and for good reason.  The headlines in my RSS reader are still filled with security breaches and data protection incidents, and I don't anticipate that going away any time soon.


My colleague Greg Wcislo has written a three-part series on answering this question.


Part 1, securing Identity Provider communication and a review of how the data is stored

Part 2, configuring the web tier, which is likely the most critical if your installation is exposed to the outside world

Part 3, securing the BI servers, including ports, firewalls and database encryption

As a bonus, here's another excellent write up by Greg on Encryption and Data Security in BI 4.0.


I strongly recommend implementing HTTPS and CORBA SSL in your deployment, along with enforcing regular password expiration for your users, using complex passwords, and regularly reviewing authorizations in your BI system, even if the web application is not public-facing.

In addition, don't forget about SAP's security note portal.  It's located here:



Other links of interest:

I want to share some of my findings from this 'mission possible': the move from good old Deski to brave new WebI.


The customer's users are still fond of Deski and will miss the tool.


"Can we  set the timeout for 8 hours?" - - a user asks - since in deski you could keep the tool up and running the whole time. Webi in the browser has a timeout set!

We are testing a 240-minute timeout at the moment.


The Report Conversion Tool (RCT) is still a bit - ahem - "could be better" - let's phrase it this way:


It seems crucial to run it as administrator, which makes it tricky on a Windows 2012 server...

It still does not remember more than one set of login details - I'll have to check if there is an INI file or something similar.

Some RCT installations will not run properly - don't ask why!


Some features are already working even though the RCT gives an error. For example, "do not update SQL" gets converted pretty well, but the conversion log reports an error/warning: "unsupported option!"


Struggling with free-hand SQL (FHSQL) though - Oracle works; investigation is ongoing into how to get Teradata FHSQL converted!


to be continued



This was an SAP webcast given by Maheshwar Singh, SAP, this month.  Below are my notes.  Note that things in the future are subject to change.


Figure 1: Source: SAP


Figure 1 shows the legal disclaimer that things in the future are subject to change.


Figure 2: Source: SAP


Figure 2 covers self-service BI on the BI Platform as defined by SAP – this was not covered in the BI Platform roadmap. BI Launchpad includes saving, sharing and managing content, and self-service on the BI Platform


Figure 3: Source: SAP


Schedules are subject to change


The purpose of the roadmap is to give you short-term and long-term goals


Figure 3 shows the roadmap in three sections: today, planned innovations in 12-18 months, and forward-looking future innovation


Today is BI4.1 SP6


Figure 4: Source: SAP


Integration kits are no longer a separate add-on; key capabilities are included


The LCM console is integrated into the CMC as promotion and version management


Planned Innovations


Planned innovations are in 12-18 months and anything could change based on priorities


Figure 5: Source: SAP


As shown in Figure 5, plans include add-on audit improvements to include all clients – Lumira, Analysis Office, Design Studio.  For audit samples see Matt Shaw's post Unlock the Auditing database with a new Universe and Web Intelligence Documents for BI4.1


The admin broadcast messages idea came from Idea Place. For example, the admin can send a message to all users, some users, or a certain user – say, about downtime that IT plans. Messages can also be sent by e-mail if e-mail addresses are available for users, and are also available as alerts


The recycle bin is a top-voted idea in Idea Place. They plan to make it like the Windows one – in the CMC, the admin can restore content – the first focus is on public folders. The admin sets a default clean-up date, which can be changed. In the first iteration it is not available for universe connections and personal documents

The admin cockpit is one central page for information about what is happening in your system


Figure 6: Source: SAP


Migration improvements: the Upgrade Management Tool UI gains an option to change the log level instead of changing the INI file


Promotion management – selective retrieval from LCMBIAR – top voted idea


Performance improvements around promotion management


Manage Inbox – they are thinking about managing users’ personal spaces – today you cannot restrict what a user stores in their personal space


BI Commentary – planned; collaboration today is done with SAP Jam:

  • Context to your comment
  • Available as part of the platform so it is centrally stored
  • Plan is to start with Web Intelligence and then Design Studio / Analysis Office
  • Plan is to see the same content across all reporting tools


Figure 7: Source: SAP


Today the install takes a lot of time; SAP wants to look at reducing install and start-up screen time for patches


Simplify language update support experience


Figure 8: Source: SAP


Multitenancy was introduced in 4.1 with more improvements shown above


Promote content between tenant 1 and tenant 2


Figure 9: Source: SAP


Publications are planned for Analysis Office


Looking at scheduling to a printer as an idea


Figure 10: Source: SAP


New RESTful web services features will be supported in BI4.2


Future Direction


Figure 11: Source: SAP


Quick time to value with deployment and automation of software maintenance


For TCO reduction, SAP notes that the CMC has several applications; it is looking at consolidation and at making BI administration simple


Roadmaps are updated twice a year



Upcoming ASUG Webcasts:

What's new in BI4.1 SP06

What is new in BI4.2

Unprecedented Visibility: Bringing BI Auditing and Monitoring to your Mobile Device


Upcoming ASUG Business Intelligence Community W... | ASUG


Over the past six months we have been hard at work designing and developing a brand new supportability platform for SAP BusinessObjects BI Platform.  This product is the SAP BI Platform Support Tool version 2.0, an evolution of and follow-up to the original BI Platform Support Tool version 1.  Over 2014, we collected feedback from engineers, developers, and customers and implemented as much of it as possible in the new platform.  I believe that with the new version we will significantly reduce the number of incidents needed, reduce the amount of work for the BI administrator, and considerably reduce the time it takes to resolve support incidents raised to SAP support.


In this article, I will share the release schedule and provide details on all of the confirmed features coming soon in the version 2.0 release.


BI Support Dashboard

The home view now in version 2.0 is a BI supportability dashboard that brings together all of the resources that a BI administrator needs to support their BI Platform environments.  It is essentially a browser that displays useful SCN RSS feeds, hyperlinks to important maintenance, patching, and documentation, as well as a knowledge base search feature that searches KB Articles, SAP Notes, BOB Forum, and Google Search all simultaneously from a single query.  Once you have logged on using your S-USER account, the S-USER SSO certificate is stored within the BI Platform Support Tool client providing you a quick way to access important support content.





Landscape Analysis Report

One of the primary features of the product is the Landscape Analysis Report.  The Landscape Analysis Report is the name given to a collection of one or more analysis reports containing information about the BI Platform landscape.  The user can select which types of analysis should be included in the Landscape Analysis Report depending on the type of information that is needed for a particular service or root cause analysis task.




The criteria for including data in the Landscape Analysis Report are:


  • Data for an analysis type can be collected in less than 10 minutes
  • Information is useful to be reviewed on a re-occurring basis
  • Data can be collected without introducing a large performance hit on the target system
  • Change Analysis and Alerting can be applied to the collected data


The generation of a Landscape Analysis Report occurs in two separate phases: an extraction phase and a report generation phase.  This separation means that historical report instances can be opened and saved data can be analyzed or compared separately from the actual data extraction.  As a result, offline analysis can be performed by SAP support or other consulting organizations that cannot connect directly to the live customer system.


CVOM Charting


We have implemented the same charting engine as used by other SAP BusinessObjects Analytics products such as Lumira and Design Studio.  The CVOM charting engine allows us to visualize more of the system metrics and properties making analysis quicker and more intuitive.



New Analysis Types


In version 2.0, we have both added new types of analysis and improved the functionality of the existing analysis that existed in version 1.x.  Refer to the table below for a list of the analysis types and information about that analysis.


  • Server and Services: Information about BI server configuration, settings, and metrics.  The configuration is also displayed in a side-by-side comparison report for quickly spotting differences in server settings or command line properties.
  • Content: Displays information regarding the count of InfoObjects in the system.  This is useful for understanding which products are in use and how large the InfoStore repository is. (Data source: InfoStore)
  • Schedule: Pulls back scheduled instances and does analysis on why reports are failing, which instances are taking up the most disk space, the most common error messages for failed instances, and the longest running instances.  You can also now add a date filter to view only the instances you need to analyze. (Data source: InfoStore)
  • License Key: Analyzes the current keycodes in use and gives alerts if a keycode will expire soon or if there is missing functionality. (Data source: InfoStore)
  • Platform Search: Confirms that best practices are being followed concerning the Platform Search feature.  This is a common reason for performance degradation if not optimally configured. (Data source: InfoStore)
  • Hardware Summary: Invokes the SAP Host Agent and returns information about the host and operating system for each node in the BI landscape. (Data source: SAP Host Agent)
  • Authentication: Displays information about the third party authentication setup and single sign-on. (Data source: InfoStore)
  • Semantic: Shows which Universes and Connections are being used the most.  Displays how many reports will be affected by changes to these semantic layer objects (UNX, UNV).  Checks for orphaned Webi documents (those without a linked universe). (Data source: InfoStore)
  • Web Application Server: Connects to the Java Application Server and shows information and metrics about the Java Virtual Machine as well as the application server settings and configuration. (Data source: JMX)
  • Patch History: Collects the installation and patch history from each BI node.  This is useful to see which patches have been applied, in what order they were applied, who installed each patch, and whether it was an install, uninstall, repair, etc. (Data source: SAP Host Agent)



New Custom Alerting Framework

One limitation of the previous version is that all the alerts and thresholds were static and configured at development time.  In version 2.0, we have made a new extensible alert framework that allows the expansion and customization of the metrics and settings that are evaluated.  Additionally, the threshold values and logic used to trigger the alerts can also be customized to better align with the needs of a particular organization or environment.

The alerts themselves are evaluated during the extraction phase so that the triggered alerts are stored within landscape xml itself.  This way, if you are reporting on the landscape xml outside of the SAP BI Platform Support Tool or if you are sending the landscape xml to SAP for analysis, the alerts that were triggered at extraction time will always be able to be recalled and viewed in a historical manner.

There is some new alert terminology to be aware of in the 2.0 platform.

Simple Alerts - Allow user customization, changes to thresholds, delete or add new metrics.  Simple alerts are limited to evaluation of one metric/setting and one logic operator

System Alerts - These are system defined alerts which allow for more advanced logic and analysis.  System alerts include things such as keycodes expiring in the next 30, 60, or 90 days, nodes not at the same install patch level, or nodes not running the same support pack

Complex Alerts - Complex alerts are alerts which allow you to combine the results of two or more alerts and allow the use of AND and OR logic to determine the alert state.  Complex Alerts are not available yet in version 2.0 and are scheduled to be implemented in the next version (version 2.1)

Alert Definitions - Alerts are configured via the preferences UI and are stored in the file alerts.xml under the BI Platform Support Tool /resources/Alerting directory



Alert Summary Tab


Any simple or system alerts triggered in the Landscape Analysis will appear on the Alert Summary Tab.  This makes it possible to quickly review which alerts were triggered in a particular analysis so that actions may be taken where necessary.  The Alert Summary tab also contains information such as the user who ran the Landscape Analysis, how long the processing took, and what version of the SAP BI Platform Support Tool was used to generate the analysis.




Improved E2E Trace Wizard


One of the most common activities required for root cause analysis of a BI landscape is to generate an End to End Trace.  Using the included SAP Client Plug-in, each request sent from the BI client contains an SAP Passport which is intercepted by the application server and passed along to the backend processing servers and databases.  This feature automates this process by automatically configuring the BO_trace.ini on each BI node, recording a video capture of the trace session, and collecting the log files from each host in the landscape for the user.




The E2E Trace Wizard relies on the existing Landscape Definition to understand which BI nodes and Application Server nodes are defined for the target landscape.  For each BI node and Application Server node, the user has a choice of whether they want to use the SAP Host Agent or UNC Shared Directories to collect log files from the remote hosts.  This allows flexibility for cases where the customer cannot run a SAP Host Agent or share network folders on a particular node type.  After the logs are collected, the content is zipped up and stored in the E2E working directory of the BI Platform Support Tool user directory.  The user may then forward this required tracing information and video capture to SAP for quick problem resolution or code correction.  Once the trace session is complete, the E2E Trace Wizard will revert the BO_trace.ini back to default settings on each node in the landscape.



Change Analysis 2.0

A useful technique in troubleshooting the SAP BI Platform is to understand what changes have been made in the BI system as some changes may lead to performance problems and/or system crashes.  The Change Analysis feature builds on the existing Landscape Analysis Report feature and allows the end user to select two or more Landscape Analysis Report Instances for comparison.




When compared, each landscape XML is loaded into the memory of the BI Platform Support Tool.  Each property name/property value pair is compared using a comparator, and when differences are found, they are displayed in the client in columnar format.  Values determined to be different are highlighted in yellow for quick and easy identification.  Although useful for identifying changes to the system, the Change Analysis tool can also be used to view a change in performance metrics over a period of time (for example, memory usage on Saturday each week).



3rd Party Authentication Wizards


The procedure for setting up third party authentication and single sign on (SSO) tends to generate a lot of incidents and can be a fairly complex set of procedures.  This process requires the administrator to read the manual and follow instructions very closely for success.  Additionally, differences in environments can make understanding the setup guide a bit difficult since it is not tailored for their particular landscape.  This is where the Authentication Wizard comes in.  This wizard guides the BI administrator through every step of the process while customizing the setup depending on their own domain, LDAP, or SAP environment.  Furthermore, it even authors emails for the BI administrator to send to their domain administrators with instructions on the steps that need to be taken on the domain controller, BW system, LDAP server, etc.  This wizard is truly like having SAP Support helping you without ever needing to create an incident.




Landscape Toolbox

The new Landscape Tools section contains a number of Diagnostic Tools which are mostly used by SAP Support for specific troubleshooting tasks. This area is mainly for smaller applications that do not fit the criteria required for the Landscape Analysis Report.  Applications in this area are usually tools that have existed in support in the form of JSP pages or smaller Java console applications.  Refer to the table below for details on the included tools:


Publish Landscape to SAP (Reverse 911 Alerting)



Predictive Maintenance is a big initiative here at SAP Active Global Support and to help facilitate a more pro-active support service, we have built into version 2.0 the ability to safely and securely publish your Landscape Analysis Report directly to SAP.  If you choose to participate, your landscape XML is consumed on an internal SAP HANA system where a variety of analytics can automatically check for problems such as:


  • Landscapes not following best practice or not within PAM recommendations
  • Systems where tracing was accidentally not disabled
  • Systems that may be vulnerable to a new security vulnerability that was discovered
  • Systems that may contain a setting that was recently discovered to introduce a problem
  • Landscapes running a patch or SP that may contain a regression


The goal of this functionality is to identify problems before they occur and pro-actively reach out via email to those customers and administrators who may be affected by the problem or situation.





Release Dates and Beta


The SAP BI Platform Support Tool 2.0 will be released for free as an official product on the SAP Store.  Since it is an official product, we are subject to SAP Product Standards and as a result, the release has taken a bit longer than originally expected.  Prior to release to customer, we will be having a beta release.   We are in the final stages of the release process now and plan to release the beta in August 2015.  Release to customer will follow in September / October time frame after the beta program wraps up.

SFTP Destination support is one of the more interesting new features introduced with the recently released SAP BusinessObjects BI Platform 4.1 Support Pack 6.


Quite a lot of customer requests for this one, and it's finally here!


When you send or schedule a document to an SFTP destination, you will be asked to enter a fingerprint value.


  • What is a fingerprint?
  • Why is it important?
  • How do you determine the fingerprint?


I'll answer these questions in this blog. Additionally, I'll describe how I set up a simple environment that I've used for internal testing and teaching purposes for the SFTP feature.


SSH File Transfer Protocol (SFTP) Fingerprint


SFTP uses Secure Shell (SSH) to send files securely over the network. It's a full-fledged transfer and file management system that uses public-private key cryptography to ensure any client may send a file to a server securely.

Sometimes it's confused with FTP Secure (FTPS) or Simple FTP, but they're not compatible. FTPS is FTP over SSL and Simple FTP has no security features built in.


Why the need for secure file transfer?


I'll give the most often cited analogy: snail mail. Say your company needs to send a letter to a bank. You put it in an envelope, address the envelope, and drop it off at your company's mailroom. The clerk hands it over to the postman for delivery to the bank.


But let's say the clerk happens to be not-above-board. He steams open the envelope and reads the contents, and uses the information found within for private gain. Your letter is compromised. The clerk puts the letter back in the envelope, seals it, and sends it on its way, no-one the wiser.


To prevent that, the bank mails you special envelopes. Anyone can put contents into the envelope, but only the bank can open the envelope without destroying the contents. The shady clerk's now thwarted and would no longer be able to read the contents and steal the information.


But say the clerk's pretty crafty. He knows that the bank envelopes are delivered through his mailroom. So he waylays the package when it comes in. Instead, he has a set of those special envelopes made for himself, that only he can open, and forwards those envelopes to you. You can't tell the difference between the clerk's envelopes and the bank's, and so you put the letter in the clerk's envelope and drop it off at the mailroom. The clerk opens the envelope, reads the letter, steals the information, then puts the letter in one of the bank's envelopes, and gives it to the postman. Neither you nor the bank is aware that the letter has been compromised.


The clerk is called the man-in-the-middle, and the scheme he plays is called the man-in-the-middle attack.


To thwart a man-in-the-middle, what the bank will do is place a very unique symbol on its envelopes. This symbol would be extremely difficult for others to duplicate. They then publicly publish what this symbol looks like, allowing you to verify that the special envelopes you have are actually from the bank and not from the man-in-the-middle.


This symbol is a fingerprint.


Fingerprints are extremely difficult to duplicate, since they're computed by hashing the public key, the key used for cryptography.


Discover the SFTP Fingerprint that BI Platform Expects


Now that you know the importance of a fingerprint, how do you discover the fingerprint needed, when sending/scheduling a document to SFTP?


If you use an SFTP client tool such as WinSCP or PuTTY, you'll see that they present a fingerprint value for every SFTP server that you connect to. But those fingerprint values won't work with BI Platform. They won't work because the hashing algorithm used is different.


Typical client tools use an MD5 hash. BI Platform uses the more secure SHA-1 hash. Because of that, you'll need some other means to get the fingerprint.


One way is to let BI Platform tell you. When it connects to a SFTP server, it retrieves the public key and computes the SHA-1 fingerprint from it. If that expected fingerprint does not match the fingerprint you've entered for the SFTP destination parameters, then an error is entered in the trace files. That error line records both the expected and entered fingerprint values. You can use this to get the expected fingerprint. The steps are described in SAP Note 2183131, but I'll describe the steps here as well.


Log onto the Central Management Console and enable tracing for the Adaptive Job Server. Log onto BI launch pad, navigate to the public "Web Intelligence Samples" folder, right-click on a WebI document and select from the menu Send->SFTP Location:




Fill out the SFTP Server information, including hostname, port, user name and password. For the fingerprint, just enter a keyword that'll be easy to remember and search for, say FINDTHEFINGERHERE:




Click Send.  Nothing appears to happen (not even an error dialog box pops up), but the document would not have been sent to the SFTP server.


Go to the machine where the BI Platform Adaptive Job Server is running, and navigate to the logging folder for the BI Platform deployment. Find the trace file associated with the Adaptive Job Server Destination Service child process. Open the glf file associated with that Service, and search for the fingerprint keyword you entered above:




Here's the line:


destination_sftp: exception caught while connecting to sftp server [<hostname>]. Details: [83:89:8c:dd:e8:00:a2:e3:26:63:83:24:47:71:ec:8c:1b:ce:de:25 is admin input.Mis match in fingerprint. i.e hashing server fingerPrint obtained  from serverFINDTHEFINGERHERE]


The long sequence of 20 two-digit hex numbers separated by colons is the SHA-1  hash of the public key as received by BI Platform. Enter that value into the FingerPrint box of the Send dialog box:




and you'll see the document be sent successfully to the SFTP server.


Are we done?


What if I were to ask you whether the fingerprint above is the one for the SFTP server or a man-in-the-middle between your BI Platform deployment and the SFTP server?


You can't tell by looking at the fingerprint value itself, you need some other independent way to validate it. A good way is to contact the SFTP server maintainer, and ask them "Would you provide us, securely, the SHA-1 fingerprint for your SFTP server?" That's actually the best way.


But sometimes you encounter Administrators who don't know how to do that. What then?


Given the public key, a public key you've gotten from the SFTP server by secure means, you can compute the fingerprint yourself. I'll give instructions to do that.


First, let's set up a simple trial SFTP server, so we can see things from the SFTP server side.



Generating the Cryptographic Public Key and Private Key


First, generate the public and private keys that the SFTP server will use for cryptography. There are various ways to do this; some SFTP server products have their own ways.


What I'll use is the popular and common PuTTY tools.


Download the PuTTYgen RSA key generation utility from here.


It's a fairly easy tool to use. In the "Parameters" section, specify the type and length of key, and click the "Generate" button:




You'll see that the public key in "OpenSSH format" will be displayed in the text area titled "Public key for pasting into OpenSSH authorized_keys file:" So copy and paste the key into a text file using a text editor, such as Notepad or Notepad++. Save the contents to a file named public_key_openssh.pub. By the way, you see the "Key fingerprint:" value in the above screenshot. Ignore it. That's a MD5 hash fingerprint, not the SHA-1 fingerprint we want.


Next go to the menu selection Conversions -> "Export OpenSSH key" to export the private key to a file, which I name private_key.key




Why an OpenSSH key? It's because I'm going to use an SFTP implementation that expects private keys to be in OpenSSH format. There are other formats, and you'd need to refer to your SFTP server documentation to find out which one, if you're going to use something different from me.


Now that we have the keys, let's set up the SFTP server.



Setting up the freeFTPd SFTP Server


For simplicity, I'll use the open-source freeFTPd implementation of the SFTP server. There are others, but freeFTPd is the one I find is easiest to set up and use.


Download and run. First go to the SFTP -> Hostkey page, and specify the private_key.key RSA key you generated previously:




Then go to the Users page and create a test user. Call it testuser:




Now go to the SFTP page and start up the SFTP server, making sure you first set where the SFTP server is to store incoming files in the "SFTP root directory" setting:




And finally check the Status page to ensure the SFTP server is running:




That's it!


Now connect to this SFTP server using the instructions given above, and get the fingerprint value that BI Platform expects.  What we want to do next is compute the fingerprint directly from the public key file public_key_openssh.pub and verify that the value is correct.



Use OpenSSL tools to Compute the SHA-1 Fingerprint


Let's have a look at the public key file contents (in OpenSSH format):


ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAnx3a1iYFDX4HY8Ysf2hOE1UJwha+rLD0iq82gn3+Lgla3ZzPOTuU4R39yQ5cgtzfvQrUq+NIEVEKrw1Vm3CuYVs/UrCUEhDhYOc4AfzszDGaLPnIIJjrZt9i2TnZ+9OeLakno4bgNntVglr8GbL2tryg+FWTzPGcq9O6O5gnavE=



Now the first field, 'ssh-rsa', specifies that the type of key is RSA, and the last field, e.g. 'rsa-key-20150626', is merely an optional comment (I just had PuTTY denote the type and date when I generated it).


In between, the "gibberish" is the Base64-encoded string value of the public key's binary value. What we need to do is extract this value from the file, Base64-decode it to get the binary value back, then generate the SHA-1 digest of this value (in colon-separated two-digit hex format).


Now, the last step you can do using OpenSSL command-line tools. But if you'd like to make life much easier, you can use command-line tools to accomplish the other two pre-steps.


The easiest approach, if you're not on a Unix machine, is to download the Cygwin toolset of Unix tools. The Cygwin command-line tools contain the text-file manipulation and base64 tools to automate the other steps.  Go to the Cygwin site and install the tools (the default install won't include the OpenSSL toolset, so make sure you manually select it as well during the installation of Cygwin packages).


Now, the way to compute the fingerprint is a single (albeit longish) command-line:
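
Assembled from the three commands explained below, the full pipeline reads:

     cut -d ' ' -f 2 < public_key_openssh.pub | base64 -d | openssl dgst -c -sha1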




Breaking down the individual commands on the pipe, the command:


    cut -d ' ' -f 2 < public_key_openssh.pub


reads the file public_key_openssh.pub, cuts the contents at whitespace, and streams out the second component. Essentially, it's extracting the Base64 encoded public key from the public key file. The command:


    base64 -d


merely reads the input pipe, Base64-decodes it, and streams out the binary value. And finally, the command:


    openssl dgst -c -sha1


uses the OpenSSL tool to compute the SHA-1 Digest from the binary value.


As you can see, the fingerprint we compute directly from the public key corresponds to the one BI Platform says it got from the SFTP server.  The public key the BI Platform is using is the one from the SFTP server, and not from the man-in-the-middle.




If you require ways to send or schedule BI Platform documents across the network securely, the recommended solution is to upgrade your deployment to BI 4.1 SP6 or higher, and use the new SFTP destination functionality.


One quirk is the fingerprint value. This blog describes how to determine the fingerprint value to use, and how to validate the fingerprint for correctness.


Hope you find this information useful, and you're able to integrate this new functionality into your BI architecture!



Ted Ueda has supported SAP BusinessObjects BI Platform and its predecessors for almost 10 years. He still finds fun stuff to play with.

An understanding of how your BI Platform is used and utilised will enable you, as a BI Platform Administrator, to take the necessary steps to improve its reliability, performance and adoption within your organisation.


The Auditing database, coupled with a new comprehensive Universe and a set of Web Intelligence documents that I have developed, will give you the insight you need, and this is what I'd like to share with you now.


My Universe and documents have been in development, on and off, for some time but they have now reached a maturity level where I’m happy to share them with a wider community.


I’m overall pretty happy with the Universe and the documents; however, they need a little performance testing on large data sets. This is where you can help me help you!


Please download my latest ‘build’ (available for a limited time) and give them a blast. They are provided ‘as is’. I’m looking for feedback on any defects, performance issues and also additional reporting/business requirements. If you can get back to me with your feedback, I can improve the content for everyone else's benefit.  I may occasionally publish a newer ‘build’ in the same container, so check every now and then for an update.


Once I’m happy with the amount of feedback and testing I will make the Universe and documents more widely and permanently available.


I have ported the universe to various databases; it is currently available for:

  • Microsoft SQL Server
  • Oracle
  • SQL Anywhere

Feedback on which database I should next port to would be helpful too!


There’s a large set of documents, each with a number of ‘reports’. The number of reports ranges from 1 to over 50 within a single document. So you can see I’ve been busy! It will take you some time to go through them all.


Here’s a list of documents:

1.     STA1 - Start here - Events over time.wid

2.     FRA1 - Fraud Detection - 1 machine more than 1 user.wid

3.     FRA2 - Fraud Detection - 1 machine more with multiple logon failures.wid

4.     LIC1 - License - 1 user more than 1 machine.wid

5.     LIC2 - License - Periods when sessions exceeded X.wid

6.     LIC3 - License - Users no longer using the system.wid

7.     SYS1 - System - Event Log.wid

8.     SYS2 - System - Delay in Recording of events to Audit Database.wid

9.     SYS3 x1 - System - Overall System Load Analysis (without Mode).wid

          SYS3 x2 mi - System - Overall System Load Analysis (Mode is Interactive Only).wid

          SYS3 x2 ms - System - Overall System Load Analysis (Mode is Scheduled Only).wid

          SYS3 x4 - System - Overall System Load Analysis inc Mode.wid

10.     SYS4 x1 - System -  Refresh Analysis (Mode is Interactive).wid

          SYS4 x1 - System - Refresh Analysis (Mode is Scheduled).wid

          SYS4 x2 - System - Refresh Analysis (inc Mode).wid

11.     USA1 x1 - Usage - Session Analysis.wid

          USA1 x15 u - Usage - Session Analysis (With Users, Without Mode).wid

          USA1 x2 mI - Usage - Session Analysis (Mode is Interactive Only).wid

          USA1 x2 mS - Usage - Session Analysis (Mode is Scheduled Only).wid

          USA1 x30 umI - Usage - Session Analysis (With Users) (Mode is Interactive Only).wid

          USA1 x30 umS - Usage - Session Analysis (With Users) (Mode is Scheduled Only).wid

          USA1 x4 m - Usage - Session Analysis (With Mode).wid

12.     USA2 - Usage - Large number of Data Providers.wid

13.     USA3 - Usage - Documents no longer used in the system.wid

14.     USA4 - Usage - Universe Objects usage. Identify infrequent used objects.wid

15.     USA5 - Usage - Universes no longer used.wid


Each document has an ‘About’ page that provides a few more details on its purpose.

The Universe is, of course, documented within itself. Every description box has a description! However, I’ve not yet written supporting documentation for either the universe or the Web Intelligence documents. Feedback from you on what I should explain would be great!


Requirements: BI Platform BI 4.1 Support Pack 5 or greater.



  1. Download the content.
  2. Import one of the four 'Universe' LCMBIAR files into your system using Promotion Management (it will go into "BI Platform Auditing" folder)
  3. Import the Web Intelligence LCMBIAR file (it will go into "BI Platform Auditing" folder)
  4. Edit the connection that is imported (in "BI Platform Auditing" folder) with the correct login credentials.
  5. Open the Web Intelligence document ‘STA1 - Start here - Events over time.wid’ as your starting point!


Please post your feedback here and I will do my best to comment back as soon as possible. (I’m on annual leave 24th July until 17th August 2015 so I won’t be able to reply during this time)


Matthew Shaw


Hi Folks,



This post is a reference for those who want to install the BI 4.1 SP4 full build on Linux and have never seen a similar installation.

The purpose of this post is to walk through a very simple, i.e. basic, standalone installation with no add-ons, no custom database, no cluster, etc.



Prerequisites: a working RHEL system, with hosts file entries for itself in place.

Important packages preinstalled according to the installation guide.

Follow the PAM (supported platforms) document before installation.



Quick check: created the directory bi42sp4p1, which will be used as the installation directory.
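
For orientation, a minimal shell sketch of this step (the media archive name is a placeholder, and the interactive installer prompts for the target directory):

     mkdir bi42sp4p1 install_media
     # unpack the downloaded installation media into install_media
     tar -xvf <BI_4.1_SP4_media>.tar -C install_media
     cd install_media
     # launch the interactive installer and point it at bi42sp4p1
     # when asked for the installation directory
     ./setup.sh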










More Reference:

SAP Business Intelligence Platform Pattern Books - Business Intelligence (BusinessObjects) - SCN Wiki



That's all folks.





Carsten Mönning and Waldemar Schiller

Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins), http://bit.ly/1dqm8yO

Part 2 - Hive on Hadoop (~40 mins), http://bit.ly/1Biq7Ta

Part 3 - Hive access with SAP Lumira (~30mins), http://bit.ly/1cbPz68
Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45mins)


Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45mins)


In Parts 1-3 of this blog series, we worked our way towards a single node Hadoop and Hive implementation on a Raspberry Pi 2 Model B showcasing a simple word count processing example with the help of HiveQL on the Hive command line and via a standard SQL layer over Hive/Hadoop in the form of the Apache Hive connector of the SAP Lumira desktop trial edition. The single node Hadoop/Hive setup represented just another SAP Lumira data source allowing us to observe the actual SAP Lumira-Hive server interaction in the background.

This final part of the series will go full-circle by showing how to move from the single node to a multi-node Raspberry Pi Hadoop setup. We will restrict ourselves to introducing a second node only, the principle naturally extending to three or more nodes.


Master node configuration


Within our two node cluster setup, "node1" will be set up as the master node with "node2" representing a slave node 'only'. Set the hostname of the master node, as required, in file /etc/hostname.

To keep things nice and easy, we will 'hard-code' the nodes' IP settings in the local hosts file instead of setting up a proper DNS service. That is, using, for example, the leafpad text editor, sudo leafpad /etc/hosts, modify the master node hosts file as follows (substituting the static IP addresses assigned to the two nodes):

     <node1 IP address>     node1
     <node2 IP address>     node2

Remember in this context that we edited the /etc/network/interfaces text file of node1 in Part 1 of this blog in such a way that the local ethernet settings for eth0 were set to a static IP address. Thus, the master node IP address in the hosts file above needs to reflect this specific IP address setting.

Similarly, edit the file /opt/hadoop/etc/hadoop/masters to indicate which host will be operating as "master node" (here: node1) by simply adding a single line consisting of the entry node1. Note that in the case of older Hadoop versions, you need to set up the masters file in /opt/hadoop/conf. The "masters" file only really indicates to Hadoop which machine(s) should operate a secondary namenode. Similarly, the "slaves" file provides a list of machines which should run as datanodes in the cluster. Modify the file /opt/hadoop/etc/hadoop/slaves by simply adding the list of host IDs, for example:
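
For our two-node cluster, the slaves file simply reads as follows (node1 runs a datanode alongside its master duties):

     node1
     node2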



You may remember from Part 1 of the series that the Hadoop configuration files are not held globally, i.e. each node in an Hadoop cluster holds its own set of configuration files which need to be kept in sync by the administrator using, for example, rsync. To keep the configuration of nodes of a cluster of significant size in sync represents one of the key challenges when operating a Hadoop environment. A discussion of the various means available for managing a cluster configuration is beyond the scope of this blog. You will find useful pointers in [1].


In Part 1, we configured the Hadoop system for operation in pseudodistributed mode. This time round we need to modify the relevant configuration files for operation in truly distributed mode by referring to the master node determined in the hosts file above (here: node1). Note that under YARN there is only a single resource manager for the cluster operating on the master node.



We need to touch three configuration files: core-site.xml, which holds common configuration settings for Hadoop Core; hdfs-site.xml, which holds configuration settings for the HDFS daemons (the namenode, the secondary namenode and the datanodes); and mapred-site.xml, which holds general configuration settings for MapReduce. Since we are running MapReduce using YARN, the MapReduce jobtracker and tasktrackers are replaced with a single resource manager running on the namenode.


File: core-site.xml - Change the host name from localhost to node1
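
A sketch of the relevant property, assuming the HDFS port configured in Part 1 (adjust the port to your own setup):

     <property>
       <name>fs.default.name</name>
       <value>hdfs://node1:54310</value>
     </property>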











File: hdfs-site.xml - Update the replication factor from 1 to 2
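
That is:

     <property>
       <name>dfs.replication</name>
       <value>2</value>
     </property>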









File: mapred-site.xml.template ( “mapred-site.xml”, if dealing with older Hadoop versions) - Change the host name from localhost to node1
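
A sketch, assuming the classic jobtracker property used by the template and the port chosen in Part 1:

     <property>
       <name>mapred.job.tracker</name>
       <value>node1:54311</value>
     </property>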







Assuming that you worked your way through Parts 1-3 with the specific Raspberry Pi device that you are now turning into the master node, you need to delete its HDFS storage, i.e.: sudo rm -rf /hdfs/tmp/*

This already completes the master node configuration.


Slave node configuration

When planning to set up a proper Hadoop cluster consisting of considerably more than two Raspberry Pis, you may want to use an SD card cloning programme such as Win32 Disk Imager (available on SourceForge.net) to copy the node1 configuration above onto the future slave nodes. See, for example, http://bit.ly/1imyCXv for a step-by-step guide to cloning a Raspberry Pi SD card.


For each of these clones, modify the /etc/network/interfaces and /etc/hostname file, as described above, by replacing the node1 entries with the corresponding clone host name.

Alternatively, and assuming that the Java environment, i.e. both the Java run-time environment and the JAVA_HOME environment variable, is already set up on the relevant node as described in Part 1, use rsync to distribute the node1 configuration to the other nodes in your local Hadoop network. More specifically, to push the configuration to the slave node (here: node2), run the following command from the master node:

     sudo rsync -avxP /opt/hadoop/ hduser@node2:/opt/hadoop/

This way the files in the hadoop directory of the master node are distributed automatically to the hadoop folder of the slave node. When dealing with a two-node setup as described here, however, you may simply want to work your way through Part 1 for node2. Having already done so in the case of node1, you are likely to find this pretty easy-going.


The public SSH key generated in Part 1 of this blog series and stored in id_rsa.pub (and then appended to the list of SSH authorised keys in the file authorized_keys) on the master node needs to be shared with all slave nodes to allow for seamless, password-less node communication between master and slaves. Therefore, switch to the hduser on the master node via su hduser and add ~/.ssh/id_rsa.pub from node1 to ~/.ssh/authorized_keys on slave node node2 via:

          ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@node2

You should now have password-less access to the slave node and vice versa.

Cluster launch

Format the Hadoop file system and launch both the file system, i.e. namenode, datanodes and secondary namenode, and the YARN resource manager services on node1, i.e.:

     hadoop namenode -format
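
followed by the HDFS and YARN start scripts from our Part 1 installation directory:

     /opt/hadoop/sbin/start-dfs.sh
     /opt/hadoop/sbin/start-yarn.sh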



When dealing with an older Hadoop version using the original map reduce service, the start services to be used read /opt/hadoop/bin/start-dfs.sh and /opt/hadoop/bin/start-mapred.sh, respectively.

To verify that the Hadoop cluster daemons are running ok, launch the jps command on the master node. You should be presented with a list of services such as both namenode and secondary namenode as well as datanode on the master node and datanode on the slave nodes. For example, in the case of the master node, the list of services should look something like this, i.e., amongst other things, both the single YARN resource manager and the secondary namenode are operational:


If you find yourself in need of issue diagnostics at any point, first consult the log4j.log file in the logs subdirectory of the Hadoop installation directory. If preferred, you can separate the log files from the Hadoop installation directory by setting a new log directory in HADOOP_LOG_DIR in the hadoop-env.sh script.


The picture shows what a two-node cluster setup may look like. In this specific case, the nodes are powered by the powerbank on the right-hand side of the picture.


And this is really pretty much all there is to it. We hope that this four-part blog series helped to take some of the mystery out of the Hadoop world for you and that this lab project demonstrated how easily and cheaply an, admittedly simple, "Big Data" setup can be implemented on truly commodity hardware such as Raspberry Pis. We shall have a look at combining this setup with the world of Data Virtualization and, possibly, Open Data in the not-too-distant future.



A Hadoop data lab project on Raspberry Pi - Part 1/4 - http://bit.ly/1dqm8yO

A Hadoop data lab project on Raspberry Pi - Part 2/4 - http://bit.ly/1Biq7Ta

A Hadoop data lab project on Raspberry Pi - Part 3/4 - http://bit.ly/1cbPz68

A BOBI document dashboard with Raspberry Pi - http://bit.ly/1Mv2Rv5

Jonas Widriksson blog - http://www.widriksson.com/raspberry-pi-hadoop-cluster/

How to clone your Raspberry Pi SD card for super easy reinstallations - http://bit.ly/1imyCXv




Most BI landscapes in industry follow a content-driven BI approach rather than a user-focused one. While the content-centric approach works well for the IT or IS organization, it poses challenges for the business: users have to juggle a lot of content (dashboards, reports, Explorer information spaces and other BI artefacts) to complete an analysis. This leads to frustration and confusion, and wastes a lot of the business's time in gathering all the relevant information for a specific analysis. And when a new business user wants to perform the same analysis, the path he takes can be equally time-consuming, as he first needs to understand which BI content is available, what type of information it holds, and then switch between those contents to reach an answer.

To overcome this problem, we came up with a novel way to build user-focused BI: custom websites with embedded BI content. Before going there, you might argue why anyone would need one more website when we already have the BI launchpad as the default BusinessObjects portal. The answer is quite simple: the BI launchpad holds multiple types of content, such as reports (Webi/Crystal), dashboards and Explorer information spaces, and most of the time these just sit in different folders and sub-folders with no logical way to tie them to a specific type of activity or user, which makes the process very cumbersome. Also, the contents are often not linked together. For example, there could be a sales dashboard and a sales detail report, but the user has to go to the sales dashboard, find the scenario he wants to analyze, and then go to the report and select all the prompts and filters to get to the details for that scenario.

How this solution works from a bird's eye view: the most critical enablers are the open document (opendoc) URLs for specific BI contents and single sign-on for BusinessObjects. The solution leverages the opendoc links of BusinessObjects contents and combines them with i-frames in a custom portal, rendered via an IIS website with a user-friendly DNS alias. Say the user can access all the relevant sales information by typing http://sales instead of http://businessobjects-dev (followed by a bunch of clicks to get to the desired folder); which one makes more sense and is easier to remember when you are looking for all BI content related to sales? We created the sites and named them like http://Sales.yourcompanydomain.com: short, meaningful and easy to remember. The IIS websites make use of i-frames within which the opendoc links for dashboards, Explorer information spaces and Webi reports are called. We also wrote the website code so that the dashboard contents load in parallel while the page loads, without wasting the user's time, and once loaded the dashboards are not refreshed automatically.
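As a minimal sketch, each tab of such a portal page embeds an opendoc link in an i-frame along these lines (server name, port and document CUID are placeholders):

     <iframe src="http://boserver.yourcompanydomain.com:8080/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&amp;iDocID=AbCdEf1234567890"
          width="100%" height="800" frameborder="0"></iframe>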


Let's take an example:

Let's take a fictitious scenario: assume you are a product manager in a large organization selling products to consumers across the globe, and you are assigned to a particular product line. Your job requires you to ensure that you have enough inventory for next week for your top-selling products of last quarter in the North American region, and that the plants which supply those products are going to produce enough of them for the next quarter.


In a traditional content-driven BI scenario, you would have to go to the sales folder and find out which report or dashboard gives you the top sellers of last quarter by region. Then you would find your top-selling product for North America by filtering on your product lines and region. After identifying the product, you would go to the inventory folder, find the report or dashboard that shows current inventory by product, and look up the current inventory level for the top product identified in the sales report. Then you would go to the forecast report, find the forecast for that product for the next quarter, and compare the number with the current inventory to understand how much of the product needs to be produced during the next quarter. This whole process can take many hours to get an answer.


Now let's take the same scenario in the new approach, where a dedicated web link such as http://PM-Analytics hosts the sales dashboard, the inventory dashboard and the forecasting report as different tabs. The user simply goes to the sales tab, finds the top-selling product, and then moves to the inventory tab while still preserving his sales analysis. There he finds the inventory numbers, moves on to the forecast report tab, filters on the product, and compares the additional inventory that will be needed based on the forecast. Sounds simple! This process also saves the user the headache of finding the right content and using it correctly, as everything needed is in one place. His sales analysis is not lost, and he can repeat the same analysis for the South America region quite easily, since his previous analysis does not automatically reset to default and the session is still active. The whole process should take no more than a few minutes.



How does it Look:

In traditional content-focused BI, users have to go to the launchpad and the public folders and then find all the contents needed for an analysis.



In the new process, users just need to type a URL in a browser, which can be as simple as http://Sales. This lets them view the landing dashboard directly, without the hassle of finding it in a folder, along with all the additional BI content needed to support an analysis. They do not see anything except what they need.



The application can include reports to support the analysis as well as Explorer information spaces for data exploration.

When users want another set of related data, they just click on another tab, which takes them to the additional analysis.


Solution Architecture:

Here is how the solution looks. The user types a custom URL such as http://sales, which is hosted on an IIS web server as a web application. The request is then redirected through a load balancer to the BusinessObjects web server and subsequently to the BOBJ application server, which serves the requested BI content.


Creating a Web Application: Deploying a BusinessObjects Dashboard with a Custom IIS Website Name

I am going to discuss how to build a custom application URL to host BI content so that a user group gets all of its BI content in just one place, rather than having to go through the launchpad and a bunch of folders. The solution below is meant for the IIS web server, so all the screenshots are specific to IIS.


Two items need to be installed/configured on the server in order to prepare it to serve up IIS websites:

  • IIS services should be configured on the server
  • .Net Framework 4.5 should be installed

Configure IIS Services on the server

Go to the Server Manager console on the server and select the option Add Roles -

Select the Web Server (IIS) role and click Next -


Once the installation is over, you will be able to see the role and services installed -

Install .Net Framework 4.5

Download the .Net 4.5 setup from Microsoft site.
Double click on the downloaded .exe file to start the setup.
Follow the on screen instructions to complete the setup.

How to Set Up a Custom IIS Website for housing opendoc links

1. Content Home Folder for Site

Create Directory Folder

Create a folder that will serve as the home folder for the website; this is required when creating the website.

Apply Access Levels to Site Folder

Go to the properties of the home folder that was created for the web site and add the ‘Everyone’ group with execute access.
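If you prefer the command line over the folder properties dialog, the same read/execute access can be granted with icacls (the folder path is a placeholder):

     icacls "C:\inetpub\sales" /grant "Everyone:(OI)(CI)RX"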

2. Create the Website In IIS

Add the Web Site

Open up the Windows Server IIS manager console in one of two ways:

Start > Run > inetmgr > hit enter

or …

Start > Administrative Tools > Internet Information Services Manager

Right click on ‘Sites’ and select the option – Add Web Site.

Fill in the detail fields corresponding to the application area for which we are creating the site. These are…

Site Name: This name should match that of the Application Area established in the BO Launchpad

Physical Path: This is the path to the home content folder for the site that you created in an earlier step

Host Name: This equates to the web URL that users will enter to visit the web page (see example, below, for the “Inventory” application).
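For illustration, the same site can also be created from the command line with appcmd; the site name, physical path and binding below are placeholders matching the “Inventory” example:

     %windir%\system32\inetsrv\appcmd add site /name:"Inventory" /physicalPath:"C:\inetpub\inventory" /bindings:http/*:80:inventory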

Application Pool Settings

In the IIS left pane, click on Application Pools to see all application pools for your sites. For your new site, make sure that the application pool is set to use the latest version of the .Net Framework. If it is not, double-click the application pool and, in the dialog window, select the latest .Net Framework version.




In the IIS left pane, right-click on your new application site and select Edit Bindings… Make sure that both bindings are present on the website: the short name and the fully qualified name.



3. Finalize Web Content Customization

Populate the Home Directory with Sample Web Content

Once the website is created, the code needs to be put in the home folder we created.



Modify Customized Content Files

There are a couple of things that we need to modify for each application area that we are rolling out a site for.

The following three files need to be modified to adapt the site to the new application.


The timeout popup setting is in the Init() function in this file; if required, it can be changed. We are currently using a standard timer value of 7140000.



The title of the web site and the working environment are present in this file (web.config) –

The Workingenv parameter decides which links will be used from the links.xml file.
The Title parameter decides what the title of the webpage will be.
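A minimal sketch of the corresponding web.config section; the key names follow the description above, while the values are illustrative:

     <appSettings>
          <add key="Workingenv" value="PROD" />
          <add key="Title" value="Sales Analytics" />
     </appSettings>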


The opendoc links, the titles of the different tabs and the tooltip help are present in this file (links.xml) –

Based on the working environment we set in the web.config, the opendoc links will be picked from the links.xml file.


While inserting the links, we need to modify them a bit –
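The exact schema of links.xml is site-specific; as a sketch, an entry might look like the following. One typical modification when pasting raw opendoc URLs into an XML file is escaping the ampersands between URL parameters as &amp;:

     <links environment="PROD">
          <tab title="Sales Summary" tooltip="Executive sales overview"
               url="http://boserver:8080/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&amp;iDocID=AbCdEf1234567890" />
     </links>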


4. Request DNS Alias for the server/loadbalancer

Once the website is created, make sure to create a simple alias for the users to access the site, for example http://sales, http://quality etc.
The alias names being requested should be the SAME as the bindings that have been provided for the web site.

Once the alias has been created, access the web site using any browser and confirm that it is working as expected.


Finally, once you are done with these steps, you will have a website where you can embed the BI content for a personalized experience for your end-user community.


Please keep in mind that, as there is no logoff button in the custom website, sessions remain active on the server even after the user closes the browser, until they time out. However, if you are on BI 4.1 SP6, BOBJ drops the session within two minutes of the user closing the browser.

I have received multiple queries from various forums about how we plan to support UDT and MSU connectivity in the future. I have compiled below our approach going forward, from the perspective of both the BI 4.1 SPs and the upcoming BI 4.2 release.

Universe Designer Tool (UDT):

As you all know, UDT is used for creating new UNVs based on various supported data sources. Starting from SAP BI 4.x, we additionally ship the Information Design Tool (IDT) as part of the SAP BI product suite, which helps users create universes in the new UNX format. The IDT/UNX combination is forward-looking and has advanced features and enhancements.


While users can open UNVs from IDT and convert existing UNVs to the UNX format, they can continue to use UDT for creating new UNVs on the supported data sources.


However, going forward in BI 4.x releases:

  • We will continue to support UDT for DBs/sources which are supported in the BI 3.1 version, to make sure there is no regression in the upgrade scenario.
  • Newer versions of these DBs (if introduced by the vendor) will be tested and certified for UDT.
  • UDT will not be certified for the new data sources introduced in the BI 4.x releases.
  • The current status of UDT support is kept up to date in the Product Availability Matrix (PAM), under the UNV column.



Example: a customer is using Oracle 10g as the database for a UNV created using UDT as part of BI 3.1/BI 4.0.

  • Future Oracle versions will continue to be supported in UDT (as part of BI 4.1 / BI 4.2), so that the customer can migrate seamlessly.
  • Data sources which are/will be new in BI 4.1 / BI 4.2 (such as Hadoop Hive, Amazon Redshift etc.) will not have UDT support.


Multi Source Universe (MSU):

Going forward, MSU will be tested and certified against the top six databases/data sources only, including SAP HANA and SAP BW (Teradata 15, Oracle, MSSQL, Progress, SAP HANA, SAP BW).

For other data sources, MSU support will only be considered based on a business case or customer request. Given a justified request, we will add the support through FPs or SPs, based on priority.

The current status of MSU support for the various data sources is kept up to date in the Product Availability Matrix (PAM).

Dear All,


We are pleased to announce that SAP BusinessObjects BI4.1 SP06 has been released and is available for download on http://support.sap.com


Additional resources on SAP BusinessObjects BI4.1 SP06:



* requires logon to SAP Support Page with a valid account




