
BI Platform


In our daily Business Objects report-scheduling life, we often use events as a key trigger for BO reports. Data quality can be maintained by setting events as dependencies on a BO report. If I'm not wrong, almost everyone who uses Business Objects follows this event-dependency concept to maintain the quality of data in reports. What else can we do with those events?


I thought of sharing on how well we can utilize the BO events for Load balancing, Data quality and Maintenance activities.


Basically, events are classified into 3 types.


  1. Custom Event - Occurs when you explicitly click its "Trigger this event" button.
  2. File Event - Waits for a particular file to appear in the specified location before triggering.
  3. Schedule Event - Triggers when a particular object has finished processing.


To learn more about events, see page 229 of the Administrator's Guide.


Load Balancing:




Dummy Reports - 3 or 4 Reports*

Schedule Events - 3 or 4 Events*


Priority events can be created to maintain load balancing on the BO servers. With these events in place, reports are kicked off based on their criticality. The four dummy reports we create run every 1 min* and trigger the corresponding schedule events.


Here, we can name those 4 dummy reports as Priority1_Report, Priority2_Report, Priority3_Report and Priority4_Report.

Name those 4 schedule events Event_P1, Event_P2, Event_P3 and Event_P4. (Set the events as OUT conditions for the corresponding Priority reports.)


The Priority events are then set as conditions on all BO reports, based on each report's start time. For example:


Event Name    Fixed to the reports
Event_P1      Scheduled to start between 8:00 AM - 10:00 AM
Event_P2      Scheduled to start between 10:01 AM - 11:30 AM
Event_P3      Scheduled to start between 11:31 AM - 1:00 PM
Event_P4      Scheduled to start after 1:01 PM


How these events can be used?


  • Pause these Priority reports when there is a delay in the data load, or any other issue, to avoid all reports kicking off at the same time.
  • Once the data load has completed, or the issue is fixed, release "Priority1_Report" first to generate the critical reports that depend on Event_P1.
  • Release "Priority2_Report" 30-45 mins* later, after confirming that there are enough resources available to process the next set of reports. By then, most of the P1 reports should have completed.

  • Release "Priority3_Report" and "Priority4_Report" at further 30-45 min* intervals, checking the number of reports in RUNNING state before each release.


Maintenance Activities:


I divide this module into 2 segments.


  1. Maintenance on BO Server
  2. Maintenance on Data load server (DB)


For Maintenance on BO Server, Rudiments:


Dummy Reports - 1 Report

Schedule Events - 1 Event


To avoid report failures, or to stop all BO reports from kicking off, we can use this one dummy report to manage all these actions. The dummy report runs every 1 min* and triggers the corresponding schedule event.


Here, we can name that 1 dummy report as Allow_All_Reports.

Name the schedule event Event_AllowAll. (Set this event as the OUT condition for the "Allow_All_Reports" report.)


How this event can be used?


  • Make this event a mandatory event when scheduling a report.
  • This event is the key that lets BO reports stop or kick off.
  • In case of any issue, if you don't want your reports to trigger and want to avoid report failures, you can pause the single report "Allow_All_Reports" to control all other scheduled reports in your environment.
  • Once the issues are resolved, resume "Allow_All_Reports" to allow all the reports to kick off.

Note - If you pause this report by mistake, none of your reports will trigger.



For Maintenance on Data Load Server (DB), Rudiments:


Dummy Reports - 3 or 4 Reports*

Schedule Events - 3 or 4 Events*


To avoid report failures, or to stop all BO reports that hit a specific database during planned or unplanned maintenance on the data load servers (DB), we can use these dummy reports to manage all these actions. The dummy reports run every 1 min* and trigger the corresponding schedule events.


Here, we can name those 3 dummy reports after the DB names, e.g. DB1_Report, DB2_Report and DB3_Report.

Name the schedule events Event_DB1, Event_DB2 and Event_DB3. (Set these events as OUT conditions for the respective DB reports.)


For example, consider DB1_Report being scheduled for Oracle DB reports. Then set Event_DB1 as an IN condition for all reports that hit the Oracle database. In case of unplanned maintenance or any issue with the database, we don't have to dig through metadata to identify the list of reports hitting the Oracle DB and pause them manually. All you need to do is pause DB1_Report, which in turn holds all reports hitting the Oracle DB, since we scheduled those reports with the DB event conditions.


Event Name    Reports hitting ...
Event_DB1     Oracle Database
Event_DB2     SQL Database


Data Quality:


The quality of data in a report can be maintained by setting any of the 3 event types (Custom, Schedule or File) as a dependency. Based on the ETL jobs that load the tables used by the reports, we can pick the best event type as a condition for the BO reports.


  • If the ETL jobs generate trigger files after processing the data, a file event can be set as a condition on all reports that use the tables loaded by that particular ETL job.
  • A custom event can be used to kick off the reports once a specific table has been loaded.
  • A schedule event can act as a WATCHER report that monitors the status of the ETL job. When the ETL job completes, the instance succeeds and triggers the corresponding event.


To Summarize:


While scheduling any report, plan to set the events below as dependencies to maintain data quality, balance load, avoid report failures, and pause/resume reports during planned or unplanned maintenance periods.


Event to wait for:  Event_AllowAll, <<Priority Event>>, <<Database Event>>, <<Data Load Event>>


  1. Event_AllowAll - Default event that should be set as a dependency for all scheduled reports. Pausing this report will hold all the reports in your environment.
  2. Priority Event - Set based on the scheduled start time of the report. Helps with load balancing during load delays or other issues.
  3. Database Event - Set this dependency based on the database the report runs against. Helps to pause/resume the reports during maintenance periods or DB issues.
  4. Data Load Event - Based on the ETL process, pick the most suitable event type to ensure data quality.


Points to be noted:


  • No DB connections are required for the dummy reports created for load balancing and maintenance.
  • As these reports do not hit any DBs, their instances complete in a few seconds.

  • Place these event reports in a separate folder and set security so that only administrators can pause/resume them. (Customize your security levels based on your requirements.)
  • Set the report instance LIMIT to 10.
  • These event reports act as a one-stop location from which you can control almost all the reports scheduled in your environment.


*Based on business requirements.


I welcome your feedback, comments and compliments.



Vijay Madhav

You have to complete 4 steps.


1. Create an event

2. Create a script file for database table record count

3. Create the schedule

4. Create a windows task schedule


1- Create an event:


Log on to CMC and create an event.



Go to "System Events" folder and  click on "Create an event"



Fill the text boxes. Choose "File" as type and the path of your "ok" file. Note that the "ok" file (which is "f:\a.txt" for this example) is not created yet. The text file will be created by a script mentioned below. The "f" drive is a drive on your BO server.




2- Create a script file for database table record count


Ok. Now we need a script. This script will:

  • connect to the database
  • check the number of rows in our table
  • if there is a record matching our query, create the "ok" file

You can find an example file with this kind of script as an attachment.
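The attachment isn't reproduced here, but the logic of such a script can be sketched in Python. This is a hypothetical sketch: the table name (sales_load) and driver (sqlite3 as a stand-in) are assumptions; in practice you would connect with your database's own driver (cx_Oracle, pyodbc, etc.) and write the "ok" file to the path configured in the CMC file event (f:\a.txt in this example).

```python
import sqlite3

def create_ok_file_if_loaded(conn, ok_file=r"f:\a.txt"):
    """Count the rows loaded by the ETL process; create the "ok" file
    only when at least one record exists, so the file event can fire."""
    # "sales_load" is a hypothetical table name
    (count,) = conn.execute("SELECT COUNT(*) FROM sales_load").fetchone()
    if count > 0:
        # The file event only needs the file to exist; content is irrelevant.
        with open(ok_file, "w") as f:
            f.write("ok")
        return True
    return False
```

If the ETL load failed and the table is empty, the file is never written, so the dependent schedule keeps waiting.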


3- Create the schedule


Go to your BI portal and create your schedule. Don't forget to select your event.



So your schedule will never run until there is a file named "a.txt" on your "F" drive.


PS: Don't forget that the schedule only looks for an "ok" file created after the schedule creation time.


4- Create a windows task schedule

The last step is to create a Windows scheduled task that automatically runs your script file every hour, every quarter-hour, etc. If your ETL process fails, there will be no matching record in your table and the script won't create the "ok" file "a.txt". When the ETL process succeeds, your script will create the "ok" file and your schedule will run.
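For reference, the task can be registered with the built-in schtasks command. A minimal sketch, assuming a task name of BO_ETL_Check and a script at f:\check_etl.cmd (both hypothetical); Python here merely assembles and dispatches the command so the flags are visible:

```python
import subprocess

# Hypothetical task name and script path. /SC HOURLY runs it every hour;
# use /SC MINUTE /MO 15 instead for every quarter-hour.
SCHTASKS_CMD = [
    "schtasks", "/Create",
    "/SC", "HOURLY",
    "/TN", "BO_ETL_Check",
    "/TR", r"f:\check_etl.cmd",
]

def register_task(runner=subprocess.run):
    """Register the scheduled task; only meaningful on a Windows host."""
    return runner(SCHTASKS_CMD, check=True)
```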



The objective behind this post is to try and bring some attention to what I consider a serious product shortfall, namely the sequential processing of multiple SQL statements within a single universe query.


The idea below touches on this briefly; it also references a secondary issue, the sequential processing of multiple data providers in a single document - another topic which needs addressing.



The Issue


As mentioned, this post focuses on the execution behaviour of a single .unx query that contains multiple statements (flows) and the sequential processing thereof. These queries are generally produced when the following options have been set on the universe:

  • Multiple statements for each context
  • Multiple statements for each Measure


Steps to reproduce:


  • Create a universe containing multiple contexts
  • Enable the option 'Multiple statements for each context'
  • Select a measure from each context, this will produce 2 statements (observed when selecting view script)
  • Profile/monitor database activity
  • Execute the query
  • Note the sequential execution of each statement


This behaviour is documented in SAP Note: 1258337. This document is slightly dated but from what I've seen the behaviour hasn't changed.




The components below all use a Java library to execute a .unx query and retrieve its results from the database.

  • Crystal Reports Processing Server (Crystal Reports for Enterprise)
  • Dashboard Processing Server
  • Information Design Tool
  • Dashboard Designer
  • Crystal Reports for Enterprise Client


Generally speaking, the guts of the code that performs the query execution live in a class called com.businessobjects.dsl.services.dataprovider.impl.QuerySpecDataProvider, which can be found in the jar files:

  • dsl_engine.jar
  • com.businessobjects.dsl.services.impl.jar


Changing the two methods below, in the class com.businessobjects.dsl.services.dataprovider.impl.QuerySpecDataProvider, to make the recursive calls to getResultNode(<?>) asynchronous would bring significant performance improvements when executing queries that contain multiple statements:

  • getResultNode(<?>)
  • getMergedResults(<?>)


By making the changes above you should, in theory, see vast improvements in overall query performance when:

  • You have a large number of statements in a batch
  • And/Or statements within the batch take a long time to run

It is important to note that the final output will always take at least as long as the longest-running statement in the batch, as all results need to be retrieved before they can be passed up the stack.
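The expected effect can be illustrated with a small Python sketch (conceptual only, not the actual BO code): when the statements of a batch are dispatched asynchronously, the elapsed time for the batch approaches the duration of the longest statement rather than the sum of all of them.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_statement(seconds):
    """Stand-in for executing one SQL statement of the batch."""
    time.sleep(seconds)
    return seconds

def run_batch_async(durations):
    """Dispatch every statement concurrently and gather all results,
    mirroring the proposed asynchronous getResultNode() calls."""
    with ThreadPoolExecutor(max_workers=len(durations)) as pool:
        return list(pool.map(run_statement, durations))

start = time.monotonic()
results = run_batch_async([0.2, 0.2, 0.2])  # sequential cost would be ~0.6 s
elapsed = time.monotonic() - start          # parallel cost is ~0.2 s
```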


Please note that the workflow is slightly different for components that access the Webi Processing Server, namely:

  • Webi Reports
  • PowerBI
  • BIWS


The Webi Processing Server utilises a different code base. I'm struggling to get readable traces together for this application component, but my guess is that changing the underlying code, compiled in QT.dll, in line with the recommendation above would have a similar positive effect.




The above findings are the result of testing against a single source relational unx universe, behaviour may differ when using multisource universes/olap connectivity - both of which I haven't got round to testing yet.


With that in mind, I'm by no means proposing that this small change to the referenced class is a complete solution. But one would hope that, given the manpower available at SAP and the performance improvements this change could bring to the product suite, a full solution could be implemented in a relatively short space of time - given enough push from us, the community.


I've listed several ideas below that detail where sequential processing takes place. Please up-vote them if you want to see the current implementation improved and ultimately reap the performance gains these changes will bring.








For the Java devs out there, it is of course possible to decompile the com.businessobjects.dsl.services.dataprovider.impl.QuerySpecDataProvider class and implement this change to reap the performance benefits. However, there is a high likelihood that merely decompiling the code would be a copyright infringement, so this is by no means endorsed or recommended by me.


Let the crusade begin......hopefully.





Dear All,


After a long time here on SCN, I am writing this blog about one of the more interesting features in BI 4.x: Visual Difference.


Visual Difference lets you compare two versions of BI content to identify the changes incorporated in each version. This allows you to track the changes made in every version of the BI content, which can be a report, a universe, or even an LCMBIAR file.


Visual difference background


Visual Difference is one of the services hosted in the APS and is consumed by both Promotion Management and Version Management. It is not necessary to have a dedicated APS for the Visual Difference service unless a majority of your users use it heavily outside of Promotion/Version Management.



What & How to compare


You can compare BI content stored in any of the following locations:


  • CMS – from a BusinessObjects repository
  • VMS – from a Version Management system repository
  • Local File System  – from a local file system




In CMS, you can select both reports and Universes.



After selection, you can compare them across repositories (CMS/VMS/LFS).



You can even schedule the difference generation, just as you schedule reports.




Visual difference use case for each user category



  • Difference log - What changes were made in my current report/universe compared with my previous version, so I can continue development from the required version?
  • Change history - What are the consolidated changes made since my initial universe version (Version 0)?

Test Analyst

  • Changed section - Which objects in the universe were modified, so I can run my test cases only against those objects for a faster testing approach?


  • Difference log - What changes were made in the user's current report/universe compared with the earlier version? Based on this, I will restore the universe/report per the user's requirement.
  • Consolidation for easier maintenance - What are the differences between the Sales universe and the Marketing universe? Can I merge them into a single universe to fulfil the reporting requirements of both departments?

I hope you found this blog interesting. Let's start using Visual Difference and make our work even simpler and smarter. Thanks for reading!

As promised in SAP BusinessObjects XI 3.1/BI 4.x installation prerequisites and best practices [Windows] here is a continuation of the pre-requisites & best practices document for SAP BOE XI 3.1 and BI 4.x on Linux/Unix based environments.


The same concept applies as for Windows servers: allow the installer to run as uninterrupted as possible with respect to OS parameters/settings. There are certainly deeper checks required in a *nix environment; however, these should be taken care of during the build of a server and are mostly common to any application. Here are a few that have been observed to cause issues if not set correctly.


I've not included sizing here, as I wanted to keep this document to OS-related parameters and configurations only. If you wish to learn more about sizing an environment, visit: **http://service.sap.com/sizing.




To start with, the hardware and software configuration of the server or client machine that we're installing SAP BusinessObjects on must be supported. SAP provides a supported platforms guide or "Product Availability Matrix" (PAM) for several products.

These can be found at the following URL: http://service.sap.com/pam

You can also refer the following KBA: 1338845 - How to find Product Availability Matrix (PAM) / Supported Platforms Document for SAP BusinessObjects Products


Here is an example of a page from the PAM document. Along with compatibility with SAP and 3rd-party software, OS patch requirements and additional patches/libraries required for Unix are also mentioned here. The product version is also included in the screenshot.

Source: SAP




Ensure you're viewing the details of the correct OS / patch when viewing a PAM or Supported Platforms Guide.

**Visit the Sizing URL to know more about SAPS.





The most crucial piece in a Unix BO installation is the user. BO cannot be installed on Unix as the root user, nor is root access required post-installation to run BusinessObjects. Hence, it is best practice to have a separate user for the installation, for example bo41adm.

Add this user to an existing group or create a separate group for the BO user. bo41adm will own the BO installation and will be used to run all scripts, etc.


The user must have sufficient rights on the following:

a. Installation directory

b. Installation media

The following rights must be given:

a. Read

b. Write

c. Execute

A minimum permission value of 755 is sufficient.
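Before launching the installer, the rights can be verified as the install user with a small Python check (a convenience sketch; `ls -ld` does the same job):

```python
import os

def has_rwx(path):
    """True when the current user (e.g. bo41adm) has read, write and
    execute access on the given path (install directory or media)."""
    return all(os.access(path, mode) for mode in (os.R_OK, os.W_OK, os.X_OK))
```

Run it against both the installation directory and the installation media mount point.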







Just like on Windows, BusinessObjects is identified on a server by its hostname. Hence, we must ensure the system has a valid, resolvable hostname.

To set a coherent hostname (e.g. bidev01, etc.), you can use the command:

hostname <desired name>

Next, make sure the hostname is associated with the machine's IP address in the hosts file, wherever relevant.

The hosts file on a Linux/Unix system is found in /etc.



This is a necessary step: if the hostname does not resolve over the network, many services will not be accessible. Client tools have been observed to fail to connect to a BO cluster/host that cannot be resolved.


If you choose to export SLD data to Solution Manager, the hostname is what is included in the ".xml" output of the BOBJ SLD REG batch. If the hostname/IP address mapping is not correct, the SLD will contain incorrect data, causing errors down the line.
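A simple pre-install sanity check for hostname resolution can be scripted, for example in Python (a sketch; ping or nslookup work just as well):

```python
import socket

def hostname_resolves():
    """Return (hostname, ip) if the machine's own hostname resolves,
    or None when it does not -- in which case fix /etc/hosts or DNS."""
    name = socket.gethostname()
    try:
        return name, socket.gethostbyname(name)
    except socket.gaierror:
        return None
```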




Most access to a Unix-based server takes place through a terminal emulator/console such as PuTTY. This can be used to access a BO server from any machine that has network access to it, which means we're installing remotely.


It is advised to use the terminal emulator from a machine which is in the same network as the server (or within the DMZ, if applicable). However, in scenarios where this is not possible, ensure that there is NO restriction on the network path that may hamper the installation.



E. SELinux:


...or Security-Enhanced Linux.

SELinux is an access control implementation for the Linux kernel. It provides security restrictions for certain kinds of executions.

For more details: Security-Enhanced Linux - Wikipedia, the free encyclopedia


Note that SELinux ships enabled by default on RedHat Enterprise Linux (other distributions can also use it).


Disable SELinux prior to performing a BusinessObjects installation. To disable it on RHEL5, follow these steps:

  1. Click on System on the main taskbar.
  2. Select Administration.
  3. Click on SELinux Management.
  4. Choose to keep this disabled.

See the below screenshot of how to disable SELinux on a RedHat Linux 5 OS.



You could also do this from the command line:

To check the status of SELinux: sestatus

To change the SELinux status persistently, edit /etc/selinux/config







A Linux/Unix operating system has a methodology for sharing the available resources among a set of users, groups, domains, etc. These resources can be split up to allow optimum usage of an OS that has various applications installed on it, managed by different users.


A user can be allocated a certain amount of resources. The user can operate within the range it has been given by setting the required value (the soft limit), which can be raised up to the admin-restricted maximum value (the hard limit).


For example, in RHEL5, we can see the limits using the ulimit -a command.



The limits configuration file is here: /etc/security/limits.conf


The root user has access to make changes to the configuration in this file.

It is recommended to set the limits to unlimited for BusinessObjects installations as mentioned in the Installation guides.

There have been issues observed with BI 4.0 installations on Linux that led to random services crashing. These issues were overcome by increasing the limits. See the KBAs below:

1845973 - In BI 4.0 linux environments, random servers including CMS crash when starting SIA

1756728 - Servers fail randomly and the system becomes unstable in BI 4.0 installed on Linux
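The current limits can also be inspected programmatically; for example, a small Python check on the max-open-files limit using the stdlib resource module (the 10240 threshold here is an illustrative assumption, not an official SAP figure):

```python
import resource

def nofile_limit_ok(minimum=10240):
    """Return (soft, hard, ok): the open-files limits for this process and
    whether the soft limit is unlimited or meets the assumed minimum."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    ok = soft == resource.RLIM_INFINITY or soft >= minimum
    return soft, hard, ok
```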





In the PAM document you will also find a list of LOCALES that BusinessObjects supports. A compatible locale needs to be set prior to installation.

To check the locale, you can type the command: locale

To set a specific locale, export the LANG or LC_ALL environment variable; for example: export LC_ALL=en_US.utf8

The PAM document mentioned in point A presents a list of the supported locales.




Needless to say, the user profile must have the correct access to the Unix executables, etc. Improper access to the user's bin typically causes issues when the installer internally sources files, runs scripts, etc.

We have observed cases where, if the path to the "ping" command is not in $PATH, the installation does not proceed and fails with error INS00293.




I hope the above helps towards a successful installation. These recommendations are based on numerous support cases where an installation had failed and one or more of these steps helped it reach completion.

On Linux/Unix environments there is a lot to ensure from the OS and permissions perspective. The basic idea is to let the product install without any hindrances or access issues at the OS level.

You can find documentation for installation, deployment, etc. here: BI 4.0, BI 4.1 and BOE XI 3.1.





The first release of the integration of Lumira for BI4.1 has been released!



Here I cover the functionality available in the first release. It is delivered as an add-on installer for BI 4.1. The minimum requirement is BI 4.1 SP3 and Lumira 1.17, which also means you need HANA to run Lumira 1.17.

BI 4.1 SP4 is recommend if you want full LCM support.

The most basic workflow:

First, you will see that with Lumira 1.17, there is a new option to publish to BI



When you have created your data set and visualization, you will be able to publish those to the BI repository, just like you do with a Crystal Report or Web Intelligence report.



Once you have published your story and dataset, you can see the story listed directly in BI launchpad, inheriting the same folder security as your other reports.


From here, you can view this report just like you do with any other report type.  The "story" that you are viewing is just a visualization based on a data set.  This also includes support for viewing InfoGraphics, right in BI Launchpad.


This is also supported in OpenDocument, as you would expect with the other document types!


Universe Support.

Both UNV and UNX data sources are supported by Lumira.  This means you can leverage your existing universe infrastructure, AND underlying security.

Once you've acquired data from a universe, what you publish to BI is the dataset based on that universe. That dataset can then be scheduled and refreshed by the user. Below you can see the schedule option on a dataset, which gives you the standard scheduling options you would see for a CR or Webi report.


Of course, static datasets, which are based on uploaded data, will not have the schedule options.

Note that "force refresh on open" is not yet available. However, a user can refresh the data on demand if they have the rights to do so.

When scheduling or refreshing, the same prompts with which the data was first acquired will be reused. At this time, changing prompts, or popping them up for the user on manual refresh, is not yet supported.



Datasets are the building blocks of stories. A story can be built from one or more datasets. In BI, these datasets are stored in the CMC, in a new location:


These datasets can be secured and deleted, much like Explorer infospaces.


More on Datasets:

In Lumira, you 'prepare' a dataset, including merging data from a source such as a universe with data from a local spreadsheet.


You can also perform other operations on the data before publishing it. When you schedule a dataset to fetch the latest data, all the transforms that were applied will be replayed automatically. This means any column joins/splits, custom calculations, or combining of the data with a local Excel source will be reapplied.




ESRI map support

Support for ESRI ArcGIS is also available in this initial release, which gives you great interactive map support. Support for an on-premise ESRI server is not yet available in this first release.



Mobile Support

The SAP BI Mobile application will now make stories available through the Mobi application, where you can consume them directly alongside your existing Webi and other BI mobile content.


LCM Support

Full LCM support, including dependency calculation, is included. This means that promoting a story will allow you to include the dependent dataset, universe, connection object and all related security. Do note that for the full LCM UI for the Lumira integration, you must be on BI 4.1 SP4. On 4.1 SP3, you can still use the LCM command line interface.


New BI Proxy Server

A new category of servers shows up, called "Lumira Services". This service is responsible for proxying requests down to the Lumira Server, which performs all the heavy lifting of the analytics.



Auditing, Monitoring, BI Enterprise Readiness

Standard monitoring metrics will show up for this new service. Additionally, the standard 'who viewed what' records you expect in your audit log also become available.


Data Storage & setting Limits

When a dataset is published to BI4, the underlying data is actually stored in Lumira Server.  This is all done transparently behind the scenes and does not require any administration in Lumira Server.  In fact, it does not actually show up in your Lumira Server repository.

When a user refreshes a story based on a universe, they will get their own copy of the data stored temporarily.   An administrator can set size and time restrictions for this temporary storage.




Stories must be authored and edited in Lumira Desktop. Authoring directly from BI Launchpad, as you would with the Webi applet, is not yet supported.

Accessing HANA Live is not yet supported.  At this time, a static dataset from HANA must be published from desktop.




This is only the first integration release of Lumira, which already packs in a lot of functionality and lets you leverage your existing BI4 infrastructure. The Lumira BI4 integration will continue to add enhancements and functionality with further releases over the coming months.

This post shows some possible ways to report, in WebIntelligence, which users have never logged in to the platform, using the SAP BO BI universe in SBOPRepositoryExplorer; the second method combines it with the SAP BO BI Audit universe. It is also a good way to check whether auditing is working as expected.



Method I (Without Audit data)



Some required components:

  1. SAP BO BI 4.x Platform;
  2. SBOPRepositoryExplorer connection and the universe;
  3. WebIntelligence to create the document.



Creating a report with users information:


From the WRC, or from BI LaunchPad using WebIntelligence, we can create the following query to show the number of users in our SAP BO BI environment:



In our Test environment we have 2,921 users.

Now we can discover the number of users that have never logged in to the Test environment:



This means we have 2,921 - 2,676 = 245 users who have logged in to the Test environment at some point.


With the following query we can list the users who have never logged in to this environment:





Method II (with Audit data)




Some required components:

  1. SAP BO BI 4.x Platform;
  2. SAP BO BI Audit DB;
  3. SAP BO BI Audit Universe configured and pointing to Audit DB;
  4. Excel to save users from Audit Report;
  5. IDT (Information Design Tool) to configure SBOPRepositoryExplorer connection and the universe;
  6. WebIntelligence to create the document.



Creating Report with Users Login-Activity from Audit DB:


Using WebIntelligence and Audit universe:


- For result objects:



- Filter Event Status with: User Logon Succeeded, Named User Logon Succeeded and Concurrent Logon Succeeded.



In the end you have the following query:



Execute with Run Query:



Save the document for future use.

Export Report to Excel File:

Export report to Excel (XLS or XLSX):



In Excel, remove all blank rows above the header and all blank columns before "User Name"; also remove any special characters other than [a-Z][0-9].
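The same cleanup can be scripted instead of done by hand in Excel. A stand-in sketch in Python working on exported rows (the "User Name" header and the keep-only-[a-Z][0-9] rule come from the text; everything else here is an assumption):

```python
import re

def clean_export(rows, header="User Name"):
    """Drop the blank rows above the header row and the blank columns to
    its left, and strip characters outside A-Z, a-z, 0-9 (spaces kept)."""
    start = next(i for i, row in enumerate(rows) if header in row)
    col = rows[start].index(header)
    return [[re.sub(r"[^A-Za-z0-9 ]", "", cell) for cell in row[col:]]
            for row in rows[start:]]
```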

You can also use SAP BO Live Office to retrieve the data via the Audit universe and schedule it periodically.

Rename the report to the final table name in the universe:



Save it to a path visible to the SAP BO BI client tools (IDT and WRC) and to the SAP BO BI WebIProcessingServer.




Retrieve the SBOPRepositoryExplorer universe into IDT:


Create a Local Project in IDT (Information Design Tool) and retrieve the universe from Repository Resources:






Configure Universe Connection attaching the Excel File:


To attach our Excel file definition to our universe, we must create a universe connection in IDT inside a project, for example:





Test Data from Excel in Connection:


Before continuing with the next steps, it is important to check that the Excel data can be read, that the path is correctly defined, and that the structure is correct:



Import new Table (Excel) into Universe:


Now we can import the table into the Data Foundation:



Insert a join between the Excel table and the USERS table:



Configure Join:




Save the Data Foundation.



Define new objects in the Business Layer:


Here, in the Business Layer, we can define the new measure coming from the new table inside the "Users" folder:



for example, with the following content:



and before saving and publishing the universe, create a query to test the results with all users and with users who have no logins:


- With login (in our example 2,921 users):



- Without login (in our example 2,761 users):



This means we have only 2,921 - 2,761 = 160 users who have logged in to the Test environment at some point.

Now we can publish our new universe to the CMS for the next topic.



Compare data from Method I and Method II


As you can observe, in "Method I" we have 245 logged-in users and in "Method II" only 160. We want to discover which users differ between "Method I" and "Method II" and try to understand why those users were not recorded in the Audit DB.


- First, create a query with all logged-in users from both methods:



(245 users)


- Second, create a combined query (with MINUS) to get the list of users that were not recorded in the Audit DB:



So we have to investigate why those 85 users were not recorded in the Audit DB.
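The same set difference can also be computed outside WebI. Assuming each method's user list is saved as a plain text file with one user name per line (the file names and sample users below are hypothetical), comm gives the users missing from the Audit DB:

```shell
# Hypothetical sketch: user lists from both methods saved as
# one-user-per-line text files. Names are examples.
printf '%s\n' alice bob carol dave > method1_users.txt   # e.g. 245 users
printf '%s\n' alice carol          > method2_users.txt   # e.g. 160 users

# comm requires sorted input; -23 keeps lines only in the first file,
# i.e. users seen by Method I but absent from the Audit DB (Method II).
sort -o method1_users.txt method1_users.txt
sort -o method2_users.txt method2_users.txt
comm -23 method1_users.txt method2_users.txt > missing_in_audit.txt

cat missing_in_audit.txt
```

In the real scenario, the output file would contain the 85 users to investigate.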


That's all for the moment.

Jorge Sousa

This post shows one possible way to create the list of SAP BO LCM jobs using the SAP BO BI Universe in SBOPRepositoryExplorer.




Some required components:

  1. SAP BO BI 4.x Platform;
  2. XML file with predefined CMS query;
  3. IDT to configure SBOPRepositoryExplorer connection and the universe;
  4. WebIntelligence to create the document.


For this example I'm using SAP BO BI 4.1 SP2.


Create CMS query for LCM Jobs:


We can create an XML file with the following content:

<?xml version='1.0' encoding='ISO-8859-15'?>
<Tables xmlns="http://www.w3.org/2011/SBOPTables">
    <TableDescription>LCM Jobs</TableDescription>
    <BOQuery>SELECT SI_ID,


Configure Universe Connection attaching the XML file:


To attach our XML file definition to our universe, we must create a universe connection in IDT within a project, for example:





Create the Universe:


After the connection is configured, we can create the Data Foundation and Business Layer:




Test the Universe:


Once the universe is created, we can test it before publishing to the CMS:



Publish Universe to CMS:


After testing, we can publish it to the CMS:



Create report in BI LaunchPad with WebI:


After publishing the universe, we can use it in WRC and in BI LaunchPad:


and the report can look like this:



Thanks and enjoy.

Jorge Sousa


Update 20/08/2014: Maintenance Schedule has finally been updated with the announcement of SAP BI 4.1 SP05.  See below.

Update 04/07/2014: Forward Fit Plan has been updated.  See below.

Update 23/06/2014: PAM has been updated.  See below.

Update 22/06/2014: Added Section: Maintenance Schedule.  See below.

Update 17/06/2014: What's New document has been updated.  Comments below.





SAP has released on Friday June 13th 2014, as planned in the Maintenance Schedule, Support Package 04 for the following products:

  • SBOP BI Platform 4.1 SP04 Server
  • SBOP BI Platform 4.1 SP04 Client Tools
  • SBOP BI Platform 4.1 SP04 Live Office
  • SBOP BI Platform 4.1 SP04 Crystal Reports for Enterprise
  • SBOP BI Platform 4.1 SP04 Integration for SharePoint
  • SBOP BI Platform 4.1 SP04 .NET SDK Runtime
  • SAP BusinessObjects Dashboards 4.1 SP04
  • SAP BusinessObjects Explorer 4.1 SP04
  • SAP Crystal Server 2013 SP04
  • SAP Crystal Reports 2013 SP04


You can download these updates from the SAP Marketplace as a Full Install Package or Support Package (Update).


E.g.: Full Install



E.g.: Support Package (Update)




What's New?


The updated What's New document has been released a few days late, on 17/06/2014.  There are 9 new features and, unless I'm missing the point, none of them are very impressive.  However, there are tons of fixes (293 to be exact).


Tip: If the link above still shows an old version, refresh the page in your browser or press F5.



Supported Platform (Product Availability Matrix)


The updated SAP BusinessObjects BI 4.1 Supported Platforms (PAM) document has been released a week late on 23/06/2014.


Alternative URL: http://service.sap.com/pam


As far as I can tell, the following extra support has been added since SAP BI 4.1 SP03:


  • CMS + Audit Repository Support by Operating System
    • Microsoft SQL Server 2014
    • Oracle 12c
    • SAP HANA SPS08
    • Sybase ASE 16


  • Adobe Flash Player 12


  • SAP HANA Support per SAP BI Products


  • Java Runtime (JRE) 1.8 (For browser use with Web Intelligence - Not Server side aka JDK which is still 1.7)





The usual documents have been made available:










Forward Fit Plan


The SBOP Forward Fit Plan has finally been updated.  A few weeks late...  SAP BI 4.1 SP04 includes the following updates and fixes:


  • BI 4.1 Patch 3.1
  • BI 4.1 Patch 2.2 - 2.4
  • BI 4.1 Patch 1.6 - 1.8


  • BI 4.0 Patch 6.11 - 6.12
  • BI 4.0 Patch 7.7 - 7.9


  • XI 3.1 FP 6.4


Source: SBOP Forward Fit Plan

Maintenance Schedule

SAP BI 4.1 SP04 Patch 4.1 (Week 31 - August 2014)

SAP BI 4.1 SP04 Patch 4.2 (Week 35 - August/September 2014)

SAP BI 4.1 SP04 Patch 4.3 (Week 40 - October 2014)

SAP BI 4.1 SP04 Patch 4.4 (Week 44 - November 2014)

SAP BI 4.1 SP04 Patch 4.5 (Week 48 - November / December 2014)


SAP BI 4.1 SP05 is now announced for Week 47, 2014.  Release date should be around Friday November 21st.  Looking forward to it!


Source: Maintenance Schedule



Installing Updates


I have installed the following updates on my training server.


Updates Installed


    • SBOP BI Platform 4.1 SP04 Server
    • SBOP BI Platform 4.1 SP04 Client Tools
    • SAP BusinessObjects Explorer 4.1 SP04




    • Windows Server 2012
    • 4x Virtual Processors (Hyper-V)
    • 20 GB RAM




Bearing in mind my training server originally started with a clean installation of SP02, then was patched to SP03 with 3 languages (English, French, Finnish), this is how long it took to install everything.


1. As always, the Preparing to install screen takes longer and longer...



2. This chart shows the time it took waiting for the Preparing screen to disappear then the installation time.  That's right, about 2.5 hours (151 minutes) just to patch SAP BI Platform 4.1 SP04 Server.


3. As always, when you click Finish, do NOT reboot straight away.  Wait for setupengine.exe to go away in your Task Manager.  This can take a minute or so.




4. How it looks for me now.





Past Articles


For information, I wrote the following articles about previous SAP BI Support Packages:






It's still early days and there are a couple of documents that need to be updated, but so far so good.  No errors in the Event Viewer, every service is starting as it should, and some preliminary tests are successful.


As always, please share how it went for you in the comments below.  I'm sure this helps many people.



Please like and rate this article if you find it useful!



Thanks and good luck!



Here's some exciting news for you enterprise data connectivity junkies out there: SAP's BI 4.1 suite will support Hive2 and Impala connectivity via ODBC and JDBC drivers from Simba Technologies. And later in the year, so too will SAP's Lumira data visualization software.


For Simba Technologies, it's a mutually-rewarding partnership: Simba shares SAP's broad commitments to enterprise Big Data innovation, integration, productivity, and efficiency. But beyond that, why should Simba's connectivity "plumbing" matter to SAP's customers?


SAP BI 4.1 + Simba Connectivity = The Future of Big Data Interoperability

SAP's adoption of Simba connectivity drivers represents SAP taking a progressive stand for the innovation enterprise: From the data warehouse to the BI application to the Hadoop framework of choice, when it comes to Big Data in the enterprise, interoperability matters. A lot.


The SAP BI 4.1 Suite now includes Simba ODBC and JDBC drivers as embedded components. SAP BI 4.1 customers can easily connect their Big Data directly to Hive or Impala. Queries are faster, performance is better, and reliability is so good enterprise IT can take it for granted. (Not that enterprise IT would or should ever take anything for granted, of course!)


Interoperability and Extensibility: How Best-in-class Connectivity Impacts SAP Enterprises

What's really meaningful about this partnership is that it signifies SAP's commitment to interoperability. The Simba JDBC drivers for Big Data, for example, adhere to the JDBC standards. For SAP BI 4.1 customers, that means accessibility to more apps, more platforms, more data sources. The SAP BI 4.1 Suite is a first-class diplomatic citizen in the Big Data world because it can connect directly to any Hadoop distribution – no need to get drivers from the Hadoop distros – SAP has it all built in. SAP BI 4.1 customers get this direct Big Data and Hadoop connectivity using the same tools and products they have always used.


Today BI 4.1, Tomorrow Lumira

BI 4.1’s support for Hive and Impala connectivity via Simba drivers is a first step (or the first two steps) in optimizing Hadoop connectivity for SAP enterprises. SAP has cranked the throttle when it comes to optimizing Hadoop SQL engine performance. And the next step is on the BI visualization side: Lumira, SAP’s innovative solution for visualizing all that big data, will adopt Simba JDBC drivers later this year. The right tools, optimized connectivity, and screaming fast query speed: It’s a great time to be an SAP Big Data enterprise.

At SAP we understand customers would prefer to resolve product issues on their own, rather than logging a support incident with SAP Product Support.


To help customers resolve their own issues without our involvement, we have started to externalize our methodology for resolving product-related issues. We call this ‘Troubleshooting by Scenario’. Troubleshooting by Scenario means you can follow exactly the same steps and methodology that an SAP support engineer or developer would follow to isolate the issue.


‘Troubleshooting by Scenario’ provides customers with a list of scenarios. Example scenarios are ‘Promotion Management’, ‘Process crashes’, ‘Install problems’, ‘Scheduling problems’, and ‘Report refresh problems’, to name but a few.


(To start with we are providing one scenario “Promotion Management”, within one product area “Business Intelligence Products”, in order to get your feedback before we expand this innovative idea further.)


Within each scenario we provide a list of hypotheses (something to test or high level symptoms). For example “Problem with the meta data of the source or target repository” or “Known workflow causes a problem” or “Individual object causing a problem somewhere” are all hypotheses.


For each hypothesis we explain:

  • the purpose (of the troubleshooting task)
  • the 'tool' name (and details of the tool)
  • a rating (to help you pick in case you're not sure which one to use first)
  • why the tool is suitable
  • how to use the tool (for that particular hypothesis)
  • next steps


The idea all along is for customers to self-serve and resolve issues without the need to contact SAP. Of course, there may be times you will need to contact SAP to help isolate or resolve an issue, and certainly when a defect requires a new code-level fix.


Please watch this video


and visit the Promotion Management Troubleshooting Scenario at http://wiki.scn.sap.com/wiki/display/BOBJ/Promotion+Management+Problems


Your feedback is most valuable, please either comment here, use the survey or contact me via Twitter.

I'd be delighted to hear your opinions.

Thank you.


Matthew Shaw                          Twitter:@MattShaw_on_BI                Feedback Survey

This post shows how we can create a useful security matrix in SAP BO BI using the SBOPRepositoryExplorer connector.


In short, this example of a security matrix covers the following content:


  • Access Levels (ACLs) definition;
  • Groups and Users Relation;
  • Application Rights;
  • Folder Rights;
  • Universe Rights;
  • Connection Rights.


1. How to extract ACLs

Using the universe provided in SBOPRepositoryExplorer and WebIntelligence:


We can use the following dimensions to extract the ACLs from our CMS repository:




The next step is to create some required variables for our example:



=If ([Specific Right]=0 And [Right Group Name]<>"General")
     Then "Overwrite General"
ElseIf ([Specific Right]=1 And [Right Group Name]<>"General")
     Then "Specific Right"


=If (Count([CRole Right ID]) Where([CRole Right Granted]=1)) > 0
     Then 1
ElseIf (Count([CRole Right ID]) Where([CRole Right Granted]=0)) > 0
     Then 0
Else Count([CRole Right ID])

Now we can create a cross-table like:



And for the values we can use conditional formatting rules:



And the final result is (example):


2. Groups and Users Relation


For this kind of content we can use different perspectives/views depending on our requirements; in any case, we can use the following basic dimensions:


Here we have an example with:


"Group Name"

"User Name"

and for the value we can define the following formula:


=If Count([Group Name])>=1
     Then "X"
          Else ""

Other example using full path:



3. Application Rights


For applications we can use the following dimensions:



A possible example:


4. Folder Rights


For folder rights we can use the same logic as for application rights:



5. Universe Rights

In the next example we show rights for universe folders:




6. Connection Rights

As in the previous example, we can use the following dimensions:


This is a simple way to create our security matrix for SAP BO BI online.


Thanks and enjoy!

Jorge Sousa

Here is the slow CMC behavior that was giving us a hard time:

It takes from 30 seconds to more than 1 minute to log in to the CMC / BI Launchpad.

Once logged in, navigation between the different sections is slow.

The Tomcat manager came up fast; however, the login page also took time to come up.

Things to check:

Multiple boe_cmsd processes were observed on the server box. One remains constant while the others come and go.

Checked that Platform Search was set to continuous crawling.

Our APS service is split; however, there was no separate APS for the Platform Search service.

As our BO server runs on a 2-node cluster, we stopped the 2nd node to see if performance improved.

It was observed in the past that restarting the SIA resolves the issue temporarily; however, the issue comes back after a few days.

We restarted the SIA, but no visible change was observed in the system.

We disabled 'Enable auto-save for users who have sufficient privileges' for Webi reports, referring to KBA 1342368.

Refer to KBA 1956237 for information on the server hanging when auto-save in Web Intelligence is triggered.

How we resolved the issue:

We removed the Platform Search Service from the existing APS and created a separate APS with the Platform Search Service.
We also switched crawling to scheduled: log in to the CMC, go to Applications > Platform Search Application > Properties, check Scheduled Crawling, then Save and Close.

Navigate to the program object for Platform Search under Folders in the CMC and schedule it to run after working hours.

One fine day, you as a BO admin may find that Tomcat is going down frequently, a couple of times a week, without any clue as to what is going on.

In our case we had this issue, and hence I thought to share it with you all.

This is likely due to the compression bug in Tomcat.

Or check the logs and see if there is any indication of a problem with the max PermGen size being reached (OOM, out of memory).

As a resolution, the first thing to do is disable compression in server.xml (/path/to/tomcat/conf/server.xml): look for the connectors that have compression="on" and change it to compression="off".

The second thing to do is to increase the max PermGen space for Tomcat. This can typically be done in the environment shell script (env.sh/setenv.sh).

Knowing all this, we modified Tomcat/conf/server.xml to turn compression off, changing compression="on" to compression="off".

We cleared out the Tomcat cache under Tomcat/work/Catalina/localhost.

Then, we were able to login to CMC.

Then we changed the value of -Xmx to 4096m and added the parameter -Xms set to 256m in the JAVA_OPTS variable in the tomcat/bin/setenv.sh script.

For more details, refer to SAP Note 1750952 - BI4: Setting JAVA_OPTS for Tomcat.
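Putting the memory changes together, a minimal setenv.sh fragment could look like the sketch below. The exact values are examples to tune for your server, and the -XX:MaxPermSize flag and its 512m value are my own illustrative additions for the PermGen limit (they apply only on Java 7 and earlier JVMs, which these Tomcat versions used):

```shell
# Hypothetical tomcat/bin/setenv.sh fragment; values are examples,
# not prescriptions. -XX:MaxPermSize is illustrative and only
# applies to Java 7 and earlier.
JAVA_OPTS="$JAVA_OPTS -Xms256m -Xmx4096m -XX:MaxPermSize=512m"
export JAVA_OPTS
```

Restart Tomcat after editing the script so the new JVM options take effect.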

Log in to the CMC and go to the Monitoring page. Go to the Watchlist tab and click New.



Give a name for your watch and select "Two (Ok, Danger)". Click Next.


Type "disk" as the filter and choose "Free Disk Space in Root Directory". Change the Danger value; in this example I set it to less than or equal to 100 GB.


Click Next. At the bottom, set the notification settings and save your watch. You can also add e-mail alerts. Now copy a large file to your disk to test the watch. If your free disk space falls to 100 GB or less, you will get an alert, depending on your notification settings.


