
The Issue


I recently deployed SAP BusinessObjects 4.1 across several environments at a client site and ran into the following error when saving a Web Intelligence report to the platform from the Webi Rich Client, the Java panel, and the HTML panel.

The document's serialization version is too recent (Error: WIS 30915)

Scheduling a Web Intelligence report to Webi format also resulted in the above error.


The Web Intelligence processing server logs left the following trace:


**ERROR:SRM:An internal error occured while dgSerializeManagerImpl is calling ibo_idgSrmStore->PublishToCorporate [kdgSerializeManager.cpp;2533]


  • No issues were logged in the setupengine.log
  • Running a repair on the deployment didn't resolve the issue


Root Cause


Query Builder reveals what appears to be an incorrect version number for the Web Intelligence application.





  • Note the dfo filename above - BusinessObjects_WebIntelligence_dfo.xml.


The Fix


  • Stop all SIA nodes
  • Back up the CMS database
  • Copy the file from the <InstallDir>\SAP BusinessObjects Enterprise XI 4.0\dfo directory to the <InstallDir>\SAP BusinessObjects Enterprise XI 4.0\packages directory
  • Start the SIA node containing the CMS
  • Confirm the file has been removed from the <InstallDir>\SAP BusinessObjects Enterprise XI 4.0\packages directory
  • Confirm the update via Query Builder


  • Re-run the earlier tests... all working


Side Notes


I ran into this issue on an upgrade from BI 4.0 SP5 FP4. As expected, I wasn't able to replicate it on an environment that had a clean build (i.e. no upgrade).


I'll report the issue through the correct channels in due course, but I felt it was best to get this information out there, as I burnt several hours troubleshooting it and no doubt someone else will run into the same problem.

Designing the security model is one of the most important phases in BusinessObjects implementation/migration projects. A well-organized security model not only makes administration easier but also ensures security is applied consistently across different functional/application user groups with less maintenance effort.


From this blog onwards, we are going to look at how to design and implement the security model. Before starting the design, we should consider the following:


Various rights categories (as of BI 4.0 SP3)


As we are already aware, there are four different rights categories, listed below:


  • General
  • Application
  • Content
  • System


I have consolidated all of them in the diagram below.



Various user categories

Always categorize users by application as well as by functionality (what they can do). I have categorized users based on their BusinessObjects application/content and their functionality on BI content.

Application-wise user categories

  • Crystal users - Crystal Report users
  • WebI users - Web Intelligence report users
  • Dashboard users - Xcelsius dashboard users
  • Design Studio users - Design Studio dashboard users
  • Universe/Information designers - Universe designers
  • Analysis users - Analysis application users
  • Explorer users - Explorer application users
  • Mobile users - Mobile report users

Functional user categories

  • BOE Users - All users of the BusinessObjects system
  • Users who can only view/refresh the reports
  • Interactive Analysts - Users who can refresh/create/modify the reports they have created themselves, but cannot create/modify corporate reports
  • Interactive Authors - Users who can refresh/create/modify corporate reports
  • Super Users/Managers - Users who can manage and maintain documents as well as users for a particular application/department
  • Content Schedulers - Users who can schedule reports for themselves and on behalf of others
  • Content Promoters - Users who can migrate/promote BI content across different environments
  • Delegated Administrators - Users who can administer the BusinessObjects deployment as a whole or in part


Based on these rights and user categorizations, we will look at the security model design in more detail in my upcoming blogs. Thanks for reading!

We can add HANA Live views in the Universe Design Tool (UDT) by creating a HANA connection. The views are normally stored in the _SYS_BIC schema.

Step 1: Create a HANA Connection with JDBC driver



Step 2: Choose the JDBC driver and add the credentials and host name for the HANA DB (the HANA JDBC driver is typically com.sap.db.jdbc.Driver, shipped in ngdbc.jar, with a URL of the form jdbc:sap://<host>:3<instance>15).




Step 3: Test the Connection





Step 4: Add a new class under the project.



Step 5: Select the view from the _SYS_BIC schema.







Step 6: Select your view and add its objects to the class.



You have now successfully added the HANA Live view from the HANA database into UDT, and a universe can be created using the HANA Live view.


Let me know if you have any issues.

As we are all aware, in BI 4.x the capabilities provided by the as-is sample Audit reporting suite are limited, and for quite some time we have seen many additional Auditing requirements flowing in on SCN.


Can I extend the Auditing capabilities, and how?


We can enhance the existing audit capability beyond the as-is sample. Besides the default sample reports provided, I have a few more requirements, such as:

  • Frequently used reports
  • List of most active users
  • Who are all my Mobile BI users?


Below are some of the approaches I have considered to achieve these Audit reporting enhancements.


1. Creating customized Audit reports from the existing Audit schema


We can create enhanced audit reports from the existing Audit schema based on our requirements, by taking an existing report as a reference and modifying its prompts, filters, etc.


For example, to report on Mobile access, use "Application_Type_Name" from the table "ADS_APPLICATION_TYPE_STR", which identifies the application type the access came from (i.e. a mobile device). It is available as "Client Application Type" in the "Events" class of the universe.
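As a rough sketch of such a query (using Python's sqlite3 as a stand-in for the real Audit database; the join column Application_Type_ID and the User_Name column are assumed from the standard BI 4.x audit schema, so verify them against your version), identifying the Mobile BI users could look like:

```python
import sqlite3

# Hedged sketch: who are my Mobile BI users? Join the event table to the
# application-type lookup table and keep only mobile application types.
MOBILE_USERS_SQL = """
SELECT DISTINCT e.User_Name
FROM ADS_EVENT e
JOIN ADS_APPLICATION_TYPE_STR a
  ON e.Application_Type_ID = a.Application_Type_ID
WHERE a.Application_Type_Name LIKE '%Mobile%'
"""

# Demonstrated against an in-memory stand-in for the Audit database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ADS_EVENT (User_Name TEXT, Application_Type_ID INT)")
conn.execute("CREATE TABLE ADS_APPLICATION_TYPE_STR "
             "(Application_Type_ID INT, Application_Type_Name TEXT)")
conn.executemany("INSERT INTO ADS_EVENT VALUES (?, ?)",
                 [("alice", 1), ("bob", 2)])
conn.executemany("INSERT INTO ADS_APPLICATION_TYPE_STR VALUES (?, ?)",
                 [(1, "SAP BI Mobile"), (2, "WebIntelligence")])
mobile_users = [row[0] for row in conn.execute(MOBILE_USERS_SQL)]
# mobile_users -> ["alice"]
```

The same SQL, run against the actual Audit schema, can back a report or a derived table.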


2. Creating Custom tables in Audit schema for the reporting


We can create custom tables in the Audit schema based on our requirements. One such option is to create derived tables in the Audit universe based on custom SQL statements that can be run directly against the Audit database.


The most active users can be obtained by running a suitable aggregation SQL against the Audit database.
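As a rough sketch of such an aggregation (using Python's sqlite3 as a stand-in for the Audit database; the ADS_EVENT table and User_Name column follow the standard BI 4.x audit schema, but verify the names against your version):

```python
import sqlite3

# Hedged sketch: rank users by how many audited events they generated.
MOST_ACTIVE_USERS_SQL = """
SELECT User_Name, COUNT(*) AS Event_Count
FROM ADS_EVENT
GROUP BY User_Name
ORDER BY Event_Count DESC
"""

# Demonstrate the aggregation against an in-memory stand-in for the audit DB:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ADS_EVENT (User_Name TEXT)")
conn.executemany("INSERT INTO ADS_EVENT VALUES (?)",
                 [("alice",), ("alice",), ("bob",)])
rows = conn.execute(MOST_ACTIVE_USERS_SQL).fetchall()
# rows -> [('alice', 2), ('bob', 1)]
```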
Create a derived table in the Audit universe with this SQL, and you can then run reports directly on top of the derived table's columns/objects.


Alternatively, if the custom SQL extracts a large dataset, we can skip the derived-table approach (which is meant for smaller row counts), create a materialized view on the database side, refresh it periodically, and report from there.


3. Creating a metadata repository and reporting via a multi-source universe that points to both the Auditing schema and the metadata schema


This approach is very useful whenever we need reports that capture information from both the Audit and BO repositories. Some information, such as the number of named users/concurrent users, cannot be extracted from the Audit schema, which is where metadata reporting alongside Audit reporting comes in handy.


BI 4.x Audit reporting references:


BusinessObjects Auditing - What is changed in BO 4.0?

Sample Auditing Universe and Reports for SAP BusinessObjects_4_x

SAP BusinessObjects 4.0 Auditor Configuration & Deployment End to End

BusinessObjects Auditing - Considerations & Enabling


Thanks for reading. Appreciate all your thoughts, comments, ideas & feedback.

Hi All,


I wanted to share an experience that I had during a migration of universes and reports from BO 6.5 to BOE 3.1.


My BO 6.5 Admin gave me the bomain.key file which I used in the Import Wizard.


A little bit of background for people like me who have never worked on a BO 6.5 system: there is no concept of a CMS or filestore in BO 6.5.


Instead there are domains which house the BI content. There are mainly three types of domains in BO 6.5 that I am aware of:


1. Security Domain :- It has the users, groups and the security information.

2. Document Domain :- It has the reports.

3. Universe Domain :- It has the universes.


A particular environment can have multiple Document and Universe domains. These domains translate to folders when migrating to the BOE architecture (BOE XI R2 or BOE 3.1).


bomain.key is the encrypted file which holds the connection and linking information for these domains of the BO 6.5 environment. You need this file when logging into the BO 6.5 system using the Import Wizard.


So my BO 6.5 Admin provided me with the bomain.key file for migrating the content required for the project. My BO 6.5 repository was on Oracle, and so was the CMS of BOE 3.1.


I faced a weird issue while logging into BO 6.5 through the Import Wizard: it gave me an error about a missing TNS entry. So I contacted my BO 6.5 Admin and he provided the TNS entry for my security domain. This helped me get past the login screen.


However when I selected the folders and universes to migrate, I saw only empty folders. None of the reports or universes were visible in the Import Wizard.


To troubleshoot this issue further, I enabled tracing on Import Wizard by adding "-trace" to the IW shortcut in the startup menu. The logs were very lucid and correctly pointed to the problem.


I got the below trace in the logs.


2014/08/21 08:33:10.966|>=|W| | 3864|7560| |||||||||||||||_BOImportHelper::getUniverses: Universe 15 cannot be imported because the corresponding domain is down

2014/08/21 08:33:10.966|>=|W| | 3864|7560| |||||||||||||||_BOImportHelper::getUniverses: Universe 16 cannot be imported because the corresponding domain is down

2014/08/21 08:33:20.028|>>|E| | 3864|7560| |||||||||||||||PingDomain: Unable to connect to domain 11 because: ORA-12154: TNS:could not resolve the connect identifier specified

Apparently my universe and document domains required additional TNS entries. To overcome this, I merged the tnsnames.ora file from the BO 6.5 server into the one on the BOE 3.1 server.

This resolved the issue and the migration went very smoothly.

I will just put the crux of my blog in points for easier understanding.

  • There is no CMS or filestore concept in BO 6.5; it has domains instead.
  • There are mainly three types of domains: security, document and universe domains.
  • These domains translate to folders when migrating to BOE XI R2/3.1.
  • bomain.key is the encrypted file holding information about all the domains. It is required for logging into the Import Wizard.
  • The machine/box from which you launch the Import Wizard should be able to connect to all the domains in BO 6.5 for a successful migration.


Thanks for reading my blog and I hope you found it useful.




In our day-to-day BusinessObjects report scheduling, we often use events as the key trigger for BO reports. Data quality can be ensured by setting events as a dependency on a BO report, and almost everyone who uses BusinessObjects follows this event-dependency concept to maintain the quality of data in their reports. What else can we do with those events?


I thought of sharing how we can make good use of BO events for load balancing, data quality and maintenance activities.


Basically, events are classified into three types.


  1. Custom event - occurs when you explicitly click its "Trigger this event" button.
  2. File event - waits for a particular file to appear in the specified location before triggering.
  3. Schedule event - triggers when a particular object has been processed.


To know more about events, see page 229 of the Admin guide.


Load Balancing:




Dummy Reports - 3 or 4 Reports*

Schedule Events - 3 or 4 Events*


Priority events can be created to maintain load balancing on the BO servers. With these events, reports are kicked off based on criticality. The four dummy reports we create run every 1 min* and trigger the corresponding schedule events.


Here, we can name those 4 dummy reports as Priority1_Report, Priority2_Report, Priority3_Report and Priority4_Report.

Name those 4 schedule events Event_P1, Event_P2, Event_P3 and Event_P4 (set the events as OUT conditions for the corresponding priority reports).


The priority events are then set as conditions on all BO reports, based on the reports' scheduled start times. For example:


  • Event_P1 - reports scheduled to start between 8:00 AM and 10:00 AM
  • Event_P2 - reports scheduled to start between 10:01 AM and 11:30 AM
  • Event_P3 - reports scheduled to start between 11:31 AM and 1:00 PM
  • Event_P4 - reports scheduled to start after 1:01 PM
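The mapping above can be sketched as a simple lookup. This is just an illustration of the assignment rule, with the windows and event names taken from the example; `priority_event_for` is a hypothetical helper, not part of the platform:

```python
from datetime import time

# Priority-event windows from the example above: (event name, window start, window end).
WINDOWS = [
    ("Event_P1", time(8, 0),  time(10, 0)),
    ("Event_P2", time(10, 1), time(11, 30)),
    ("Event_P3", time(11, 31), time(13, 0)),
]

def priority_event_for(start: time) -> str:
    """Return the priority event to attach to a report with this scheduled start time."""
    for name, window_start, window_end in WINDOWS:
        if window_start <= start <= window_end:
            return name
    return "Event_P4"  # anything after 1:01 PM (and any uncovered edges)

# priority_event_for(time(9, 30)) -> "Event_P1"
```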


How these events can be used?


  • Pause these priority reports when there is a delay in the data load, or any other issue, to avoid all reports kicking off at the same time.
  • Once the data load has completed or the issues are fixed, release "Priority1_Report" first to generate the critical reports that depend on Event_P1.
  • Release "Priority2_Report" after 30-45 mins*, after confirming that there are enough resources available to process the next set of reports. By this time, most of the P1 reports should have completed.

  • Release "Priority3_Report" and "Priority4_Report" after a further 30-45 mins* each, checking the number of reports in RUNNING state.


Maintenance Activities:


I divide this module into 2 segments.


  1. Maintenance on BO Server
  2. Maintenance on Data load server (DB)


For Maintenance on BO Server, Rudiments:


Dummy Reports - 1 Report

Schedule Events - 1 Event


To avoid report failures, or to stop all BO reports from kicking off, we can have this one dummy report manage all these actions. The dummy report runs every 1 min* and triggers the corresponding schedule event.


Here, we can name this dummy report Allow_All_Reports.

Name the schedule event Event_AllowAll (set this event as the OUT condition for the "Allow_All_Reports" report).


How this event can be used?


  • Make this a mandatory event when scheduling any report.
  • This event is the key that lets BO reports stop or kick off.
  • In case of any issues, if you don't want your reports to trigger (and want to avoid report failures), pause the single report "Allow_All_Reports" to hold all other scheduled reports in your environment.
  • Once the issues are resolved, resume "Allow_All_Reports" to allow all reports to kick off.

Note - If you pause this report by mistake, none of your reports will trigger.



For Maintenance on Data Load Server (DB), Rudiments:


Dummy Reports - 3 or 4 Reports*

Schedule Events - 3 or 4 Events*


To avoid report failures, or to stop all BO reports that hit a specific database during planned or unplanned maintenance on the data load servers (DB), we can have these dummy reports manage all these actions. The dummy reports run every 1 min* and trigger the corresponding schedule events.


Here, we can name the three dummy reports after the DB names, e.g. DB1_Report, DB2_Report and DB3_Report.

Name the schedule events Event_DB1, Event_DB2 and Event_DB3 (set these events as OUT conditions for the respective DB event reports).


For example, say DB1_Report is the dummy report for Oracle DB reports. Set Event_DB1 as an in-condition for all reports that hit the Oracle database. In case of unplanned maintenance or any database issue, we don't have to dig through metadata to identify the reports hitting the Oracle DB and pause them manually. All you need to do is pause DB1_Report, which in turn holds all the Oracle-bound reports, since they were scheduled with the DB event condition.


  • Event_DB1 - reports hitting the Oracle database
  • Event_DB2 - reports hitting the SQL database


Data Quality:


The quality of data in a report can be maintained by setting any of the three event types (custom, schedule or file events) as a condition. Based on the ETL jobs that load the tables used by the reports, we pick the most suitable event type as a condition for the BO reports.


  • If the ETL jobs generate trigger files after processing the data, a file event can be set as a condition on all reports that use the tables loaded by that particular ETL job.
  • A custom event can be used to kick off reports once a specific table has been loaded.
  • A schedule event acts through a WATCHER report that monitors the status of the ETL job; when the ETL job completes, the instance succeeds and triggers the corresponding event.


To Summarize:


While scheduling any report, plan to set the events below as dependencies to maintain data quality, balance load, avoid report failures, and allow reports to be paused/resumed during planned or unplanned maintenance periods.


Event to wait for:  Event_AllowAll, <<Priority Event>>, <<Database Event>>, <<Data Load Event>>


  1. Event_AllowAll - the default event, which should be set as a dependency for all scheduled reports. Pausing its report holds every report in your environment.
  2. Priority event - set based on the scheduled start time of the report. Helps with load balancing during load delays or other issues.
  3. Database event - set based on the database the report runs against. Helps to pause/resume reports during maintenance periods or DB issues.
  4. Data load event - based on the ETL process, set the most suitable event to ensure data quality.


Points to be noted:


  • No DB connections are required for the dummy reports created for load balancing and maintenance.
  • As these reports do not hit any DBs, their instances complete in a few seconds.

  • Place these event reports in a separate folder and set the security such that only administrators can pause/resume them (customize your security levels based on your requirements).
  • Set the report instance LIMITS to 10.
  • These event reports act as a one-stop location from which you can control almost all the scheduled reports in your environment.


*Based on business requirements.


I welcome your feedback, comments and compliments.



Vijay Madhav

You have to complete 4 steps.


1. Create an event

2. Create a script file for database table record count

3. Create the schedule

4. Create a windows task schedule


1- Create an event:


Log on to CMC and create an event.



Go to "System Events" folder and  click on "Create an event"



Fill in the text boxes. Choose "File" as the type and give the path of your "ok" file. Note that the "ok" file (which is "f:\a.txt" in this example) is not created yet; it will be created by the script described below. The "f" drive is a drive on your BO server.




2- Create a script file for database table record count


OK. Now we need a script. This script will:

  • connect to the database
  • check the number of rows in our table
  • if there is a record matching our query, create the "ok" file

You can find an example file with this kind of script as an attachment.
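The attachment isn't reproduced here, but the logic is simple. Here is a minimal stand-in sketch in Python; sqlite3 stands in for your real database driver, and the table name (etl_load) and file name are just this example's assumptions:

```python
import os
import sqlite3

OK_FILE = "a.txt"  # the file the CMC file event watches ("f:\\a.txt" in this example)

def write_ok_file_if_loaded(db_path: str, ok_file: str = OK_FILE) -> bool:
    """Create the "ok" file only if the ETL load produced at least one row."""
    conn = sqlite3.connect(db_path)
    try:
        (count,) = conn.execute("SELECT COUNT(*) FROM etl_load").fetchone()
    finally:
        conn.close()
    if count > 0:
        with open(ok_file, "w") as fh:
            fh.write("ok")
        return True
    return False
```

Point the CMC file event at the path this script writes to; the Windows task in step 4 simply runs the script on a timer.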


3- Create the schedule


Go to your BI portal and create your schedule. Don't forget to select your event.



So your schedule will never run until there is a file named "a.txt" on your "F" drive.


PS: Don't forget that the schedule only looks for an "ok" file created after the schedule's creation time.


4- Create a windows task schedule

The last step is to create a Windows scheduled task that automatically runs your script file every hour, every quarter hour, etc. If your ETL process fails, there will be no record in your table and the script won't create the "ok" file "a.txt". When the ETL process succeeds, the script creates the "ok" file and your schedule runs.



The objective behind this post is to bring some attention to what I consider a serious product shortfall, namely the sequential processing of multiple SQL statements within a single universe query.


The idea below touches on this briefly; it also references a secondary issue, the sequential processing of multiple data providers in a single document - another topic that needs addressing.



The Issue


As mentioned, this post focuses on the execution behaviour of a single .unx query that contains multiple statements (flows), and the sequential processing thereof. These queries are generally produced when the following options have been set on the universe:

  • Multiple statements for each context
  • Multiple statements for each Measure


Steps to reproduce:


  • Create a universe containing multiple contexts
  • Enable the option 'Multiple statements for each context'
  • Select a measure from each context; this will produce two statements (visible via View Script)
  • Profile/monitor database activity
  • Execute the query
  • Note the sequential execution of each statement


This behaviour is documented in SAP Note 1258337. The note is slightly dated, but from what I've seen the behaviour hasn't changed.




The components below all use a Java library to execute a .unx query and retrieve its results from the database.

  • Crystal Reports Processing Server (Crystal Reports for Enterprise)
  • Dashboard Processing Server
  • Information Design Tool
  • Dashboard Designer
  • Crystal Reports for Enterprise Client


Generally speaking, the guts of the code that performs the query execution is held in a class called com.businessobjects.dsl.services.dataprovider.impl.QuerySpecDataProvider, which can be found in the jar files:

  • dsl_engine.jar
  • com.businessobjects.dsl.services.impl.jar


Changing the two methods below in the class com.businessobjects.dsl.services.dataprovider.impl.QuerySpecDataProvider to include a simple asynchronous call when running the recursive calls to getResultNode(<?>) would bring significant performance improvements when executing queries that contain multiple statements:

  • getResultNode(<?>)
  • getMergedResults(<?>)


By making the changes above, you should in theory see vast improvements in overall query performance when:

  • You have a large number of statements in a batch
  • And/or statements within the batch take a long time to run

It is important to note that the final output will always take as long as the longest-running statement in the batch, as all results need to be retrieved before they can be passed up the stack.
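To illustrate the principle (this is not SAP's code; the flow names and costs are made up), here is a sketch in Python showing why concurrent dispatch makes the batch take roughly as long as its slowest flow rather than the sum of all flows:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for one SQL flow of the batch; "cost" simulates the DB round-trip time.
def run_statement(flow):
    time.sleep(flow["cost"])
    return flow["name"]

flows = [{"name": "flow_1", "cost": 0.2},
         {"name": "flow_2", "cost": 0.2},
         {"name": "flow_3", "cost": 0.2}]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(flows)) as pool:
    results = list(pool.map(run_statement, flows))  # all flows in flight at once
elapsed = time.monotonic() - start
# Dispatched concurrently, the batch takes roughly max(cost) (~0.2s here),
# not sum(cost) (~0.6s) as with today's sequential execution.
```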


Please note that the workflow is slightly different for components that access the Webi Processing Server, namely:

  • Webi Reports
  • PowerBI
  • BIWS


The Webi Processing Server utilises a different code base. I'm struggling to get some readable traces together for this component, but my guess is that changing the underlying code (compiled in QT.dll) in line with the recommendation above would have a similarly positive effect.




The above findings are the result of testing against a single-source relational .unx universe; behaviour may differ when using multi-source universes or OLAP connectivity, neither of which I have got round to testing yet.


With that in mind, I am by no means proposing that this small change to the referenced class is a complete solution. But one would hope that, given the manpower available at SAP and the performance improvements this change can bring to the product suite, a full solution could be implemented in a relatively short space of time - given enough push by us, the community.


I've listed several ideas below that detail where sequential processing takes place. Please up-vote them if you want to see the current implementation improved and ultimately reap the performance gains these changes will bring.








For the Java devs out there, it is of course possible to decompile the com.businessobjects.dsl.services.dataprovider.impl.QuerySpecDataProvider class and implement this change yourself to reap the performance benefits. However, there is a high likelihood that merely decompiling the code would be a copyright infringement, so this is by no means endorsed or recommended by me.


Let the crusade begin......hopefully.





Dear All,


After a long time here on SCN, I am writing this blog about one of the quite interesting features in BI 4.x: Visual Difference.


Visual Difference lets you compare two versions of BI content in order to identify the changes incorporated in each version. This allows you to track the changes made in every version of the BI content, and the content could be a report, a universe, or even an LCMBIAR file.


Visual difference background


Visual Difference is one of the services hosted in the APS and is consumed by both Promotion Management and Version Management. A dedicated APS for the Visual Difference service is not necessary unless the majority of your users use it heavily outside of Promotion/Version Management.



What & How to compare


You can compare BI content present in any of the following locations:


  • CMS – from a BusinessObjects repository
  • VMS – from a Version Management system repository
  • Local File System  – from a local file system




In the CMS, you can select both reports and universes.



After selection, you can compare them across repositories (CMS/VMS/LFS).



You can even schedule the difference generation, similar to reports.




Visual difference use case for each user category



  • Difference log - What changes were made in my current report/universe compared with the previous version, so that I can continue development from the required version?
  • Change history - What are the consolidated changes made since my initial universe version (version 0)?

Test Analyst

  • Changed sections - Which objects in the universe were modified, so that I can run my test cases only against those objects for a faster testing approach?


  • Difference log - What changes were made in the user's current report/universe compared with an earlier version? Based on this, I can restore the universe/report as per the user's requirement.
  • Consolidation for easier maintenance - What are the differences between the Sales universe and the Marketing universe? Can I merge them and create a single universe that fulfils the reporting requirements of both departments?

I hope you found this blog interesting. Let's start using Visual Difference and make our work even simpler and smarter. Thanks for reading!

As promised in SAP BusinessObjects XI 3.1/BI 4.x installation prerequisites and best practices [Windows] here is a continuation of the pre-requisites & best practices document for SAP BOE XI 3.1 and BI 4.x on Linux/Unix based environments.


The same concept applies as for Windows servers: allow the installer to run as uninterrupted as possible with respect to the OS parameters and settings. There are certainly deeper checks required on a *nix environment, but these should be taken care of during the build of a server and are common to most applications. Still, here are a few that have been observed to cause issues if not set correctly.


I've not included sizing here, as I wanted to keep this document to OS-related parameters and configurations only; if you wish to learn more about how to size an environment, visit: http://service.sap.com/sizing.




To start with, the hardware and software configuration of the server or client machine that we're installing SAP BusinessObjects on must be supported. SAP provides a supported platforms guide or "Product Availability Matrix" (PAM) for several products.

These can be found at the following URL: http://service.sap.com/pam

You can also refer to the following KBA: 1338845 - How to find Product Availability Matrix (PAM) / Supported Platforms Document for SAP BusinessObjects Products


Here is an example of a page from the PAM document. Along with compatibility with SAP and third-party software, the PAM also lists, for Unix, the required OS patch levels and any additional patches/libraries. The product version is also included in the screenshot.

Source: SAP




Ensure you're viewing the details of the correct OS / patch when viewing a PAM or Supported Platforms Guide.

**Visit the Sizing URL to know more about SAPS.





The most crucial piece of a Unix BO installation is the user. BO cannot be installed on Unix using the root user, nor is root access required post-installation to run BusinessObjects. Hence, it is best practice to have a separate user for the installation, for example bo41adm.

Add this user to an existing group or create a group separately for the BO user. bo41adm will own the BO installation and be used to run all scripts, etc.


The user must have sufficient rights on the following:

a. Installation directory

b. Installation media

The following rights must be given:

a. Read

b. Write

c. Execute

A minimum permission value of 755 is sufficient.







Just like on Windows, BusinessObjects is identified on a server by its hostname. Hence, we must ensure that the system has a valid and resolvable hostname.

To set a coherent hostname (e.g. bidev01, etc.), you can use the command:

hostname <desired name>

Next, make sure the hostname is associated with the machine's IP address in the hosts file, wherever relevant.

The hosts file on a Linux/Unix system is found in /etc.



This is a necessary step, because if the hostname cannot be resolved over the network, many services will not be accessible. Client tools have been observed to fail to connect to a BO cluster/host that cannot be resolved.
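A quick way to sanity-check resolution from any machine with Python installed (the hostname here is an example; use your server's name):

```python
import socket

def resolves(name: str):
    """Return the IPv4 address a hostname resolves to, or None if it can't be resolved."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

# e.g. resolves("bidev01") should return the address mapped in /etc/hosts or DNS;
# None means services and client tools will struggle to reach the host.
```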


If you're choosing to export SLD data to Solution Manager, the hostname is what is included in the ".xml" output of the BOBJ SLD REG batch. If the hostname/IP address mapping is not correct, the SLD will contain incorrect data, causing errors down the line.




Most access to a Unix-based server takes place through a terminal emulator/console such as PuTTY. This can be used to access a BO server from any machine that has network access to that server, which means we're installing remotely.


It is advised to use the terminal emulator from a machine which is in the same network as the server (or within the DMZ, if applicable). However, in scenarios where this is not possible, ensure that there is NO restriction on the network path that may hamper the installation.



E. SELinux:


...or Security-Enhanced Linux.

SELinux is an access control implementation for a Linux kernel. It provides security restrictions for certain kinds of executions.

For more details: Security-Enhanced Linux - Wikipedia, the free encyclopedia


Note that the SELinux steps below are for Red Hat Enterprise Linux.


Disable SELinux prior to performing a BusinessObjects installation. To disable it on RHEL 5, follow the steps below:

  1. Click on System on the main taskbar.
  2. Select Administration.
  3. Click on SELinux Management.
  4. Choose to keep this disabled.

See the below screenshot of how to disable SELinux on a RedHat Linux 5 OS.



You can also do this from the command prompt:

To check the status of SELinux: sestatus

To change the SELinux status persistently, edit /etc/selinux/config (e.g. set SELINUX=disabled) and reboot.






A Linux/Unix operating system has a methodology for sharing the available resources among a set of users, groups, domains, etc. These resources can be split up to allow optimum usage of an OS that has various applications installed on it, managed by different users.


A user can be allocated a certain amount of resources. Within the range it has been given, the user can set the required value (the soft limit), up to the admin-restricted maximum value (the hard limit).


For example, in RHEL5, we can see the limits using the ulimit -a command.



The limits configuration file is here: /etc/security/limits.conf


The root user has access to make changes to the configuration in this file.

It is recommended to set the limits to unlimited for BusinessObjects installations as mentioned in the Installation guides.

There have been issues observed with BI 4.0 installations on Linux which led to random services crashing. These issues were overcome by increasing the limits. See below KBAs:

1845973 - In BI 4.0 linux environments, random servers including CMS crash when starting SIA

1756728 - Servers fail randomly and the system becomes unstable in BI 4.0 installed on Linux
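As a sketch, limits can be inspected with `ulimit` and raised persistently in limits.conf; the entries below are illustrative only (the account name `bobjadm` is hypothetical, and the actual values should follow the installation guide and the KBAs above):

```shell
# Show all current soft limits for this shell
ulimit -a

# Show just the open-files limit, the one most often too low for BI 4.x
ulimit -n

# Persistent limits are set in /etc/security/limits.conf; written here to a
# local example file since editing the real one requires root:
cat <<'EOF' > ./limits.conf.example
bobjadm  soft  nofile  65536
bobjadm  hard  nofile  65536
bobjadm  soft  nproc   unlimited
bobjadm  hard  nproc   unlimited
EOF
cat ./limits.conf.example
```

The user must log out and back in (and the SIA must be restarted) for new limits to apply.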





In the PAM document, you will also find a list of LOCALES that BusinessObjects is supported with. A compatible locale needs to be set prior to installation.

To check the locale, you can type the command: locale

To set a specific locale, export the LANG (or LC_ALL) environment variable; for example: export LANG=en_US.utf8

The PAM document mentioned in point A presents a list of the supported locales.
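A minimal sketch of checking and setting a locale before launching the installer (en_US.utf8 is used as an example of a commonly supported value; confirm yours against the PAM):

```shell
# Show the locale currently in effect
command -v locale >/dev/null 2>&1 && locale

# List a few of the installed UTF-8 locales on this host
command -v locale >/dev/null 2>&1 && locale -a | grep -i 'utf' | head -n 3

# Set a supported locale for the installing session
export LANG=en_US.utf8
echo "$LANG"    # -> en_US.utf8
```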




Needless to say, the installing user's profile must have the correct access to the required Unix executables. Inaccessible binaries in the user's bin directories usually cause issues when the installer internally sources files, runs scripts, etc.

Issues have been observed where, if the path to the "ping" command is not included in $PATH, the installation does not proceed and fails with an INS00293 error.
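A quick pre-flight check along these lines can catch such cases before launching the installer (the command list and the /usr/sbin hint are illustrative; adjust to your platform):

```shell
# Verify that commands the installer shells out to can be resolved from $PATH
for cmd in ping sh grep sed awk; do
    command -v "$cmd" >/dev/null 2>&1 || echo "WARNING: $cmd not found in PATH"
done

# ping commonly lives in /usr/sbin or /bin; extend PATH before installing if needed:
# export PATH="$PATH:/usr/sbin:/bin"
```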




I hope the above helps towards a successful installation. These recommendations are based on several support cases where the installation had failed and one or more of these steps helped it reach completion!

In Linux/Unix environments there is a lot to verify at the OS level, including access rights. The basic idea here is to allow the product to install without any hindrance, access issues, etc. from the OS side.

You can find documentation for installation, deployment, etc. here: BI 4.0, BI 4.1 and BOE XI 3.1.





The first release of the integration of Lumira for BI4.1 has been released!



Here I cover the functionality available with the first release.  The functionality is delivered with an add-on Installer for BI4.1.  Minimum requirement is 4.1 SP3 and Lumira 1.17.   This also means you need HANA to run Lumira 1.17.

BI 4.1 SP4 is recommended if you want full LCM support.

The most basic workflow:

First, you will see that with Lumira 1.17, there is a new option to publish to BI



When you have created your data set and visualization, you will be able to publish those to the BI repository, just like you do with a Crystal Report or Web Intelligence report.



Once you have published your story and dataset, you can see the story listed directly in BI launchpad, inheriting the same folder security as your other reports.


From here, you can view this report just like you do with any other report type.  The "story" that you are viewing is just a visualization based on a data set.  This also includes support for viewing InfoGraphics, right in BI Launchpad.


This is also supported in OpenDocument, as you would expect with the other document types!


Universe Support.

Both UNV and UNX data sources are supported by Lumira.  This means you can leverage your existing universe infrastructure, AND underlying security.

Once you've acquired data from a universe, what you publish to BI is the dataset based on that universe. That dataset can then be scheduled and refreshed by the user. Below you can see the schedule option on a dataset, which gives you the standard scheduling options that you would see for a CR or Webi report.


Of course, static datasets, which are based on uploaded data, will not have the schedule option.

Note that the "force refresh on open" is not yet available.  However a user can refresh the data on demand if they have rights to do so.

When scheduling or refreshing, the same prompts with which the data was first acquired will be reused. At this time, changing prompts, or prompting the user during a manual refresh, is not yet supported.



Datasets are the building blocks of stories.  A story can be built from one or more datasets.  In BI, these datasets are stored in the CMC, in a new location:


These datasets can be secured and deleted, much like Explorer information spaces.


More on Datasets:

In Lumira, you 'prepare' a dataset, which can include merging data from a source like a universe with data from a local spreadsheet.


You can also perform other operations on the data before publishing it.  When you schedule a dataset to fetch the latest data, all the transforms that were applied will be replayed automatically.  This means any column joins/splits, custom calculations or combining of the data with a local Excel source will be reapplied.




ESRI map support

Support for ESRI ArcGIS is also available with this initial release, which gives you great interactive maps support.  Support for on-premise ESRI servers is not yet available with this first release.



Mobile Support

The SAP BI Mobile application will now make stories available through the Mobi application, where you can consume them directly alongside your existing Webi and other BI mobile content.


LCM Support

Full LCM support, including the dependency calculations, is included.  This means that promoting a story will allow you to include the dependent dataset, universe, connection object and all related security.   Do note that to have the full LCM UI for the Lumira integration, you must be on BI 4.1 SP4.  On 4.1 SP3, you can still use the LCM command line interface.


New BI Proxy Server

A new category of servers shows up, called "Lumira Services".   This service is responsible for proxying requests down to Lumira Server, which performs all the heavy lifting of the analytics.



Auditing, Monitoring, BI Enterprise Readiness

Standard monitoring metrics will show up for this new service.  Additionally, the standard 'who viewed what' that you expect to see in your audit log also becomes available.


Data Storage & setting Limits

When a dataset is published to BI4, the underlying data is actually stored in Lumira Server.  This is all done transparently behind the scenes and does not require any administration in Lumira Server.  In fact, it does not actually show up in your Lumira Server repository.

When a user refreshes a story based on a universe, they will get their own copy of the data stored temporarily.   An administrator can set size and time restrictions for this temporary storage.




Stories must be authored and edited in Lumira Desktop.  Authoring directly from BI Launchpad, as you would do with the Webi applet, is not yet supported.

Accessing HANA Live is not yet supported.  At this time, a static dataset from HANA must be published from desktop.




This is only the first integration release of Lumira which already packs in a lot of functionality and allows you to leverage your existing BI4 infrastructure.  The Lumira BI4 integration will continue to add more enhancements and functionality with more releases over the coming months.

This post shows some possible ways to report in Web Intelligence on users who have never logged in to the platform, using the SAP BO BI universe in SBOPRepositoryExplorer and, in the second method, combining it with the SAP BO BI Audit universe. It can also be a good way to verify that auditing is working as expected.



Method I (Without Audit data)



Some required components:

  1. SAP BO BI 4.x Platform;
  2. SBOPRepositoryExplorer connection and the universe;
  3. WebIntelligence to create the document.



Creating a report with users information:


From WRC or from BI LaunchPad, using Web Intelligence, we can create the following query to show the number of users in our SAP BO BI environment:



In our Test environment we have 2.921 users.

Now we can discover the number of users that never logged in to the Test environment:



It means that we have 2.921-2.676=245 users who have ever logged in to the Test environment.


With the following query we can show the list of users who never logged in to this environment:



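For reference, the same question can also be asked directly in Query Builder on the CMS. The sketch below relies on the SI_LASTLOGONTIME property of user objects (an assumption to verify against your version's documentation): users with no value for it have never logged in.

```sql
-- Sketch: list users with their last logon time; sort/filter the output to
-- spot users that have never logged in (empty SI_LASTLOGONTIME).
SELECT SI_ID, SI_NAME, SI_LASTLOGONTIME
FROM CI_SYSTEMOBJECTS
WHERE SI_KIND = 'User'
```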


Method II (with Audit data)




Some required components:

  1. SAP BO BI 4.x Platform;
  2. SAP BO BI Audit DB;
  3. SAP BO BI Audit Universe configured and pointing to Audit DB;
  4. Excel to save users from Audit Report;
  5. IDT (Information Design Tool) to configure SBOPRepositoryExplorer connection and the universe;
  6. WebIntelligence to create the document.



Creating Report with Users Login-Activity from Audit DB:


Using WebIntelligence and Audit universe:


- For result objects:



- Filter Event Status with: User Logon Succeeded, Named User Logon Succeeded and Concurrent Logon Succeeded.



At the end you have the following query:



Execute with Run Query:



Save the document for future use.

Export Report to Excel File:

Export report to Excel (XLS or XLSX):



In Excel, remove all blank rows above the header and all blank columns before "User Name"; also remove any special characters other than [a-Z][0-9].

You can also use SAP BO Live Office to retrieve data from the Audit universe and schedule it periodically.

Rename the report to the final table name in the universe:



Save it to a path visible to the SAP BO BI client tools (IDT and WRC) and to the SAP BO BI WebIProcessingServer.




Retrieve the SBOPRepositoryExplorer universe into IDT:


In IDT (Information Design Tool), create a Local Project and retrieve the universe from Repository Resources:






Configure Universe Connection attaching the Excel File:


To attach our Excel file definition to our universe, we must create a universe connection in IDT inside a project, for example:





Test Data from Excel in Connection:


Before continuing with the next steps, it is important to check that the Excel data can be read, that the path is correctly defined, and that the structure is as expected:



Import new Table (Excel) into Universe:


Now we can import the table into the Data Foundation:



Insert a join between the Excel table and the USERS table:



Configure Join:




Save the Data Foundation.



Define new objects in the Business Layer:


Here we can define in the Business Layer, under the "Users" folder, the new measure coming from the new table:



for example, with the following content:



and before saving and publishing the universe, create a query to test the results with all users and with users without a login:


- With login (in our example 2.921 users):



- Without login (in our example 2.761 users):



It means that we have only 2.921-2.761=160 users who have ever logged in to the Test environment.

Now we can publish our new universe to the CMS for the next topic.



Compare data from Method I and Method II


As you can observe, in "Method I" we have 245 logged-in users and in "Method II" 160. We want to discover which users differ between "Method I" and "Method II" and try to understand why those users were not recorded in the Audit DB.


- First, create a query with all logged-in users from both methods:



(245 users)


- Second, create a combined query (with MINUS) to get the list of users that were not included in the Audit DB records:



So we have to investigate why those 85 users were not recorded in the Audit DB.


That's all for the moment.

Jorge Sousa

This post shows one possible way to create a list of SAP BO LCM jobs using the SAP BO BI universe in SBOPRepositoryExplorer.




Some required components:

  1. SAP BO BI 4.x Platform;
  2. XML file with predefined CMS query;
  3. IDT to configure SBOPRepositoryExplorer connection and the universe;
  4. WebIntelligence to create the document.


For this example I'm using SAP BO BI 4.1 SP2.


Create CMS query for LCM Jobs:


We can create an XML file with the following content:

<?xml version='1.0' encoding='ISO-8859-15'?>
<Tables xmlns="http://www.w3.org/2011/SBOPTables">
    <TableDescription>LCM Jobs</TableDescription>
    <BOQuery>SELECT SI_ID, SI_NAME, SI_KIND FROM CI_INFOOBJECTS WHERE SI_KIND='LCMJob'</BOQuery>
</Tables>


Configure Universe Connection attaching the XML file:


To attach our XML file definition to our universe, we must create a universe connection in IDT inside a project, for example:





Create the Universe:


With the connection configured, we can create the Data Foundation and Business Layer:




Test the Universe:


Once the universe is created, we can test it before publishing to the CMS:



Publish Universe to CMS:


After testing, we can publish it to the CMS:



Create report in BI LaunchPad with WebI:


After publishing the universe, we can use it in WRC and in BI LaunchPad:


and the report can look like this:



Thanks and enjoy.

Jorge Sousa


Update 20/08/2014: Maintenance Schedule has finally been updated with the announcement of SAP BI 4.1 SP05.  See below.

Update 04/07/2014: Forward Fit Plan has been updated.  See below.

Update 23/06/2014: PAM has been updated.  See below.

Update 22/06/2014: Added Section: Maintenance Schedule.  See below.

Update 17/06/2014: What's New document has been updated.  Comments below.





SAP has released on Friday June 13th 2014, as planned in the Maintenance Schedule, Support Package 04 for the following products:

  • SBOP BI Platform 4.1 SP04 Server
  • SBOP BI Platform 4.1 SP04 Client Tools
  • SBOP BI Platform 4.1 SP04 Live Office
  • SBOP BI Platform 4.1 SP04 Crystal Reports for Enterprise
  • SBOP BI Platform 4.1 SP04 Integration for SharePoint
  • SBOP BI Platform 4.1 SP04 NET SDK Runtime
  • SAP BusinessObjects Dashboards 4.1 SP04
  • SAP BusinessObjects Explorer 4.1 SP04
  • SAP Crystal Server 2013 SP04
  • SAP Crystal Reports 2013 SP04


You can download these updates from the SAP Marketplace as a Full Install Package or Support Package (Update).


E.g.: Full Install



E.g.: Support Package (Update)




What's New?


The updated What's New document has been released a few days late, on 17/06/2014.  There are 9 new features and, unless I'm missing the point, none of them are very impressive.  However, there are tons of fixes (293 to be exact).


Tip: If the link above still shows an old version, refresh the page in your browser or press F5.



Supported Platform (Product Availability Matrix)


The updated SAP BusinessObjects BI 4.1 Supported Platforms (PAM) document has been released a week late on 23/06/2014.


Alternative URL: http://service.sap.com/pam


As far as I can tell, the following extra support has been added since SAP BI 4.1 SP03:


  • CMS + Audit Repository Support by Operating System
    • Microsoft SQL Server 2014
    • Oracle 12c
    • SAP HANA SPS08
    • Sybase ASE 16


  • Adobe Flash Player 12


  • SAP HANA Support per SAP BI Products


  • Java Runtime (JRE) 1.8 (For browser use with Web Intelligence - Not Server side aka JDK which is still 1.7)





The usual documents have been made available:










Forward Fit Plan


The SBOP Forward Fit Plan has finally been updated.  A few weeks late...  SAP BI 4.1 SP04 includes the following updates and fixes:


  • BI 4.1 Patch 3.1
  • BI 4.1 Patch 2.2 - 2.4
  • BI 4.1 Patch 1.6 - 1.8


  • BI 4.0 Patch 6.11 - 6.12
  • BI 4.0 Patch 7.7 - 7.9


  • XI 3.1 FP 6.4


Source: SBOP Forward Fit Plan

Maintenance Schedule

SAP BI 4.1 SP04 Patch 4.1 (Week 31 - August 2014)

SAP BI 4.1 SP04 Patch 4.2 (Week 35 - August/September 2014)

SAP BI 4.1 SP04 Patch 4.3 (Week 40 - October 2014)

SAP BI 4.1 SP04 Patch 4.4 (Week 44 - November 2014)

SAP BI 4.1 SP04 Patch 4.5 (Week 48 - November / December 2014)


SAP BI 4.1 SP05 is now announced for Week 47, 2014.  Release date should be around Friday November 21st.  Looking forward to it!


Source: Maintenance Schedule



Installing Updates


I have installed the following updates on my training server.


Updates Installed


    • SBOP BI Platform 4.1 SP04 Server
    • SBOP BI Platform 4.1 SP04 Client Tools
    • SAP BusinessObjects Explorer 4.1 SP04




    • Windows Server 2012
    • 4x Virtual Processors (Hyper-V)
    • 20 GB RAM




Bearing in mind my training server originally started with a clean installation of SP02, then patched to SP03 with 3x languages (English, French, Finnish), this is how long it took to install everything.


1. As always, the Preparing to install screen takes longer and longer...



2. This chart shows the time it took waiting for the Preparing screen to disappear then the installation time.  That's right, about 2.5 hours (151 minutes) just to patch SAP BI Platform 4.1 SP04 Server.


3. As always, when you click Finish, do NOT reboot straight away.  Wait for setupengine.exe to go away in your Task Manager.  This can take a minute or so.




4. How it looks for me now.





Past Articles


For information, I wrote the following articles about previous SAP BI Support Packages:






It's still early days and there are a couple of documents that need to be updated, but so far so good.  No errors in the Event Viewer, all services are starting as they should and some preliminary tests are successful.


As always, please share how it went for you in the comments below.  I'm sure this helps many people.



Please like and rate this article if you find it useful!



Thanks and good luck!



Here's some exciting news for you enterprise data connectivity junkies out there: SAP's BI 4.1 suite will support Hive2 and Impala connectivity via ODBC and JDBC drivers from Simba Technologies. And later in the year, so too will SAP's Lumira data visualization software.


For Simba Technologies, it's a mutually-rewarding partnership: Simba shares SAP's broad commitments to enterprise Big Data innovation, integration, productivity, and efficiency. But beyond that, why should Simba's connectivity "plumbing" matter to SAP's customers?


SAP BI 4.1 + Simba Connectivity = The Future of Big Data Interoperability

SAP's adoption of Simba connectivity drivers represents SAP taking a progressive stand for the innovation enterprise: From the data warehouse to the BI application to the Hadoop framework of choice, when it comes to Big Data in the enterprise, interoperability matters. A lot.


The SAP BI 4.1 Suite now includes Simba ODBC and JDBC drivers as embedded components. SAP BI 4.1 customers can easily connect their Big Data directly to Hive or Impala. Queries are faster, performance is better, and reliability is so good enterprise IT can take it for granted. (Not that enterprise IT would or should ever take anything for granted, of course!)


Interoperability and Extensibility: How Best-in-class Connectivity Impacts SAP Enterprises

What's really meaningful about this partnership is that it signifies SAP's commitment to interoperability. The Simba JDBC drivers for Big Data, for example, adhere to the JDBC standards. For SAP BI 4.1 customers, that means accessibility to more apps, more platforms, more data sources. The SAP BI 4.1 Suite is a first-class diplomatic citizen in the Big Data world because it can connect directly to any Hadoop distribution – no need to get drivers from the Hadoop distros – SAP has it all built in. SAP BI 4.1 customers get this direct Big Data and Hadoop connectivity using the same tools and products they have always used.


Today BI 4.1, Tomorrow Lumira

BI 4.1’s support for Hive and Impala connectivity via SIMBA drivers is a first step (or the first two steps) in optimizing Hadoop connectivity for SAP enterprises. SAP has cranked the throttle when it comes to optimizing Hadoop SQL engine performance. And the next step is on the BI visualization side: Lumira, SAP’s innovative solution for visualizing all that big data, will adopt Simba JDBC drivers later this year. The right tools, optimized connectivity, and screaming fast query speed: It’s a great time to be an SAP Big Data enterprise.

