
Dear SCN user,


We are happy to inform you about the availability of the updated SAP Analytics Roadmap slides.



The slides include updated features and benefits of solutions released since the last roadmap, such as:

  • SAP BusinessObjects Business Intelligence platform 4.1, SP5
  • SAP BusinessObjects Mobile 6.1
  • SAP Lumira 1.25
  • SAP Lumira Server
  • SAP BusinessObjects Analysis, edition for Microsoft Office, version 2.0
  • SAP BusinessObjects Design Studio 1.5
  • SAP Predictive Analytics 2.0
  • Updated Planned Innovations for all Solutions


You can download the updated roadmaps via the links:

Overall Analytics Roadmap*

Analytics BW Roadmap*

Analytics Agnostic Roadmap*

* User Account required for SAP Support Page

Kind Regards


This was an ASUG webcast the other week, with a focus on BI (not predictive or HANA)


On a different webcast I became aware of this related document about licensing - see here


Figure 1: Source: SAP


Everyone's contract is different




Figure 2: Source: SAP


There have been multiple BI license models over time



Figure 3: Source: SAP


Figure 3 shows the context of BI license models; SAP has previously had add-on models


SAP has moved to suite style licensing


Differences in the BI suite license are shown on the right, including Desktop and Lumira Server


Figure 4: Source: SAP


Figure 4 shows the core licensing principles


There is no obligation or requirement to convert licensing


SAP wants to be transparent in license models


License models are non-version specific


Figure 5: Source: SAP


SAP no longer sells CPU licenses to new customers, only to existing ones


NUL stands for named user license – for managers, power users, most desktop tools


CSBL (concurrent session based licenses) are for casual users who don’t require guaranteed access


In the CMC, users are configured as NUL or Concurrent


Figure 6: Source: SAP


Figure 6 shows 1 logon = 1 session


Figure 7: Source: SAP


It still counts as one session when navigating within that logon


Figure 8: Source: SAP


Figure 8 shows SAP is moving away from CPU-based licenses because it wanted to remove constraints


Part 2 is coming when time allows




Upcoming ASUG BI Webcast Listing

Carsten Mönning and Waldemar Schiller

Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins), http://bit.ly/1dqm8yO
Part 2 - Hive on Hadoop (~40 mins), http://bit.ly/1Biq7Ta

Part 3 - Hive access with SAP Lumira (~30 mins)
Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45 mins)

Part 3 - Hive access with SAP Lumira (~30 mins)

In the first two parts of this blog series, we installed Apache Hadoop 2.6.0 and Apache Hive 1.1.0 on a Raspberry Pi 2 Model B, i.e. a single node Hadoop 'cluster'. This proved perhaps surprisingly nice and easy with the Hadoop principle allowing for all sorts of commodity hardware and HDFS, MapReduce and Hive running just fine on top of the Raspbian operating system. We demonstrated some basic HDFS and MapReduce processing capabilities by word counting the Apache Hadoop license file with the help of the word count programme, a standard element of the Hadoop jar file. By uploading the result file into Hive's managed data store, we also managed to experiment a little with HiveQL via the Hive command line interface and queried the word count result file contents.

In this Part 3 of the blog series, we will pick up things at exactly this point by replacing the HiveQL command line interaction with a standard SQL layer over Hive/Hadoop in the form of the Apache Hive connector of the SAP Lumira desktop trial edition. We will be interacting with our single node Hadoop/Hive setup just like any other SAP Lumira data source and will be able to observe the actual SAP Lumira-Hive server interaction on our Raspberry Pi in the background. This will be illustrated using the word count result file example produced in Parts 1 and 2.




Apart from having worked your way through the first two parts of this blog series, you will need to get hold of the latest SAP Lumira desktop trial edition at http://saplumira.com/download/ and operate the application on a dedicated (Windows) machine locally networked with your Raspberry Pi.

If interested in details regarding SAP Lumira, you may want to have a look at [1] or the SAP Lumira tutorials at http://saplumira.com/learn/tutorials.php.

Hadoop & Hive server daemons

Our SAP Lumira queries of the word count result table created in Part 2 will interact with the Hive server operating on top of the Hadoop daemons. So, to kick off things, we need to launch those Hadoop and Hive daemon services first.

Launch the Hadoop server daemons in your Hadoop sbin directory. Note that I chose to rename the standard Hadoop directory to "hadoop" in Part 1. So you may have to replace the directory path below with whatever Hadoop directory name you chose to set (or chose to keep).
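For example, with the /opt/hadoop location and the start scripts from Part 1, this boils down to:

cd /opt/hadoop/sbin
./start-dfs.sh
./start-yarn.sh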



Similarly, launch the Hive server daemon in your Hive bin directory, again paying close attention to the actual Hive directory name set in your particular case.
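As a sketch, assuming the ~/hive-1.1.0 directory name from Part 2, and HiveServer2 (which the Lumira connector used below expects) listening on its default port 10000:

cd ~/hive-1.1.0/bin
./hiveserver2 &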



The Hadoop and Hive servers should be up and running now and ready for serving client requests. We will submit these (standard SQL) client requests with the help of the SAP Lumira Apache Hive connector.


SAP Lumira installation & configuration

Launch the SAP Lumira installer downloaded earlier on your dedicated Windows machine. Make sure the machine is sharing a local network with the Raspberry Pi device with no prohibitive firewall or port settings activated in between.


The Lumira Installation Manager should go smoothly through its motions as illustrated by the screenshots below.



On the SAP Lumira start screen, activate the trial edition by clicking the launch button in the bottom right-hand corner. When done, your home screen should show the number of trial days left, see also the screenshot below. Advanced Lumira features such as the Apache Hive connector will not be available to you if you do not activate the trial edition by starting the 30-day trial period.



With the Hadoop and Hive services running on the Raspberry Pi and the SAP Lumira client running on a dedicated Windows machine within the same local network, we are all set to put a standard SQL layer on top of Hadoop in the form of the Lumira Apache Hive connector.


Create a new file and select "Query with SQL" as the source for the new data set.


Select the "Apache Hadoop Hive 0.13 Simba JDBC HiveServer2  - JDBC Drivers" in the subsequent configuration sreen.



Enter both your Hadoop user (here: "hduser") and password combination as chosen in Part 1 of this blog series as well as the IP address of your Raspberry Pi in your local network. Add the Hive server port number 10000 to the IP address (see Part 2 for details on some of the most relevant Hive port numbers).
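For illustration only (the IP address below is a placeholder; use your Pi's actual address):

Server:   192.168.0.110:10000
User:     hduser
Password: <the hduser password chosen in Part 1>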


If everything is in working order, you should be shown the catalog view of your local Hive server running on Raspberry Pi upon pressing "Connect".


In other words, connectivity to the Hive server has been established and Lumira is awaiting your free-hand standard SQL query against the Hive database. A simple 'select all' against the word count result Hive table created in Part 2, for example, means that the full result data set will be uploaded into Lumira for further local processing.
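Assuming the wcount_t table created in Part 2, such a query is simply:

select * from wcount_t;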


Although this might not seem all that mightily impressive to the undiscerning, remind yourself of what Parts 1 and 2 taught us about the things actually happening behind the scenes. More specifically, rather than launching a MapReduce job directly within our Raspberry Pi Hadoop/Hive environment to process the word count data set on Hadoop, we launched a HiveQL query and its subsequent MapReduce job using standard SQL pushed down to the single node Hadoop 'cluster' with the help of the SAP Lumira Hive connector.


Since the Hive server pushes its return statements to standard out, we can actually observe the MapReduce job processing of our SQL query on the Raspberry Pi.


An example (continued)

We already followed up on the word count example built up over the course of the first two blog posts by showing how to upload the word count result table sitting in Hive into the SAP Lumira client environment. With the word count data set fully available within Lumira now, the entire data processing and visualisation capabilities of the Lumira trial edition are available to you to visualise the word count results.


By way of inspiration, you may, for example, want to cleanse the license file data in the Lumira data preparation stage first by removing any punctuation data from the Lumira data set so as to allow for a proper word count visualisation in the next step.



With the word count data properly cleansed, the powerful Lumira visualisation capabilities can be applied freely to the data set to produce, for example, a word count aggregate measure as shown immediately below.



Let's conclude this part with some Lumira visualisation examples.







In the next and final blog post, we will complete our journey from a non-assembled Raspberry Pi 2 Model B bundle kit via a single node Hadoop/Hive installation to a 'fully-fledged' Raspberry Pi Hadoop cluster. (Though it will be a two-node cluster only, it will do just fine to showcase the principle.)



SAP Lumira desktop trial edition - http://saplumira.com/download/

SAP Lumira tutorials - http://saplumira.com/learn/tutorials.php
A Hadoop data lab project on Raspberry Pi - Part 1/4 - http://bit.ly/1dqm8yO
A Hadoop data lab project on Raspberry Pi - Part 2/4 - http://bit.ly/1Biq7Ta


[1] C. Ah-Soon and P. Snowdon, "Getting Started with SAP Lumira", SAP Press, 2015

Continuing with the security series, I will cover staying up to date with security patches for BI.

While SAP practices a complete security development lifecycle, the security landscape continues to evolve, and through both internal and external security testing we become aware of new security issues in our products.  Every effort is then made to provide a timely fix to keep our customers secure. 


This is part 4 of my security blog series on securing your BI deployment.


Secure Your BI Platform Part 1

Secure Your BI Platform Part 2 - Web Tier

Securing your BI Platform part 3 - Servers


Regular patching:

You're probably familiar with running monthly patches for Windows updates, "Patch Tuesday", on the second Tuesday of every month.

SAP happens to follow a similar pattern, where we release information about security patches available for our customers, for the full suite of SAP products.


BI security fixes are shipped as part of fixpacks and service packs. 

Here I will walk you through signing up for notifications.



Begin by navigating to https://support.sap.com/securitynotes


Click on "My Security Notes*"


This will take you to another link, where you can "sign up to receive notifications"



Click on "Define Filter" , where you can filter for the BI product suite.


Sign up for email notifications:


Defining the filter: Search for SBOP BI Platform (Enterprise)

And select the version:


Note that currently the search does not appear to filter on version unfortunately, so you will likely see all issues listed.


Your resulting filter should look something like this:



The security note listing will look something like this:



Understanding the security notes:

Older security notes have a verbal description of the versions affected and the patches that contain the fix.

For example, the note will say "Customers should install fix pack 3.7 or 4.3"...


Newer notes will also have the table describing the versions affected and where the fixes shipped:

Interpreting the above, the issue affects XI 3.1, 4.0 and 4.1.

Fixes are provided in XI 3.1 Fix Packs 6.5 & 7.2, in 4.0 SP10, and in 4.1 SP4.


The forward fit policy is the same as "normal" fixes, meaning a higher version of the support patch line will also include the fixes.


The security note details will also contain a CVSS score.  CVSS = Common Vulnerability Scoring System.

It is basically a 0 - 10 scoring system to give you an idea of how quickly you should apply the patch.

More info on the scoring system https://nvd.nist.gov/cvss.cfm


1. Vulnerabilities are labeled "Low" severity if they have a CVSS base score of 0.0-3.9.

2. Vulnerabilities are labeled "Medium" severity if they have a CVSS base score of 4.0-6.9.

3. Vulnerabilities are labeled "High" severity if they have a CVSS base score of 7.0-10.0.


In short, if you see a 10.0, you better patch quickly!


Not applying the latest security fixes can cause you to fail things like PCI compliance, so after you have locked down & secured your environment, please make sure you apply the latest fixes and keep the bad guys out!

Share your insights for the future of BI; Complete the BARC BI Survey 2015


Until the end of the month, the BI Survey 2015 of BARC Research is open for everyone willing to share their insights on the direction of BI.

Do you want to share your insights and make your voice heard?

  • The Survey is scheduled to run until the end of May
  • It should take you about 20 minutes to complete
  • Business and technical users, as well as consultants, are all welcome to participate
  • Answers will be used anonymously
  • Participants will:
    • Receive a summary of the results from the survey when it is published
    • Be entered into a draw to win one of ten $50 Amazon vouchers
    • Ensure that your experiences are included in the final analyses


You can take the survey via : https://digiumenterprise.com/answer/?link=2319-HZXG9J6B


Thanks in advance


This was a SAP user group webcast today.  I was late but towards the end the SAP speaker said SAP Safe Harbor statement applies:


"This blog, or any related presentation and SAP's strategy and possible future developments, products and or platforms directions and functionality presented herewith are all subject to change and may be changed by SAP at any time for any reason without notice. The information on this blog is not a commitment, promise or legal obligation to deliver any material, code or functionality..."


This means anything in the future is subject to change and these are my notes as I heard them.


Enterprise BI 4.2


Figure 1: Source: SAP


Figure 1 shows the themes of BI4.2, overall being simplified, enhanced and extended


Figure 2: Source: SAP


Design Studio 1.5 has offline click-through applications, the ability to reduce the design time it takes to create charts, and Lumira interoperability (import Lumira into Design Studio). Version 1.5 includes commentary / create use cases, and export of data to PDF


Analysis Office/EPM will consolidate into one plug-in with one Analysis Office app for the BI suite.  The right-hand side includes features for BI 4.1 SP06, planned for next month.


Enterprise BI 4.2


Figure 3: Source: SAP


Figure 3 shows what is planned for BI 4.2, including commentary for Web Intelligence, design features for mobile devices, and HANA direct access to the universe


BI 4.2 Web Intelligence includes support for big numbers and set consumption. With set analysis, SAP is re-introducing the ability to create and consume sets in Web Intelligence


BI Platform features include commentary and a recycle bin in the CMC, plus enhancements to the UMT and promotion tool to speed up promotions and upgrades


A packaged audit feature is in the suite


On the semantic layer, linked UNX universes are back


Authoring universes on BEx queries was disabled in BI 4.0/4.1 and is now back


Set Analysis is back


Installation improvements include a one-step update that is faster to upgrade, as the current installation patching hasn't been the best


There is a utility to remove unused binaries


Other items include an enhanced DSL bridge, an enhanced BICS bridge, and HANA enhancements for Web Intelligence on HANA; SAP is committed to enhancing the Web Intelligence & BW experience.

SAP Lumira Roadmap


Figure 4: Source: SAP


Plans for SAP Lumira include convergence and search


Question and Answer

Q: What happened with the "Dr. Who" version of WebI without the microcube?

A: Project cancelled; wanted to put support for Lumira for HANA-based integration

However, enhanced HANA based support for JDBC connection


Q: When will SP06 be available?

A: planned for 3rd week of June

Codeline finished yesterday – subject to safe harbour


Q: Recycle bin for Infoview?

A: It is just for CMC; submit for Idea Place


Q: Any plan to provide the option of linking data providers which was available in XI versions?

A: Enter in Idea Place


Q: We are about to upgrade from 4.0 SP7 P5 to 4.1 SP4; should we upgrade to 4.1 SP6 instead?

A: Difficult question to answer; may be better to delay


Q: when will the PAM for 4.1.6 be available?

A: Third week in June (planned)


Q:  Specifically, what offline capabilities are planned for Design Studio (in the context of Mobile BI for iOS)?

A:  A cache-based setting when consumed on the device; will find out for sure


Q: Are there enhancements to the RESTful Web services API? Specifically can we now create and manage users using the API so we can get away from the .NET SDK?

A: Convergence to RESTful web services is the strategy; the nuances need a white paper


Q: Will there be full support for 'selection option' variables in Web Intelligence i.e. same functionality as in BEx?

A: put on Idea Place


Q: Is there provision for sensor and similar type data sources - IoT

A: Roadmap for IoT is within HANA – datasources for HANA


Q: Will BEx conditions be supported?

A: Look at  Idea Place


Q: Can we make Web Intelligence prompts hidden so that once a prompt value is set the prompt box will not appear?

A: Idea Place


Q: any enhancements (fixes) to integrity check in IDT tool?

A: Don’t know of anything new that has been added


Q: Will there be support for variables in defaults area of BEx queries?

A: Currently not supported in Web Intelligence; how much of BEx queries should surface to Web Intelligence


Q: Are there any plans to enhance Publications, specifically making Delivery rules available to Web Intelligence documents

A: Add to Idea Place; publications were not enhanced between 3.x and 4.x


Q: Can you say more about differentiation of Lumira from competitors?  It looks to me that despite frequent releases you are still playing catch up.

A: This is why roadmap is substantial



SAP BI Suite Roadmap Strategy Update from ASUG SAPPHIRENOW

ASUG Webinars - May 2015

SAP SAPPHIRE and the ASUG Annual Conference were held last week at the Orange County Convention Center in Orlando, Florida. While most of the keynote action centered on S/4HANA and Hasso Plattner's Boardroom of the Future (see related Fortune article), there were three key messages in the analytics booths on the show floor.


All Roads (Still) Lead to SAP BusinessObjects BI 4.1


First, just in case you weren't paying attention, all roads (still) lead to SAP BusinessObjects BI 4.1 (see my previous State of the SAP BusinessObjects BI 4.1 Upgrade from December 2014). With mainstream support for SAP BusinessObjects Enterprise XI 3.1 and SAP BusinessObjects BI 4.0 ending on December 31, 2015, the race is on to get as many SAP customers as possible to the BI 4.1 platform. With the end of year quickly approaching, the time is now to get started on your BI 4.1 upgrade. SAP BusinessObjects BI 4.1 Support Pack 5 (SP5) is currently available (along with 5 patches) and Support Pack 6 (SP6) is still on track for mid-year. You couldn't see SP6 on the show room floor, but it started showing up in "coming soon" slide decks from SAP presenters. I'm curious to see free-hand SQL support in Web Intelligence and UNX support in Live Office, among other minor enhancements. SAP is also starting to talk about SAP BusinessObjects BI 4.2 (see Tammy Powlas' blog entitled  SAP BI Suite Roadmap Strategy Update from ASUG SAPPHIRENOW), but it most likely won't be ready in time for the impending support deadline. Instead, you should think of BI 4.2 as a small upgrade project once your organization is solidly using BI 4.1.


SAP Design Studio 1.5


SAP's second analytics message was about SAP Design Studio. I attended Eric Schemer's World Premiere of Design Studio 1.5 session (see Tammy Powlas' blog entitled World Premiere SAP Design Studio 1.5 ASUG Annual Conference - Part 1). SAP Design Studio is the go-forward tool to replace both SAP Dashboards (formerly Xcelsius) and SAP Web Application Designer (WAD). Version 1.5 adds several new built-in UI capabilities, OpenStreetMap integration, and parallel query, just to name a few innovations. If your organization is not yet ready to start using Design Studio, remember that a new version arrives roughly every 6 months. Depending on your organization's own time table to begin using Design Studio, it might make sense to wait until the end of the year for Design Studio 1.6.


SAP Lumira on BI 4.1

SAP's third key message to analytics customers was about SAP Lumira. SAP Lumira v1.25 is a really big deal. The Lumira Desktop (starting with v1.23) includes a brand-new in-memory database engine that replaces the IQ-derived engine. Starting with v1.25, this engine is also available for the SAP BI 4.1 platform as an add-on, bringing SAP Lumira documents to the BI 4.1 platform (see Sharon Om's blog entitled What's New in SAP Lumira 1.25). No matter if you're currently on XI 3.1, BI 4.0 or BI 4.1, you'll want to plan for increasing the hardware footprint of your BI 4.1 landscape to accommodate the new in-memory engine, which runs best on a dedicated node (or nodes, depending on sizing) in your BI 4.1 landscape.



With BI 4.1 SP5, Design Studio 1.5, and Lumira 1.25, there are lots of new capabilities available for the BI platform starting today. And many more are planned for BI 4.1 SP6 and BI 4.2 over the next six to nine months. If you weren't able to attend SAP SAPPHIRE in person, you'll no doubt be hearing more on SAP webcasts and at the upcoming 2015 ASUG SAP Analytics and BusinessObjects User Conference, August 31 through September 2 in Austin, Texas.

Carsten Mönning and Waldemar Schiller

Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins), http://bit.ly/1dqm8yO

Part 2 - Hive on Hadoop (~40 mins)

Part 3 - Hive access with SAP Lumira (~30 mins)

Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45 mins)


Part 2 - Hive on Hadoop (~40 mins)

Following on from the Hadoop core installation on a Raspberry Pi 2 Model B in Part 1 of this blog series, in this Part 2, we will proceed with installing Apache Hive on top of HDFS and show its basic principles with the help of last part's word count Hadoop processing example.

Hive represents a distributed relational data warehouse featuring a SQL-like query language, HiveQL, inspired by the MySQL SQL dialect. A high-level comparison of HiveQL and SQL is provided in [1]. For a HiveQL command reference, see: https://cwiki.apache.org/confluence/display/Hive/LanguageManual.

The Hive data sits in HDFS with HiveQL queries getting translated into MapReduce jobs by the Hadoop run-time environment. Whilst traditional relational data warehouses enforce a pre-defined meta data schema when writing data to the warehouse, Hive performs schema on read, i.e., the data is checked when a query is launched against it. Hive alongside the NoSQL data warehouse HBase represent frequently used components of the Hadoop data processing layer for external applications to push query workloads towards data in Hadoop. This is exactly what we are going to do in Part 3 of this series when connecting to the Hive environment via the SAP Lumira Apache Hive standard connector and pushing queries through this connection against the word count output file.


First, let us get Hive up and running on top of HDFS.


Hive installation
The latest stable Hive release will operate alongside the latest stable Hadoop release and can be obtained from Apache Software Foundation mirror download sites. Initiate the download, for example, from spacedump.net and unpack the latest stable Hive release as follows. You may also want to rename the binary directory to something a little more convenient.

cd ~/
wget http://apache.mirrors.spacedump.net/hive/stable/apache-hive-1.1.0-bin.tar.gz
tar -xzvf apache-hive-1.1.0-bin.tar.gz
mv apache-hive-1.1.0-bin hive-1.1.0

Add the paths to the Hive installation and the binary directory, respectively, to your user environment.

cd hive-1.1.0
export HIVE_HOME=$(pwd)
export PATH=$HIVE_HOME/bin:$PATH

Make sure your Hadoop user chosen in Part 1 (here: hduser) has ownership rights to your Hive directory.

sudo chown -R hduser:hadoop ~/hive-1.1.0

To be able to generate tables within Hive, run the Hadoop start scripts start-dfs.sh and start-yarn.sh (see also Part 1). You may also want to create the following directories and access settings.

hadoop fs -mkdir -p /tmp
hadoop fs -chmod g+w /tmp
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -chmod g+w /user/hive/warehouse

Strictly speaking, these directory and access settings assume that you are intending to have more than one Hive user sharing the Hadoop cluster and are not required for our current single Hive user scenario.

By typing in hive, you should now be able to launch the Hive command line interface. By default, Hive issues information to standard error in both interactive and noninteractive mode. We will see this effect in action in Part 3 when connecting to Hive via SAP Lumira. The -S parameter of the hive statement will suppress any feedback statements.
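For example, to run a single HiveQL statement from the shell in silent, noninteractive mode:

hive -S -e 'show tables;'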

Typing in hive --service help will provide you with a list of all available services [1]:


cli: Command-line interface to Hive. The default service.

hiveserver: Hive operating as a server for programmatic client access via, for example, JDBC and ODBC. Port 10000; port configuration parameter HIVE_PORT.

hwi: Hive web interface for exploring the Hive schemas. HTTP port 9999; port configuration parameter hive.hwi.listen.port.

jar: Hive equivalent to hadoop jar. Will run Java applications in both the Hadoop and Hive classpath.

metastore: Central repository of Hive meta data.

If you are curious about the Hive web interface, launch hive --service hwi, enter http://localhost:9999/hwi in your browser and you will be shown something along the lines of the screenshot below.


If you run into any issues, check out the Hive error log at /tmp/$USER/hive.log. Similarly, the Hadoop error logs presented in Part 1 can prove useful for Hive debugging purposes.

An example (continued)

Following on from our word count example in Part 1 of this blog series, let us upload the word count output file into Hive's local managed data store. You need to generate the Hive target table first. Launch the Hive command line interface and proceed as follows.

create table wcount_t(word string, count int) row format delimited fields terminated by '\t' stored as textfile;

In other words, we just created a two-column table consisting of a string and an integer field delimited by tabs and featuring newlines for each new row. Note that HiveQL expects a command line to be finished with a semicolon.
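To double-check the table definition, you can, for example, describe it:

describe wcount_t;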


The word count output file can now be loaded into this target table.

load data local inpath '~/license-out.txt/part-r-00000' overwrite into table wcount_t;

Effectively, the local file part-r-00000 is stored in the Hive warehouse directory which is set to /user/hive/warehouse by default. More specifically, part-r-00000 can be found in the Hive directory /user/hive/warehouse/wcount_t.
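You can confirm this on the HDFS side, for example, via:

hadoop fs -ls /user/hive/warehouse/wcount_t

Back in the Hive command line interface, you may now query the table contents.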

show tables;

select * from wcount_t;

If everything went according to plan, your screen should show a result similar to the screenshot extract below.



If so, it means you managed to both install Hive on top of Hadoop on Raspberry Pi 2 Model B and load the word count output file generated in Part 1 into the Hive data warehouse environment. In the process, you should have developed a basic understanding of the Hive processing environment, its SQL-like query language and its interoperability with the underlying Hadoop environment.


In the next part of this series, we will bring the implementation and configuration effort of Parts 1 & 2 to fruition by running SAP Lumira as a client against the Hive server and will submit queries against the word count result file in Hive using standard SQL with the Raspberry Pi doing all the MapReduce work. Lumira's Hive connector will translate these standard SQL queries into HiveQL so that things appear pretty standard from the outside. Having worked your way through the first two parts of this blog series, however, you will be very much aware of what is actually going on behind the scene.



Apache Software Foundation Hive Distribution - Index of /hive

Apache Hive wiki - https://cwiki.apache.org/confluence/display/Hive/GettingStarted

Apache Hive command reference - https://cwiki.apache.org/confluence/display/Hive/LanguageManual

A Hadoop data lab project Part 1 - http://bit.ly/1dqm8yO

Configuring Hive ports - http://docs.hortonworks.com/HDP2Alpha/index.htm#Appendix/Ports_Appendix/Hive_Ports.htm


[1] T. White, "Hadoop: The Definitive Guide", 3rd edition, O'Reilly, USA, 2012

Carsten Mönning and Waldemar Schiller

Hadoop has developed into a key enabling technology for all kinds of Big Data analytics scenarios. Although Big Data applications have started to move beyond the classic batch-oriented Hadoop architecture towards near real-time architectures such as Spark, Storm, etc., [1] a thorough understanding of the Hadoop & MapReduce & HDFS principles and services such as Hive, HBase, etc. operating on top of the Hadoop core still remains one of the best starting points for getting into the world of Big Data. Renting a Hadoop cloud service or even getting hold of an on-premise Big Data appliance will get you Big Data processing power but no real understanding of what is going on behind the scene.

To inspire your own little Hadoop data lab project, this four part blog will provide a step-by-step guide for the installation of open source Apache Hadoop from scratch on Raspberry Pi 2 Model B over the course of the next three to four weeks. Hadoop is designed for operation on commodity hardware so it will do just fine for tutorial purposes on a Raspberry Pi. We will start with a single node Hadoop setup, will move on to the installation of Hive on top of Hadoop, followed by using the Apache Hive connector of the free SAP Lumira desktop trial edition to visually explore a Hive database. We will finish the series with the extension of the single node setup to a Hadoop cluster on multiple, networked Raspberry Pis. If things go smoothly and varying with your level of Linux expertise, you can expect your Hadoop Raspberry Pi data lab project to be up and running within approximately 4 to 5 hours.

We will use a simple, widely known processing example (word count) throughout this blog series. No prior technical knowledge of Hadoop, Hive, etc. is required. Some basic Linux/Unix command line skills will prove helpful throughout. We are assuming that you are familiar with basic Big Data notions and the Hadoop processing principle. If not so, you will find useful pointers in [3] and at: http://hadoop.apache.org/. Further useful references will be provided in due course.

Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins)

Part 2 - Hive on Hadoop (~40 mins)

Part 3 - Hive access with SAP Lumira (~30 mins)

Part 4 - A Hadoop cluster on Raspberry Pi 2 Model B(s) (~45 mins)




Part 1 - Single node Hadoop on Raspberry Pi 2 Model B (~120 mins)



To get going with your single node Hadoop setup, you will need the following Raspberry Pi 2 Model B bits and pieces:

  • One Raspberry Pi 2 Model B, i.e. the latest Raspberry Pi model featuring a quad core CPU with 1 GB RAM.
  • 8GB microSD card with NOOBS (“New Out-Of-the-Box Software”) installer/boot loader pre-installed (https://www.raspberrypi.org/tag/noobs/).
  • Wireless LAN USB card.
  • Mini USB power supply, heat sinks and HDMI display cable.
  • Optional, but recommended: A case to hold the Raspberry circuit board.

To make life a little easier for yourself, we recommend going for a Raspberry Pi accessory bundle which typically comes with all of these components pre-packaged and will set you back approx. € 60-70.


We intend to install the latest stable Apache Hadoop and Hive releases available from any of the Apache Software Foundation download mirror sites, http://www.apache.org/dyn/closer.cgi/hadoop/common/, alongside the free SAP Lumira desktop trial edition, http://saplumira.com/download/, i.e.

  • Hadoop 2.6.0
  • Hive 1.1.0
  • SAP Lumira 1.23 desktop edition

The initial Raspberry setup procedure is described by, amongst others, Jonas Widriksson at http://www.widriksson.com/raspberry-pi-hadoop-cluster/. His blog also provides some pointers in case you are not starting off with a Raspberry Pi accessory bundle but prefer obtaining the hard- and software bits and pieces individually. We will follow his approach for the basic Raspbian setup in this part, but updated to reflect Raspberry Pi 2 Model B-specific aspects and providing some more detail on various Raspberry Pi operating system configuration steps. To keep things nice and easy, we are assuming that you will be operating the environment within a dedicated local wireless network thereby avoiding any firewall and port setting (and the Hadoop node & rack network topology) discussion. The basic Hadoop installation and configuration descriptions in this part make use of [3].

The subsequent blog parts will be based on this basic setup.


Raspberry Pi setup

Powering on your Raspberry Pi will automatically launch the pre-installed NOOBS installer on the SD card. Select “Raspbian”, a Debian 7 Wheezy-based Linux distribution for ARM CPUs, from the installation options and wait for its subsequent Installation procedure to complete. Once the Raspbian operating system has been installed successfully, your Raspberry Pi will reboot automatically and you will be asked to provide some basic configuration settings using raspi-config. Note that since we are assuming that you are using NOOBS, you will not need to expand your SD card storage (menu Option Expand Filesystem). NOOBS will already have done so for you. By the way, if you want or need to run NOOBS again at some point, press & hold the shift key on boot and you will be presented with the NOOBS screen.


Basic configuration

What you might want to do though is to set a new password for the default user “pi” via configuration option Change User Password. Similarly, set your internationalisation options, as required, via option Internationalisation Options.


More interestingly in our context, go for menu item Overclock and set a CPU speed to your liking taking into account any potential implications for your power supply/consumption (“voltmodding”) and the life-time of your Raspberry hardware. If you are somewhat optimistic about these things, go for the “Pi2” setting featuring 1GHz CPU and 500 MHz RAM speeds to make the single node Raspberry Pi Hadoop experience a little more enjoyable.


Under Advanced Options, followed by submenu item Hostname, set the hostname of your device to “node1”.  Selecting Advanced Options again, followed by Memory Split, set the GPU memory to 32 MB.


Finally, under Advanced Options, followed by SSH, enable the SSH server and reboot your Raspberry Pi by selecting <Finish> in the configuration menu. You will need the SSH server to allow for Hadoop cluster-wide operations.

Once rebooted and with your “pi” user logged in again, the basic configuration setup of your Raspberry device has been successfully completed and you are ready for the next set of preparation steps.


Network configuration

To make life a little easier, launch the Raspbian GUI environment by entering startx in the Raspbian command line. (Alternatively, you can use, for example, the vi editor, of course.) Use the GUI text editor, "Leafpad", to edit the /etc/network/interfaces text file as shown to change the local ethernet settings for eth0 from DHCP to a static IP address. Also add the netmask and gateway entries shown. This is the preparation for our multi-node Hadoop cluster which is the subject of Part 4 of this blog series.
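As a sketch only (the addresses below are placeholders; choose values that match your own local network), the edited eth0 section might look like this:

     auto eth0
     iface eth0 inet static
     address 192.168.0.110
     netmask 255.255.255.0
     gateway 192.168.0.1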


Check whether the nameserver entry in file /etc/resolv.conf is given and looks ok. Restart your device afterwards.



Java environment

Hadoop is coded in Java and so requires Java 6 or later to operate. Check whether the pre-installed Java environment is in place by executing:


java -version

You should be prompted with a Java 1.8, i.e. Java 8, response.



Hadoop user & group accounts

Set up dedicated user and group accounts for the Hadoop environment to separate the Hadoop installation from other services. The account IDs can be chosen freely, of course. We are sticking here with the ID examples in Widriksson’s blog posting, i.e. group account ID “hadoop" and user account ID “hduser” within this and the sudo user groups.

     sudo addgroup hadoop

     sudo adduser --ingroup hadoop hduser

     sudo adduser hduser sudo

SSH server configuration

Generate an RSA key pair to allow the "hduser" to access slave machines seamlessly with an empty passphrase. The public key will be stored in a file with the default name "id_rsa.pub" and then appended to the list of SSH authorised keys in the file "authorized_keys". Note that this public key file will need to be shared by all Raspberry Pis in an Hadoop cluster (Part 4).


     su hduser

     mkdir ~/.ssh

     ssh-keygen -t rsa -P ""

     cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys


Verify your SSH server access via: ssh localhost

This completes the Raspberry Pi preparations and you are all set for downloading and installing the Hadoop environment.


Hadoop installation & configuration

Similar to the Raspbian installation & configuration description above, we will talk you through the basic Hadoop installation first, followed by the various environment variable and configuration settings.


Basic setup

You need to get your hands on the latest stable Hadoop version (here: version 2.6.0) so initiate the download from any of the various Apache mirror sites (here: spacedump.net).


     cd ~/
     wget http://apache.mirrors.spacedump.net/hadoop/core/stable/hadoop-2.6.0.tar.gz

Once the download has been completed, unpack the archive to a sensible location, e.g., /opt represents a typical choice.

     sudo mkdir /opt

     sudo tar -xvzf hadoop-2.6.0.tar.gz -C /opt/

Following extraction, rename the newly created hadoop-2.6.0 folder into something a little more convenient such as “hadoop”.

     cd /opt

     sudo mv hadoop-2.6.0 hadoop

Running, for example, ls -al, you will notice that your "pi" user is the owner of the "hadoop" directory, as expected. To allow for the dedicated Hadoop user "hduser" to operate within the Hadoop environment, change the ownership of the Hadoop directory to "hduser".

     sudo chown -R hduser:hadoop hadoop

This completes the basic Hadoop installation and we can proceed with its configuration.


Environment settings

Switch to the “hduser” and add the export statements listed below to the end of the shell startup file ~/.bashrc. Instead of using the standard vi editor, you could, of course, make use of the Leafpad text editor within the GUI environment again.

     su hduser

     vi ~/.bashrc

Export statements to be added to ~/.bashrc:

     export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")

     export HADOOP_INSTALL=/opt/hadoop

     export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin


This way both the Java and the Hadoop installation as well as the Hadoop binary paths become known to your user environment. Note that you may add the JAVA_HOME setting to the hadoop-env.sh script instead, as shown below.

Apart from these environment variables, modify the /opt/hadoop/etc/hadoop/hadoop-env.sh script as follows. If you are using an older version of Hadoop, this file can be found in: /opt/hadoop/conf/. Note that in case you decide to relocate this configuration directory, you will have to pass on the directory location when starting any of the Hadoop daemons (see daemon table below) using the --config option.

     vi /opt/hadoop/etc/hadoop/hadoop-env.sh

Hadoop assigns 1 GB of memory to each daemon, so this default value needs to be reduced via parameter HADOOP_HEAPSIZE to allow for Raspberry Pi conditions. The JAVA_HOME setting for the location of the Java implementation may be omitted if already set in your shell environment, as shown above. Finally, set the datanode's Java virtual machine to client mode. (Note that with the Raspberry Pi 2 Model B's ARMv7 processor, this ARMv6-specific setting is not strictly necessary anymore.)

     # The java implementation to use. Required, if not set in the home shell

     export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")

     # The maximum amount of heap to use, in MB. Default is 1000.

     export HADOOP_HEAPSIZE=250

     # Command specific options appended to HADOOP_OPTS when specified

     export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS -client"


Hadoop daemon properties

With the environment settings completed, you are ready for the more advanced Hadoop daemon configurations. Note that the configuration files are not held globally, i.e. each node in an Hadoop cluster holds its own set of configuration files which need to be kept in sync by the administrator using, for example, rsync.

Modify the following files, as shown below, to configure the Hadoop system for operation in pseudo-distributed mode. You can find these files in directory /opt/hadoop/etc/hadoop. In the case of older Hadoop versions, look for the files in: /opt/hadoop/conf


Common configuration settings for Hadoop Core.


Configuration settings for HDFS daemons: the namenode, the secondary namenode and the datanodes.


General configuration settings for MapReduce. Since we are running MapReduce using YARN, the MapReduce jobtracker and tasktrackers are replaced with a single resource manager running on the namenode.


File: core-site.xml
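As an illustration only (your values may differ), a minimal single node core-site.xml, reusing the node1 hostname set earlier and the /hdfs/tmp directory created below, could look like this; the port number is just an example:

     <configuration>
       <property>
         <name>hadoop.tmp.dir</name>
         <value>/hdfs/tmp</value>
       </property>
       <property>
         <name>fs.default.name</name>
         <value>hdfs://node1:54310</value>
       </property>
     </configuration>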











File: hdfs-site.xml
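Again purely as an illustration, a minimal single node hdfs-site.xml simply sets the replication factor to 1:

     <configuration>
       <property>
         <name>dfs.replication</name>
         <value>1</value>
       </property>
     </configuration>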








File: mapred-site.xml.template ("mapred-site.xml", if dealing with older Hadoop versions)
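As an illustrative minimum, since MapReduce runs on YARN here, this file points the MapReduce framework at YARN (any further YARN memory tuning for the Pi is omitted):

     <configuration>
       <property>
         <name>mapreduce.framework.name</name>
         <value>yarn</value>
       </property>
     </configuration>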







Hadoop Distributed File System (HDFS) creation

HDFS has been automatically installed as part of the Hadoop installation. Create a tmp folder within HDFS to store temporary test data and change the directory ownership to your Hadoop user of choice. A new HDFS installation needs to be formatted prior to use. This is achieved via -format.

     sudo mkdir -p /hdfs/tmp

     sudo chown hduser:hadoop /hdfs/tmp

     sudo chmod 750 /hdfs/tmp

     hadoop namenode -format

Launch HDFS and YARN daemons

Hadoop comes with a set of scripts for starting and stopping the various daemons. They can be found in the sbin directory. Since you are dealing with a single node setup, you do not need to tell Hadoop about the various machines in the cluster to execute any script on and you can simply execute the following scripts straightaway to launch the Hadoop file system (namenode, datanode and secondary namenode) and YARN resource manager daemons. If you need to stop these daemons, use the stop-dfs.sh and stop-yarn.sh script, respectively.
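Assuming the /opt/hadoop installation directory from above, the launch commands look like this; running jps afterwards should list the namenode, datanode, secondary namenode, resource manager and node manager daemons:

     cd /opt/hadoop/sbin
     ./start-dfs.sh
     ./start-yarn.sh
     jps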



Check the resource manager web UI at http://localhost:8088 for a node overview. Similarly, http://localhost:50070 will provide you with details on your HDFS. If you find yourself in need for issue diagnostics at any point, consult the log4j.log file in the Hadoop installation directory /logs first. If preferred, you can separate the log files from the Hadoop installation directory by setting a new log directory in HADOOP_LOG_DIR and adding it to script hadoop-env.sh.


With all the implementation work completed, it is time for a little Hadoop processing example.


An example

We will run some word count statistics on the standard Apache Hadoop license file to give your Hadoop core setup a simple test run. The word count executable represents a standard element of your Hadoop jar file. To get going, you need to upload the Apache Hadoop license file into your HDFS home directory.


     hadoop fs -copyFromLocal /opt/hadoop/LICENSE.txt /license.txt

Run word count against the license file and write the result into license-out.txt.


     hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /license.txt /license-out.txt

You can get hold of the HDFS output file via:


     hadoop fs -copyToLocal /license-out.txt ~/

Have a look at ~/license-out.txt/part-r-00000 with your preferred text editor to see the word count results. It should look like the extract shown below.


We will build on these results in the subsequent parts of this blog series on Hive QL and its SAP Lumira integration.



Apache Software Foundation Hadoop Distribution - http://www.apache.org/dyn/closer.cgi/hadoop/common/

Jonas Widriksson blog - http://www.widriksson.com/raspberry-pi-hadoop-cluster/

NOOBS - https://www.raspberrypi.org/tag/noobs/

SAP Lumira desktop trial edition - http://saplumira.com/download/



[1] V. S. Agneeswaran, “Big Data Beyond Hadoop”, Pearson, USA, 2014

[2] K. Shvachko, H. Kuang, S. Radia and R. Chansler, “The Hadoop Distributed File System”, Proc. of MSST 2010, 05/2010

[3] T. White, "Hadoop: The Definitive Guide", 3rd edition, O'Reilly, USA, 2012

It seems like every time I open up my RSS feed lately, I'm greeted with a large number of blog posts on yet another exploit being discovered.  Off the top of my head, the big ones that come to mind are Heartbleed, POODLE, FREAK - I could go on but I'm sure you're all too aware of these.

When these vulnerabilities are announced, my team will get a number of customers raising incidents with questions related to these types of vulnerabilities and the impact on their SAP BusinessObjects BI system.

These types of incidents are usually quite different than vulnerabilities identified as a result of a formal penetration test or a security scan.  In a future blog, I will go over the process of how to effectively raise an issue with SAP Support to deal with any vulnerabilities you may have uncovered.  For now I would like to draw attention to the following Knowledge Base Articles (KBAs)* that have been the most popular in 2014 and 2015 so far (in no particular order):



HeartBleed & OpenSSL




I'd love to hear from you!  My aim is to bring clarity and transparency around security issues and how they impact the BI platform.  If you have any suggestions on what kind of content you'd like to see or questions on this topic, please leave a comment below or send me a direct message through SCN.


*Please note that these KBAs are available to our customers only, and a valid account is required.  Please contact your SAP Super-Admin for access or contact our GSCI team.

SAP's Thomas B Kuruvilla provided this webcast on US Tax Day, assisted by Gowda Timma Ramu


I thank them both for taking the time to support ASUG.


The usual legal disclaimer applies, that things in the future are subject to change.


Figure 1: Source: SAP


Server options for on premise include Lumira Server for teams, which is aimed at small line-of-business teams; it is standalone, with its own administration.


GA of Lumira Server for BI Platform is planned for the end of this month


Figure 2: Source: SAP


SAP Lumira becomes a first-class citizen of the BI platform, the speaker said.  Figure 2 shows saving the Lumira document to the BI platform.


Figure 3: Source: SAP


Figure 3 shows you can open and edit a Lumira document from the BI Platform



Figure 4: Source: SAP


A new query panel is delivered as an extension


Figure 4 shows support for a distributed deployment


ESRI support is planned for 1.25 release


Figure 5: Source: SAP


Figure 5 shows Windows support


Only English is supported. Browsers supported are IE10/11 and Chrome


Figure 6: Source: SAP


Figure 6 shows the New Universe Query Panel that is an extension


Figure 7: Source: SAP


Same-host deployment is for testing and small production.  For production, SAP says to size the server: how many concurrent users?


What is the average document size?


Figure 8: Source: SAP


SAP recommends a distributed deployment for larger production deployments; an APS is needed.


The screen on the right is what is shown when installing


Figure 9: Source: SAP


To support document refresh, the file needs to be in the same location


It does not support HANA for refresh


Query panel extension is a manual install – separate but simple


SAP says to maintain the same version between BI platform and desktop


Future Plans (subject to change)


Figure 10: Source: SAP


Figure 10 covers future plans for 2015, including data refresh with BW acquisition and parity with SAP Lumira Desktop for free-hand SQL (FHSQL)


Also planned: a prepare room inside the browser by end of year, enhanced scheduling, support for Mobile BI, additional language support for Lumira Desktop, and improved auditing


The plan is to bring back Information Steward for data lineage, and they are investing in extension management


The option to refresh on open is planned in a release this year


Question & Answer

Q:  Any plans to introduce SAP Lumira in-memory engine into Design Studio? I think it will help with speed for NON- HANA customers and also with interoperability between these tools

A:  I am not aware of any such plans for in-memory engine in Design Studio. However, we do have plans for interoperability between these clients


Q:  Will there be architectural change on our end when updating to HANA as calculation engine later this year?

A:  No change in architecture; HANA would be used as a calculation engine when you create a Lumira document with HANA Online



Q:  what is velocity engine?

A:  It is a lightweight in-memory engine used in Lumira Desktop and Lumira Server



Q:  Is velocity engine is nothing but IQ?

A:  No, it is not IQ


Q:  When will connection to BICS connections be available?

A:  BW Acquisition is currently planned for late Q2



Q:  What is the source for this document?  Does it require a universe?  Can it source BW?

A:  Source is Universe, to be specific UNX. BW is not yet supported on Lumira Server for BIP. We do plan to have BW acquisition support in Lumira Desktop and Lumira Server for BIP in the future


Q:  Is there no data source refresh for HANA views?

A:  Not supported with Lumira Server for BI Platform 1.25; it is planned for a future release. In the meantime, you can use generic JDBC or a UNX on HANA views as the source


Q:  When will SAP "Authentication" be supported for SAP Lumira Server for BI Platform...?

A:  SAP Authentication is planned to be supported along with support for BW acquisition in late Q2


Q:  Is SAP Lumira Server for BI Platform a separate installation, or is it going to be part of a future BO-BI platform installer?

A:  It is going to be an add-on for the near future including on BI 4.2


Q:  Would generic JDBC allow for "live" querying on the views?

A:  No, it would create and update the dataset on manual Refresh



Q:  Is there any limitation to no of rows/data volume that Lumira velocity engine can handle or is it dependent on the Server Hardware memory?

A:  We are currently working on the sizing recommendation and will be highlighting the numbers as part of the sizing guide. For now, you can refer to http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/60271130-0c90-3110-07a0-fe54fd2de79d?QuickLink


Q:  All types of UNX supported?  (Multi-source, ECC jCo connections, etc?)

A:  Not all UNX types and UNX features will be supported; the limitations will be documented in the Lumira Desktop user guide


Q:  Related to sizing, would Velocity Engine resource utilization be higher/lower/same as the Explorer Server utilization on the same volume of dataset?

A:  Unlike Explorer, while working with a Lumira document on the BI Platform, the entire dataset is loaded into memory. It would be multiplied in the case of merged datasets.


Q:  Will there be any SIZING of SAP Lumira Server for BIP sessions ready for SAPPHIRE or ASUG SABOUC Conferences...?

A:  Thomas has a SAP Lumira Deployment options session BI1044 + we have round table sessions

A:  Please see full ASUG BI schedule for ASUG Annual conference at https://www.asug.com/discussions/docs/DOC-40691  

A:  Please join session https://sessioncatalog.sapevents.com/index.cfm/go/agendabuilder.sessions/?l=99&sid=23266_448723&locale=en_US  


Q:  Is there a plan to support Bex Query/BICS? Can we access BW data using an QLAP Relational universe now?

A:  Support for BW data acquisition using BEx queries is planned for late Q2. Yes, you can use a relational UNX on BW



Q:  Is there a support for Oracle based UNX universes

A:  Yes, UNX based on relational data sources are supported


Q:  Will Lumira support BeX queries in BW?

A:  Yes, planned for late Q2



Q:  Will Lumira Server for BI Platform be supported on a Windows OS?

A:  It is currently supported on Windows 2008 R2 SP1 and 2012 R2


Q:  If the XLS file is hosted on the BI Platform. Can you use that XLS/CSV file as a source for Lumira

A:  No, the files have to be on the file system



Q:  Can Lumira Documents be accessed from the Mobile app if it is published to the BI Platform. Does this need to be Enterprise or AD Auth only?

A:  support for viewing Lumira stories on Mobile BI application is planned for future releases



Q:  Why is the Universe Query Panel an extension and not built-in? Will the existing built-in option eventually be replaced with the new extension? Having two universe options with different functionality will cause confusion for users and a support nightmare.

A:  Support for universes will continue; the query panel extension has a richer, more flexible experience. It was made an extension to reduce the Lumira Desktop footprint; the recommendation is to use the query panel extension for the .UNX


Q:  How will Lumira BI Server and Lumira Server on HANA co-exist

A:  Had this with LIMA - see blogs on SCN elaborate for LIMA

A:  Lumira Server for BI platform does not require HANA



Q:  Can Lumira BI Server work in a multi-tier environment (web components installed on a different VM)?

A:  Yes, we have 4 components as part of the installer: Lumira Server, the Lumira scheduling service, the RESTful web service and the Lumira web application. All can be deployed on separate boxes with prerequisites



Upcoming ASUG-related Webcasts

ASUG Annual Conference

Join us: ASUG BI pre-conference session at ASUG Annual conference

Monday, May 4. (extra registration fees apply).

Register here: http://bit.ly/ASUGPrecon

Hands-on SAP BusinessObjects BI 4.1 w/ SAP NetWeaver BW Powered by SAP HANA – Deep Dive

See details here: ASUG Pre Conference 2015 - Analysis Office, Lum... | SCN


Focus on Analysis Office, Lumira, and Design Studio. You get to work with these for 7 hours! Full day BI workshop. Limited to 30 people. One person per machine (no sharing). Join us May 4th for ASUG Annual Conference Pre-conference Hands-on Design Studio, Lumira, Analysis - see this blog


Also see the ASUG BI Session schedule ASUG BI Schedule 2015.xlsx | ASUG

The 404 or Not Found error message is an HTTP standard response code indicating that the client was able to communicate with a given server, but the server could not find what was requested.


It is understood that from BOE XI 4.x, the BIP web application supports OSGI bundles. Hence BOE 4.x web apps can be either OSGI or non-OSGI web apps.


Coming to the occurrences when we could find such errors (HTTP responses):


1. The web site hosting server will typically generate a "404 Not Found" web page when a user attempts to follow a broken link, dead link, or dangling link, in both the OSGI and non-OSGI context.


In such cases we need to check the below


  • We need to check that the URL is properly constructed, i.e. whether the context path, file path, etc. are correct.
  • Sometimes the URL will be encoded, and we need to check whether it has been encoded correctly.


2. Sometimes there is a problem with an OSGI bundle.


In such cases we need to check the below.


  • We need to check whether the OSGI bundles are running or not, as follows.
  • First we need to collect “sbInitLog.txt”, a special log file that contains logging output which occurs when the Servlet Bridge initializes. Currently this is only output to the sbInitLog.txt file, located in Tomcat's work directory: {Tomcat Home}/work/Catalina/localhost/BOE/. This log file is generated after the first request comes into the server and contains info about what config files were read, what bundles were started, and the state of the bundles.
  • If this file contains an error saying “Error starting bundle=*some Bundle Name*”, then we need to run diagnostics on the OSGI bundle to identify why it did not start; the diagnostics will tell you which constraints are unsatisfied, as follows.


Steps to check whether the OSGi bundles are running (the steps below are specific to Tomcat, the default BOE web application server):


  1. Stop the Tomcat server.
  2. Open the main web.xml for BOE (BOE/WEB-INF/web.xml).
  3. Modify web.xml by adding the -console option and a port number, then save web.xml (see the sketch after this list).
  4. Restart the server.
  5. In PuTTY, telnet to the machine on the port you specified and click Open.
  6. You should now have the OSGi console, where you can run the regular console commands.
  7. Run the diag command with a bundle ID; the bundle ID can be found in sbInitLog.txt.
  8. The output will tell you which constraints are unsatisfied.
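
A minimal sketch of step 3 (this assumes the standard Equinox Servlet Bridge shipped with the BOE web application; the parameter name and the port 8888 are assumptions to verify against your BOE version): inside the existing BridgeServlet <servlet> element in BOE/WEB-INF/web.xml, add an init-param such as

  <!-- Hypothetical example: enables the OSGi console on port 8888; adjust the port for your environment -->
  <init-param>
    <param-name>commandline</param-name>
    <param-value>-console 8888</param-value>
  </init-param>

After restarting Tomcat (step 4), you can telnet to that port (step 5) to reach the osgi> prompt.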


osgi> diag 123

update@plugins/webpath.Performance Management/ [123]
  Direct constraints  which are unresolved:   Missing imported package com.businessobjects.clientaction.shared.jamentries_1.0.0.0.


In this way, we can check whether all OSGi bundles are running as intended.


Hope this helps.

On February 25, 2015, Onapsis released advisories for five SAP BusinessObjects Enterprise/Edge and SAP HANA vulnerabilities. These vulnerabilities were responsibly disclosed, allowing SAP to correct them as quickly as possible.


Here is a summary of the advisories and more information around each. Of these five, three are considered "High Risk" and are exploited through the CORBA layer.


Vulnerabilities rated High:


Unauthorized Audit Information Delete via CORBA (CVE-2015-2075)


Exploiting this vulnerability would allow a remote unauthenticated attacker to delete audit information on the BI system before these events are written into the auditing database.


Details of the fix are available in SAP Note ID 2011396.  Please update your BusinessObjects BI 4.x  system to one of the following patches, or a subsequent patch or support pack:

  • BI 4.0 Patch 9.2
  • BI 4.0 SP10
  • BI 4.1 Patch 3.1
  • BI 4.1 SP04

SAP Note ID link: http://service.sap.com/sap/support/notes/2011396


Unauthorized File Repository Server Write via CORBA (CVE-2015-2074)


Exploiting this vulnerability would allow a remote unauthenticated attacker to overwrite files in the File Repository System (FRS), provided the attacker has knowledge of the report ID and path.  For example, “frs://Input/a_103/019/000/4967/1b14796c5b0d5f2c.rpt”.


Details of the fix are available in SAP Note ID 2018681.  Please update your BusinessObjects BI 4.x  system to the following support pack, or a subsequent patch or support pack:

  • BI 4.1 SP04

Note: Earlier versions of BI 4.x have a workaround, which is to configure the FRS to run in FIPS mode (add “-fips” to the command line arguments in the CMC) or enable CORBA SSL.
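
For illustration, a rough sketch of the -fips workaround (exact navigation and server names vary by deployment, so treat these labels as assumptions): in the CMC, go to Servers, open the properties of the Input File Repository Server and the Output File Repository Server, append -fips to the Command Line Parameters field of each, save, and restart those servers.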

SAP Note ID link: https://service.sap.com/sap/support/notes/2018681

Unauthorized File Repository Server (FRS) Read via CORBA (CVE-2015-2073)

Exploiting this vulnerability would allow a remote unauthenticated attacker to be able to retrieve reports located on the FRS system, provided the attacker has knowledge of the report ID and path.  For example, “frs://Input/a_103/019/000/4967/1b14796c5b0d5f2c.rpt”.


Resolution:  Details of the fix are available in SAP Note ID 2018682.  Please update your BusinessObjects BI 4.x system to the following support pack, or a subsequent patch or support pack:

  • BI 4.1 SP04

Note: Earlier versions of BI 4.x have a workaround, which is to configure the FRS to run in FIPS mode (add “-fips” to the command line arguments in the CMC) or enable CORBA SSL.

SAP Note ID Link: https://service.sap.com/sap/support/notes/2018682


Vulnerabilities rated Medium:


Multiple Cross Site Scripting Vulnerabilities in SAP HANA XS Administration Tool

Reflected cross site scripting vulnerabilities in this tool may allow an attacker to deface the application or harvest authentication information from users.

Resolution:  Details of the fix are available in SAP Note ID 1993349.  Please update your SAP HANA system to one of the following patches, or a later revision:

  • SAP HANA revision 72 (for SPS07)
  • SAP HANA revision 69 Patch 4 (for SPS06)

SAP Note ID Link:

Unauthorized Audit Information Access via CORBA (CVE-2015-2076)

Exploiting this vulnerability would allow a remote unauthenticated user to gain access to audit events in a BI system.

Resolution:  Details of the fix are available in SAP Note ID 2011395.  Please update your BusinessObjects BI 4.x  system to one of the following patches, or a subsequent patch or support pack:


  • BI 4.0 Patch 9.2
  • BI 4.0 SP10
  • BI 4.1 Patch 3.1
  • BI 4.1 SP04

SAP Note ID Link: https://service.sap.com/sap/support/notes/2011395

I strongly recommend keeping up to date on patches and support packs, not only to take advantage of the most recent security fixes but also to benefit from new features in the product. Each of the vulnerabilities affecting the BI platform has been resolved in BI 4.1 SP04 or later. If you haven’t already, this is a good opportunity to build the business case for updating your environment; vulnerabilities left unaddressed put your business users and data at risk.

Information regarding each of the BI support packs/patches, including Administration guides, release notes, fixed issues in each and known issues in each can be found at http://help.sap.com/bobi/.

Information regarding the latest revision of SAP HANA, including install guides, security information and Administration guides can be found at http://help.sap.com/hana, and choose the HANA link appropriate for your environment.

SAP’s security notes portal can be found here: https://support.sap.com/securitynotes

Other links of interest:

I am a new blogger on SCN, but I’ve been with Business Objects and then SAP for several years. I’m interested in bringing more transparency around security topics to SCN, so I’m curious to know what the BI Platform community thinks about these types of posts, as well as anything else you’d like to see.

Please feel free to leave a comment below or contact me directly, I’d love to hear from you!

This was an ASUG webcast this past week given by SAP's Thomas Kuruvilla


The usual disclaimer applies that things in the future are subject to change.


Figure 1 – Source: SAP


Figure 1 provides an introduction to SAP Lumira, Edge edition.


Figure 2: Source: SAP


The groups created, shown above in Figure 2, are more for distribution lists


Figure 3: Source: SAP


Figure 3 shows that data acquisition and mashup currently happen in Lumira Desktop; SAP is looking to bring them to the browser so that the full workflow can be done there.


Figure 4: Source: SAP


With Lumira Edge, SAP does not want to add software or hardware to the deployment


SAP plans to support additional languages in coming releases


Figure 5: Source: SAP


The installation is in “three clicks”, including accepting the license


You can still create documents in Lumira Desktop 1.23, but they will not open in the browser.


The installation file is 699 MB in size.


Users are created using their e-mail IDs, similar to Lumira Cloud.




Figure 6: Source: SAP


Figure 6 is the roadmap; it shows what is coming in the first half of the year, while the second half is still in planning. The next releases are in April and June.


Support for refreshing additional data is coming in 1.25.


Universe refresh in the team server (in case you do not want to use the BI platform): you connect using the extension framework (planned for the 1.24 release).


In 1.25, the plan is to offer “save as” for personal use.


A coming release will provide a story viewer, similar to Lumira Cloud.


In the next release, you will only enter the visualize/compose room if you have edit rights.


The next release will include Active Directory support (planned).


The June timeframe will bring Mobile BI support (iPad only).


They do not plan to constrain upgrades; you should be able to move to a later release without applying intermediate updates.


They plan to add auto-fill functionality to remember e-mail IDs; you start typing a name and it auto-completes, which makes sharing easier.


Today you can’t share to a group; a coming release will allow sharing to groups and to a large number of users in one workflow.


Lumira server for BI Platform is coming in Q2


April (1.25): server for teams and server for BI platform will be released at the same time.


Q&A Session for SAP Lumira Server for Teams: Deep Dive and Roadmap


Q: Is this running on a proprietary SAP WACS?  Does the portal run on other Web App servers?

A: WACS is bundled with the installer. Deployment on other web application servers is not supported, as that would be too technical for business users.


Q: Was the browser refresh by the user leveraging a DSN defined on the server or on the client?

A: The connection defined in the client for a Lumira document is saved to the server along with the Lumira document.


Q: Can I distribute the story boards on a predefined interval automatically?

A: Scheduling is planned for future release


Q: Is Team Server compatible with 1.23, which is now available?

A: Hi Josh - this was addressed above - you can create the document in 1.23 but cannot open it in the browser.


Q: Windows 8.1 is not touch-enabled; does that mean it excludes the MS Surface?

A: Yes, touch is not enabled.


Q: Is this included with the BI Suite license from SAP?

A: Lumira Server for Teams (Edge Edition) is not covered under BI Suite License. However, Lumira Server for BI Platform (RTC Planned in April) is covered under BI Suite licenses


Q: Does the browser need to be IE 11 only, and not earlier IE versions?

A: Yes, we only support IE 11 with the existing release. The plan is to support IE 10 with the Q2 release.


Q: Inclusion with the BI Suite would be very nice, as many LOB teams want autonomy from a centrally managed BI platform.

A: Lumira Server for Teams (Edge edition) is not included, but Lumira Server for BI Platform (RTC in April) is included under the BI Suite.


Q: For Universe Support via DA Extension... is the expectation that Customers build these Extensions themselves, or will SAP be providing such an Extension?

A: SAP will be providing the extension for universes. Universe support via the DA extension is planned for the Q2 release.


Q: When will support for BW BEx data source be available?

A: Currently planned to be supported with the June release.


Q: Will we need to upgrade our BI Platform to add Lumira, or will it be an add-on like for Design Studio?

A: It will be an add-on like Design Studio, supported from BI 4.1 SP03 onwards (you may need the latest patch).



Q: Does that mean we don’t need to rely on a HANA server once the server for the BI platform is available?

A: Ramp-up: today Lumira Server relies on HANA. The feedback is that something easier to maintain is needed, so the new solution does not require HANA.



Q: Does Lumira Edge have any additional functionality that Lumira Server for BI Platform will not have?

A: The aim is to keep both at the same level; in certain scenarios BIP may get functionality earlier, but BIP won't have less than the Teams edition. Admin functionality is different for the two solutions.

A: Scheduling will come to BIP first.


Q: What about the BW platform?

A: 7.x and higher.



Q: When we say BI platform, do you mean BEx queries, or the OLAP cubes directly?

A: BI platform means the BOE.


Q: What BW level is required?

A: BW 7.x as a data source.

A: 7.x and higher.




ASUG Annual Conference pre-conference: Register here:  - featuring Hands-on SAP BusinessObjects BI 4.1 w/ SAP NetWeaver BW Powered by SAP HANA – Deep Dive, which includes SAP Lumira, Design Studio, and Analysis

Hi All,


Can someone point me to the above patch doc?


I see that patch 3 was released on 2/27/2015 but cannot find the document listing the fixes included:






