
BusinessObjects BI for SAP


It's that time of year (yes, again already!) when we gather at SAPInsider, this year located in a very drizzly Vienna, for BI2016 (plus GRC2016, HR2016 and HANA2016). I am honoured and delighted (and always surprised) to be asked back to present at this event. I actually think this will be my third or fourth time presenting for Wellesley Information Services (WIS) in one form or another.


Because there are so many sessions for us delegates to choose from, I wanted to write a short introduction to mine in case it is useful for people considering whether to attend, since a synopsis (obviously) doesn't always tell the whole story.

BIHANAEU2016_Cooper_Casestudybritishamerican_final.jpg

This year will follow on from where we left off in my last talk at the 2015 SAPInsider event in Nice. Following the conclusion of that fun-filled event (great audience!) I wrote a blog post summarising the session, which includes links to the presentation as well as some of the historical back-story. If you have not attended one of my presentations before, this will provide some background and prepare you for my presentation style. As you might gather from that post, I have been trying for some time to remove the phrase "self-service" from the BI lexicon. I will actually end up using the term many times because senior management thought it hilarious, I'm sure, to involve me during the past twelve months in several concurrent projects, all of which are conceived to enable/deliver/realise "self-service".


You can find this year's session in the app or on the event website (that link will also show you which other sessions are happening at the same time).


Important: The session starts at 08:30, and I appreciate that this is very early, so to make things easier on yourself please follow these instructions:


  1. Enter the venue and bear right toward the escalators
  2. Drink a coffee at the drinks station just to the right of the escalator
  3. Grab another coffee and take it with you up the escalator to the next floor (Level 1)
  4. Bear right toward the "L" rooms, past L1 and L2, then turn right for rooms L4-L7; L4 is on the left


I am toying with the idea of a "free can of Red Bull" incentive for the first ten attendees. Let me know in the comments or tweet me @CheOpsLaw if that would work for you!

SAPinsider_BI_Conference_Europe_2016.jpg

The synopsis does tell part of the story and provides general context for the session:

 

British American Tobacco (BAT) has embarked on an ambitious new programme to embed BI in the fabric of the organisation, driving up user adoption and self-reliance. Attend this technical session to gain valuable lessons from the company’s journey and:


  • Learn about its transformational business-led initiative to drive BI adoption using a custom information-delivery portal with workflow and value-added functionality
  • Get an update on BAT’s migration to SAP BW on SAP HANA, including the information lifecycle management (ILM) solution used for nearline storage with SAP IQ in order to reduce data volumes pre-migration to SAP HANA
  • Understand BAT’s considerations and planning process for deploying a logical data warehouse capable of analysing IT “governed” data with non-IT “ungoverned” data
  • Hear lessons learned and a wish list of features and functionality required in SAP BusinessObjects BI to overcome BAT’s final hurdles to BI adoption

 

But here is an alternate, more up-to-date summary: after five years spent deploying a single-instance BI solution (SAP BW + BWA and BI4.1) alongside a single-instance ERP, we have been trying to navigate a course to migrate to and adopt HANA, and to move toward "Modern BI" with non-BW data models, "data discovery" applications and user-created "mashed-up" content, while at the same time maintaining and improving the performance of the current systems. To say this has been a challenge would be somewhat of an understatement, and there will be a lot of focus on some of the pains we have gone through and lessons learned.

 

We will touch upon the approach we took with regard to HANA solution architecture (including sizing and licensing) to underpin BW and BI4.1.

BIHANAEU2016_Cooper_Casestudybritishamerican_final 2.jpg

Also discussed will be genuine user requirements for merging and analysing disparate datasets.

mashup_jpg.jpg

The real star of the show will be the section on a highly successful project, orchestrated and run by the business, which saw the introduction of what Gartner refer to as the "Information Delivery Platform" for simplified consumption of (and access to) BI and related information.

BIHANAEU2016_Cooper_Casestudybritishamerican_final 3.jpg

During the talk I will cover the many different interpretations of what self-service means in any given context, and the misunderstandings and challenges that arise as a result. After the session I will upload the complete presentation here.

 

Final point before the presentation: once I have delivered a session like this, I always like hearing from delegates who are either saved from repeating something we would, in hindsight, have done differently, or who have already succeeded with something we are just starting out on. Quite often we all realise we are in very similar positions with identical challenges, so, if nothing else, the sessions succeed simply as group therapy.


Come along on Wednesday and bring along any of your own challenges/questions/heckles (and also caffeine given the early start)!



Solution brief of the project


Customer: FC Bayern München

Partner: objective partner AG


Industry

  • IS Retail / Sport


Functional Business

  • Material Management (MM)
  • Sales & Distribution (SD)
  • Warehouse Management (WM)


System Environment

  • SAP ECC on HANA
  • Deployed in SAP HANA Enterprise Cloud (HEC)


UI Technology

  • SAP BusinessObjects

 

About the project and its objectives


How can we realize a reporting application that provides transactional data in real time, compressed and integrated across different lines of business?

 

What is a convenient approach to retrieve this type of big data from a single source of truth, accurate to the second?

 

How can we enable a user experience that is clearly structured and tailored to reporting needs, deployed on mobile and desktop devices?

 

Is there a way to realize these project requirements as a rapid deployment solution (RDS)?

 

These challenging project requirements were the cornerstones for a reporting tool that should cover an integrated view of the retail process, including warehouse management, materials management, and sales and distribution.

 

From the business perspective, this tool should provide direct access to data that represents the as-is state of the logistical value chain. The data should provide detailed information about inbound deliveries, goods receipts, order entries and outbound deliveries at a glance and in real time. The reporting application aimed to deliver a decision foundation, not only for the management of the respective functional area, but also for operational employees. Take incoming sales orders as an example: they arrive through different distribution channels and order types (for instance, internet sales or e-commerce orders), so it is a big advantage to have data that is up to date to the second. In practice this means the business is able to control the logistical retail process with predefined KPIs on a daily basis.


KPI Extract

  • How many inbound deliveries and goods receipts were entered today?
  • How many sales orders are coming in, and through which distribution channel?
  • How many sales orders are open and located in consignment?
  • Which orders are blocked, and for what blocking reasons?
  • Which products are affected by the blocking?
  • How high is the percentage of fully and partially packed deliveries?
  • How many outbound deliveries are performed in total?

 

Based on these KPIs, crucial conclusions can be drawn regarding the daily performance of the sales business and the delivery process. Bottlenecks within the end-to-end process can be located quickly, for instance when the number of sales orders is disproportionately high compared to a low number of outbound deliveries.
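That ratio check can be reduced to a trivial sketch (the figures and the threshold below are invented examples, not project data):

```shell
#!/bin/sh
# Invented example values, not real project data
sales_orders=500          # sales orders entered today
outbound_deliveries=120   # outbound deliveries performed today
threshold=3               # assumed alert threshold for the ratio

# integer ratio of incoming orders to outbound deliveries
ratio=$(( sales_orders / outbound_deliveries ))

if [ "$ratio" -gt "$threshold" ]; then
  status="possible bottleneck"
else
  status="ok"
fi
echo "$status"
```

In the real solution this comparison is of course made against the live ECC data in the reporting front end; the sketch only illustrates the shape of the rule.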

 

Project Approach and Solution


The first decision was whether to deploy the solution on SAP BW or SAP ECC as the single source of truth providing the data foundation. At first glance, SAP BW seems the obvious choice, considering the huge amount of transactional data generated within the logistical retail process. But with regard to the core requirement of a real-time data feed on a minute basis, a BW scenario would require permanent data transmission at very short intervals between ECC and BW.

 

The alternative was to retrieve the data straight from SAP ECC. As the ECC system was already running on HANA, the advantages were, on the one hand, direct access to the data without a time gap caused by data transmission, and on the other, the ability to leverage all the capabilities of HANA: high performance when processing big data and the full capabilities of the integrated HANA architecture.

 

A POC confirmed that SAP ERP on HANA can handle the huge amount of transactional data in real time and also provides sufficient performance to navigate within continuously adjustable time periods. The latter is very helpful for analyzing KPIs of previous weeks and months.

 

The technology chosen for the user interface had to provide a rich user experience tailored to reporting purposes. Therefore, SAP BusinessObjects was chosen, as it delivers standard analytic content for creating custom reporting applications. For the specific requirements, such as integration with a single sign-on application, identity management to administer authorization roles, or layout adaptations to match the corporate identity, SAP BO delivers a full set of state-of-the-art technology to meet exactly these customer needs. Furthermore, it provides direct connectivity to HANA, which makes it simple to use the HANA data sources modelled in SAP HANA Studio.

 

The implementation could be executed very rapidly by leveraging SAP HANA Studio for data modelling, as it delivers all the necessary tools. Standard SAP tables can be chosen from the HANA catalog and remodeled into compressed custom views. The different types of views, such as analytic or calculation views, serve as data sources for the user-interface layer in SAP BusinessObjects. The final reporting application is provided for desktop and mobile devices running on a BO environment.

 

                                           



Benefits of the Reporting Logistics Application


Business Impact

  • Improved data transparency provided by one single tool; no need to search for the data across various standard transactions such as VA03 or ME2L
  • Increased speed of decision-making across different decision levels
  • The data feed is provided in real time and offers high flexibility when analyzing the data using different filter criteria
  • Clearly structured data provisioning in an analytics context, tailored to a high-quality user experience

 

Technical Impact

  • The SAP HANA architecture provides seamless integration of big data, starting with data modelling in HANA Studio and ending with a user experience tailored to reporting purposes
  • High performance of the real-time data feed
  • Standard SAP technology, such as SAP HANA and SAP BusinessObjects, delivers a rapid deployment solution
  • Replacement of the legacy solution running as a custom report on ECC
  • Rapid deployment and low implementation effort due to standard functionality

 

 

How the logistics reporting solution utilizes SAP technology to face the digital transformation

 

The logistics reporting solution shows how state-of-the-art technologies can be leveraged to cover tomorrow’s business requirements and face the digital transformation. Digitalized processes can only be realized with an integrated data foundation that delivers the required information in real time, according to the relevant business context. For dynamic companies with a strong retail alignment, it is important to ensure a smooth end-to-end process that satisfies customer needs, starting with a great shopping experience and ending with packages delivered on time.

Basically, all industry solutions and functional businesses are affected by the digital transformation. Automated processes, integrated data, efficient workflows and full transparency are essential elements of the added value in the daily business.

The appropriate technology to enable these implementations is already there. It just needs to be utilized properly.

I want to preface this blog with the disclaimer that I am just beginning my tenure as a new BI admin.  I started out having some exposure to the CMC on BOE 3.1 (Windows) and also have upper-intermediate skills on the Linux platform.  From there, I took my notebook to TechEd to attend some of their workshops on upgrading from 3.1 to BI 4.x (great stuff) and then attended in-classroom training.  I took BOE330 - SAP BusinessObjects Business Intelligence Platform: Designing and Deploying a Solution (I strongly recommend it for any admin who is going to install BI 4.x).  I also took a class on Tomcat to help me with configuration of the web app platform.  Everything else I know about BI 4.1 (and Tomcat) has come from trial and error and reading lots of articles out of the knowledge base.

 

My general experience with the documentation for BI 4.x is that it is generally Windows-centric (some material is universal, but it can still be challenging).  I did, however, find some especially helpful resources for Linux installations (and for rank newbies like myself).

 

First off, I recommend reading the pattern books.  They are a great resource to see how other people did it.  One of the references I used a great deal is below:

 

Linux System Landscape Overview - Business Intelligence (BusinessObjects) - SCN Wiki

 

Here's an interesting read on Virtualizing your BI installation:

Virtualizing SAP BusinessObjects BI 4

 

And a good resource on Sizing:

Sizing and Deploying SAP BI 4 and SAP Lumira

The sizing information provided above is even more useful if you take the BOE330 course!

 

My best resource for the actual install (apart from the provided admin and install guides) came from this resource:

Installing SAP BI 4.1 SP01 on Red Hat Enterprise Linux 6.x Step-by-step

 

Although the PAM is a pretty good guide for compatibility, I ran into a bug that was only recently fixed: if you had Red Hat 6.6 installed, you couldn't run Crystal 2011/2013 reports on BI 4.1 SP5.  See SAP Note 2098659 for which patch versions are supposed to have fixed that.

 

Another Gem from the Pattern Books explains how to configure a reverse proxy in Apache and Tomcat:

Configuring the reverse proxy - Business Intelligence (BusinessObjects) - SCN Wiki

 

For some reason, getting all of my logs in one place was a challenge.  Some excellent resources on a couple of quirky settings I found are as follows:

SAP Note 1829761 - How-To: Configure Location of BI4.0 Web Application Log Files

SAP Note 2066722 - How to stop outputting or change log level about TraceLog_*_trace.glf files under Tomcat directly

 

Configuring both the 32-bit and 64-bit Oracle drivers seems to be a challenge for Windows and Linux folks alike.  I used this article as a reference for some suggestions:

BOBI 4.1 SP2 Oracle client installation on server | SCN

 

In the end, here's what I put in my BI users' .kshrc file as it applies to Oracle:

export ORACLE_BASE=/oracle
export ORACLE_HOME=/oracle/<my 64 bit install>
export ORACLE_HOME32=/oracle/<my 32 bit install>
export TNS_ADMIN=<the directory I put my tnsnames.ora in>

if [ "$LIBPATH" = "" ]
then
        export LIBPATH=${ORACLE_HOME}/lib:${ORACLE_HOME32}/lib:$BINDIR
else
        export LIBPATH=${LIBPATH}:${ORACLE_HOME}/lib:${ORACLE_HOME32}/lib:$BINDIR
fi

export LD_LIBRARY_PATH=$LIBPATH
export BOE_USE_32BIT_ENV_FOR=ORACLE_HOME

 

The above worked for me, your mileage may vary.
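A quick sanity check that the variables are wired up the way you expect can be run from the BI user's shell (the paths below are placeholders for illustration, not my real install locations):

```shell
#!/bin/sh
# Placeholder paths; substitute your real 64-bit and 32-bit client homes
export ORACLE_HOME=/oracle/client64
export ORACLE_HOME32=/oracle/client32
export BOE_USE_32BIT_ENV_FOR=ORACLE_HOME

# BOE_USE_32BIT_ENV_FOR names a variable, not a path; confirm the
# named variable is actually set in this shell:
eval "resolved=\${$BOE_USE_32BIT_ENV_FOR}"
echo "BOE_USE_32BIT_ENV_FOR points at: $resolved"

# both client lib directories should end up on the loader path
export LD_LIBRARY_PATH=${ORACLE_HOME}/lib:${ORACLE_HOME32}/lib
echo "$LD_LIBRARY_PATH"
```

If the first echo comes back empty, the referenced variable isn't exported in the shell your BI processes start from.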

 

These are the best resources I have found for the large hurdles and minor annoyances I have come across so far in development.  As I march towards production, I will add things as I find them.

BOE with Apache as a Web Server and a Cluster of Tomcats as an App Server

 

This document details the implementation steps required to set up Apache as a web server, clustered Tomcats as an app server, and BOE as the central processing server.

 

For the entire landscape, I have used four separate machines running Windows Server 2008 SP2 64-bit Edition, each with 16 GB RAM and a quad-core processor.

 

            

Machine      Machine Name    Purpose

Machine 1    V01874          Central BOE installation server

Machine 2    V01513          Apache web server installation

Machine 3    V01289          Tomcat app server installation (Tomcat cluster node 1)

Machine 4    V02000          Tomcat app server installation (Tomcat cluster node 2)

 

The whole of this document is divided into four milestones:

Milestone 1: Ensuring the Apache and Tomcat machines connect successfully

Milestone 2: Deployment of BOE web content (static content into Apache and dynamic content into Tomcat) and their respective configurations

Milestone 3: Clustering of the Tomcats so that load-balancing and failover scenarios for Tomcat as an app server are taken care of

Milestone 4: Performance analysis of Apache

 

 

 

 

 

 

Milestone 1: Apache and Tomcat Handshake

 

Apache Web Server Setup

 

Downloads Required

1.       Download the compiled Apache binaries from Apache Lounge here: www.apachelounge.com/download/

a.        At the time of writing, the build I am using is httpd-2.4.10-win64-VC11.zip

2.       Also download the module package from Apache Lounge, because we will require the mod_jk module

-           At the time of writing it is Modules-2.4-win64-VC11.zip

3.      Be sure that you have installed the Visual C++ Redistributable for Visual Studio 2012. The Apache Lounge web site has a link to this:

                        At the time of writing it is VC11 vcredist_x64/86.exe

 

 

Install and Start Apache HTTP Server 2.4

1.       Unzip the Apache zip file to a folder, e.g. C:\Apache24

2.       This is the default server root in Apache's httpd.conf. If you wish to change the install directory, you will also need to change the ServerRoot and DocumentRoot parameters specified in httpd.conf under the folder \Apache24\conf\

3.       Find mod_jk.so from the downloaded Apache module package and place it under the folder C:\Apache24\modules

4.       Edit the httpd.conf file, and specify values for the following parameters:

-           Listen 80

-           ServerName localhost

-           ServerAdmin youradminaddress@company.com

5.       Disable IIS if it is installed on the machine. If you wish to leave IIS running, you’ll have to manage which server responds to port 80.

6.       Start Apache httpd from a command prompt (Start > Run > cmd), and install it as a service:

 

cd C:\Apache24\bin\

httpd.exe -k install

Test the installation of Apache httpd by launching a browser and typing the URL http://localhost. You should get a page saying "It works". You have now successfully installed Apache HTTP Server 2.4.

 

 

Install Apache Tomcat as an App Server (V01289)

1.       Download Apache Tomcat from the Tomcat website; I have used apache-tomcat-7.0.54.exe

2.       Go to Configure Tomcat -> Java and add the snippet below to the Java Options

-Dcatalina.home=C:\Tomcat7

-Dcatalina.base=C:\Tomcat7

-Djava.endorsed.dirs=C:\Tomcat7\endorsed

-Djava.io.tmpdir=C:\Tomcat7\temp

-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager

-Djava.util.logging.config.file=C:\Tomcat7\conf\logging.properties

-XX:PermSize=256m

-XX:MaxPermSize=256m

-XX:NewSize=171m

-XX:MaxNewSize=171m

-XX:SurvivorRatio=2

-XX:TargetSurvivorRatio=90

-XX:+DisableExplicitGC

-XX:+UseTLAB

3. Change the Initial memory pool and Maximum memory pool to 1024 MB.

4. Restart Tomcat.

 

Now switch to the machine (V01513) where Apache is installed

 

1.     Download the Apache-Tomcat connector module (Download Tomcat Connectors JK 1.2 > JK 1.2 Binary Releases > win32 > jk-1.2.xx > "mod_jk-1.2.xx-httpd-2.2.x.so").

2.     Rename the downloaded module to "mod_jk.so" and move it into the directory "c:\Apache24\modules" (the modules folder where Apache is installed).

3.     Configure Apache - we need to configure the Apache HTTP Server to load and initialize the JK module.

4.     Create a configuration file called "mod_jk.conf" as follows and place it in "c:\Apache24\conf":

 

 

mod_jk.conf

 

# Load mod_jk module

# Update this path to match your modules location

LoadModule jk_module modules/mod_jk.so

 

# Where to find workers.properties

# Update this path to match your confdirectory location

JkWorkersFile conf/workers.properties

 

# Where to put jk logs

# Update this path to match your logs directory location

JkLogFile logs/mod_jk.log

 

# Set the jk log level [debug/error/info]

JkLogLevel info

 

# Select the log format

JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

 

# JkOptions indicate to send SSL KEY SIZE,

JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories

 

# JkRequestLogFormat set the request format

JkRequestLogFormat "%w %V %T"

 

# Send everything for context /ws to worker ajp13

JkMount /ws ajp13

JkMount /ws/* ajp13

 

# Send everything for context /examples to worker ajp13

JkMount /examples ajp13

JkMount /examples/* ajp13

JkMount /docs/* ajp13

 

 

 

 

 

5.     The full contents of mod_jk.conf are attached to this document

6.     Create the "workers.properties" file and place it in "c:\Apache24\conf" as follows:

workers.properties

# Define 1 real worker named ajp13
worker.list=ajp13

# Set properties for the worker named ajp13 to use the ajp13 protocol
# and connect on port 8009 (the port must match the AJP connector
# configured in Tomcat's server.xml)
worker.ajp13.type=ajp13
worker.ajp13.host=v01289.dhcp.pgdev.sap.corp
worker.ajp13.port=8009
worker.ajp13.lbfactor=50

7.     Configure Tomcat - The default configuration in Tomcat's "conf\server.xml" starts the AJP1.3 service via the following configuration, on TCP port 8009 (remove the comments if these lines are commented out).

<!-- Define an AJP 1.3 Connector on port 8009 -->

<Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />

8.     Finally, go to the httpd.conf file and add the following line:

Include conf/mod_jk.conf

 

9.     Restart Apache and load http://localhost/examples/ . You should see the examples being served from the Tomcat machine via Apache.

 

With this, we can rest assured that the connection between Apache and Tomcat is working.

 

Next, we need to configure Apache and Tomcat in a more custom, BOE-specific manner, so that after wdeploy is run, Tomcat serves the BOE content and Apache talks to Tomcat to fetch it.

Once successful, comment out the previously included line in httpd.conf:

Include conf/mod_jk.conf

 

 

Milestone 2: Deploying the BOE static content in the Apache web server and the dynamic files into Tomcat, and bringing the entire BOE setup up and running with Apache as the entry point

BOE Specific Apache Configuration:

1.      Download the attached file httpd-bi41.conf and place it in the folder C:\Apache24\conf\extra

 

 

httpd-bi41.conf

 

#====================LoadModules======================

LoadModule jk_module modules/mod_jk.so

LoadModule headers_module modules/mod_headers.so

LoadModule deflate_module modules/mod_deflate.so

LoadModule cache_module modules/mod_cache.so

LoadModule cache_disk_module modules/mod_cache_disk.so

 

#====================Configure mod_jk==============

# Where to find workers.properties

# Update this path to match your confdirectory location (put workers.properties

#next to httpd.conf)

JkWorkersFile conf/workers.properties

# Where to put jk shared memory

# Write shared memory to the logs directory

JkShmFile logs/mod_jk.shm

# Where to put jk logs

# Update this path to match your logs directory location (put mod_jk.log next to

#access_log)

JkLogFile logs/mod_jk.log

# Set the jk log level [debug/error/info]

JkLogLevel    info

# Select the timestamp log format

JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

 

#====================Configure mod_headers==============

FileETag None

Header unset ETag

   <FilesMatch "\.(gif|jpe?g|png|html?|js|css)$">

                Header set Cache-Control "public, max-age=315360000"   

   </FilesMatch>

 

#====================Configure mod_deflate==============

SetOutputFilter DEFLATE

#Highest 9 - Lowest 1

DeflateCompressionLevel 9

# Don't compress images

SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary

#Optional

#Logging

DeflateFilterNote ratio

LogFormat '"%r" %b (%{ratio}n) "%{User-agent}i"' deflate

CustomLog logs/deflate_log.log deflate

 

#====================Configure mod_cache==============

<IfModule mod_cache.c>

#Address the Thundering Herd identified in Apache’s Caching Guide

CacheLock on

CacheLockPath C:/temp/mod_cache-lock

CacheRoot C:/cacheroot

CacheLockMaxAge 5

#This parameter tells Apache to ignore unique session identifiers when caching

#static content.  SAP BI Platform 4.0 uses the strings jsessionid and bttoken to

#identify user sessions.

CacheIgnoreURLSessionIdentifiers jsessionid bttoken

#Don’t cache cookies as they are unique per user

CacheIgnoreHeaders Set-Cookie

#Enable mod_disk_cache instead of mod_mem_cache

<IfModule mod_cache_disk.c>

CacheRoot C:/cacheroot

CacheEnable disk /

CacheDirLevels 2

CacheDirLength 1

</IfModule>

</IfModule>

#====================End httpd-bi41.conf==============

 

 

2.       Include this line at the end of httpd.conf:

Include conf/extra/httpd-bi41.conf

 

 

wdeploy Settings in the Central BOE Installation Machine

1. Go to <Install_Dir>\SAP BusinessObjects Enterprise XI 4.0\wdeploy\conf

 

2. Open config.Apache

Change the following according to the screenshot provided below:

ws_dir = <WebServer_Install_dir>
deployment_dir = <WebServer_Install_dir>\htdocs
plugin_install = <WebServer_Install_dir>\plugins

 

          111.jpg              

 

 

3. Save and exit

4. Go to <Install_Dir>\SAP BusinessObjects Enterprise XI 4.0\wdeploy\conf

5. Open config.Tomcat7 and change the following:

    as_dir=<Path where Tomcat is installed>\Tomcat7
    as_instance=localhost

112.jpg

 

Deployment Using the wdeploy Command-Line Tool:

1.       Open a command prompt and go to the following path.

2.       <Install_dir>\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\Wdeploy

3.       Run the following command :

wdeploy.bat Tomcat7 -Das_mode=split -Dws_type=apache predeployall

4.       This segregates the static content and dynamic content required by Apache and Tomcat.

5.       Once this is done, you will get a BUILD SUCCESSFUL message; refer to the screenshot below for how it should look.

113.png

 

6.       Ensure over the network that the Apache24 folder on the Apache machine and the Tomcat7 folder on the Tomcat machine are accessible to the central BOE machine; otherwise, share those folders with the Windows AD account under which BOE is being accessed.

7.       Now we are ready to do the actual deployment. At the same level in the command prompt, run the following command:

wdeploy.bat Tomcat7 -Das_mode=split -Dws_type=apache deployonlyall

8.       After the wdeploy step is done, check the BOE-specific configuration files copied into the Apache conf folder; the updated paths may be incorrect, beginning with //….

They need to be changed to the absolute path of the Apache24 folder.

9.       Open a browser on the Apache machine and launch http://localhost/BOE/BI; you should see the BI launch pad. The same can be accessed via the IP address of the Apache machine.

10.   If the authentication option is not appearing in the BI launch pad, go to Tomcat -> webapps -> BOE -> BILaunchpad.properties and enable authentication.visible=true

 

# Choose whether to let the user change the authentication type.  If it isn't shown the default authentication type from above will be used

authentication.visible=true

 

 

Milestone 3: Setting up two Tomcat nodes so that we can handle load balancing as well as failover scenarios between the app servers

 

Configuration required on the Apache machine:

1.       Go to workers.properties and add the second Tomcat node's details with the load-balancing factors.

2.       Refer to the code below for our updated workers.properties file.

workers.properties

worker.list=tomcatlb,jkstatus

# Set properties for worker named cluster1 to use the ajp13 protocol,
# and connect on port 8009
worker.cluster1.type=ajp13
worker.cluster1.host=v01289.dhcp.pgdev.sap.corp
worker.cluster1.port=8009
worker.cluster1.lbfactor=50

# Set properties for worker named cluster2 to use the ajp13 protocol,
# and connect on port 8009
worker.cluster2.type=ajp13
worker.cluster2.host=v02000.dhcp.pgdev.sap.corp
worker.cluster2.port=8009
worker.cluster2.lbfactor=50

# Load-balancer worker configuration
worker.tomcatlb.type=lb
worker.tomcatlb.balance_workers=cluster1,cluster2

# Status worker for monitoring
worker.jkstatus.type=status

Configuration required on the Tomcat machines:

1.       Go to each Tomcat machine's server.xml file and add the jvmRoute parameter with the worker name defined for that node in the Apache machine's workers.properties.

2.       In my case, the two node names defined in workers.properties for Apache are cluster1 and cluster2.

3.       Add the snippet of code below:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="[worker name]">
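For instance, on the first node the resulting Engine element in server.xml would look like this (an illustrative sketch; the child elements stay as delivered):

```xml
<!-- conf/server.xml on the cluster1 node (illustrative) -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="cluster1">
  <!-- existing Realm, Host, etc. remain unchanged -->
</Engine>
```

The jvmRoute value must match the worker name in workers.properties exactly, as mod_jk uses it to route a user's session back to the same node (sticky sessions).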

 

 

Milestone 4: Performance Analysis of Apache

1.       On the Apache machine, make sure workers.properties defines the status worker (as in the file above):

worker.jkstatus.type=status

2.       Go to the httpd.conf file and add the following lines:

JkMount /* tomcatlb
JkMount /status jkstatus

 

Final Architecture after the Setup :

 

114.PNG

This 'blog post was originally written as an introduction to the session run at TechEd Las Vegas, which was re-run as an ASUG webinar on November 3rd, 2015.



I was asked to write a short 'blog post introducing the customer story session to give people an idea of whether it was relevant or of interest (hopefully both).

 

Short version: The session deals with the many challenges of implementing a global BI4 instance for BW (and preparing for HANA) in a complex integrated global environment.

 

The presentation starts with a story that involves beer and pizza in a server room. If that isn't enough reason alone to attend then I will outline the topics I plan to talk about and discuss with people in the room.

 

The central theme of the presentation is the "IT reality" of trying to roll out global buzzwords. Even when the hype coalesces into a mature product, the challenge of implementing it and integrating it into a global enterprise is fraught with complexity and challenges.

 

Management hears "Cloud". I see "legal", "data sovereignty" and "integration".

 

When I attend any conference I pay attention to all the new buzzwords and associated hype, but I file them away for future use, because my reality is trying to deliver last year's buzzwords. In the case of "self-service" and "cloud BI", or even "BusinessObjects BI for SAP BW", I am actually still attempting to deliver buzzwords from several years ago.

 

I am not alone in this. I have been presenting at TechEd, Sapphire and Gartner events for the last four or five years and find my presentations consistently well received by peer customers and partner organisations facing the same or similar challenges.

 

For the last five years our organisation has been implementing a single global instance of SAP ERP with an accompanying global BI reporting solution (deploying BusinessObjects BI4 against SAP BW 7.3).

 

The solution was built "in the cloud" but this did not involve little fluffy things featuring unicorns and rainbows. Deploying in the cloud was tough. This is what deploying in the cloud looked like to me:

TEC213_26721_Presentation_2_pdf__page_10_of_45_.jpg

The above diagram is just of the BI platform but that was only part of the story. The integration of the BI platform (SAP BI4) with SAP BW (as both reporting source and 3rd party authentication entitlement system) was a challenge in itself but we also had to integrate with NW Portal (for presentation) and with SAP Solution Manager (for CTS+ and monitoring) and with Identity Access Management. Each of those integrations has pains and challenges beyond anything I could have imagined (and I have quite a good imagination).

 

There is plenty of advice out there about selecting the right products from the BI suite to meet your use case; unfortunately, not much of it takes "reality" into account. So I cover that as well. For example, two years ago the "right" product "on paper" for our requirement was Design Studio, but "in reality", for our requirements, it was not ready. So what did we do in the interim? I'll talk about that.

 

Need new features or fixes? Think that "upgrading to the next version" or "applying a service pack" is trouble-free? Nope. I'll cover that too.

TEC213_26721_Presentation_2_pdf__page_15_of_45_.jpg

I actually expected the session to then cover integration with Solution Manager at length, but a new "reality on the ground" emerged: the migration of BW on Oracle to BW on HANA. I'll have quite a lot to say on that subject, especially around managing BW growth prior to migration. What I will be getting really worked up about, though, is the way we are having to overthink the HANA architecture to find a compromise between delivering value and managing high license costs.

TEC213_26721_Presentation_2_pdf__page_28_of_45_.jpg

You will be pleased to know that, once I have got all that off my chest I will actually be talking about the value we delivered from our reporting capabilities.

TEC213_26721_Presentation_2_pdf__page_38_of_45_.jpg

The topic I end with is the one I normally get most worked up about, the buzzword that has been the bane of my life for many years, "self-service". If you were at SAP Insider for my session there then you may already have an idea what happens to that buzzword in my presentation. If not (or if any of the above sounds like it might be useful, insightful or just plain entertaining) you will just have to come along to the session and find out.

 

Debate, discussions and heckling welcomed before, during and after the presentation. Hopefully see you there. For those who can't attend I will upload the session slides after the event.

 

Twitter: @CheOpsLaw

How to fix error when the OLAP connection gets deleted

 

 

Applies to:

SAP BI Analytics. Business Intelligence Tools | BI & Analytics | SAP

 

 

Summary

This document describes how to fix the error that occurs when a developer accidentally deletes an OLAP connection.

Author           : Lakshminarasimhan Narasimhamurthy

Company     : Electronic Government Authority, Ras Al Khaimah

Created on  : 04/SEP/2015

 

 

Author Bio

Lakshminarasimhan Narasimhamurthy is a BW- and ABAP-certified consultant who has worked on multiple implementation, development and support projects within India and abroad in SAP BW, BI and ABAP. He has worked for Fortune 500 clients such as Nestle, Warner Bros and General Electric. He is strong in HANA modeling and in BO/BI tools such as WEBI, Dashboards, IDT and Lumira.

 

 

Scenario –

We have a report created over an OLAP connection, and another developer has deleted this OLAP connection by mistake. This document provides a solution for handling such a case.

 

 

Step-by-step approach –


We have an OLAP connection inside the folder “Lakshminarasimhan”.

The OLAP connection points to a BEx query.

This BEx query is going to be the data source for our WEBI document.

 

Note

OLAP connection can point to

  • Individual BEx query

               (or)

  • InfoProvider (all BEx queries within the InfoProvider are available for reporting)

               (or)

  • BW server (all BEx queries of all InfoProviders can be accessed)

IDT_1.png

I have created a simple WEBI document given below. 

The report shows the difference between the case registration date and the case judgment date, which gives the time taken to deliver the judgment.

The query executes fine. Now another developer deletes the OLAP connection by mistake, and the report throws the errors below.

 

First error, when the user tries to refresh the report –

 

IDT_2.png


Second error, when we open the WEBI report in design mode, edit the WEBI with data, and choose “Data Access” -> “Edit” –

 

 

IDT_3.png

 

Now we check whether the BEx query has been deleted, whether the BEx query's “Allow external access” checkbox has been unchecked, and so on; finally we identify that the OLAP connection is missing. So we create a new connection to the BEx query. In WEBI we have the option “Change Source”.

 

IDT_4.png

 

IDT_5.png

 

Select “Specify a new data source” and then select the BEx query; all the mappings will be tested by the system. The system checks whether all the characteristics and key figures used in the WEBI document exist in the BEx query with the same technical name and description.

 

IDT_6.png

 

If a key figure that is used in the WEBI report is missing from the BEx query, then the particular key figure is highlighted in yellow with a tooltip showing the message below.

IDT_7.png

Reinsert the missing key figure and then try the same process again. Now all of the mappings are complete and successful.

 

IDT_8.png

 

The report is working fine again.

 

Other references --

 

BI Platform Analytics | Business Intelligence | SAP

 

Thanks to my colleagues Balu Renganathan and Gowri Shankar.

Last week I attended the excellent SAP Insider event in Nice. On the final day of the conference I delivered my own presentation "The Battle to Define and Deliver Self-service" detailing the progress made to enable self-service capabilities on a large global BI deployment that I've worked on for nearly five years.

 

By the end of the presentation the audience in the room were unsurprised, or even in agreement, when I stated that I'd had a "Morpheus" moment and was now taking this to its logical conclusion: actually killing off the term self-service! Because this wasn't a slide but something I thought of as we were talking, directly after finishing I composed and posted the following (tongue-in-cheek) tweet:

twit.jpg

Without the context of the presentation (and the accompanying discussion with the audience) more than a few people misinterpreted what I was saying (or missed the joke), so I thought it was worth taking the time to document how I (along with what I felt was the majority of the audience) arrived at this conclusion.


The presentation originated from (and expanded on) a two-year-old blog post I wrote here on SCN: The War on "Self-service". In that post I argued that "self-service" was not "a thing" but "something optionally done by a thing", i.e. that Lumira was not self-service in its own right (and that is how it was being marketed) but a data visualization tool that could be used by IT to deliver a BI document or, optionally, delivered in such a way that end users could modify/create/share their own documents. In short, I argued, self-service is not something you deliver (because it is a capability/verb) but something you optionally enable in something you deliver (something tangible like a product/noun). The original post was unremarkable but attracted some very interesting comments, including this fantastic response from Donald MacCormick of Antivia (and formerly Vice President for Strategy, BI Platform at SAP) which broadly supported my case.


The presentation I delivered at SAP Insider described the current landscape two years later with our implementation of self-service capability built into IT delivered BI. The major topics were:


  • The role of enterprise and solution architecture in defining and delivering BI capabilities, including self-service

  • Using solution patterns and standards to drive consistent behaviors and experiences globally

  • Technical deployment options and challenges with the BI platform (relative to enabling self-service)

 

I'm going to walk through just the relevant slides to explain what led to the "Morpheus" statement. At a very simple level, we have been rolling out a global ERP and BI deployment, replacing regional BI with BEx reporting.

context.jpg

One of the visions was to remove the barrier to adoption that was the creation of change requests to IT every time someone wanted even a minor change in a BI report. We wanted to enable analysts/consumers/superusers to easily modify the flexibly built reports and save personal or team variants.


Early on, because the term self-service is ambiguous and therefore open to everyone having their own opinion of what it is (a large part of the problem, obviously), we defined what we intended to enable within the scope of this project so that we could share it with the people who matter most: the people who use it.
3.jpg

Using mostly Analysis OLAP reports, we easily enabled levels 1 and 2 (while figuring out whether BW Workspaces or HANA views would best support level 4).

4.jpg

5.jpg

These types of self-service-enabled reports were already deployed in production; however, when we played this capability back to the BI user community, a senior exec and BI champion was asked what his expectation of self-service was. His answer wasn't quite what we expected.

8.jpg

Pausing on this slide for a while I confirmed the statement the exec made which was "I get fed up waiting for someone to run a report for me, self-service means access to the BI system and being able to get the report myself". I then asked if anyone in the room would have considered this to be "self-service" and most agreed with me that we would consider this simply using the report. I took this a step further and asked if "printing the report" once it had been run would be self-service, again no, simply "use" of the report.


Well, I asked, what if the user ran the report and then swapped a couple of columns around and then printed it? What if they adjusted the formatting and type of charts, then saved a variant copy in the BI platform to share with other users? Isn't this all simply "using" the report? What if, unsatisfied with this particular report, a new report from the same source was created, isn't that ultimately simply "using" the product?


As most people seemed to agree that everything we had just described as "self-service" was simply some of the many ways of using the system, it followed that self-service was not just "not a thing in its own right"; it wasn't a thing at all. It doesn't exist! That's when Morpheus entered the room and offered us the red pill...

n2jum.jpg

Of course that isn't all there is to say (and that's a fraction of the content we covered) but hopefully it's enough for people to think twice when offered the blue pill of self-service!

 


This is a session I attended last week. These are my notes, and anything stated about the future is subject to change.

 

It was a full house in attendance:

12fig.jpg

 

Below are some highlights.

1fig.jpg

Figure 1: Source: SAP

 

Figure 1 shows themes of cross-client interoperability. SAP said they are looking at the big picture and at how to share amongst solutions.

2fig.jpg

Figure 2: Source: SAP

 

BI4.1 SP6 due in June

 

The ability for users to recall values (BI Variants) is planned first for Web Intelligence

 

Universe on BEx is single source

3fig.jpg

Figure 3: Source: SAP

 

Coming post 2015 is authored universe access to ERP tables

4fig.jpg

Figure 4: Source: SAP

 

Two themes for the Design Studio convergence strategy: BEx and Xcelsius use cases

 

Regarding parallel query execution, the speaker said it is “never a sexy topic when you talk performance”

 

Design Studio 1.5 will still have single-source universe support

 

Geomaps can have up to 10 layers; base layer maps can be changed at run time; alternative street maps are provided

 

More custom templates are provided

5fig.jpg

Figure 5: Source: SAP

 

New chart picker – 22 chart types available

6fig.jpg

Figure 6: Source: SAP

 

When exporting, it is a static data set

7fig.jpg

Figure 7: Source: SAP

 

Planned innovations include calculations at run time

 

Snapshot to analysis office is planned

 

What is shown in Figure 7 is planned for second half of 2015

8fig.jpg

Figure 8: Source: SAP

 

Future direction is typically 12-18 months out; plans include local calculations, multi-source universe support and common annotations

9fig.jpg

Figure 9: Source: SAP

 

Figure 9 shows planned innovations and future directions (12-18 months out) for Analysis OLAP

10fig.jpg

Figure 10: Source: SAP

 

Analysis Office 2.1 is planned for release soon

11fig.jpg

Figure 11: Source: SAP

 

I hope to have another blog to cover this better but Figure 11 covers Lumira-BW integration planned for later this year.


Question & Answer

Q: Authored universes – will it do select statements?

A: The authored universe has stripped-down functionality, focused on the business layer

 

Resources:

Upcoming ASUG Business Intelligence Community W... | ASUG

Don't Delay - Submit your ASUG SAP TechEd Abstract Today

World Premiere SAP Design Studio 1.5 ASUG Annual Conference - Part 1

In another edition of the Diversified Semantic Layer podcast, we were joined by David POISSON to discuss how SAP gets to use SAP BusinessObjects and SAP Analytics to run SAP (the podcast was a while ago, but the write-up is happening now).

 

Here is the embedded video, and below are the highlights.

 

 

00:20 Welcome back David Poisson!

 

02:50 Pleasantries are finally over, and it’s time to start. Apparently SAP has potentially 65,000 end users running on a single BI platform. The Sales organization has 5,000 people who regularly use an Explorer Sales Pipeline and several Web Intelligence reports.

 

05:00 HANA comes up by complete accident – apparently people love that they have zero latency in their reports.

 

05:45 David continues to rub it in – 4.1 has a higher degree of sustainability and so their internal platform support team plays a lot of tiddly winks.

 

07:00 SAP has around 40 servers across 4 different landscapes to serve all that up – 10 VMs in Prod. (Editor's note: I must get to 4.1.)

 

08:20 NO PHYSICAL MACHINES! 6 people manage it, and SkyNet is a far smaller threat.

 

09:44 More “Na-na na-na-na” from David.  Unlimited licensing. Apparently SAP folks are surprised when they get a WebI report anymore, just because of all the choices. RIGHT TOOL FOR THE JOB!

 

13:40 Every SAP IT person is supposed to meet with at least one customer each year. It doesn’t always work out, but a noble goal.

 

15:30 JUICY QUESTION ALERT – What impact does SAP IT (as a customer) have on the product direction?

 

17:55 SAP IT is a pretty decent utility for SAP Support – it’s a place where they can help troubleshoot paying customers’ issues. #labmice

 

19:45 SAP used to have Qlikview. Sounds like nobody misses it.

 

22:15 Jamie gets subversive – “What can’t you deliver?”

 

24:40 Dashboards should explain themselves to the end user.

 

27:30 Looking nice isn’t enough for a dashboard – I need great information to drive utilization.

 

29:00 Timo ELLIOTT is a great internal resource; others, less so.

 

32:20 Mobile is where it’s at.

 

35:25 ADVICE HOT TAKE: If you are new to BusinessObjects, start with 4.1. If you aren’t new, get there anyway.

 

For the original post, please visit the DSLayer website, and to subscribe to the podcast, please visit us on iTunes.

When the current installation does not contain all the necessary features, or some features of the SAP BusinessObjects BI platform (4.0 or 4.1) need to be removed, there is a way to change that without having to uninstall or completely re-install the system.

From the list of installed patches and service packs, select the first installation (whether BI 4.0 or BI 4.1); in the example below, click on SAP BusinessObjects BI platform 4.0 (this is a bit awkward, as your current patch level is SP5 of BI 4.1):

install_path.PNG

A button showing Change/Uninstall instead of Uninstall will be displayed; click on this button to launch setup:

uninstall_change button1.png

Instead of the setup for BI 4.0, this launches the setup for BI 4.1 SP5 (the current service pack level installed in this example). Choose Modify in order to go to the feature selection screen:

Application maintenance.png

The feature selection screen allows you to pick the missing features for the current service pack or patch, not the initial installation (in this example you will install features for BI 4.1 SP5, not BI 4.0):

feature selection.png

Click Next, and the new features will be deployed at the current patch or service pack level.

As probably a lot of you are aware by now, the ASUG BI community is very active and is constantly organizing great webinars for the community.

 

To make sure that people are not "missing out" on these great opportunities, I will post a list of upcoming webinars on a regular basis here.


For all webinars :

 

Start Time : 11:00 AM (CT), 12:00 PM (ET), 10:00 AM (MT), 9:00 AM (PT)


Duration : 1 hour


So here are the webinars for the next 2 weeks:

 

 

 

 

 

  • Jan. 28: Customize SAP BusinessObjects Design Studio Dashboards for the Enterprise

    As a replacement for SAP Business Warehouse Web applications, users learn how to develop SAP BusinessObjects Design Studio dashboards with a standard look and feel. Helpful tips for using style sheets and images are presented. During this session, several dashboard design approaches for performance optimization are discussed and examples given.

 

  • Feb. 2: Running SAP Business Warehouse on SAP HANA on AWS Cloud - Kellogg Story

    Get insights on how Kellogg implemented SAP Business Warehouse on SAP HANA in a hybrid cloud with AWS. Learn how Kellogg worked with AWS for its initial proof of concept (POC) and deployment. Learn how a fully supported SAP BW on SAP HANA scale-out environment can be deployed in less than 30 minutes for POC and dev/test systems in a pay-as-you-go model.



I hope you enjoy these sessions.

 

Please note that these webinars are organized by the ASUG BI group, and to join you need to be an ASUG member.

SAP BusinessObjects Web Intelligence and Crystal Reports for Enterprise now support (from BI 4.1 SP05) manual entry of values for variables.

 

With this enhancement, end users can now simply type values into the manual entry text box instead of selecting each value from the list of values.

Note: Manual entry is enabled on the Web Intelligence and Crystal Reports for Enterprise clients only when the BEx query variable is created with the “Manual Entry/Default value” processing type in BEx Query Designer.

Bex Query Parameter.png

Upon updating your environment to BI 4.1 SP05, manual entry of values is supported for the following variable types:

  • Single value variable
  • Multi single value variable
  • Interval (range) value variable
  • Single keydate variable
  • Formula variable
  • Selection Option variable

 

By default, selection option variables are mapped to a range. In order to use this feature, selection option variables have to be mapped to a multi-value prompt, for which you have to apply the following setting, depending on the client.

Note: Manual entry support is enabled for the other listed variable types by default with BI 4.1 SP05.


For Webi:

  • Webi Rich Client: Run Registry Editor and add the entry “-Dsap.sl.bics.variableComplexSelectionMapping=multivalue” in the following registry key, then restart the client

       registry path.png

  • Webi Java and DHTML Viewer: Launch CMC -> Servers -> WebI Adaptive Processing Server -> Properties -> in the command-line parameters, add the entry “-Dsap.sl.bics.variableComplexSelectionMapping=multivalue”

 

For CR:

  • CR4E: In the config file, add the entry “sap.sl.bics.variableComplexSelectionMapping=multivalue” and restart the client

        Config file is located at “[Install_Dir]\Crystal Reports for Enterprise XI 4.0\configuration\config.ini”

  • CR Viewer: Launch CMC -> Servers -> Crystal Reports Processing Server -> Properties -> in the Java child VM arguments, add the entry “-Dsap.sl.bics.variableComplexSelectionMapping=multivalue”

 

With manual entry of values you can use expressions and patterns to reduce the number of values to be entered and speed up the selection process, depending on the requirement. The following are a couple of examples:

  • Single member: 1
  • Interval: 1 – 5 (do include the spaces)
  • Expressions with operators: >4 , >=4 , <4 , <=4
  • Exclusions: !5 (any key but 5)
  • Combinations: 1 – 10 ; !5 (all the keys from 1 to 10 except 5)
  • Patterns: *1 : all keys that end in 1 (for ex. 01, 11, 21) and *1* : all keys that include 1 (for ex. 01, 10, 11, 12, 13, 21)

Webi runtime.png

The support for manual entry of values does not have any impact on the existing capability to select values from the list of values; users can still continue to use the list of values to answer prompts.

 

With this enhancement, following is the consolidated list of clients supporting manual entry of values for variables.


    Reporting:

  • Web Intelligence
  • Crystal Report for Enterprise

    Dashboarding & Apps:

  • Design Studio

    Agile Visualization:

  • Lumira Desktop
  • Analysis, edition for Microsoft Office
  • Analysis, edition for OLAP
  • Explorer

 

Refer to SAP Note 1869560 for detailed SAP Integration support matrix for BI on BW.

Simon To

BI Mentors Monday

Posted by Simon To Nov 10, 2014

SAP Mentors Mondays are public webcasts where we share information, experience, and success stories with the community. There are a couple of sessions coming up presented by SAP Mentors who are BI-focused. Greg Myers, Owen Pettiford, Jamie Oswald and I are teaming up to present different BI stories.

 

Please mark your calendar. The following is the tentative agenda.

 

Part 1:

Date/Time: Dec 15, 2014,  4:30 PM Eastern/3:30 PM Central/1:30 PM Pacific

Presenters: Greg Myers and I

Topics:  -- I will talk about Location Intelligence dashboarding.

              -- Greg will talk about a monster BI 4.1 deployment with 20 servers in a cluster serving 60,000 users.

 

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Part 2:

Date/Time: Jan 12, 2015,  4:00 PM Eastern/3:00 PM Central/1:00 PM Pacific

Presenters: Owen Pettiford

Topics: -- This session will cover Lumira Server, Fiori Business KPI and Fiori Launch Pad.

 

Access Information:

SAP Connect: https://sap.na.pgiconnect.com/sapmm +1-720-897-6637,, Participant Passcode: 170 133 7730#

 

Owen Pettiford bio from SCN: I am an experienced consultant who has been working with SAP and non-SAP systems since 1991. This experience enables me to help clients understand how to extend SAP in ways that are approved by SAP. My vision is to deliver SAP to anyone, anywhere, on anything. Since 2005 I have worked with clients to create architectures and designs based on service-oriented design principles. I have completed many (30+) enterprise SAP roadmaps for clients across 12 industry sectors, which has given me a wide knowledge of common patterns and problems. I now head CompriseIT with Dean Wood, and since 2008 we have been building apps on top of SAP for many clients, keeping up to date with what is coming next and helping clients understand it and incorporate it into their landscapes. I am an SAP Mentor and have a degree in Computer Science. Specialties: SAP Enterprise SOA, SAP Business Suite, xApps, Composite Applications, Service Oriented Architecture, Integration, Business Process Management, Mobility, HANA, SAP HANA Cloud.

 

 

 

simon

This blog discusses the use case where we can use a list of values generated in the SAP BusinessObjects Information Design Tool (IDT) data foundation layer for SAP HANA input parameters (from an SAP HANA information view).


As we know, we can map SAP HANA input parameters to universe parameters in the data foundation layer of IDT, and this is done by creating a derived table in IDT. If you are not sure how to achieve that, please refer to the document here. You need to write a SQL statement to create the derived table in IDT. I suggest looking at the SQL statement generated in HANA Studio when you preview your data; you can copy the part in green and then use it in IDT (as shown below).


SQL Statement in SAP HANA Studio:




SQL Statement in IDT:




Challenge/Issue/Gap:


When we use SAP HANA input parameters in the Information Design Tool, the list of values created in SAP HANA doesn't come along. For example, the following is an input parameter created in an SAP HANA information view (using SAP HANA Studio). We are using another HANA information view (see "View/Table for value help") which contains the text attribute for distribution channel.



When we preview data in SAP HANA Studio, we can select from the list:




However, this list does not come along with the input parameter in IDT.


Solution/Suggestion:


In this case, we can create a list of values in the data foundation layer of the Information Design Tool from the same HANA information view or data model that contains the text attribute of distribution channel. Use the following steps:


Click on "List of value based on SQL" under "Lists of Values" section in IDT's data foundation


Write the SQL statement and hit "Save" to save the data foundation layer.



Once the list is created, use it to map to the distribution channel parameter.


 


Now, you should be able to carry this list as well as the input parameters to the IDT business layer and use them as universe prompts. Please note that if you are using HANA variables, these variables are visible in the data foundation layer and can be used as universe prompts as well; HANA variables do not have this issue/gap. So, you can use variables instead of input parameters, based on your use case.
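For illustration, the SQL behind such a "list of values based on SQL" is just a distinct projection of the key and text attribute from the value-help view. The schema path, view and column names below are hypothetical placeholders for your own model (_SYS_BIC is the schema HANA uses for activated column views):

```sql
-- Hypothetical object names: substitute your own package/view and columns
SELECT DISTINCT "DISTR_CHANNEL", "DISTR_CHANNEL_TEXT"
  FROM "_SYS_BIC"."my.package/AV_DIST_CHANNEL"
 ORDER BY "DISTR_CHANNEL"
```

Returning both the key and the text column lets the prompt display the description while passing the key value to the input parameter.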

 

If you attended the Developer Wars at ASUG's SAP Analytics and BusinessObjects Conference last month, you know how exciting, energetic and entertaining it is. If you have not attended this event, you might want to make a mental note to check it out next year. My blog and the ASUGNews blog can give you some idea of what the Developer Wars is about.

 

But meanwhile, Megan Fox and Joy Coates from the beneficiary organization, Jobs for The Future, and the winning team from NTT Data have graciously agreed to do a webcast on Wednesday October 15, 2014 from 12:00 - 1:00 PM Central Daylight Time (10 - 11 AM Pacific Daylight Time). Access information will be added later.

 

This is the 3rd year that ASUG is hosting the Developer Wars at the BusinessObjects Conference. It is getting better and better each year, thanks to the countless hours spent by the ASUG volunteers and the wonderful ASUG staff. And of course, the fantastic MC performance by our fellow SAP Mentor Jamie Oswald. This is a great representation of what a user community is about.

 

This year we have 4 teams entering the contest. Three of them are SAP partners and one consists of SAP customers (code-named The Expendables). The NTT Data team, code-named Optimal Optimizers, won the contest, and they are going to show you their winning solution.

 

Please mark your calendar and join us on this webcast.

 

simon

 

 

This is the access information:

SAP Connect: https://sap.na.pgiconnect.com/sapmm +1-720-897-6637,, 170 133 7730#

US iPhone: +1-720-897-6637,, 378 224 4518#
US BBerry: +1-720-897-6637 x378 224 4518#

Participant Passcode: 378 224 4518#

Country Number

US and Canada   1-866-312-7353

US and Canada   1-720-897-6637

US and Canada   1-646-434-0499

US and Canada   1-484-427-2544

 

 
