
SAP HANA and In-Memory Computing


The SAP HANA Cloud Connector officially supports Java 1.6 and 1.7.


Below are the steps I followed to run the Cloud Connector with Java 1.8.


I downloaded the SAP HANA Cloud Connector for Windows.




Java 1.8 is installed on my system.


Change the go.bat file inside the sapcc- folder.





Run go.bat

Open https://localhost:8443/




I am not sure why go.bat does not ship with Java 1.8 support, as most of the functionality works fine.

The Journey Continues - Episode 8 of 10


Quite often Enterprise Architects (EAs) need to work within the framework their CIOs give them. EAs are challenged to provide innovation and value while cutting costs and simplifying. This week's webinar had Mike Bell, Strategic Engagement Executive, on the call, who spoke to us from the standpoint of what CIOs expect. It was great to get a real-world perspective based on years of experience as a CIO. Mike has been there, through the invention and rollout of SAP from the customer point of view.


To start with, Mike presented the challenge that every Enterprise Architect has to deal with – the Cost to Value Challenge. How can EAs support CIOs in balancing systems risk, cost and time to value? The rest of the presentation dealt with these areas and how SAP HANA can help. Mike illustrated his journey as a CIO as he worked with SAP at the time when SAP HANA was an emerging technology.



Many CIOs have concerns about moving to SAP HANA technology. It is a new technology, and they may not have any experience of the benefits an organization can achieve. In the webcast, the speaker took us through a journey from a vision of what his previous firm wanted, as an SAP customer, to what could eventually be done. This involved asking breakthrough questions in 2011 like: “When can we take an 85-terabyte ERP instance and compact it down into one box, and go from thousands of servers down to three datacenters?”


To help answer these questions, Mike discussed the Gartner PACE Layering methodology. Many enterprise architects are familiar with the Gartner PACE application layering model. It attempts to place all applications in one of three designations:

Weblog 8 Pic 1.png

All images © 2015 SAP SE or an SAP affiliate company. All rights reserved. Used with permission of the author.


This was presented with an interesting SAP overlay to show where the various SAP technologies fit in this framework:

Weblog 8 Pic 2.png


One additional layer was then proposed to expand this framework: the concept of Systems of Discovery, which SAP HANA gives you the ability to build.


From Mike’s experience, he talked about the impact of batch runs for reports and what happens when 48,000 batch jobs don’t go out. It has a real business impact. You can do business differently if you don’t have to worry about batch jobs.


Why Now?

One example shown in the presentation was the way a retailer can take advantage of agile pricing, re-pricing during the day to move product and reduce waste. Normally, pricing is a manual process; with the speed of SAP HANA, you are able to do dynamic pricing optimization.


Another way of meeting the Cost to Value challenge is by introducing SAP HANA as a sidecar. In the webcast, we learned a few ways that SAP HANA can be quickly introduced and provide immediate value to the organization.

Weblog 8 Pic 5.png


An important point the speaker made was how to deal with the impression that implementing SAP HANA is a binary decision: go SAP HANA or not. This is not the case; you can upgrade your ERP and introduce SAP HANA as a “sidecar” to take advantage of new capabilities. You can do this in the cloud as well, with an attractive cost structure as you grow into it.



How to Start

Mike proposed implementing SAP HANA as a Sidecar technology to your ERP system so that you can introduce new technology that delivers immediate value. He also proposed that any development activities done through this needed to be delivered in 12 weeks. Why 12 weeks? That is the amount of time he felt you could keep the interest and momentum of the organization and management support for a project.


Using the analogy of renovating a house: you don't knock the house down to build a new foundation; you can paint a room without rebuilding the house.

Weblog 8 Pic 4.png


Towards the end of the presentation, Mike also reviewed the concept of Design Thinking as a process to envision new business capabilities and processes. I liked the quote “Let’s find something real that is valuable right now” as a driver to try the design thinking process.


On the call we heard what a design thinking exercise looked like and how one company arrived at new business processes that delivered value to the organization. You will want to watch the webcast to see how they went through this process, and how the outcome delivered £200M of benefit four years earlier than originally anticipated.


There were many concepts for Enterprise Architects in this webcast that would be worth your while to review - from someone with real world experience. After viewing the webcast, you should be able to answer the questions proposed in the title of the presentation: Why HANA, Why Now and How to Start. The speaker touched on PACE Layering with SAP applications, SAP HANA as a Sidecar implementation, and how Design Thinking can help envision new business processes where SAP HANA can help out.


The webcast replay link: http://event.on24.com/wcc/r/1019548/F4C30A43B6FAC57DCEA804FE744A7A18


Webcast Materials on ASUG.com: https://www.asug.com/discussions/docs/DOC-42375


A few of the webcast attendee key takeaway comments:

  • Pace layering (with SAP Applications).
  • When ECC 6 support goes away, new features will only be available on HANA going forward.
  • Think about ECC 6 on HANA like renovating your house, help to build a much better business case.
  • How to run a design thinking process.
  • Design Thinking was new and intriguing to me. I will look to apply that.
  • How to think differently in terms of the life cycle and maintenance of an organization's application environment - thinking in the 3 Gartner types or 4 Mike Bell classifications of systems necessary to run or drive a business.
  • Radical transformation of thinking when in a HANA environment.
  • Getting to the top part of pace layering (faster).
  • Rethink how Pace Layering can be enhanced with HANA 12 week quick release projects.
  • The business value that can sell Hana.
  • CIOs can get the quick payback on IT investments that they need.
  • SAP HANA can be used to deliver value quickly through sidecar projects.
  • Design Thinking can help with use cases for HANA.
  • I'm not sure that I fully understand the quick value return at the Systems of Innovation level but that is very intriguing and something I will spend time to understand more clearly so I can share that with our Management Team.



In the next webcast scheduled for October 6th, the speaker will be covering “Implications of Introducing SAP HANA into Your Environment”.


Complete Webcast Series Details https://www.asug.com/hana-for-ea


The final webcast will occur at 12:00 p.m. - 1:00 p.m. ET

October 13, 2015: Internet of Things and SAP HANA for Business

A few months back I was offered the opportunity to speak at SAP TechEd 2015 in Las Vegas. The first thing that crossed my mind was the question of what the right subject to cover would be. Since I did not want to present just for the sake of presenting, I had to find a topic that would not be redundant with the presentations from SAP, that would address an area that is not completely clear to everyone (where I could bring additional value), that would be within the scope of my expertise, and that would be seen as attractive by SAP and ASUG, who sponsor the event.


I was lucky to get my hands on SAP HANA technology just a few weeks after SAP HANA was released to the market in general availability. After an initial period of experimenting with different job roles around SAP HANA (being responsible for installation and configuration, designing the security concept, configuring data provisioning, doing modeling, etc.), I decided to settle on the subject that is probably closest to my heart: SAP HANA architecture, infrastructure and deployment options. This is the topic I selected for my presentation this year: on-premise deployment options for SAP HANA.


You might wonder whether an on-premise discussion is still relevant now that we are able to host SAP HANA in the cloud. The answer is yes. The first reason is the simple fact that there are still customers that have not yet fully embraced the cloud and are still looking at options for deploying SAP HANA in their own data centers. The second reason is that cloud vendors need to follow the same rules as everyone else to ensure that the result is SAP certified; this means that their cloud solutions are based on similar principles as on-premise deployments. Understanding the advantages and disadvantages of the individual on-premise deployment options can help you understand the limitations of individual cloud offerings.


The topic of SAP HANA deployment options is already covered quite well by SAP, so is there anything new to offer? I believe there is. SAP is doing a great job of opening up SAP HANA options by introducing topics like TDI (Tailored Datacenter Integration) and virtualization, but since they do not wish to give up on their commitment to deliver only the best performance, they keep releasing new sets of rules prescribing configuration details. The result is that today there are many different ways SAP HANA can be deployed (appliances, TDI, virtualization, application stacking via MCOD, MCOS or MDC), but it is incredibly difficult to stay clear on what the regulations (and limitations) are and which options can be combined.


And this is where I decided to approach the subject from a different angle. SAP typically focuses on individual options in detail, usually covering one option at a time and looking at simplistic examples to illustrate the approach. Here I intend to do exactly the opposite: first to briefly look at the individual options from an extreme point of view (how far we could potentially go), and then to outline how all these options could be combined together.


As you can see, the subject is quite huge, and since I was given only one hour (the standard time allocated for ASUG sessions), I had to make a tough selection of what would be presented and what would not. Therefore I decided to move SAP HANA Business Continuity to a separate session (EXP27127) and to leave some topics, like SAP HANA Dynamic Tiering, for another time.


So what will be covered in the ITM228 session? We will start by looking at the situation with appliances, providing a basic overview of the different models across all hardware vendors. Then we will look at SAP HANA Tailored Datacenter Integration (TDI) with all its phases and approved options, and review SAP HANA virtualization options with a focus on VMware. We will then cover ways to stack data from multiple applications on a single SAP HANA server or virtual machine (MCOD, MCOS, MDC), and at the end we will look at how to combine all these options: what is supported versus which combinations should be avoided.


In the SAP HANA Business Continuity session (EXP27127) we will take a closer look at the two most typical options: SAP HANA Host Auto-Failover and SAP HANA System Replication. I have prepared animations illustrating how these options are designed to work and how SAP HANA behaves during takeover. At the end of the session we will outline the most typical deployment scenarios for SAP HANA Business Continuity.


The screenshots below show examples of the content that will be presented during the sessions. With this I would like to invite you to my sessions (ITM228 and EXP27127); I look forward to meeting you in person at SAP TechEd 2015 Las Vegas. Have a safe trip.


     Example 1: [ITM228] Overview of available appliance models and their usage.


     Example 2: [ITM228] Visualization of SAP HANA stacking options and their approved usage for production.


     Example 3: [EXP27127] Overview of typical single-node SAP HANA Business Continuity deployment options.


I would like to express big thanks to Jan Teichmann (SAP), Ralf Czekalla (SAP), Erik Rieger (VMware), John Appleby (Bluefin) for reviewing the slide deck and providing suggestions for improvement.

Invite colleagues to join the SAP HANA International Focus Group!

To celebrate SAP's signature corporate volunteer initiative, the October Month of Service (MOS), a campaign is being held to invite (and confirm) the next 1,200 members of the SAP HANA International Focus Group (iFG), a community of customers, partners, and experts focused exclusively on SAP HANA implementation and adoption. We need your help to achieve this goal!

Here’s how it works (see below for more details):

  1. INVITE: The iFG team will donate $1 per new member, up to 1,200 new members confirmed, to two great causes: “Doctors Without Borders” and “The Hope Foundation” (India).
  2. REGISTER: Forward this link to colleagues who value SAP HANA – www.saphanacommunity.com – and encourage them to support a great cause and enjoy the benefits of membership!
  3. SOCIAL: Tweet or e-mail your own message, or share: “Join the #SAPHANA iFG community. Help us reach our goal of 1200 new members in 30 days. Visit >> www.saphanacommunity.com”
  4. WATCH: Current and new members can track progress by visiting the SAP HANA iFG Jam group and watching the member count in the upper left-hand corner grow.

SAP HANA Helps Humanity
As SAP HANA is a strategic initiative for your organization, SAP “approaches corporate social responsibility (CSR) strategically – in order to ensure a sustainable future for society, our customers, and our company. By focusing our talent, technology, and capital on education and entrepreneurship, we strive to enact positive social change through economic growth, job creation, innovation, and community.”


The SAP HANA iFG selected these two organizations based on their great teaming with SAP, customers, and partners around the globe, and the synergy with the charities selected for the SAP HANA Innovation Awards 2015. We want a fun way to grow the community and make a social impact during the Month of Service!


Thank you for considering inviting your SAP HANA colleagues to the SAP HANA iFG community and joining us as HANA Helps Humanity. This initiative will last from October 5 to November 5; we hope to surpass our goal!

Click HERE if you're already a member. If not, click here for an invitation to join, or email saphanacommunity@sap.com if you have any questions!


Background Information:

The Hope Foundation (India)
The HOPE Foundation works to bring about change in the lives of children, young people and vulnerable individuals. They educate children, provide healthcare, and train young people and women in skills for livelihoods. Their team of 550 people, plus many more volunteers and partners, works in 26 cities in India through over 100 programs and community-based services. Their mission is to bring hope to those with none and change the lives of everyone they work with, including their staff, donors, volunteers and partners. http://www.hopefoundation.org.in/

Doctors Without Borders

Doctors Without Borders helps people worldwide where the need is greatest, delivering emergency medical aid to people affected by conflict, epidemics, disasters or exclusion from health care. http://www.doctorswithoutborders.org/about-us


Joining the SAP HANA International Focus Group (iFG) Jam Community!

This exclusive community provides a single, central, global location for unique SAP HANA updates available only to our members.

Benefits include:

  • Access to private & selected webinars with SAP HANA Experts
  • On-demand recordings and slides from many popular topics (e.g. HANA SPS10, Dynamic Tiering, Modeling, Hadoop Integration, etc.)
  • SAP TechEd updates / sessions specific to SAP HANA
  • Early access to SAP HANA related product updates
  • Unprecedented global networking around SAP HANA topics
  • Insights from SAP HANA experts from around the world
  • 1 free ticket to a major SAP conference for the first 10 customers who agree to a 1-hour HANA Spotlight webcast.










For quite some time, I have been working with the team on SAP HANA System Replication, mainly focused on an HA/DR proof of concept (POC) for CRM on HANA / Suite on HANA (SoH).

CRM on HANA is a scale-up solution. For the HA part, we prefer SAP HANA System Replication within the same datacenter, whereas for DR we leverage storage replication across datacenters.

There are two aspects:

- SAP HANA System Replication setup and failover testing

- CRM HA: extending SAP HANA System Replication into an HA solution for CRM

Due to business criticality, the CRM HA failover should be automatic (Auto-Failover), with zero data loss.


There are some technical points in this regard:

SAP HANA System Replication is primarily a Disaster Tolerance (DT) / Disaster Recovery (DR) Solution and NOT a full-fledged HA solution.

• HANA System Replication is NOT Host Auto-Failover

• HANA System Replication synchronizes data between two data centers (Site A and Site B)

• HANA System Replication works only for Scale Up


In this blog, I will discuss the possibility of turning SAP HANA System Replication into an automated failover. I will not cover how to set up the systems to perform SAP HANA System Replication itself.


My recommendation, and the best solution in the industry as of today, is the following:

A combination of the SUSE Linux Enterprise High Availability Extension (SLES HAE) cluster with SAP HANA System Replication. As of today, however, SLES HAE takes care of the HANA database only; it is not fully SAP application-aware.


Without SLES HAE:

Yes, HANA System Replication can be used as an HA solution if the connections from database clients that were configured to reach the primary system are "diverted" to the secondary system after a failover in an automated way (via IP redirection, DNS redirection, etc.), along with the SAP HANA Service Auto-Restart watchdog function. But again, we have to take care of the Host Auto-Failover functionality ourselves.

Remember, used this way, SAP HANA System Replication can serve as the main HA failover mechanism for zero or near-zero downtime maintenance or failures.


Prerequisites/Assumptions:

- SAP HANA System Replication is already configured as per the SAP standard guide.

- DB takeover from the primary to the secondary node works perfectly.

- The people/team have the required skill set and the proper access and authorization to perform the activity.


Preparation on the ABAP application server:

- Increase rdisp/max_wprun_time from its default value of 300 seconds. It should be greater than the duration of the DB takeover from the primary to the secondary node.

- Set the parameter rdisp/wp_auto_restart = 0.

- Set the parameter dbs/hdb/quiesce_check_enable to "1" (the default value is 0).
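Taken together, the three settings above might look like this in the instance profile. This is only a sketch: the 3600-second value is an example and must exceed your measured takeover time.

```
# Hypothetical ABAP instance profile fragment (values are examples only)
rdisp/max_wprun_time         = 3600   # must exceed the DB takeover duration (default 300)
rdisp/wp_auto_restart        = 0
dbs/hdb/quiesce_check_enable = 1      # default is 0
```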


Just before the takeover, we create a file named "hdb_quiesce.dat" using the touch command in the DIR_GLOBAL directory (i.e., /usr/sap/<SAP_SID>/SYS/global).

This suspends the connections between the application server and the database server (the primary node, in this case); one can check this via the R3trans command.

Newly started ABAP processes do not open a connection to the database until the file is removed: the SAP application checks, at the interval set by the dynamic profile parameter dbs/hdb/quiesce_sleeptime (default value 5 seconds), whether the file "hdb_quiesce.dat" still exists in the DIR_GLOBAL directory. So, when the secondary DB node is fully active, one can verify via R3trans; if it succeeds, we remove the "hdb_quiesce.dat" file. Now the application can connect to the HANA database again, but actually to the secondary node. One can also reset the parameter values once the activity is over.
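The quiesce/unquiesce steps above can be sketched as two small shell helpers. This is a sketch, not the official procedure: the SID and DIR_GLOBAL defaults are placeholders, and the R3trans checks are left as comments because they need a live system.

```shell
#!/bin/sh
# Sketch of the quiesce sequence from SAP Note 1913302 (paths are placeholders).
SID=${SID:-TST}
DIR_GLOBAL=${DIR_GLOBAL:-/usr/sap/$SID/SYS/global}

quiesce() {
  # Creating this file suspends app-server -> DB connections; newly started
  # ABAP processes will not connect until it is removed.
  touch "$DIR_GLOBAL/hdb_quiesce.dat"
  # Verify the suspension with: R3trans -d  (should report no DB connection)
}

unquiesce() {
  # Remove the file once the secondary node is active and R3trans succeeds.
  rm -f "$DIR_GLOBAL/hdb_quiesce.dat"
}
```

On a real system you would call quiesce just before the takeover and unquiesce after R3trans confirms the secondary node is reachable.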


But during the above DB takeover process, we have to make the secondary DB node the default DB node for the SAP application. The required IP address change and restart of network services should be performed via scripts to avoid confusion and errors.


A little bit complicated and hard to follow? For that reason, I have created a flow chart.


Flowchart for Host Auto-Failover while using SAP HANA System Replication

Hope it is clear now.


We have tested the whole scenario a few times, and it worked fine in all cases.


There are some restrictions, listed below, which need to be considered:

- Long-running database transactions, such as background jobs, are not interrupted during this activity.

- Here, only the application-to-database connection is closed or suspended. External connections, e.g. the connection between this HANA system and the SAP Solution Manager system, are not interrupted.

- This activity is only applicable to the ABAP application server. Database connections from the Java stack are not interrupted.


By the way, as the connection from Solution Manager stays alive during the activity, one can leverage an auto-reaction method along with scripts to perform the whole scenario. We have tested that in our environment as well, and it worked smoothly.


For more details, consult SAP Note 1913302 - HANA: Suspend DB connections for short maintenance tasks.

I've seen many posts on how to set up HANA System Replication and its takeover; however, few if any of those posts cover client reconnect after sr_takeover.


To ensure the client can seamlessly find the active HDB node (no matter whether primary or secondary), we can use either IP redirection or DNS redirection. In this blog, I'll focus on simple IP redirection, as it is easier and faster and has fewer dependencies compared to DNS redirection.


For detailed info on IP and DNS redirection, please refer to these guides:



Introduction to High Availability for SAP HANA

How to Perform System Replication for SAP HANA



First of all, we need to define a virtual hostname/IP and create them in your DNS. Below are the sample virtual hostname/IP and physical hostnames/IPs used:


Virtual IP/Hostname: [10.X.X.50 / hanatest]

Primary Physical IP/Hostname: 10.X.X.20 / primary1

Secondary Physical IP/Hostname: 10.X.X.21 / secondary2


In normal operation, [10.X.X.50 / hanatest] is bound to the primary physical host, primary1.

SAP instances, HTTP, BO, SAP DS, etc. connect to HDB via [10.X.X.50 / hanatest].

During any unplanned outage/disaster, [10.X.X.50 / hanatest] will be unbound from the primary host and bound to the secondary physical host.
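The bind/unbind just described can be sketched as two shell helpers. The 10.0.0.50 address, the 255.255.255.0 netmask and the eth0:0 alias are assumptions chosen for illustration (the real values in this post are redacted); RUN=echo makes the helpers print the commands instead of executing them.

```shell
#!/bin/sh
# Dry-run sketch of moving the virtual IP between hosts (values are examples).
VIP=${VIP:-10.0.0.50}
MASK=${MASK:-255.255.255.0}
ALIAS=${ALIAS:-eth0:0}
RUN=${RUN:-echo}   # set RUN="" on a real host to actually execute

bind_vip()   { $RUN ifconfig "$ALIAS" "$VIP" netmask "$MASK" up; }   # run on the active host
unbind_vip() { $RUN ifconfig "$ALIAS" "$VIP" down; }                 # run on the failed host
```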


And below are the steps for mapping the virtual IP [10.X.X.50] to the host's MAC address in Linux:


1) Bind the virtual IP (10.X.X.50) to the primary physical host:


primary1:/etc/init.d # ifconfig eth0:0 10.X.X.50 netmask broadcast 10.X.X.255 up

2) Check the eth0 entries:


primary1:~ # ifconfig

eth0      Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:C2

          inet addr:10.XX.XX.21  Bcast:10.XX.XX.255  Mask:


eth0:0 Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:C2

inet addr:10.XX.XX.50  Bcast:10.XX.XX.255  Mask:



3) Ping hanatest and confirm it is resolvable:


PING hanatest (10.XX.XX.50) 56(84) bytes of data.

64 bytes from hanatest (10.XX.XX.50): icmp_seq=1 ttl=64 time=0.028 ms

64 bytes from hanatest (10.XX.XX.50): icmp_seq=2 ttl=64 time=0.038 ms

64 bytes from hanatest (10.XX.XX.50): icmp_seq=3 ttl=64 time=0.024 ms


4) For all HDB clients, connect using the virtual hostname [hanatest]:


a) SAP - hdbuserstore:


sidadm 52> hdbuserstore list

DATA FILE       : /home/sidadm/.hdb/XX/SSFS_HDB.DAT



  ENV : hanatest:30515
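If the key still points at a physical host, it can be repointed at the virtual hostname. The key name DEFAULT, user SYSTEM, port 30515 and the password placeholder are assumptions for illustration; RUN=echo prints the commands rather than running them against a live system.

```shell
#!/bin/sh
# Hypothetical hdbuserstore commands (key, user and port are assumptions).
RUN=${RUN:-echo}   # set RUN="" on a real host to actually execute
$RUN hdbuserstore SET DEFAULT hanatest:30515 SYSTEM '<password>'
$RUN hdbuserstore LIST
```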



Log in to SAP, and you'll see that DBHOST points to primary1.


In DBACOCKPIT -> DB CONNECTION, ensure the virtual host is used:


b) HANA Studio:

Services are running on the physical host primary1.


c) ODBC - connect using the virtual host



d) HTTP - xsengine







------------------Unplanned outage *DISASTER*:--------------------------------------------


During a disaster, we will:


i) Ensure the primary HDB is down and not accessible, to avoid any split-brain

ii) Unbind the virtual IP [10.X.X.50] currently bound to the primary physical host. In ifconfig, eth0:0 should no longer be visible after you execute the command below.


primary1:~ # ifconfig eth0:0 10.XX.XX.50 down

iii) Clear the ARP cache on the clients [optional]


iv) Initiate the takeover (sr_takeover) and wait for HDB on the secondary to be up and ready


v) Once HDB on the secondary host is up and running, bind the virtual IP [10.X.X.50] to the secondary physical host:

secondary2:/etc/init.d # ifconfig eth0:0 10.XX.XX.50 netmask broadcast 10.XX.XX.255 up

secondary2:~ # ifconfig

eth0      Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:C8

          inet addr:10.XX.XX.21  Bcast:10.XX.XX.255  Mask:



eth0:0 Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:C8

inet addr:10.XX.XX.50  Bcast:10.XX.XX.255  Mask:



vi) Ping hanatest and confirm it resolves; the virtual host [hanatest] is now bound to the secondary physical host, secondary2:


PING hanatest (10.XX.XX.50) 56(84) bytes of data.

64 bytes from hanatest (10.XX.XX.50): icmp_seq=1 ttl=64 time=0.028 ms

64 bytes from hanatest (10.XX.XX.50): icmp_seq=2 ttl=64 time=0.038 ms

64 bytes from hanatest (10.XX.XX.50): icmp_seq=3 ttl=64 time=0.024 ms


----------------------- End-to-End Client Reconnect Verification -----------------------------

Once done, you can perform end-to-end client reconnect verification without the need for any client-side changes.


a) SAP instances after sr_takeover, now running against the secondary host:


a.i) Developer trace - SAP reconnects OK to the secondary host, secondary2:

B Connection 1 opened (DBSL handle 1)

B successfully reconnected to connection 1

B ***LOG BYY=> work process left reconnect status [dblink       2158]

M ThHdlReconnect: reconnect o.k.


M Tue Sep 29 13:50:13 2015

M ThSick: rdisp/system_needs_spool = false

C FDA DB protocol version from connection 0 = 1


B Tue Sep 29 13:55:16 2015

B Connect to XXX as system with hanatest:30515

C Try to connect as system/<pwd>@hanatest:30515 on connection 1 ...


C Tue Sep 29 13:55:17 2015

C Attach to HDB : (fa/newdb100_rel)

C fa/newdb100_rel : build_weekstone=0000.00.0

C fa/newdb100_rel : build_time=2015-04-15 10:44:35

C Database release is HDB

C INFO : Database 'TST/05' instance is running on 'secondary2'

C INFO : Connect to DB as 'SYSTEM', connection_id=300064


a.ii) SAP status (HDB switched from primary1 -> secondary2)


b) HANA Studio








xsengine: http://hanatest:8005/





Hopefully this blog will serve as a reference for a client reconnect strategy when setting up HANA System Replication. I also hope more consultants become aware of the excellent guides above, which provide detailed info on the client reconnect mechanism and HANA System Replication.



Nicholas Chang

Hello Everyone,


In this blog, let us see how we can bind dynamic images (i.e., based on user input) to the SAPUI5 Image control. Let's take the example of storing images of 100 employees and then displaying them as profile pictures based on employee ID.


First you need to process the images and store them in HANA. Now how do we do that? There are many ways, e.g. using Python, Java, etc., but I chose the Java way, storing them as BLOBs in a HANA table. The BLOB datatype can store images/audio/video up to 2 GB.


Below is the code snippet for opening the image files, processing them, and storing them in a HANA table. Place all your image files in a folder (e.g. C:\\Pictures).

import java.io.*;
import java.sql.*;

public class ImageOnHana {
    public static final String hanaURL = "jdbc:sap://<hostname>:3<instance>15/";
    public static final String hanaUser = "AVIR11";
    public static final String hanaPassword = "ABCD1234";
    public static final String pics = "C:\\Pictures";

    public static void main(String[] args) throws IOException, SQLException {
        Connection conn = DriverManager.getConnection(hanaURL, hanaUser, hanaPassword); // open HDB connection
        String query = "INSERT INTO \"AVIR11\".\"EMP_IMAGES\" VALUES(?,?)";
        File folder = new File(pics);
        File[] images = folder.listFiles();
        System.out.println("*****OPEN FILES NOW****");
        try {
            if (images != null) {
                for (File image : images) {
                    String imgName = image.getName();
                    FileInputStream fis = new FileInputStream(image);
                    PreparedStatement pstmt = conn.prepareStatement(query);
                    // The file name (e.g. 1.JPG) carries the employee id
                    String id = imgName.toUpperCase().split("\\.JPG")[0];
                    pstmt.setInt(1, Integer.parseInt(id));
                    pstmt.setBinaryStream(2, fis, (int) image.length()); // IMAGE column (BLOB)
                    pstmt.executeUpdate();
                    fis.close();
                    System.out.println(imgName + " image upload to HANA successful");
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            conn.close();
        }
    }
}

Rows inserted into “AVIR11”.”EMP_IMAGES” - the IMAGE column uses the BLOB datatype.


To provide this image to the UI, let's create an XSJS service that reads the BLOB data from the table. Make sure that the content type is set to image/jpg.


var empId = $.request.parameters.get("empId");
var conn = $.db.getConnection();
try {
    var query = "SELECT IMAGE FROM \"AVIR11\".\"EMP_IMAGES\" WHERE ID = ?";
    var pstmt = conn.prepareStatement(query);
    pstmt.setInteger(1, parseInt(empId, 10));
    var rs = pstmt.executeQuery();
    if (rs.next()) {
        $.response.headers.set("Content-Disposition", "attachment; filename=image.jpg");
        $.response.contentType = 'image/jpg';
        $.response.setBody(rs.getBlob(1));
    }
} catch (e) {
    $.response.setBody("Error: " + e.message);
} finally {
    conn.close();
}


Note: OData does not support the BLOB datatype, hence the response couldn't be sent via OData.


Done!! We are good to go and can integrate this service with the UI5 Image control!!

<Image src="http://<hostname>:8000/avinash/services/XJ_Emp_Images.xsjs?empId=1"
       width="100%" height="150px">
    <layoutData>
        <l:GridData span="" linebreakL=""/>
    </layoutData>
</Image>

The view.xml snippet above shows a hardcoded Employee ID. For a dynamic Employee ID, give the control an ID, e.g. <Image id="image">, and refer to it in your controller when setting the source.




Voilà, my fav star pic for my Employee ID!!


If your scenario is to upload a file from the UI using an upload button, you can use the SAPUI5 FileUploader control and XSJS to receive the file. The later processing and UI image binding remain the same as above.

Happy Learning !!

Avinash Raju

SAP HANA Consultant


Part I: Why Size matters, and why it really matters for SoH

In the first part of this blog I described some reasons why you want to try and keep your database small.

Apart from cost, there are also some compelling technical reasons why you will eventually come to a hard stop in terms of database growth, i.e. the limit of today's technology. The biggest certified x86 systems on the market today are 16-socket, 12 TB nodes, with one vendor offering a potential 32-socket, 24 TB node.


With future x86 advancements (Broadwell and beyond) SAP may release higher socket/memory ratios (i.e. the use of 64 GB DIMMs), but for the time being we are limited to:

  • 2 sockets: 1.5 TB
  • 4 sockets: 3 TB
  • 8 sockets: 6 TB
  • 16 sockets: 12 TB



Take a look at what you store in your Database


When you look at your existing Business Suite database, what are your high growth areas?


  • Do you store attachments in your DB, i.e. PDFs/jpgs/Word docs/Engineering drawings?
  • Do you use workflow heavily?
  • Do you rely on application logs?
  • Do you keep your processed IDOCs in the DB?
  • Do you generate temporary data and store it?
  • Do you keep all the technical logs/data that SAP produces on a constant basis?



The above does not even look at business relevant data or retention policies.


You will be surprised at how much data stored in your DB is never used by your users, or only very infrequently.

Does this data really belong in a business-critical application that should be running at peak performance all of the time?


Probably not, but there are valid (and invalid) reasons why it is stored there.




Let's take attachments as an example.


Think back to when your SAP system was first implemented. There were probably budget and time constraints, made all the worse
by project overruns.
That great design your solution/technical architect came up with, using an external content/document server that required a separate
disk array, server and document server license, was likely torn up because SAP provided a local table just for the purpose of storing attachments.


The architect lost the argument and the data was stored locally (yes, I have been there).


This scenario actually has two consequences.

a) A large database (I have seen the relevant tables grow above 2 TB)

b) Slow performance for the end user, as you have to access the database, load the relevant object into DB memory, and then into application server memory
    before it is shown to the user.


With a remote document store, the user is passed a URL pointing directly to the relevant object in the document store, bypassing the application servers and DB server, and at the same time reducing the load on the server network.





Workflow

Once a workflow is completed, does it really need to sit in the user's inbox? In some industries, e.g. aerospace, I can imagine you need to keep a record of all workflows, but do they need to be stored online? Would it not be more secure to store them in a write-once/read-many archive where the records cannot be edited?

Again, I have seen workflow tables so large (and over-engineered) that the SAP archiving process can't even keep up with the rate of creation.


Application Logs


Again, how long do you need to keep these? Is there a compliance reason? What is the maximum amount of time these logs are actually relevant?


IDOCs and other transient objects


Once an IDOC is successfully processed, the data will already have been loaded into the relevant tables. After that, the data in the IDOC tables is in most cases pretty much irrelevant; if you have to keep the IDOC data, store the incoming/outgoing IDOC/XML file instead. Large IDOC tables can cause significant performance issues for interfaces.

Is there any other temporary data you create that is truly transient in nature? Consider if you really need to keep it.



SAP can create various logs in the database at an alarming rate. I've seen DBTABLOG (a log of all customizing changes) at 1 TB, and SE16N_CD_DATA (a log of data deleted via SE16N) at 100 GB (what are you doing deleting data that way anyway?).


Business Data Retention Periods


This is the hardest nut to crack. As stated in Part I, disk is cheap. Getting the business to agree on retention periods was nigh on impossible and a battle the poor suffering OPS guys/gals would retreat from.

With In-Memory databases this is a battle line that will need to be redrawn. As stated in the introduction, there are technical limits to how far your database can grow before suffering severe performance degradation, and costs will increase by an order of magnitude more than they did with disk-based technologies.


Hard questions have to be asked.


  • Why do you have to keep the data online?
  • At what point does your data become inactive?
  • Once inactive will you need to change it?
  • Is the data kept for legal/compliance reasons, or just because somebody said they want all data online?
  • If this inactive data is only going to be used for analysis, would it not be better stored elsewhere in summarized form? (This is one of the reasons why BW will not die for a while.)


One area where users complain about archiving is that they have to use a different transaction to get at archived data. You may have a counter-argument now:
with the journey to SAP HANA you may well be considering Fiori, a complete change in user interface. The users will have to re-train anyway, so the complaint becomes moot.




I realize I have not talked much about HANA in this part. Old hands like me will have heard the above again and again in regard to traditional databases. We have often lost the argument, or maybe just thrown disk at the problem rather than getting into the argument in the first place.


With In-Memory databases, a jump from one CPU/memory configuration to the next can mean a doubling in price, rather than the linear increase you see with disk-based databases.


If your In-Memory database is so big that it reaches the limits of current technologies, you may be in big trouble. An emergency archiving project is always nasty. It will be political, your system can slow to a crawl as you frantically use all available resources to offload data, and the end users will complain about the new transactions forced upon them.

The Journey Continues - Episode 7 of 10


Is SAP HANA ready for your data center? How are you going to architect it so that it fits in? These are questions for Enterprise Architects that were answered in this installment of the webcast series SAP HANA for Enterprise Architects. The webcast speaker was Ralf Czekalla, Product Manager at SAP, who delivered a very detailed presentation covering part of the data center readiness story.


WARNING: This is a deep dive presentation so you may need your acronym dictionary as we cover topics like HA and DR, RPO/RTO, VM, TDI etc.


In past webcasts we have had speakers talk about SAP HANA Cloud Platform and HANA Cloud integration.  This presentation was mainly about running SAP HANA on premises. I know there are many customers out there with SAP HANA appliances that are three years old or older. What new options are there out there for backup and recovery? There are many more options for you to consider when it comes time to renew the hardware that powers your SAP HANA implementation. SAP has come a long way!


** As a side note, this webinar comes with one of the most extensive slide decks I have ever seen. You will definitely want to download the presentation materials, as we did not have nearly enough time to cover the depth of material.


In approaching the topic of Data Center Readiness, Ralf divided it up into the following areas:



All images © 2015 SAP SE or an SAP affiliate company. All rights reserved. Used with permission of the author.


I will attempt to highlight some of the things that I found interesting in the presentation.

Throughout the webcast, Ralf spoke to some of the historical points in the development of SAP HANA as well as the roadmap and options going forward. You will see copious references to SAP Technical Notes throughout the presentation, giving you links to the source documents for reference. This is a point-in-time presentation, and many of the slides focus on the SPS 10 release of SAP HANA.


As groundwork for the presentation, Ralf covered some of the existing deployment methods of SAP HANA. This set the stage for talking about many of the aspects of design and setup like multi-tenancy, performance criteria and single instance vs. scale out architectures.


From the webcast, I think one of the most underutilized solutions for SAP HANA is Tailored Data Center Integration (TDI). Many organizations have bought into high-end converged infrastructure and now find that it is underutilized. Running SAP HANA on your existing data center hardware is a way of cutting costs and using what you have. Your hardware still needs to meet the specifications SAP sets, but in these challenging economic times it is another option.



An early-perceived weakness of SAP HANA was the backup and recovery options that were available at launch. It was great to see that the internal capabilities have been updated as well as support for many vendor tools to handle this task. There are many options that Enterprise Architects can include that take into account their existing backup tools or environment.


One of the many gems in the presentation was the discussion around virtualization using VMware. Ralf had a great slide that spoke to the pros and cons of using VMware vs. bare metal.



From the slide we see that there are performance impacts across different SAP HANA tasks. You really need to know what your application is doing in SAP HANA to determine if VMware is the right fit.


Further into the presentation it was obvious that there was not enough time to cover all of the agenda and so details on topics like Monitoring & Administration and Security & Auditing were left for a future webcast.

Ralf included lots of links to external content that you should check out in the slide deck. Here is one on backups and recovery:



I think some of the highlights I took away from the presentation were that SAP HANA has matured over the last few years. Initially it was very weak in deployment, disaster recovery and backup options. Now there are many different solution possibilities based on your performance, availability and management needs.


This webcast was a whirlwind tour of SAP HANA in the data center. The speaker discussed many different aspects that you need to consider as you build out solutions. Review the slide deck for more information; the content is all there.


To view the webcast:



The PDF file of the presentation with over 190 slides is found here: https://www.asug.com/discussions/docs/DOC-42296


A few of the webcast attendee key takeaway comments:

  • zero-downtime maintenance - very impressive!
  • DR, Backup and Recovery, High availability
  • TDI enabling External/Corp Storage.
  • Key considerations for HA, DR and backups.
  • SAP is continuously updating their strategy and product capability


In the next webcast scheduled for September 29th, the speaker will be covering “Why SAP HANA, Why Now and How?”


Complete Webcast Series Details https://www.asug.com/hana-for-ea


All webcasts occur at 12:00 p.m. - 1:00 p.m. ET on the days below. Click on the links to see the abstracts and register for each individual webcast.

September 29, 2015: Why SAP HANA, Why Now and How?

October 6, 2015: Implications of Introducing SAP HANA Into Your Environment

October 13, 2015: Internet of Things and SAP HANA for Business

Hello Everyone,


This blog shows you how to secure the communication between the HANA server and HANA Studio through SSL. This is highly recommended when a lot of sensitive data is handled in the system, which you want to secure from man-in-the-middle attacks. There are multiple documents available on SCN on this topic, but here I want to share my experience of setting this up in a short time.



Prerequisites:

  • HANA Server is installed and running
  • HANA studio is installed in the local system
  • Access to the HANA server
  • Putty / WinSCP tools


HANA Server and client without SSL configured:




Steps need to be performed in HANA Server:

Log in to the HANA server using PuTTY as the root user and check whether the libssl.so file exists. If not, create a symbolic link to libssl.so.0.9.8.
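The symlink command itself is not shown in the blog; below is a minimal sketch, demonstrated in a scratch directory because the real library path (e.g. /usr/lib64) and the right version vary by distribution, and the real step requires root:

```shell
# Sketch of the libssl.so symlink step. LIBDIR is a scratch stand-in for the
# real library directory; 0.9.8 is the version the blog mentions. On a real
# server, run only the ln -s line, as root, in the actual library directory.
LIBDIR="$(mktemp -d)"
touch "$LIBDIR/libssl.so.0.9.8"          # stand-in for the versioned library
cd "$LIBDIR"
[ -e libssl.so ] || ln -s libssl.so.0.9.8 libssl.so
ls -l libssl.so                          # now points at libssl.so.0.9.8
```

On the actual HANA host the mktemp/touch lines fall away; only the ln -s matters.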




Now log in to the HANA server as the “<sid>adm” user.




Create the Root Certificate:

  1. Go to Home directory “/usr/sap/<sid>/home”
  2. Create a directory named “.ssl”
  3. Get into “.ssl” directory


   4.  Execute the following command

openssl req -new -x509 -newkey rsa:2048 -days 3650 -sha1 -keyout CA_Key.pem -out CA_Cert.pem -extensions v3_ca

   5.   Enter the relevant details


   6.   This will create a couple of files (CA_Cert.pem and CA_Key.pem) in the “.ssl” directory



Create the Server Certificate:

  1. Get into “.ssl” directory
  2. Execute the following command and Enter the relevant details

openssl req -newkey rsa:2048 -days 365 -sha1 -keyout Server_Key.pem -out Server_Req.pem -nodes





   3.   This will create a couple of additional files (Server_Key.pem and Server_Req.pem) in “.ssl” directory

   4.   At this time, you will have 4 .pem files under “.ssl” directory




Sign the Server Certificate:

  1. Get into “.ssl” directory
  2. Execute the following command and Enter the relevant details

openssl x509 -req -days 365 -in Server_Req.pem -sha1 -extfile /etc/ssl/openssl.cnf -extensions usr_cert -CA CA_Cert.pem -CAkey CA_Key.pem -CAcreateserial -out Server_Cert.pem


   3.   At this time, you will additionally have one new .pem file (Server_Cert.pem) and one new .srl file (CA_Cert.srl) created under the “.ssl” directory as shown above


Chain the Certificate:

  1. Get into “.ssl” directory
  2. Execute the following command

cat Server_Cert.pem Server_Key.pem CA_Cert.pem > key.pem

   3.   At this time, you will additionally have one new .pem file (key.pem) created under the “.ssl” directory. In total there will be 7 files under this directory


Copy the Certificate:

  1. Get into “.ssl” directory
  2. Execute the following command

cp CA_Cert.pem trust.pem

   3.   This will create one new trust.pem file, as you just did a copy
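Putting the four certificate steps together, here is a non-interactive sketch you can run end to end. It deviates from the blog in a few flagged ways: -subj and -nodes replace the interactive prompts, -sha256 replaces -sha1 (modern OpenSSL builds may reject SHA-1), the -extfile/-extensions options are omitted, the subject names are made up, and a scratch directory stands in for /usr/sap/<sid>/home/.ssl:

```shell
# Non-interactive sketch of the certificate steps above (assumptions flagged
# in the lead-in; file names match the blog's).
set -e
cd "$(mktemp -d)" && mkdir .ssl && cd .ssl

# 1. Root certificate (CA_Key.pem, CA_Cert.pem)
openssl req -new -x509 -newkey rsa:2048 -days 3650 -sha256 \
  -keyout CA_Key.pem -out CA_Cert.pem -nodes -subj "/CN=DemoCA"

# 2. Server key and signing request (Server_Key.pem, Server_Req.pem)
openssl req -newkey rsa:2048 -sha256 \
  -keyout Server_Key.pem -out Server_Req.pem -nodes -subj "/CN=hanahost"

# 3. Sign the server certificate with the CA (Server_Cert.pem)
openssl x509 -req -days 365 -in Server_Req.pem -sha256 \
  -CA CA_Cert.pem -CAkey CA_Key.pem -CAcreateserial -out Server_Cert.pem

# 4. Chain for the server (key.pem), trust store for the client (trust.pem)
cat Server_Cert.pem Server_Key.pem CA_Cert.pem > key.pem
cp CA_Cert.pem trust.pem

# Sanity check: the server certificate verifies against the trust store
openssl verify -CAfile trust.pem Server_Cert.pem
```

The final openssl verify should print Server_Cert.pem: OK, confirming the chain is consistent before you restart HANA and import trust.pem on the client.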



Restart HANA Server:

  1. Go to /usr/sap/<sid>/HDB<InstNo>
  2. Stop the HANA Server using ./HDB stop and then start the HANA server using ./HDB start



Steps need to be performed in HANA Studio:

Copy “trust.pem” to local client:

Using WinSCP Tool copy the trust.pem from “.ssl” directory to c:\temp\


Import “trust.pem”:


  1. As user ‘Administrator’, or with administrative access, import trust.pem into Java’s keystore. This can be done as below
  2. Copy the Java bin directory location from HANA Studio


   3.   Run the Command prompt (with Run As Administrator), and go to Java bin directory location copied above



   4.   Execute the command keytool.exe -importcert -keystore "C:\Program Files\SAP\hdbstudio_Rev93\plugins\com.sap.ide.sapjvm.jre.win32.x86_64_81.0.0\jre\lib\security\cacerts" -alias HANServer -file c:\temp\trust.pem


   5.   Enter the keystore password; the default password for the Java keystore is “changeit”. Once the password is entered, the certificate details will be shown. Enter “yes” to trust the certificate


   6.   Now the certificate will be added to the keystore



Enable SSL Communication:

  1. Close HANA Studio (if it is already open)
  2. Open HANA Studio, go to the Administration perspective, right-click and add the HANA system (MK2 in our case)
  3. Enable “Connect using SSL” in the Connection Properties dialog and click Finish


   4.   Now hover over the added HANA (MK2) system; you will observe a small lock on the system along with an SSL indication in the tooltip, as shown below



SSL has now been configured between the HANA server and HANA Studio, and the communication is secured.


Hope this helps.




At SAPPHIRE this year, you may have seen ConAgra win a HANA Innovation award for the work they have done with SAP on a new solution called SAP Total Margin Management based on SAP HANA.


ConAgra Foods, Inc. is one of North America's largest packaged food companies, with branded and private-branded food found in 99 percent of America's households, as well as a strong commercial foods business serving restaurants and foodservice operations globally. Within this industry, increasing competitive pressure requires more accurate forecasts of future costs in order to maximize margin.


The company partnered with SAP to co-innovate on a margin management solution that provided visibility to costs at the lowest level of granularity, and the forecasting capabilities to model scenarios to better predict the future.


There are two key components to SAP Total Margin Management:


The first provides a better ability to understand the past by:

  • Decomposing complex Bills of Materials
  • Creating models based on history
  • Breaking those models down by customer/product combinations
  • Allowing these models and drivers to be used for forecasting and scenario analysis
  • Breaking everything down to a base level where you are able to compare like with like


The second key component is the ability to efficiently model the future by:

  • Converting drivers to levers
  • Taking inventory position into consideration when projecting the future
  • Having the information available at the level of detail necessary to provide "margin flow analysis", i.e. understanding variances based on price, product mix and volume


A new video showcasing the power of the solution is now available here


Good cost management identifies items where increased cost levels must be understood, addressed or taken into account in pricing and operational planning, allowing you to respond speedily to changing market conditions. SAP Total Margin Management helps you to understand the Profit and Loss Statement at any and every level of the business - by customer, by product, by brand, or by area of responsibility.


SAP HANA is the high-speed in-memory platform needed to process and visualize this amount of information. Users can quickly perform iterative scenarios and "What If" forecasting on large amounts of data.


SAP Total Margin Management is generally available as of May 2015 and is an excellent example of how SAP is working directly with customers to solve real business problems and help their businesses RUN SIMPLE.

With the Rugby World Cup now on, I decided to put some of the SAP kit bag to the test.

25-Sept-2015: We have updated the Lumira Cloud visualisations and added a few more screenshots below.


The latest output of this *should* be automatically republished daily at 22:00 BST to Lumira Cloud, allowing you to interact with it.


Rugby Tweet Analysis v2.png

During the first 7 days of the tournament I have already captured over 1.4 million tweets from the #RWC2015 Twitter feed. I hope to keep the data capture running throughout the tournament.


In this example I have used

1. Smart Data Integration (SDI) within SAP HANA to acquire the tweets from Twitter in real time from the #RWC2015 feed

2. SAP HANA to store and process the data

3. Text Analysis to turn Tweets into a structured form

4. Text Mining to identify Relevant Terms

5. SAP HANA Studio to model

6. SAP Lumira Desktop to create some analytics

7. SAP Lumira Cloud to expose the output



1. Data Acquisition through the SDI Data Provisioning Agent

From HANA SPS 09, Smart Data Integration is available directly in HANA. One of the data provisioning (DP) sources available is Twitter. I won't repeat the steps to set up the DP agent here, as Bob has created a great series of SAP HANA Academy videos of this setup here.

SAP HANA Academy - Smart Data Integration/Quality : Twitter Replication Pt 1 of 3 [SPS09] - YouTube


With the virtual table now available in HANA you can make this real-time by issuing the following SQL.


--Create SDA Virtual Table
--Create a target table
--Create Subscriptions
create remote subscription "HANA_EIM"."rt_trig1"
as (select * from "HANA_EIM"."RWC_R_STATUS" where "Tweet" like '%#RWC2015%')
target table "HANA_EIM"."RWC_T_STATUS";
--truncate table "HANA_EIM"."RWC_T_STATUS";
--Queue the subscription and start streaming.
alter remote subscription "HANA_EIM"."rt_trig1" queue;
alter remote subscription "HANA_EIM"."rt_trig1" distribute;
select count(*) from "HANA_EIM"."RWC_T_STATUS";
--Stop Subscription


This table holds the raw Tweets coming in from twitter

Twitter Data Table.png


Twitter provide a number of columns, the Tweet itself is the most useful of these for this analysis.

Twitter Table Definition.png


With the data now being acquired "automatically" it's possible to monitor the acquisition via the XS Monitoring URL http://ukhana.mo.sap.corp:8000/sap/hana/im/dp/monitor/?view=DPSubscriptionMonitor


3. Text Analysis

As I previously described in Using Custom Dictionaries with Text Analysis in HANA SPS9, for Formula One Twitter Analysis, creating custom dictionaries for your subject area is very easy.

I've added one to include the Rugby teams, their Twitter handles and short names. This new dictionary was included in a new configuration.

HANA Web IDE.png

To turn on Text Analysis for the acquired Twitter data, create the full-text index with the following clause, so that each tweet is analysed in its own language:

LANGUAGE COLUMN "isoLanguageCode"


Text Analysis is really clever and identifies useful elements beyond the basics of who, where and when. The more advanced output is often known as fact extraction; of these "facts", Sentiment, Emotion and Requests are three that could potentially be useful in the Rugby Tweet data.


4. Text Mining the Tweets

Now I wanted to try something more than just sentiment, mentions and emotion. For this I decided to use Text Mining, which is also built into HANA and has been further enhanced in SPS 10 with SQL access to Text Mining functions. Activating Text Mining is very easy; it is done when specifying the full-text index, by adding TEXT MINING ON to the syntax above.


Text Mining has multiple capabilities which apply at a document level; for this exercise I treated each Tweet as a document, which served a purpose. As tweets are by nature very short, though, you don't gain that much additional insight from document-level analysis.




After investigating the Text Mining functions TM_GET_RELEVANT_TERMS and TM_GET_RELATED_TERMS with Twitter data, I found the core Text Analysis functions to be more than capable for my analysis purposes. If, however, I was analyzing news reports, blogs or documents, then Text Mining would be much more appropriate.

Text Mining Output.png


5. HANA Modelling

This piece took the longest and was fairly challenging as you need to model the Tweets with final output in mind.  This turns the structured $TA table into a format suitable for analysis in Lumira (or other BI tool) by identifying the entities and the relationships, Countries, Tweets, Sentiment.


I created 2 Calculation Views in HANA Studio, they are still a work in progress, but are sufficient to give some useful output.

I felt it easier to create 2 as they are at different levels of granularity: one is at the Country level, the other at the Country/Key Word level.


Base Data in the $TA_RWC-TWEETS table

Screen Shot 2015-09-25 at 10.55.17.png

Selected output from the Projection_3 above

Screen Shot 2015-09-25 at 10.53.47.png

Aggregation_2 from the Calc View above, showing fields being used.

Screen Shot 2015-09-25 at 10.56.54.png




6. SAP Lumira Desktop to create some visualisations

With the modelling and manipulation taken care of in HANA, using Lumira is then easy (although you can spend some time perfecting your final output).  Here we can build some visualisations as below and then encapsulate them into a story board.

Screen Shot 2015-09-23 at 10.34.32.png

My original visualisations have now been greatly enhanced by Daniel Davis into a great Lumira Story.

Daniel has also created an England Rugby wall chart available for download from here http://www.thedavisgang.com/

Screen Shot 2015-09-23 at 10.46.32.png

7. SAP Lumira Cloud

To share the output in an interactive way we can publish the visualisations, stories and dataset to SAP Lumira Cloud. There is one crucial story option, "Refresh page on open", required to update the visualisations within the story; by default it is OFF. Set it to ON and the story gets updated as well.


Lumira Desktop has a scheduling agent built in; once enabled, it can automatically refresh and republish to Lumira Cloud.

I have set this to refresh the Rugby Tweet Analysis every day at 22:00


Within Lumira Cloud we now need to make the story public; this is set under the Story options.

Lumira Cloud Share.png

Change Access.png


We now have the URL which can be shared with others, for ease of consumption I created a Short URL pointing to this long URL with http://tiny.cc/


To View the full interactive Lumira Story Board please use the link below



Tweets over Time.png

Hi again,


My name is Man-Ted Chan and I’m from the SAP HANA product support team. This is part 2 to my High Availability/System Replication blog, part 1 can be found here.


This will continue where the last blog left off


How to turn off replication

First we will unregister the secondary server, this means no more data from the primary will go to this server:





After this has been unregistered, we can check hdbnsutil -sr_state to confirm:


However, if you check the primary node you will see that the replication is still enabled, but no server for the replication is listed.


Next we can disable the replication on the primary




Once this is done you can check the replication tab and hdbnsutil -sr_state







Other things tested during this phase

As a test, I stopped the primary to see what happens to the replication. No automated takeover occurs, but we see the following network communication errors in the trace files:

e Stream NetworkChannelCompletion.cpp(00524) : NetworkChannelCompletionThread #2 NetworkChannel FD 28 [0x00007fc028072818] {refCnt=3, idx=2}> ConnectWait,[---c]

: Error in asynchronous stream event: exception  1: no.2110001 (Basis/IO/Stream/impl/NetworkChannelCompletion.cpp:450)

    Generic stream error: getsockopt, Event=EPOLLERR - , rc=111: Connection refused

Please note that if you stop the replication server, the primary server will throw the following alert:

ReplicationError with state INFO with event ID  1 occurred at <DATE> on xxxx36f509:30007. Additional info: Communication channel closed

Associated with Alert ID 78

The following error will be found in the trace files

e TNS TNSClient.cpp(00671) : sendRequest dr_getremotereplicationinfo to xxxx301545c:30001 failed with NetException. data=(I)drsender=1|

e sr_nameserver TNSClient.cpp(06880) : error when sending request 'dr_getremotereplicationinfo' to xxxx301545c:30102: connection refused,location=xxxx301545c:30001

i EventHandler EventManagerImpl.cpp(00602) : acknowledge: ReplicationEvent(): Communication channel closed



If you run into this alert in your own system you should check to see if the secondary node is down (can you start it or was there a crash?)


How to perform a takeover

*Please note that a takeover should be performed only if there is an issue with the primary, or if you would like zero downtime during a HANA upgrade.
Right-click the secondary node and open “Configure System Replication”.




At an OS level you will see the takeover process


To perform the takeover via the command prompt you would run the following on the secondary server:

hdbnsutil -sr_takeover

*After the takeover a new server had to be created, so the server name changed from 301545c to 59e3753f1

Please note on your replication server you will now be able to open the admin panel and not just the diagnosis mode (in the diagnosis mode only ‘Processes’, ‘Diagnosis Files’, and ‘Emergency Information’ tabs are available)

On the old primary server and old replication we can check the Landscape->System Replication and see there is no replication


Since the replication hasn’t been disabled we will see the communication errors again on the original primary

i EventHandler EventManagerImpl.cpp(00780) : --removeAllEvents: ReplicationEvent(): Communication channel closed



On the old replication server the nameserver trace will show the following during the takeover if it was successful

i sr_nameserver TREXNameServer.cpp(15647) : re-assign for databaseId 2 volume 2 returned successfully

i sr_nameserver TREXNameServer.cpp(15647) : re-assign for databaseId 2 volume 4 returned successfully

i sr_nameserver TREXNameServer.cpp(15647) : re-assign for databaseId 2 volume 3 returned successfully

i sr_nameserver TREXNameServer.cpp(15703) : issueing "/usr/sap/MV1/SYS/global/hdb/install/bin/hdbupdrep -s MV1 --user_store_key=SRTAKEOVER -b"

i sr_nameserver TREXNameServer.cpp(15686) : reconfiguring all services



Check the global.ini and nameserver.ini on the secondary node (the primary will not change)

/usr/sap/MV1/global/hdb/custom/config> cat global.ini


site_id = 2

mode = sync

actual_mode = primary

site_name = rep



mo-59e3753f1:/usr/sap/MV1/global/hdb/custom/config> cat nameserver.ini


id = 55de6934-1b45-7f0a-e100-00000a6116ac

master = mo-59e3753f1:30001

worker = mo-59e3753f1

active_master = mo-59e3753f1:30001

idsr = 55f36543-7352-8161-e100-00000a61131b

roles_mo-59e3753f1 = worker



In order to minimize memory consumption, the following parameters should be set in the secondary system:



1) global.ini/[system_replication]/preload_column_tables = false

2) global.ini/[memorymanager]/global_allocation_limit = <size_of_row_store + 20%>



If the parameter "preload_column_tables" is set to "true" on the secondary side, the secondary system will dynamically load tables into memory according to the preload information shipped from the primary side.

During the takeover procedure, the "global_allocation_limit" should be increased on the secondary side to the same value as on the primary side.
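As a quick worked example of the sizing rule above (the 200 GB row-store figure is invented for illustration, not from the blog):

```shell
# Secondary-side memory cap: row-store size plus 20%.
ROW_STORE_GB=200                                   # illustrative figure
LIMIT_GB=$(( ROW_STORE_GB + ROW_STORE_GB * 20 / 100 ))
echo "global_allocation_limit ~ ${LIMIT_GB} GB"    # prints: global_allocation_limit ~ 240 GB
```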


Memory on the primary can be consumed in async mode: there is a log buffer that is filled and then shipped to the secondary. The amount of memory this takes up is set by:

  1. global.ini -> [system_replication] -> logshipping_async_buffer_size = <size_in_byte>





For additional information during a takeover please run the following

alter system alter configuration ('nameserver.ini','SYSTEM') SET ('trace','failover')='debug' with reconfigure;

alter system alter configuration ('nameserver.ini','SYSTEM') SET ('trace','ha_provider')='debug' with reconfigure;

Perform the failover test. Once done, you can turn off this tracing:

alter system alter configuration ('nameserver.ini','SYSTEM') UNSET ('trace','failover') with reconfigure;

alter system alter configuration ('nameserver.ini','SYSTEM') UNSET ('trace','ha_provider') with reconfigure;

For general tracing during the replication, you can edit, in SAP HANA Studio, global.ini -> trace -> sr_dataaccess = debug and global.ini -> trace -> stream = debug. This will add additional tracing to the indexserver trace.



System Replication Configuration Parameters








Issues Encountered


-After SPS 9, users ran into Alert 79 (Configuration Parameter Mismatch); to resolve this you can set global.ini -> system_replication -> keep_old_style_alert = false

The INI files will still be mismatched, but the alert will stop appearing. You can check the mismatches manually, or go to /usr/sap/<SID>/global/hdb/custom/config, copy the files from the primary and paste them to the secondary, but do not overwrite the global.ini -> system_replication and nameserver.ini -> landscape sections, as this will break replication. Another option is to run a SQL script to find the differences:




Network Related

-‘Communication Channel Closed’ errors: the replication server is either down or there is a networking error. (Check whether the HANA services are running; if they are, talk to your networking team about blocked ports.)

-(DataAccess/impl/DisasterRecoveryProtocol.cpp:3478) Asynchronous Replication Buffer is Overloaded exception throw location:

This error occurs only if you choose ASYNC replication, and can happen if the network is slow. You can check your network statistics with the following table:



If you need to resolve this issue before looking into your network, you can do one of the following:

1) Change the replication mode: hdbnsutil -sr_changemode --mode=sync|syncmem

2) Change global.ini -> system_replication -> logshipping_async_wait_on_buffer_full = false; this will temporarily decouple the synchronization.



Registration fails




Unable to contact primary site error: at 30001


Check the host name you have entered. Some things to check:

  • The hostnames are unique
  • The secondary host name is not a substring of the primary
  • Do not use the IP address


e sr_nameserver TREXNameServer.cpp(10651) : remoteHost does not match with any host of the source site. Please ensure that all hosts of source and target site can resolve all hostnames of both sites correctly.


Run the following query on both sites and compare the results:

select name from m_topology_tree where path = '/host/'



Startup of secondary fails


Secondary nameserver startup fails after registration of the secondary to the primary: TREXNameServer.cpp(02876) : source site is not active, cannot start secondary site. Please run hdbnsutil -sr_takeover in case of a disaster or start primary site first. -> stopping instance ..


Do not use secondary hostnames that are substrings of primary hostnames.



nameserver server:30001 not responding.

collecting information ...

error: source system and target system have overlapping logical

hostnames; each site must have a unique set of logical hostnames.

hdbrename can be used to change names;




This is caused by connection timeouts, but if you see it only for a few services, check whether the landscapes are the same.



MultiDB issue


"unhandled ltt exception: exception 1000003:

Index 1 out of range [0, 0)" when I check the sr_state after running


Resolved in 97.01 and 102




i LogReplay RowStoreTransactionCallback.cc(00226) : starting master-slave DTX consistency check

e LogReplay RowStoreTransactionCallback.cc(00264) : Slave volume 3 is not available



Resolved in rev 74.04 and 82


Work around:

1) Add the following INI parameters as 'false' in indexserver.ini and statisticsserver.ini:


check_slave_on_master_restart = false

check_global_trans_consistency = false

2) Then, restart your system.



From time to time the takeover process hangs

The following trace entries are written to the trace file, and there is a 30-minute time gap in the trace:

w Backup BackupMonitor_TransferQueue.cpp(00048) : Master index server not available!

i PersistenceManag PersistenceManagerImpl.cpp(02359) : Activating periodic savepoint, frequency 300

e TrexNet Channel.cpp(00362) : active channel 33 from 53223 to reading failed with timeout error; timeout=1800000ms elapsed



There is no workaround; this issue is fixed in revisions 85.02 and 90.



If a takeover is performed on a secondary system where not all tenants could be taken over (e.g. because they were not initialized yet), then the takeover flag is not removed from the topology (/topology/datacenters/takeover/*).



Resolved in HANA 10.1



Crash on secondary

indexserver crash at DataRecovery::LoggerImpl::IsSecondaryBackupHistoryComplete on the secondary system.

The bug is fixed as of revision 90 so a permanent solution is available via an upgrade.

In the interim, the workaround is to set the parameter [system_replication] ensure_backup_history = false within the global.ini file.

Setting this parameter disables maintenance of the backup history. The takeover process is not affected, but full recovery scenarios after takeover (using old primary data/log backups with new primary log backups) may be impacted.
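Since the file and section are given above, the interim workaround can also be applied online; a sketch using the alter configuration syntax from earlier in this post:

```sql
-- Interim workaround: disable backup-history maintenance on the secondary
alter system alter configuration ('global.ini','SYSTEM')
  SET ('system_replication','ensure_backup_history') = 'false'
  with reconfigure;
```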



SAP Notes

1995412 - Secondary site of System Replication runs out of disk space due to closed data shipping connection

1945676 - Correct usage of hdbnsutil -sr_unregister

2057595 - FAQ: SAP HANA High Availability

2100052 - How to disable parameter mismatch alert for system replication

2050830 - Registering a secondary system via HANA Studio fails with error 'remoteHost does not match with any host of the source site'

2021186 - Garbage collection takes a long time during HANA service restart

2075771 - SAP HANA DB: System Replication - Possible persistence corruption on secondary site

1852017 - Error 10061 when connecting SAP Instances to failed over HANA nodes

2063657 - HANA System Replication takeover decision guideline

2062631 - high availability limitation for SAN storage

2129651 - Indexserver crash caused by inconsistent log position when startup

1681092 - Multiple SAP HANA DBMSs (SIDs) on one SAP HANA system

2033624 - System replication: Secondary system hangs during takeover

2081563 - secondary system's replication mode and replication status changed to "UNKNOWN"

2135107 - Log segment for backup history is still missing after reconnect with log shipping

Hi again,


My name is Man-Ted Chan and I’m from the SAP HANA product support team. Recently I’ve been seeing a few issues regarding High Availability (HA) environments using system replication, so I’m writing this piece on setting up HA, along with some troubleshooting tips and SAP Notes.

To avoid confusion with the terminology I will refer to another posting on the SCN:


  • System Replication is NOT Host Auto-Failover
  • System Replication is NOT Scale Out
  • System Replication is Disaster Tolerance (DT) / Disaster Recovery (DR)
  • System Replication synchronizes data between two data centers (Site A and Site B)
  • There is always one (logical) primary and one secondary system, e.g. site A is primary and site B is secondary. After a takeover, site B is (logically) primary system. Thus, primary and secondary changes, whereas site A and B will refer to a physical instance.
  • A takeover is making a secondary system functioning as primary system. Note that this explicitly does not include changing the state of the primary (in exceptional/disaster situations, the secondary must not depend on having access to the primary site to be able to change the state)
  • Failback: back to original setup, e.g. a takeover from the backup site to the preferred site: the preferred site may have a better internet connectivity, better reachable by clients, etc.

Also I've had to break up this blog into two parts as I hit a limit on the number of images that can be in a single blog posting.



  • Have separate primary and secondary servers with HANA installed, with an equal number of services and nodes. The revision of HANA on the secondary server must be equal to or newer than the primary’s.
  • The secondary system must have the same SAP system ID and instance number.
  • Ports 3<instance number>15 and 3<instance number + 1>15 must be available.
  • The primary server must have a backup available.

Setting up System Replication

These are the steps, performed in an SPS 09 environment.


I have included screen caps, tests, and log snippets.

Setting up primary
When setting up system replication, a backup needs to exist. As a test, I will show what happens when there is no backup:



Right click on your primary system and select ‘Configure System Replication…’





As we can see we cannot proceed with the replication as there is no backup. In the next few images we will create the backup.




Afterwards, try to create the replication again. Please note that the field ‘Primary System Logical Name’ can be whatever you want; I chose the name ‘primary’.


After this is run, the following can be found in the nameserver trace:

==== Starting hdbnsutil, version (fa/newdb100_rel),

i Basis            TraceStream.cpp(00469) : MaxOpenFiles: 1048576

i Basis            TraceStream.cpp(00472) : Server Mode: L2 Delta

i Basis            ProcessorInfo.cpp(00713) : Using GDT segment limit to determine current CPU ID

i Basis            Timer.cpp(00650) : Using RDTSC for HR timer

i Memory          AllocatorImpl.cpp(01326) : Allocators activated

i Memory          AllocatorImpl.cpp(01342) : Using big block segment size 8388608

i Basis            TopologyUtil.cpp(03894) : command: hdbnsutil -sr_enable --name=primary --sapcontrol=1

w Environment      Environment.cpp(00295) : Changing environment set SSL_WITH_OPENSSL=0

i sr_nameserver TopologyUtil.cpp(02581) : successfully enabled system as system replication source site


If you wanted to use the command line to create this replication run the following:

hdbnsutil -sr_enable --name=<Primary System Logical Name>

After this, your system is enabled for system replication.

Setting up the secondary node

You will have to stop HANA on the secondary server prior to setting up the replication. Right-click on the server and select ‘Configuration and Monitoring’ -> ‘Configure System Replication…’. Again, please note that the SID is the same.





At this step you will name the replication in the ‘Secondary System Logical Name’ and enter the host from the above (note that the Instance number is non-editable)

Replication mode options that are available are the following:

  • Synchronous with full sync option (mode=sync. Full sync is configured with the parameter [system_replication]/enable_full_sync) means that log write is successful when the log buffer has been written to the logfile of the primary and the secondary instance. In addition, when the secondary system is disconnected (for example, because of network failure) the primary systems suspends transaction processing until the connection to the secondary system is re-established. No data loss occurs in this scenario.
  • Synchronous (mode=sync) means the log write is considered as successful when the log entry has been written to the log volume of the primary and the secondary instance.
  • Synchronous in memory (mode=syncmem) means the log write is considered as successful, when the log entry has been written to the log volume of the primary and sending the log has been acknowledged by the secondary instance after copying to memory.
  • Asynchronous (mode=async): The primary system sends redo log buffers to the secondary system asynchronously. The primary system commits a transaction when it has been written to the log file of the primary system and sent to the secondary system through the network. It does not wait for confirmation from the secondary system. This option provides better performance because it is not necessary to wait for log I/O on the secondary system. Database consistency across all services on the secondary system is guaranteed. However, it is more vulnerable to data loss. Data changes may be lost on takeover.

The above is from the SAP HANA Admin guide:



This can also be done via the command line:

hdbnsutil -sr_register --remoteHost=<primary hostname> --remoteInstance=<primary instance number> --mode=<sync|syncmem|async> --name=<Secondary System Logical Name>


During this registration I ran into the following error


I then ran it via the command line to show the error



I checked the listed nameserver trace to see if there is any other information

==== Starting hdbnsutil, version (fa/newdb100_rel),

e Configuration    ConfigStoreManager.cpp(00693) : Configuration directory does not exist.

e Configuration   

TopologyUtil.cpp(03894) : command: hdbnsutil -sr_register --remoteHost=xxxxx509 --remoteInstance=00 --mode=sync -name=sec

e sr_nameserver    TNSClient.cpp(06778) : remoteHost does not match with any host of the source site. all hosts of source and target site must be able to resolve all hostnames of both sites correctly


From this error we can see that the landscapes of the two systems do not match. I checked the landscape on the primary and the secondary:



Here we can see that on the secondary server there is the ‘sapstartsrv’ process. After this is resolved, re-run the wizard or enter the hdbnsutil command.


‘Initial full data shipping’ is equivalent to running hdbnsutil -sr_register --force_full_replica

If this parameter is set, full data shipping is initiated; otherwise, delta data shipping is attempted.

If you run this via the command line, you will have to manually start up the secondary server.

For more information on the hdbnsutil options please refer to the following reference guide



Checking the status of the replication

You can check the status of the replication in studio and via the command line; the screen caps below show both.




Check the nameserver trace for the following success messages upon startup to confirm the replication:

TREXNameServer.cpp(12634) : called registerDatacenter from registrator=xxxx301545c

i sr_nameserver    TREXNameServer.cpp(12776) : registerDatacenter; new disaster recovery site id =2

i sr_nameserver    TREXNameServer.cpp(12864) : matched host xxxx509 to xxxx301545c

i sr_nameserver    TREXNameServer.cpp(15138) : volume 1 successfully initialized for system replication

i sr_nameserver    TREXNameServer.cpp(15138) : volume 2 successfully initialized for system replication

i sr_nameserver    TREXNameServer.cpp(15138) : volume 4 successfully initialized for system replication

i sr_nameserver TREXNameServer.cpp(15138) : volume 3 successfully initialized for system replication


Please note that when you add the replication server, you cannot open the Administration panel or run SQL queries, so you will not be able to check the data on the replication server.

Instead, you open Diagnosis mode; the screen caps below show the difference between the two.

Diagnosis Mode


Admin Panel


Click here for part 2

The Journey Continues - Episode 6 of 10

SAP has acquired a few different “Software as a Service” (SaaS) companies over the last few years. This presents a challenge for architects in how to integrate SAP ERP, Cloud and other external systems in the SAP landscape. The presentation by Sindhu Gangadharan, Vice President and Head of Product Management - HANA Cloud Integration (HCI), as part of our ongoing series for Enterprise Architects, delivered many of the answers I know EAs are after. You may have implemented Success Factors along with other SAP technology and wondered what the roadmap for true integration is. Watch the webcast if you missed it to get all of the details.


As the VP responsible for HCI, Sindhu knows what she is talking about. The product roadmap for Hana Cloud Integration showed how SAP is working towards greater integration and data sharing across their products.


The presenter started with some background on integration, progressed through customer case studies and rounded out the webcast with live demos and web links for more information. Overall, a logical progression through the material, in an easy to follow format, that should get any Enterprise Architect the background they need to have a management level conversation around SAP cloud integration.


To start off, Sindhu went through the compelling reasons for using SAP HANA Cloud Integration (HCI). This included an overview of the SAP ecosystem and where HCI fits into the architecture.


All images © 2015 SAP SE or an SAP affiliate company. All rights reserved. Used with permission of the author.


Sindhu presented the benefits realized by several SAP customers including Owens Illinois, one of the world's leading manufacturers of glass containers, as well as partners like Applexus and ITelligence.


Later on in the webcast we saw a live demo of HCI and the connectors that exist right now for integration between SAP and non-SAP products. I always like live demos as it shows the actual product and the interface users can expect. We saw how to pick the connector between SAP and Success Factors and how to pick the data fields in a typical data exchange scenario.


Many of you may recall when SAP first acquired Success Factors, the integration methods were limited and there was a lot of flat file passing between systems. SAP has come a long way since then. In the Success Factors Employee Central context one of the preferred methods of integration was using Dell Boomi for integration between Success Factors Employee Central and SAP ERP. In the presentation we found out that SAP has a roadmap to use HCI as the preferred method of integration going forward.



Sindhu also made the audience aware of a free 30-day trial program for HCI, so SAP has made it pretty easy to take it for a test drive and come up with your own integration strategy.


The presentation also included great link pages for those looking at Hana Cloud Integration. This includes links to certifications that the service has obtained.



I know that this webcast answered many of my outstanding questions on how SAP integrates ERP, Cloud, and non-SAP systems, with a few hints at what is coming in the near future. Does HCI meet what you are looking for? Let me know in the comments or by emailing me directly.

Webcast attendees commented with the following key takeaways:

  • (SAP Integration) Can be cloud or on-premise
  • There are a lot of easy integration options available for HCI.
  • The case study was helpful, and also the upcoming message speeds
  • All the current pre-packaged content was nice to see... esp the future integration with solution manager.
  • Roadmap for HCI and changes related to SF EC using BOOMI
  • SAP has a strong strategy for Cloud integration
  • The cloud is real for integration between SAP products
  • HCI Rocks!


In the next webcast, scheduled for September 22nd, the speaker will cover how SAP HANA fits into the data center and the different architectural considerations.


Watch the Episode 6 webcast recording for more details on this blog entry.



Webcast Materials on ASUG.com: https://www.asug.com/discussions/docs/DOC-42211


Complete Webcast Series Details https://www.asug.com/hana-for-ea


All webcasts occur at 12:00 p.m. - 1:00 p.m. ET on the days below. Click on the links to see the abstracts and register for each individual webcast.

September 22, 2015: SAP HANA and Data Centre Readiness

September 29, 2015: Why SAP HANA, Why Now and How?

October 6, 2015: Implications of Introducing SAP HANA Into Your Environment

October 13, 2015: Internet of Things and SAP HANA for Business

