High Availability Linux OS cluster not working after successful installation.


Dear All,

I was installing SAP ERP 6.0 on a 2-node Linux OS cluster: Node1 (x.16.9.x1) and Node2 (x.16.9.x2). This cluster was being set up for the DR site. Our data center runs ERPPRD on a similar cluster and it works fine; the problem is only with the DR site cluster.

The following steps were performed at the DR site:

1. The cluster software was installed by the infra team, and the following directories were shared between both nodes:

/oracle/PRD

/sapmnt/PRD

/usr/sap/trans

/usr/sap/PRD

No other directories are shared between the two nodes.

2. Before the installation, the infra team gave us 30 virtual IPs, and for those virtual IPs I created hostname entries in /etc/hosts:

x.x.x.202  xyzerpprd

x.x.x.203  xyzerpdb

x.x.x.204  xyzers

3. I started the installation on Node1 and installed the ASCS instance on the virtual host xyzerpprd. At the end of the ASCS installation, SAPinst tried to start the ASCS instance but was not able to. On checking, we found that no services had been created in the cluster; only the IPs we used for the virtual hosts had been created as cluster resources. So we created a blank service named ERPPRD, i.e. just the service, and assigned only the virtual IP we had as its resource; the IP resource assigned to the ERPPRD service was x.x.x.202. Apart from the IP resource it contained nothing; we simply looked at the name and created a service similar to the one present in the working DC cluster. After creating this service the installation continued and the ASCS instance was installed successfully. We then created similar services for the DB and ERS instances with the following virtual IPs as resources (a sketch of such an IP-only service follows the list below):

Service name    IP resource assigned

ERPDB           x.x.x.203

ERS             x.x.x.204
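For illustration only (this is not copied from the attached files; the failover domain name and attribute values are placeholders), such an IP-only service looks roughly like this in cluster.conf (rgmanager syntax):

    <service autostart="1" domain="erpprd_fod" name="ERPPRD" recovery="relocate">
        <!-- only the virtual IP; no SAP or DB resources yet -->
        <ip address="x.x.x.202" monitor_link="1"/>
    </service>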



4. I performed the installation of the instances in the following fashion:

    Instance type    Virtual host used

    ASCS             xyzerpprd

    DB               xyzerpdb

    ERS              xyzers

    CI               xyzerpprd

The installation of all the instances completed successfully. ASCS and CI were installed on the same virtual host xyzerpprd, which corresponds to the virtual IP x.x.x.202, and the service corresponding to this virtual IP is ERPPRD.


Is there any issue with the virtual host name and the cluster service name being the same or different?


5. After the complete installation, the system was up and running. I checked the services using "clustat", and all services (ERPPRD/ERPDB/ERS) were running on their respective nodes, as specified when the services were created.


6. For testing, I turned the ERPPRD service off. I then found that my AS was not up: the GUI was not available for logon, so it seemed the SAP services had been shut down. When I stopped the DB service ERPDB, it had no effect on my database at all. I then checked my AS server again: all relevant SAP processes were still running, but I was nevertheless not able to log on to the system. So my services are either incorrectly linked to the SAP AS and DB, or not linked at all.
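For reference, these start/stop tests can be driven with the standard rgmanager command-line tools:

    clustat                          # show cluster members and service states
    clusvcadm -d ERPPRD              # disable (stop) the ERPPRD service
    clusvcadm -e ERPPRD              # enable (start) it again
    clusvcadm -r ERPDB -m <node>     # relocate a service to another member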


7. As a post-installation activity, I did the following:

  a) Created resources in the cluster matching my working production setup: three SAP instance resources (for ASCS, CI and ERS) and one database instance resource for the DB, all with the same attributes and properties as those running in our DC cluster.

  b) Created failover domains for all four instances, just as in the running DC cluster.

  c) Assigned all the created SAP instance and DB instance resources to their respective services; for example, my ERPPRD service contained two SAP instance resources (ASCS and CI) and the virtual IP x.x.x.202 as resources, and so on (see the cluster.conf sketch after this list).

  d) Deployed the changes to the cluster configuration file, i.e. cluster.conf.

  e) Restarted both nodes.
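For illustration of step 7 c), a service wired to SAP resources would look roughly like the excerpt below. Again, this is not copied from the attachments: the instance numbers 00/01, the profile paths, the DB type and the failover domain names are assumptions and have to be adapted to the actual installation and to the SAPInstance/SAPDatabase resource agents shipped with the cluster software.

    <service autostart="1" domain="erpprd_fod" name="ERPPRD" recovery="relocate">
        <ip address="x.x.x.202" monitor_link="1"/>
        <SAPInstance InstanceName="PRD_ASCS01_xyzerpprd"
                     START_PROFILE="/sapmnt/PRD/profile/START_ASCS01_xyzerpprd"
                     AUTOMATIC_RECOVER="TRUE"/>
        <SAPInstance InstanceName="PRD_DVEBMGS00_xyzerpprd"
                     START_PROFILE="/sapmnt/PRD/profile/START_DVEBMGS00_xyzerpprd"
                     AUTOMATIC_RECOVER="TRUE"/>
    </service>

    <service autostart="1" domain="erpdb_fod" name="ERPDB" recovery="relocate">
        <ip address="x.x.x.203" monitor_link="1"/>
        <SAPDatabase SID="PRD" DBTYPE="ORA" AUTOMATIC_RECOVER="TRUE"/>
    </service>

The ERS service is built the same way with its own virtual IP and SAPInstance resource. For step 7 d), depending on the cluster version, the updated cluster.conf is normally propagated with ccs_tool update /etc/cluster/cluster.conf (RHEL 5) or cman_tool version -r (RHEL 6) rather than by copying the file by hand.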


8. In spite of configuring the cluster just like our working cluster, my services are still not linked to the SAP system: starting or stopping them has no effect and shows no dependency.


On the same Node1 on which I performed the installation, I am not able to start/stop my AS and DB using the services that are actually running on that node. I want to resolve this issue first; afterwards I will look into switching the services to the other node and controlling SAP from there. But first I want to control the SAP AS and DB with the services running on the node where the installation was performed.

I want to achieve the following:


When I start/stop my DB service ERPDB, my database should start up/shut down properly.

When I start/stop my ERPPRD service, my AS should start/stop properly.

Also, when I try to start the ERPPRD service, it should check that the ERPDB service is up and running; if ERPDB is not running, the ERPPRD service SHOULD NOT START.
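A note on the last requirement: if the rgmanager version on the DR cluster supports inter-service dependencies, this could presumably be expressed directly on the service definition, roughly as below. I have not verified this against our cluster schema, so please treat it as an assumption.

    <service autostart="1" domain="erpprd_fod" name="ERPPRD"
             depend="service:ERPDB" depend_mode="hard" recovery="relocate">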


I have attached two files:

DC Cluster: cluster.conf of the working cluster

DR Cluster: cluster.conf of the non-working cluster that has the issue




Dear All,


Please help me if I have missed anything, or any step that was a crucial prerequisite for controlling the SAP AS and DB through the cluster services. Any help would be appreciated.


Thanks and Regards








Accepted Solutions (0)

Answers (1)

Former Member

Hello BASIS Consultant,

Based on the provided cluster configuration files, it looks like you are trying to set up your environment on Red Hat Enterprise Linux (which version?). Therefore, I would recommend that you have a look at the following document, which provides some guidelines on setting up HA environments on RHEL:

Deploying Highly Available SAP Servers using Red Hat Clustering - Red Hat Customer Portal

In general I would recommend the following approach when setting up a clustered SAP environment:

1. Make sure that all the OS resources (IP addresses, virtual hostnames, filesystems, ...) required by the SAP system are available on all cluster nodes (for example, by adding the IP addresses and virtual hostnames to /etc/hosts so that they can be resolved locally).

2. Manually configure the OS resources for all instances on one node (for example, using ifconfig to add the IP addresses and mount to mount all required filesystems).
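For example (the interface name, netmask and mount sources below are only placeholders; use whatever matches your environment):

    # add one of the virtual IPs as an alias on the public interface
    ifconfig eth0:1 x.x.x.202 netmask 255.255.255.0 up

    # mount the filesystems required by the instances
    mount <shared_device_or_export> /sapmnt/PRD
    mount <shared_device_or_export> /usr/sap/PRD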

3. Install the DB and all SAP instances on this node using the "SAPINST_USE_HOSTNAME" environment variable to configure the SAP instances to use the virtual hostnames.
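For example (SAPINST_USE_HOSTNAME can be set in the environment before starting SAPinst, or passed on the sapinst command line):

    # install each instance against its virtual hostname, not the physical one
    export SAPINST_USE_HOSTNAME=xyzerpprd
    ./sapinst

    # equivalent: ./sapinst SAPINST_USE_HOSTNAME=xyzerpprd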

4. Verify that it is possible to manually start and stop all instances (including the DB) on this node, and that it is possible to connect to the SAP system from the outside (using the virtual hostnames) while it is running on that node.
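A manual check on that node could look like this (prdadm, the instance name ASCS01 and the instance number 01 are assumptions; adjust them to the instances you actually installed):

    su - prdadm
    startsap r3 ASCS01 xyzerpprd                  # start one instance on its virtual host
    sapcontrol -nr 01 -function GetProcessList    # all processes should be GREEN
    stopsap r3 ASCS01 xyzerpprd                   # and stop it again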

5. Stop all instances (including the DB) on the host and unconfigure the OS resources on this node.

6. Configure the OS resources on the next node and verify that it is possible to manually start all SAP instances (and the DB) on that node; afterwards, unconfigure all OS resources on that node again. Do the same on all other cluster nodes, one node at a time.

7. Start setting up the services in the cluster: first add only the OS resources for all the instances and verify that the cluster is able to start and stop them on each node. When the OS resources have been started by the cluster on a node, manually try to start all SAP instances on that node and verify that it is possible to connect to the SAP instances via the virtual hostnames from the outside.
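The connectivity part of this step can be checked from a machine outside the cluster, for example (instance number 01 and the resulting dispatcher port 3201 are assumptions):

    getent hosts xyzerpprd                        # the virtual hostname must resolve
    telnet xyzerpprd 3201                         # dispatcher port = 3200 + instance number
    sapcontrol -nr 01 -host xyzerpprd -function GetProcessList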

8. Add the appropriate SAPDatabase and SAPInstance resources to the cluster services and then verify that the cluster is able to start and stop them.

This looks like a lengthy process, but it is the only way to ensure that all components of your cluster are working properly.

If you need further assistance with this, I would recommend opening a support message with SAP.

Regards,

Frank