Part I: Introduction (Creating SAP system clones using Solaris 10 virtualization concepts, Part 1)

 

This is the second part of a blog series describing how to easily create runnable shadow copies of productive systems using the native OS virtualization functionality of Sun Solaris 10 (Solaris Zones).

These shadow systems can be used for applying updates, patches, or similar tasks which would usually require a downtime of the productive system.

After the desired task has been performed successfully, the shadow systems can be swapped with the productive systems. This way the planned downtime of the productive system landscape can be reduced dramatically.

This part describes the basic steps for creating a shadow system using the Zettabyte File System (ZFS) of Solaris 10.

 

Overview

First, a ZFS snapshot based on the file system containing the productive zones will be created.

A ZFS snapshot is a read-only image of a file system at a specific point in time and is created within seconds. Afterwards a ZFS clone based on this snapshot is created.

Unlike a snapshot, a clone is writable and can therefore be changed. The data changed within the clone is saved as a kind of delta copy, which means that a clone grows according to the changes applied after the related snapshot (and file system) was taken. More information regarding ZFS is available at http://www.sun.com/software/solaris/zfs_learning_center.jsp.

After the clone file system is created, two new zones based on it are created. Afterwards some entries within the zones are adapted.

Finally the zones are booted. Once the SAP system and the database are started, a shadow system of the productive system is available.

Before we start, let's define the terminology used, to avoid confusion:

sap_zone_1, db_zone_1: names of the zones of the productive systems.

sap_zone_2, db_zone_2: names of the zones of the shadow systems. These are the systems created by cloning the productive systems and used for the update, test, or similar task.

productive system: the source system; available to the clients.

shadow system: the system used for the update; not available to the clients.

Note: According to Sun, a standard cloning mechanism for zones will be available in an upcoming release of Solaris 10. This will make a lot of the following steps obsolete.

 

Creating a ZFS snapshot and clone

Hint: Before creating the snapshot you should set the database instance to maintenance mode so that cached data (shared pool, DB buffer, log buffer) is written to disk. For more information consult the database manual and/or your database administrator.

Creating a snapshot of the volume containing the productive zones (SAP system and database):

zfs snapshot pool/volume@shadow

Hint: A "-r" option providing recursive snapshots of sub-volumes will be available soon. As long as this option is not available, you should not use two different sub-volumes for the SAP system and the database; use different directories located on the same sub-volume instead, so that a single snapshot covers both.

Creating a Clone based on the snapshot:

zfs clone pool/volume@shadow pool/shadowvolume

Rename the directories of the SAP zone and the DB zone to avoid confusions:

mv /pool/shadowvolume/sap_zone_1 /pool/shadowvolume/sap_zone_2

mv /pool/shadowvolume/db_zone_1 /pool/shadowvolume/db_zone_2

Hint: The paths listed here mirror the ZFS data-set structure. Keep in mind, however, that zfs commands take the data-set name without a leading slash (pool/shadowvolume), whereas the mount point starts with one (/pool/shadowvolume).

The following graphic summarizes the steps above: the snapshot taken from the volume is read-only (RO). A clone, in contrast, is writable (RW) and can therefore diverge from the original volume.

Figure 1: ZFS structure

Remember, the volume clone is not really an independent volume but still depends on the snapshot (and thus the volume) it was created from!
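The steps above can be sketched as a single command sequence, to be run as root in the global zone. Pool, volume, and zone names follow this blog's examples and may differ on your system:

```shell
# Names as used in this blog: pool/volume holds the productive zones,
# pool/shadowvolume will be the writable clone.

# 1. Snapshot the volume containing the productive zones (read-only, instant):
zfs snapshot pool/volume@shadow

# 2. Create a writable clone based on the snapshot:
zfs clone pool/volume@shadow pool/shadowvolume

# 3. Rename the zone directories inside the clone to avoid confusion:
mv /pool/shadowvolume/sap_zone_1 /pool/shadowvolume/sap_zone_2
mv /pool/shadowvolume/db_zone_1 /pool/shadowvolume/db_zone_2

# Check the result; the clone initially occupies almost no extra space
# and grows only with the delta written after the snapshot:
zfs list -o name,used,referenced
```

Note how cheap the clone is at first: only the renamed directory metadata counts against it until real changes are written.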

 

Creating the shadow zones

1. Create a new zone for the SAP shadow system.

Start the zone configuration tool to create the new zone (sap_zone_2):

zonecfg -z sap_zone_2 create

2. Set the zonepath to the directory of the SAP shadow zone within the ZFS clone:

set zonepath=/pool/shadowvolume/sap_zone_2

Quick explanation: pool is the ZFS pool, shadowvolume is the clone you created, and sap_zone_2 is the directory containing the root directory of the SAP zone.

3. Set the pool parameter to the resource pool you specified for the shadow system:

set pool=sap_shadow_pool

4. Add a private network interface for internal communication with the database shadow zone:

add net

set address=192.168.2.1/24

set physical=ce0

end

You may have noticed that 192.168.2.1 belongs to a private network different from that of sap_zone_1 (the private network address of sap_zone_1 is 192.168.1.1, as listed in the previous part of this blog). This way the communication between the productive zones and the shadow zones is isolated.

5. Add a "dummy" interface for later use:

add net

set address=192.168.2.3/24

set physical=ce0

end

Hint: This interface will be used to bring up the public network when the systems are switched. It is not used for any communication at the moment but merely acts as a placeholder for later use. You may decide to use a NIC different from the interface of the productive system in order to distribute the network load (e.g. when copying data via a network connection).

6. Optional: If you want to make the shadow system available within the public network, add an additional interface with a public address now. Make sure that there is no communication between the productive zones and the shadow zones! You may decide to use another physical NIC to reduce the load on the network card of the productive system.

7. Verify and commit the zone configuration:

verify

commit

exit

Figure 2: Sample output of "zonecfg -z sap_zone_2 info" after configuring the SAP shadow zone
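As a convenience, the interactive zonecfg session above can also be driven non-interactively from a command file. The sketch below simply replays this blog's example settings (zone name, pool name, addresses, and NIC are the values used here and may differ in your landscape):

```shell
# Write the zonecfg commands for sap_zone_2 into a file...
cat > /tmp/sap_zone_2.cfg <<'EOF'
create
set zonepath=/pool/shadowvolume/sap_zone_2
set pool=sap_shadow_pool
add net
set address=192.168.2.1/24
set physical=ce0
end
add net
set address=192.168.2.3/24
set physical=ce0
end
verify
commit
EOF

# ...and feed it to zonecfg in one go:
zonecfg -z sap_zone_2 -f /tmp/sap_zone_2.cfg

# Review the resulting configuration:
zonecfg -z sap_zone_2 info
```

This is handy when several shadow zones have to be (re-)created repeatedly during testing.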

 

1. Create a new zone for the database shadow system.

Start the zone configuration tool to create the new zone (db_zone_2):

zonecfg -z db_zone_2 create

2. Set the zonepath to the directory of the database shadow zone within the ZFS clone:

set zonepath=/pool/shadowvolume/db_zone_2

3. Set the pool parameter to the resource pool you specified for the shadow system:

set pool=db_shadow_pool

4. Add a private network interface to db_zone_2 for internal communication with the SAP shadow zone:

add net

set address=192.168.2.2/24

set physical=ce0

end

As already mentioned above, this interface is used for internal communication between the sap zone and the database zone.

5. To enable local access to the /sapmnt directory within the DB zone, a local loopback mount is used.

Add a new file system resource for sharing the /sapmnt directory:

add fs

set dir=/sapmnt

set special=/pool/shadowvolume/sap_zone_2/root/sapmnt

set type=lofs

end

6. Optional: Add a "dummy" interface for later use:

add net

set address=192.168.2.4/24

set physical=ce0

end

Hint: This interface will be used to bring up the public network when the systems are switched. It is not used for any communication at the moment but merely acts as a placeholder for later use. You may decide to use a NIC different from the interface of the productive system in order to distribute the network load (e.g. when copying data via a network connection).

7. Optional: If you want to make the DB shadow system available within the public network, add an additional interface using a public address.

8. Verify and commit the zone configuration:

verify

commit

exit

9. Within the global zone, open the index file in /etc/zones and change the state of the new zones from "configured" to "installed":

(global)# vi /etc/zones/index

Figure 3: Structure of the /etc/zones/index file. Don't be confused by the different zone names.

Important: Please note that this procedure is not recommended by Sun at the moment.

However, it is not possible to install the zone using "zoneadm -z <zone> install" because the root directory already exists. Do not install the zone via zoneadm directly! Otherwise the file system just created will be overwritten.
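For orientation, each entry in /etc/zones/index has the colon-separated format zone_name:state:zone_path (depending on the Solaris update level an additional uuid column may follow). With the example names of this blog, the adapted entries would look roughly like this:

```
# /etc/zones/index  --  format: zone_name:state:zone_path
sap_zone_2:installed:/pool/shadowvolume/sap_zone_2
db_zone_2:installed:/pool/shadowvolume/db_zone_2
```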

Adapting the zone host configuration

1. Before booting the new zones, adapt the network entries to your requirements.

This is done within the root file system of each zone, located at /pool/shadowvolume/[zonename]/root/.

2. Change the hostnames within the /etc/nodename files in both zones:

echo "newnodename" > /etc/nodename

In this example, "rigsun04" is used as the hostname of the zone hosting the SAP system, and "rigsunvirtual01" is the virtual hostname of the SAP system. For the database zone the names are "rigsun05" and "rigsunvirtual02".

3. Adapt the hostnames within the /etc/hosts file and/or the ipnodes file:

chmod 700 /etc/hosts

vi /etc/hosts

chmod 444 /etc/hosts

Hint: If you are not using IPv6, you may replace /etc/inet/ipnodes with a link to /etc/inet/hosts to avoid maintaining both files (background: Solaris 10 takes entries from ipnodes first, while SAPinst only checks the hosts file).
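A minimal sketch of that hint, to be run inside each shadow zone (keeping a backup of the original file is an added precaution, not part of the original text):

```shell
# Run inside the zone. Back up the original ipnodes file,
# then let ipnodes lookups go through the hosts file.
mv /etc/inet/ipnodes /etc/inet/ipnodes.orig
ln -s ./hosts /etc/inet/ipnodes
```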

4. Your entries should look similar to the picture below:

Figure 4: Host file of the sap shadow zone

Explanation: rigsunvirtual01 is the virtual name of the SAP system, rigsunvirtual02 is the virtual name of the database, and rigsun04 is the public physical hostname of the zone. Within the SAP zone and the database zone these names are looked up via the hosts file and not via DNS. The clients, in contrast, get their information from DNS exclusively.

5. If your system is connected to the public network via DNS, make sure that the name resolution happens in the right sequence (i.e. resolution via the hosts file needs to have a higher priority than resolution via DNS). This behavior is configured within /etc/nsswitch.conf:

Figure 5: Lookup hierarchy within nsswitch.conf

In the screenshot above, "files" is listed before "dns". This means that the system will look up a name within the local configuration first.
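The relevant lines in /etc/nsswitch.conf would then look like this (only the hosts-related entries are shown):

```
hosts:      files dns
ipnodes:    files dns
```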

6. Boot the zones:

zoneadm -z sap_zone_2 boot

zoneadm -z db_zone_2 boot

7. Start the SAP system and the database:

Hint: You may log on to the zones using "zlogin -C -e@. zone_name", for example.

This way you can log out from a zone by typing exit and then pressing "@." when the logon dialog appears.

If the public network is configured, you may also log on using ssh, for example.
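As a rough sketch of this step, assuming an Oracle database and the standard SAP administration users (the user names, the <sid> placeholder, and the start commands below are the common SAP/Oracle defaults, not taken from this blog):

```shell
# In the global zone: log on to the database shadow zone and start Oracle.
zlogin db_zone_2
su - ora<sid>          # <sid> = your SAP system ID (placeholder)
lsnrctl start          # start the Oracle listener
sqlplus "/ as sysdba" <<EOF
startup
EOF
exit

# Then log on to the SAP shadow zone and start the SAP instance.
zlogin sap_zone_2
su - <sid>adm
startsap               # standard SAP start script
```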

8. Test whether both systems are running correctly, e.g. via the telnet console:

Figure 6: Testing the systems via the telnet console