Improving device IO performance

former_member207908
Participant

Dear Experts.

I see that there is IO contention on a particular device and little/no contention on the other Sybase devices.
I want to improve IO performance. We are planning to drop the current devices and recreate multiple smaller devices.


I want to know the suitable device size for the following environment:

ASE 15.7 SP122 on AIX 7.1 for OLTP + OLAP

Prod ASE database size: 928 GB
Free space: 500 GB

Note: I have attached seven sp_sysmon disk IO reports for your consideration.

Appreciate your response.


Regards,
Rajesh

former_member207908
Participant

Hi Mark,


Thanks for your valuable insights. Much needed information indeed.

I think it would be more accurate to say that during peak time I've observed "higher IO" requests rather than "IO contention".

To support this, I've attached the following information for your consideration, and I look forward to your advice on tuning ASE devices/FD disks/data cache for better IO performance.


--sp_cacheconfig output

--monDeviceIO output

--monIOQueue output

-- screenshots of wait events 29, 31, 51, 54 and 55


Regards,

Rajesh

former_member207908
Participant

Attaching more wait-event screenshots.

Mark_A_Parsons
Active Participant

Whether 'higher IO' requests are good or bad really depends on the queries being run at the time. For example: 1) batch processing that performs a larger volume of writes than 'normal' activity will generate more disk writes, so the extra writes are likely 'ok'; 2) reporting that requires pulling older data from disk will require a larger volume of reads than 'normal' activity, so the extra reads are likely 'ok'; etc.

monDeviceIO, monIOQueue and monOpenObjectActivity (for finding 'hot' tables/indexes) need to be sampled during the period in question, with deltas calculated between samples.  In other words, for these MDA tables to be of much use in tracking down issues during a given time period, you need to process them the same way as the monProcessActivity table (see previous thread: )
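For illustration, here's a minimal sketch of that sampling approach for monDeviceIO; the server name, login and the tempdb..mon_dev_sample scratch table are placeholders I've made up, and the delta query assumes exactly two snapshots were taken:

isql -Usa -SPROD_ASE <<'EOF'
-- snapshot 1: capture the cumulative counters with a timestamp
select SampleTime = getdate(), LogicalName, Reads, Writes, IOTime
into   tempdb..mon_dev_sample
from   master..monDeviceIO
go
-- wait out the interval of interest, then take snapshot 2
waitfor delay '00:10:00'
go
insert tempdb..mon_dev_sample
select getdate(), LogicalName, Reads, Writes, IOTime
from   master..monDeviceIO
go
-- delta per device = later snapshot minus earlier snapshot
select  t2.LogicalName,
        Reads  = t2.Reads  - t1.Reads,
        Writes = t2.Writes - t1.Writes,
        IOTime = t2.IOTime - t1.IOTime
from    tempdb..mon_dev_sample t1, tempdb..mon_dev_sample t2
where   t1.LogicalName = t2.LogicalName
  and   t1.SampleTime  < t2.SampleTime
go
EOF

The same pattern works for monIOQueue and monOpenObjectActivity; only the column list changes.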

former_member207908
Participant

Hi Mark,

I agree with and accept your points on the MDA tables. Thank you.

Some further questions on CIO and journaling in AIX:

-- CIO is not enabled on our AIX filesystems. If enabled, we should gain better IO service rates. Is there any other impact from enabling CIO?

-- We are using the Enhanced Journaled File System (JFS2). What is the impact of enabling/disabling journaling?

-- Is there any effect from distributing Sybase devices across multiple file system mount points/disks?

Regards,

Rajesh

Mark_A_Parsons
Active Participant

re: CIO

- Generally speaking, disk IO rates should improve since CIO allows for better concurrency (see Jeff's comments in )

- I'm not aware of any negative issues with enabling CIO, except that ASE devices should have dsync=false and directio=false, and you'll need to unmount/remount the FS in order to enable CIO (a sketch of the device settings follows below)

I suggest you also check out the AIX/ASE document Jeff mentions in
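For what it's worth, a minimal sketch of flipping those device attributes via sp_deviceattr (the device and server names are placeholders; the settings take effect after an ASE restart):

isql -Usa -SPROD_ASE <<'EOF'
-- turn off dsync and directio so the cio mount handles the IO semantics
exec sp_deviceattr 'data_dev1', 'dsync',    'false'
go
exec sp_deviceattr 'data_dev1', 'directio', 'false'
go
-- verify the attributes in the device's description
exec sp_helpdevice 'data_dev1'
go
EOF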

re: journaling

What is the purpose of (OS-level) journaling? One key item is recoverability of data in case of an issue with the host and/or FS.

ASE already has IO recoverability built in via the write-ahead log (as long as the disk subsystem can guarantee writes).  Therefore journaling adds unnecessary overhead, since each ASE write request must also wait for the OS/FS journaling construct to be updated.

Generally speaking OS/FS journaling is not recommended with ASE ... whether it be Windows, Unix or Linux.  (NOTE: On Windows this also means you should probably consider disabling the indexing service as well as compression => both can slow down write activity.)

---------------

General steps for ASE using FS devices on AIX (where cio is available); a scripted sketch follows the list:

ASE: disable dsync/directio on all devices (NOTE: it is not possible to disable dsync on the master device)

ASE: shutdown

OS: unmount FS

OS: mount FS with cio enabled, journaling disabled

ASE: startup
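Put together as a rough sketch (server, device and mount point names are placeholders; the OS commands typically run as root, and the exact JFS2 mount options should be confirmed for your AIX level):

isql -Usa -SPROD_ASE <<'EOF'
exec sp_deviceattr 'data_dev1', 'dsync',    'false'
go
exec sp_deviceattr 'data_dev1', 'directio', 'false'
go
shutdown
go
EOF

umount /sybase/data                  # OS: unmount the filesystem
mount -o cio,log=NULL /sybase/data   # OS: remount with CIO on, JFS2 logging off
startserver -f RUN_PROD_ASE          # ASE: restart the dataserver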

---------------

Spreading ASE devices across disks has always been a good idea as it tends to provide access to more read/write heads.

Then again, nowadays it's very rare for ASE to be writing to individual physical disks ... oftentimes we instead find ASE writing to some sort of 'disk image' as presented by a RAID and/or SAN system.  Net result is that DBAs need to work with the disk subsystem administrators to ensure they understand the need for a database engine to spread IOs across as many physical disks as possible.

So, spreading ASE devices across OS/FS mounts may or may not improve disk/IO performance as you really need to understand the makeup of the disk subsystem.  An extreme example: you could define 20 different OS/FS mount points but disk/IO performance will still suffer if all 20 mounts point to the same single physical disk.

Even if you manage to get your ASE devices spread across different 'disks', you may still see some disk/IO performance lag if you happen to have several hot tables/indexes sitting on the same ASE device.  Net result is that the DBA may also need to spend some time making sure hot tables and indexes (or rather the associated partitions if running ASE 15.7+) are allocated across different ASE devices (via the placement of user-defined segments).
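As an example of that last point, a minimal sketch of pinning a hot table to a specific device via a user-defined segment (segment, database, device and table names are all placeholders; note that sp_placeobject only affects future allocations, not pages already allocated):

isql -Usa -SPROD_ASE <<'EOF'
use proddb
go
-- create a segment that maps to one (ideally lightly used) device
exec sp_addsegment 'hot_seg', 'proddb', 'data_dev2'
go
-- direct future allocations of the hot table onto that segment
exec sp_placeobject 'hot_seg', 'orders'
go
EOF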

---------------

I would recommend you first concentrate on fixing your AIX/OS mount point configurations (cio enabled, journaling disabled; ASE dsync/directio disabled) and then see what your disk service times look like. (Obviously you'll want to get some sample service times from the MDA tables before and after the changes so you can measure the improvements.)
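A quick way to eyeball service times before and after the change is a rough average of ms per IO from the cumulative monIOQueue counters; run the same query in both windows and compare the deltas (server name is a placeholder):

isql -Usa -SPROD_ASE <<'EOF'
select  LogicalName, IOType, IOs,
        AvgMsPerIO = convert(numeric(9,2), 1.0 * IOTime / IOs)
from    master..monIOQueue
where   IOs > 0
go
EOF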
