
Logic behind same FS for log and data backups

Former Member
0 Kudos

Hi team,

I have a question. In traditional databases like Oracle, DB6 and Sybase we have separate file systems for data and logs.

Why is it the same here in HANA? Is there a reason, or any recommendation from SAP, since it is designed like this?

At many customers I have seen the log and data backups in the same file system, and I am not sure about the logic. Maybe there is some reasoning behind this.


Accepted Solutions (1)

HayBouten
Product and Topic Expert
0 Kudos

Is it really the same? SAP HANA has two parameters that specify the data and log locations: the parameter basepath_datavolumes and the parameter basepath_logvolumes. They should not point to the same file system.

What appliance or system do you have?
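
A quick way to check this on the appliance is shown below (just a sketch; the /hana/data and /hana/log paths and the SID "HDB" are only the common defaults and may be different on your system):

  # Show which file system each volume path is mounted on.
  # /hana/data/<SID> and /hana/log/<SID> are typical default locations.
  SID=HDB
  df -h /hana/data/$SID /hana/log/$SID
  # If both paths report the same "Mounted on" entry, the data and log
  # volumes share one file system, which is not recommended.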

Former Member
0 Kudos

Hay,

I am aware of what you are talking about.

global.ini:basepath_logbackup
global.ini:basepath_databackup


First I would like to clarify my question: it was about the backup destinations on disk being on the same FS.

For a multi-node system it is a shared backup directory and all the backups go into the same FS; I have seen this with IBM and HP.

The comparison came up because in DB6 or Oracle we write scripts so that whenever the FS goes above some threshold, a backup is triggered to release free space.

Now to your question: on both IBM and HP systems I have seen that the paths might be different, but they were pointing to the same FS.

Perhaps forum members can share how it looks in their appliances: are the log/data paths and the log/data backup paths on the same FS or on different ones?

Thanks,

Former Member
0 Kudos

It has to be different, since the log would be on a different kind of disk compared to the data storage: the log would be on something like an SSD or Fusion-io device, while the data would be on slower storage, right?

HayBouten
Product and Topic Expert
0 Kudos

I think you are now mixing up the 4 parameters that are involved in this.

  1. basepath_datavolumes => Location of the data files
  2. basepath_logvolumes => Location of the log segments
  3. basepath_databackup => Location of the database backups
  4. basepath_logbackup => Location of the log backups


Parameters 1) and 2) are for the running database and should be on different file systems, as these storage locations indeed have very different KPI requirements. What kind of hardware is used for them is up to the hardware vendor.

Parameters 3) and 4) are used to store the database data and log backups. The best practice is that these two locations are also on different file systems, and they should not be in the same location as 1) and 2). Even better, the data and log backup locations should not be on the same server as where SAP HANA is running.

Maybe it is confusing that after an SAP HANA installation the parameters basepath_databackup = $(DIR_INSTANCE)/backup/data and basepath_logbackup = $(DIR_INSTANCE)/backup/log look very similar, but they are pointing to different locations.
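
If you want to verify this on your own installation, something like the following could help (only a sketch; the global.ini path, the SID "HDB" and the instance number "00" are assumptions that may differ in your landscape):

  # Show the basepath_* entries in the customer-specific global.ini layer
  # (this is the usual location; adjust it for your installation).
  SID=HDB
  grep '^basepath' /usr/sap/$SID/SYS/global/hdb/custom/config/global.ini

  # $(DIR_INSTANCE) normally resolves to /usr/sap/<SID>/HDB<instance_nr>,
  # so with instance 00 the default backup locations can be checked with:
  df -h /usr/sap/$SID/HDB00/backup/data /usr/sap/$SID/HDB00/backup/log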

Former Member
0 Kudos

I must admit that in the Oracle world I haven't often seen disk backup areas, because backups were mainly written directly to tape.

On the SAP HANA side I have recently come across a situation where a company located log backups, data backups and trace files on the same disk area. This disk area filled up to 100% and as a consequence - nothing happened! At least not at first: data and log backups were no longer taken and traces were no longer written, but this didn't impact the running production system directly. It would have become worse if the maximum number of allowed log segments (default: 10240) had been reached, but in this case the file system was extended after half a day and everything remained stable all the time.

This means: The famous Oracle problem "archiver stuck" illustrated in SAP Note 391 doesn't happen on SAP HANA in the same way and so the log backup area is less critical here.

Former Member
0 Kudos


Yeah, true Martin, I think you got what I was trying to say.

We design scripts based on file system usage, and if both are written into the same FS then alerting would be difficult, since we cannot segregate the log size by disk usage.

Former Member
0 Kudos

We have the backup locations for data and log in the same NFS.

The question was: how did you enable script-based backups, since the FS would always be full due to data backups?

Was any change in the logic adopted?

HayBouten
Product and Topic Expert
0 Kudos

Indeed, maybe you can't, but Linux can! The problem is that you look at the file system usage instead of the directory usage.

The logbackup location is a different directory than the databackup location, so using the commands:

  • du -sh $(DIR_INSTANCE)/backup/data
  • du -sh $(DIR_INSTANCE)/backup/log

would give you different size indications.

Former Member
0 Kudos

Hay,

That's true. Now we would have to define parameters to check for the minimum and maximum and, based on that, trigger a backup.

How have you written your logic for backing up the log files?

HayBouten
Product and Topic Expert
0 Kudos

p517710 sap basis wrote:

... We design scripts based on ...

I haven't written scripts for backing up the log files, but you have.

In your scripts you said you were not able to distinguish between log and data backups. I showed you how you can distinguish between those. So now you can incorporate this in your script and it should work.
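
Just as an illustration, such a directory-based check could look something like this (a rough sketch, not a finished solution; the backup base path, the instance number 00 and the thresholds are placeholders you would have to adapt):

  #!/bin/bash
  # Check directory usage (not file system usage) per backup type and
  # print an alert when a placeholder threshold is exceeded.
  BACKUP_BASE=/usr/sap/HDB/HDB00/backup        # adapt to your system
  DATA_LIMIT_KB=$((200 * 1024 * 1024))         # example: alert above ~200 GB
  LOG_LIMIT_KB=$((50 * 1024 * 1024))           # example: alert above ~50 GB

  data_kb=$(du -sk "$BACKUP_BASE/data" | awk '{print $1}')
  log_kb=$(du -sk "$BACKUP_BASE/log" | awk '{print $1}')

  [ "$data_kb" -gt "$DATA_LIMIT_KB" ] && echo "ALERT: data backups use ${data_kb} KB"
  [ "$log_kb" -gt "$LOG_LIMIT_KB" ] && echo "ALERT: log backups use ${log_kb} KB"
  exit 0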


On help.sap.com you can find the documentation on SAP HANA Database Backup and Recovery.


Former Member
0 Kudos

Hi Martin,

I don't think that's the case - the archivelog principles are the same for HANA. If the backup volume fills and the database is doing automatic log backups, then the database will freeze if it is unable to rotate the redolog files (because the archivelog destination is full). I've seen this a few times. It is actually trickier to handle than other (unmentionable) databases, because simply releasing space on the backup volume does not cause HANA to auto-resume, at least when I last checked. I presume this will be modified at some point.

Regarding whether or not the *backup* file system can be shared for database backups and log backups - yes, why not. And it is the same for most production systems, irrespective of vendor. The only special case for database storage is the redolog destination, which benefits from isolation from other activity, protection from being inadvertently filled, and the highest I/O speed possible to minimize redo write times. As these files are filled they are peeled off to the archivelog destination - in HANA's case the backup/log directory - where it is a straight sequential copy and the I/O rate is not a concern.

mark teehan

singapore

Former Member
0 Kudos

The database backup and recovery concepts for HANA have been adopted from MaxDB; it's pretty much the same.

HayBouten
Product and Topic Expert
0 Kudos

It is similar, but different in several areas.

And now, looking at your other problem where you cannot recover, it might be a good moment to start reading the SAP HANA Backup and Recovery documentation.

Former Member
0 Kudos

Thanks Hay. Regarding recovery, I was asking about a scenario. I am aware of the procedure to recover; I probably haven't put into words correctly what I was referring to.

I have updated the thread now.

Former Member
0 Kudos

The data and log volumes concept is still similar in MaxDB, right?

The same savepoint, the same log concept; many things have been incorporated from MaxDB, right?

I am talking only with reference to the data and log volumes, not the entire database engine, Hay.

Answers (0)