
downtime optimization for system copy

former_member432274
Participant
0 Kudos

Hi,

I am planning to copy my source database to a target database. My challenge is that the source is a non-compressed database and the target is a compressed database. How can I optimize the downtime? My sandbox database is 4 TB. The procedure is: system copy -> Oracle -> source system export -> based on AS ABAP -> table splitting.

I did a system export and it took a system downtime of 8 hours with 44 parallel jobs. I used 2 R3load jobs per CPU (22 CPUs).

Now, to optimize further, I am going for table splitting preparation, and I am very new to it. If you have any documents, could you please share them with me?

For this table splitting preparation, I downloaded MIGCHECK_2, MIGMON, MIGTIME, ORABRCOPY and SPLIT, as suggested in note 784118 - System Copy Tools for ABAP Systems.
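This is roughly how I staged the tool archives on the source host (just a sketch; the download and target paths are placeholders, and the archive names are the SAR files as downloaded per the note):

mkdir -p /sapcd/syscopy_tools && cd /sapcd/syscopy_tools
SAPCAR -xvf /tmp/downloads/MIGMON.SAR      # Migration Monitor
SAPCAR -xvf /tmp/downloads/MIGTIME.SAR     # Time Analyzer
SAPCAR -xvf /tmp/downloads/SPLIT.SAR       # table splitting package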


I also found a couple of notes: 1875778 - Performance Optimization for System Copy Procedures and 936441 - Oracle settings for R3load based system copy.


I need some more input from the experts so I can go ahead and kick off the table splitting preparation.



Note: my operating system and database are the same for source and target.

OS: Unix; database: Oracle.


Thanks in advance.



Accepted Solutions (0)

Answers (7)

former_member432274
Participant
0 Kudos

Is there any way, from a Basis perspective, to find the transaction codes that run against the compressed tables? How can I test the compressed tables as a Basis person?

E.g., CDHDR and CDCLS are compressed tables in my system. How can I figure out which t-codes hit them, so that I can test that all my t-codes still run well and fast?

william_nicol
Explorer
0 Kudos

Hi,

May I make a basic remark: if you perform a homogeneous system copy (same OS & same DB), you don't need to perform an R3load export/import.

A better way is:

  1. Copy the source datafiles to the target (traditional backup/restore)
  2. Rename/restart the target DB
  3. Compress the target DB
  4. Reduce the target file system

Pros:

- no performance impact on the source

- you can compress online (system open for end users)

Cons:

- needs more temporary space, to be compared with the space used for the R3load export
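To put a number on the extra temporary space, you can compare the currently allocated database size with the expected R3load dump size; a quick sketch using the standard Oracle dictionary views (run as a DBA user):

sqlplus -s / as sysdba <<'EOF'
-- total allocated segment size in GB, to compare with the expected export dump size
SELECT round(sum(bytes)/1024/1024/1024) AS allocated_gb FROM dba_segments;
EOF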

Regards

former_member432274
Participant
0 Kudos

Hi Nicol,

Thanks for the reply.

If I take an offline or online backup and restore it into my target system, and then start compression with an Oracle tool on the target system, my system would still have to be shut down, right? Because if I give users access to the system, I will definitely lose data and the source and target systems will become inconsistent. Correct me if I am wrong.

Thanks

william_nicol
Explorer
0 Kudos

Hi,

You're wrong.

You have to use the BRSPACE tool, and it works online (with, of course, some performance impact).

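As a rough sketch, the compression is started per table; the exact options depend on your BRSPACE release, so verify them in the BR*Tools help and the compression notes before running (the "ctab" value for simple table compression is from memory and should be treated as an assumption):

# online reorganisation of one table into a compressed copy (end users can keep working)
brspace -u / -f tbreorg -t CDCLS -c ctab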

Regards

former_member432274
Participant
0 Kudos

Is there any way, from a Basis perspective, to find the transaction codes that run against the compressed tables? How can I test the compressed tables as a Basis person?

william_nicol
Explorer
0 Kudos

Hi,

The best way is to ask your business people to test. And in any case, if performance is bad, you can uncompress the table again... so it's quite flexible.
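For the compression status itself, a quick check from the database side is enough; a sketch using the standard Oracle dictionary views (the SAPSR3 schema owner is only an example):

sqlplus -s / as sysdba <<'EOF'
-- COMPRESSION / COMPRESS_FOR show whether and how a table is compressed (Oracle 11g)
SELECT owner, table_name, compression, compress_for
FROM   dba_tables
WHERE  owner = 'SAPSR3' AND table_name IN ('CDHDR', 'CDCLS');
EOF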

Regards

former_member432274
Participant
0 Kudos

Regarding the above: if I use parallel export and import together with distributed monitoring, can I bring the downtime for my 4 TB sandbox system down to 8 hours?

former_member185954
Active Contributor
0 Kudos

I would run the Migration Monitor manually, and I would also use parallel export/import to further reduce downtime. In my experience, parallel export/import reduces overall downtime by around 40%, which is considerable, but in your environment it could vary depending on system capability, network speeds, etc.

I have used package splitting + table splitting + parallel export/import, which considerably reduces downtime.

As Stefan mentioned, if you add the Distribution Monitor to this mix, you can achieve very high throughput.
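For a manual run, the export side is driven by export_monitor_cmd.properties next to export_monitor.sh; a minimal sketch (parameter names as I remember them from the Migration Monitor guide shipped in MIGMON.SAR, so verify against your version, and all paths are placeholders):

# export_monitor_cmd.properties (excerpt)
installDir=/export/ABAP          # directory with the generated export structure (STR/TSK files)
exportDirs=/export/ABAP/DATA     # where the dump files are written
jobNum=20                        # number of parallel R3load export jobs
orderBy=size                     # start the biggest packages first
netExchangeDir=/export/exchange  # signal directory for parallel export/import (net exchange mode)

# then start the export on the source host
./export_monitor.sh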

Regards,

Siddhesh

stefan_koehler
Active Contributor
0 Kudos

Hi,

We should clarify something more important before thinking about table splitting details.

>> I did a system export and it took a system downtime of 8 hours with 44 parallel jobs. I used 2 R3load jobs per CPU (22 CPUs).


I guess you run the export on the database server, right? So the database processes and all R3load export processes are running on the same 22 CPUs. You even doubled the number of parallel jobs relative to the number of CPUs. So I assume (from my experience) that the CPUs are fully loaded as long as the 44 parallel export jobs (and the database) are running. So a higher degree of parallelism based on table splitting would not really be beneficial here.

I guess you have to use the Distribution Monitor _and_ table splitting to get a higher degree of parallelism with the corresponding throughput. The Distribution Monitor spreads the CPU load (= R3load processes) across several servers.
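A quick way to verify this assumption during a test export (both commands are available on AIX and Linux):

vmstat 30        # watch the user/system CPU columns while the 44 R3load jobs are running
sar -u 30 20     # 20 CPU utilisation samples at 30-second intervals

If user + system CPU stays near 100 % the whole time, more local parallelism will not help; spreading the R3load processes to additional servers with the Distribution Monitor will.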

Regards

Stefan

former_member432274
Participant
0 Kudos

Hi Stefan,

Thanks for the reply and I really appreciate your time.

I should take this opportunity to thank you. I have read very good blogs written by you, and I have also gained a lot of knowledge from reading your suggestions on SDN.

Yes, Stefan, as you mentioned, that was my first method: I took the database export directly. I logged in to my source system and exported the database instance to the local file system to speed up the process. As my sandbox is 4 TB, it took a downtime of 9 hours in total. The dump I got is 400 GB in the ABAP directory of the export location.

In method 2, my approach was to split my largest tables, which I identified from DB02 or ST04. I manually created a text file in the format tablename%<number of splits>, for example:

cdcls%5

cdhdr%4

I saved the file on my system, and when the installer asked, I provided the txt file and ran the table split. In this split I took only the top 5 tables, to check. I got the whr.txt file and used that file when exporting the database instance. Figures as follows.

Fig. 1: the txt file I created manually with the top 5 table names

Fig. 2: I provided the whr.txt file while exporting the database instance.

I did not run the Migration Monitor or Distribution Monitor here. When the installer asked, I just went with the default options.

The downtime here was the same as my direct export downtime.

stefan_koehler
Active Contributor
0 Kudos

Hi,

Thanks for the clarification, but you still ran the export (R3load processes with table splitting) locally on the source database server, right?

>> The downtime here was the same as my direct export downtime.

I do not doubt this, as it seems like you are CPU bound, given the available CPUs and the provided configuration. Higher parallelism (with package and/or table splitting) would not speed it up dramatically, as your CPUs on the source system are fully loaded anyway.

However, you should also use the Time Analyzer to analyze the R3load runtimes and split accordingly. Sometimes the biggest tables are not the ones causing the long overall runtime.

It is usually essential to measure your OS load while exporting/importing in order to find the right R3load/database configuration and export method. For example, on AIX and Linux this can be done very easily with nmon and analyzed with nmon_analyzer afterwards.
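For example, a simple recording during a test export could look like this (standard nmon options: -f writes to a file, -s is the snapshot interval in seconds, -c the number of snapshots; 30 s x 1200 covers roughly 10 hours):

nmon -f -s 30 -c 1200    # writes a <host>_<date>_<time>.nmon file for nmon_analyzer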

4 TB is not really that huge and can be migrated much faster than in 8 hours, if set up correctly. I usually use the Distribution Monitor due to the clients' system sizes and downtime requirements. You can also speed up the export for each R3load process with Oracle PX, but this is the kind of detailed configuration that should be made once you know the bottleneck.

In my experience, SAPinst GUI-based migration is something for small and mid-sized SAP systems with no critical downtime requirements; otherwise you mostly have to use the Distribution Monitor.

Regards

Stefan

former_member432274
Participant
0 Kudos

Hi Stefan,

Thank you for the reply. I will look into the Distribution Monitor and start working with it. I will keep the thread open until then.

I have one more question: our production database is 20 TB. By using the Distribution Monitor, can I optimize the downtime to 20 hours? At the end of the day we want to do this on production. The downtime given to me is less than 24 hours for 20 TB of data.

Is this achievable?

Thanks in advance.

stefan_koehler
Active Contributor
0 Kudos

Hi,

>> I have one more question: our production database is 20 TB. By using the Distribution Monitor, can I optimize the downtime to 20 hours?

This cannot be answered in general. It depends on a lot of influencing factors, but mostly on your available resources (e.g. throughput of the I/O subsystem, available CPUs and servers) and on the kind of SAP system (e.g. OLTP or OLAP/BI, which matters for the splitting). There are official real-world R3load examples like 2 TB in 2.25 hours (= 888 GB/h) on high-end hardware from 2010, and hardware capabilities have increased a lot since then.
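Just to translate the numbers in this thread into required throughput (rough arithmetic only):

echo "4*1024/8"   | bc   # sandbox:    4 TB in  8 h -> ~512 GB/h achieved so far
echo "20*1024/24" | bc   # production: 20 TB in 24 h -> ~853 GB/h required on average

So the 24-hour production target needs roughly the sustained throughput of that high-end example, well above what the sandbox export has reached so far.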

>> At the end of the day we want to do this on production. The downtime given to me is less than 24 hours for 20 TB of data.

The biggest problem in your case is that you are trying to set up, configure and test the migration with a 4 TB database while your productive system is five times larger. The configuration and test migrations should be done with nearly the same amount of data and the same structure.

Regards

Stefan

P.S.: Please check out my profile/website if you need further assistance with your migration. I offer such consulting services.

former_member182967
Active Contributor
0 Kudos

Hello,

Back to your original concern about how to reduce the total downtime, you can consider the following things:

Export / Import in parallel

Note 954268 - Optimization of export: Unsorted unloading

Note 936441 - Oracle settings for R3load based system copy

SYSTEM COPY & MIGRATION OPTIMIZATION

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/8091fedf-5346-2a10-e4a0-a3cafe860...


Regards,

Ning Tong

former_member432274
Participant
0 Kudos

Hi Tong,

Thanks for the reply.

Here is the thing: I followed the document you mentioned and took some input from it too.

I did some trial-and-error with the process.

I tried exporting the database directly, and I also tried splitting the largest tables and running the export. Still, I could not optimize the downtime.

You can see the images I posted in the above reply.

Thanks,

JamesZ
Advisor
0 Kudos

Hi,

Are you referring to the system copy guide? The guide talks about how to perform the table split via the SAP tools, which is recommended.

I think you can refer to the guide to perform the split in the sandbox system to get used to this feature. If you do not have the system copy guide, please let me know and I can show you where to download it from the SAP Service Marketplace.

Best regards,
James

former_member432274
Participant
0 Kudos

Hi Zhang,

Thanks for the reply. Yes, I am doing a system copy. My source and destination are the same (OS and database), so it is similar to a homogeneous copy, but my target database is compressed.

I am working on optimizing my downtime. I tried two methods.

Method 1: a direct export of my database. Method 2: table splitting and then exporting the database; it gave me the same timeline.

This is the document I am following, since mine is an SAP 7.02 system:

https://websmp201.sap-ag.de/~sapidb/011000358700001419492012E

JamesZ
Advisor
0 Kudos

Hi,

The guide you are referring to is the right one. Table splitting makes the import faster, not the export. Thus table splitting is recommended. Did you run into any issue following the guide?

Best regards,
James

former_member432274
Participant
0 Kudos

Hi Zhang,

Thanks for the reply.

My concern is that I don't have that much downtime. At the end of the day, I need to optimize the production downtime. The maximum I will get is 24 hours of production downtime for 20 TB.

So, let's say I get a downtime of 8 hours for the export and 4 hours for the import. That is a total of 12 hours for 4 TB; for 20 TB it will be more than that, correct?

I will check out the Distribution Monitor, but I am very new to it, so let's see how it works. If you have any document, could you please send it across?

Thanks

ACE-SAP
Active Contributor
0 Kudos

Hi

You should read the note below, which provides a more efficient way to prepare table splitting for Oracle.

1043380 - Efficient Table Splitting for Oracle Databases

That post also provides information on this splitting method.

Regards


former_member432274
Participant
0 Kudos

Hi,

I did find the note, but I used the R3ta option, gave my top 5 tables with the number of splits, and ran the table split.

E.g.: cdcls%5

cdhdr%5

I then did the table split and the database instance export; the export timing was the same as when I did the plain export.