
SAP Test Data Migration Server (TDMS) is a high-speed data extraction tool that populates development, test, quality assurance, and training systems with a reduced dataset of SAP business data from the production environment. SAP TDMS operates at client level: the selected data from a client in the sender system is copied to a client in an existing non-production system.

The rationale behind SAP TDMS is as follows: approximately 80% of the data volume of a typical database is contained in less than 10% of the tables, and the biggest tables are normally transaction data tables. This means that small non-production systems for different purposes can be created by including only those parts of the data that are needed for the given purpose. Note that TDMS is not meant to be used for creating production systems.

The phases in the TDMS process tree are:

1) Package Settings
2) System Analysis
3) Data Transfer
4) Postprocessing

In this blog, we will look at various issues and problems that can occur in the "ERP Initial Package for Time Based Reduction" scenario.

1.1      The Performance of activity TD05X_FILL_EQUI is insufficient

Reason:

Activity TD05X_FILL_EQUI has a very long runtime in the System Analysis phase for some system constellations, sometimes as long as several days.

Solution:

Implement SAP Note 1037712 and set the activity parameter P_FULL to the value 'X' for the activities TD05X_FILL_EQUI and TD05X_FILL_EQUI_O in transaction CNVMBTACTPAR.

Note 1037712 - Performance of activity TD05X_FILL_EQUI
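
To verify the setting afterwards, you can have a look at the parameter table behind transaction CNVMBTACTPAR. The short ABAP sketch below is only an illustration: the table name CNVMBTACTPAR and the field name ACTIVITY are assumptions, so check the actual dictionary structure in SE11 before using it.

* Verification sketch only - table and field names are assumptions,
* not verified TDMS dictionary objects.
DATA lt_par TYPE STANDARD TABLE OF cnvmbtactpar.

SELECT * FROM cnvmbtactpar INTO TABLE lt_par
  WHERE activity = 'TD05X_FILL_EQUI'
     OR activity = 'TD05X_FILL_EQUI_O'.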

1.2       TIME_OUT dumps during activity Calculate Access Plans (Simulation)

Reason:

A proper size category is not assigned for some customer tables. Hence, the data selection activity runs in a DIALOG process instead of a BACKGROUND process, which results in the short dump TIME_OUT.

In this case, the system attempts to calculate the access plan within the time span of a synchronous RFC call from the central system to the sender system, and the time limit for dialog work processes in the sender system is exceeded.

Solution:

  1. In transaction RZ11 (sender system), check the system parameter rdisp/max_wprun_time. It should be set to 900 seconds.
  2. If the parameter is already set correctly, change the size category for the conversion object that was processed.

The size category should be set to 'A' in the PCL definition of this conversion object for the current package.

To do this, go to view V_CNVMBTCOBJ in the central system, enter the package number, select the conversion object that was processed, and change the "Size Cat" field to 'A'.

In this case, the size category was set to 'A' (large tables) for the following tables:

ETXDCJ, LTEX, X_ZZSUMMA, CE4HCOC_ACCT, Z**, Z**
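
If you prefer to check the timeout parameter programmatically on the sender system rather than in RZ11, a minimal ABAP sketch using the kernel call C_SAPGPARAM could look like this:

* Read the current value of rdisp/max_wprun_time on this system.
DATA: lv_name(60)  TYPE c VALUE 'rdisp/max_wprun_time',
      lv_value(60) TYPE c.

CALL 'C_SAPGPARAM' ID 'NAME'  FIELD lv_name
                   ID 'VALUE' FIELD lv_value.

WRITE: / 'rdisp/max_wprun_time =', lv_value.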

1.3       Setting Deletion Scenario to “Array-Delete”

Reason:

The receiver system is a multi-client system, and the users in other clients cannot be locked while data is being deleted from the receiver client. Consequently, you cannot use a deletion method that involves dropping tables. If you do not want to lock the other clients, you need to change the deletion scenario to overall array-delete (O) at package level.

Solution:

It is recommended to use the activity ‘Change Deletion Scenario at Package Level – Optional’ in the System Analysis phase to switch the deletion scenario to one of the following (a short sketch of the difference follows after the list):

  • O (Overall ‘Array Delete’ technique)
  • F (Full table scan, filtering out non-relevant entries)
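
To illustrate the difference between the two approaches, here is a short ABAP sketch with a hypothetical client-dependent table ZSALES; this is purely conceptual and not the actual TDMS coding:

* Scenario 'O' deletes only the rows that belong to the receiver
* client, so users in the other clients of this multi-client
* system remain unaffected (classic client-specified syntax).
DELETE FROM zsales CLIENT SPECIFIED WHERE mandt = '100'.

* A drop-based deletion scenario would instead remove the database
* table as a whole, including the data of all other clients, which
* is why those clients would have to be locked during deletion.

The array-delete variant is slower than dropping and re-creating a table, but it is the only safe choice when other clients of the receiver system must stay operational.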

1.4       Setting the status of an activity manually

Reason:

The status of the activity “Calculate Access Plans (Simulation)” does not change and remains RED even though the logs show that the activity was successful.

Solution:

Set the status to GREEN by using the “Manual Status Setting” option.

1.5       Performance Issues

During the System Analysis and Data Transfer phases of a TDMS package, performance issues have been noticed in the sender system due to the recompilation of existing objects.

All activities listed below are performance-intensive in relation to the relevant data volume and data distribution, that is, they require a considerable part of the system resources such as CPU or memory. This applies to the application server as well as to the database server. For this reason, before you start an activity, ensure that sufficient resources are available on the relevant system (you can check this with transaction ST06, for example) and that these resources are not currently being used by other system activities (transactions, background processing). If these activities are started in such a situation, performance problems are almost certain to occur.

  • Filling header tables in the System Analysis phase

This activity consists of individual activities that run in the background on the sender system (BTC process). Which individual activity is performance-critical depends on the applications used and the rate of utilization. The performance is basically determined by the accesses to the database (SELECT on large tables). You also have to expect long runtimes if this activity is executed repeatedly, because in this case the header tables that were created before must be deleted first, which again extends the runtime.

Affected Systems: Sender

  • Data selection

This activity mainly uses the sender system, where it runs via an RFC call (one per conversion object) in a DIA or BTC process. The runtime is basically determined by a SELECT on the affected table and, therefore, by the dataset and a possible selection group. If a selection group was assigned, the selection is restricted to the fields in that assignment; otherwise the affected table is accessed without any restriction on the selection fields. A simplified sketch follows after this item.

Affected Systems: Sender
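
As a simplified illustration of why the SELECT dominates the runtime in a time-based reduction, consider the sketch below. The table BKPF (accounting document headers) and the posting-date condition are chosen only as an example; the real TDMS selection coding is generated per conversion object:

* Hypothetical time-based selection: everything posted on or after
* the from-date is read, so the runtime grows with the table size
* and the chosen reduction period.
DATA lt_docs TYPE STANDARD TABLE OF bkpf.

SELECT * FROM bkpf INTO TABLE lt_docs
  WHERE budat >= '20120101'.      " assumed from-date of the package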

  • Deleting data in the receiver system

This activity starts a sub-activity in the background (BTC process) on the receiver system for almost every table that is to be deleted. Each sub-activity logs on to the central system using RFC and repeatedly transfers its processing status; for the duration of each such call, a dialog work process on the central system is occupied. This means that for each active BTC process in the receiver system, a dialog process is also occupied in the central system, which may result in a bottleneck. The intense DB accesses, however, mainly increase the load on the receiver system. A conceptual sketch of this pattern follows after this item.

Affected Systems: Receiver, Central
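
The following sketch shows this status-reporting pattern conceptually. The function module name Z_TDMS_SET_STATUS, its parameters, and the RFC destination are purely hypothetical stand-ins for the actual TDMS interface:

* Inside a deletion sub-activity (BTC process on the receiver):
* each synchronous status call to the central system occupies one
* dialog work process there for the duration of the call.
DATA: lv_table  TYPE tabname VALUE 'ZSALES',      " hypothetical table
      lv_status TYPE c LENGTH 10 VALUE 'RUNNING'.

CALL FUNCTION 'Z_TDMS_SET_STATUS'      " hypothetical RFC-enabled FM
  DESTINATION 'CENTRAL_SYSTEM_RFC'     " assumed RFC destination
  EXPORTING
    iv_table  = lv_table
    iv_status = lv_status.

With many tables being deleted in parallel, many such calls can arrive at the central system at the same time, which is exactly how the dialog work processes there can become a bottleneck.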

  • Data transfer

This activity utilizes all three systems. The control and a possible conversion of the client run on the central system in several BTC processes. These use RFC to call the sender and receiver systems, where one dialog process is occupied for each call. The performance on the sender and receiver systems is mainly determined by the database; in the central system, the application server (CPU) is utilized almost exclusively. Experience has shown that the sender system carries the largest load, because the complete dataset must be read and filtered there, while only a reduced amount of data is written to the receiver system.

Affected Systems: Sender, Receiver, Central

  • Deleting entries in help header tables after transfer

This activity runs on the sender system in a BTC process. The runtime is basically determined by the size of the TDMS "header tables".

Affected Systems: Sender

1.6       Error in Generating Conversion Objects in Start Data Selection

Reason:

The first run of Start Data Selection finished with status ERROR; 12 objects had failed.

As can be seen in the long text of the log messages, this error occurs mainly due to an interruption in the RFC connection.

During the second run of Data Selection, the status was manually set to complete by user ‘.....’. Hence, data selection for the 12 tables never took place, and consequently Data Transfer cannot take place for these tables either.

In addition, there is one object for which nothing has been run at all, possibly because the table was added for transfer at a later point.

Solution:

Run the troubleshooter for Start Data Selection:

1. Execute the option ‘Define Technical Settings’ under ‘Change Technical Settings for Conversion Objects’.

2. Specify the current package number if it is not taken over automatically, and execute.

3. Change the size category to Large.

4. Save and then activate the settings.

5. Go back to the previous screen and run ‘Automatic Repair’ under ‘Troubleshooting for Conversion Objects’.

If the drop-down for “Troubleshooting for Conversion Objects” is not available, execute the report CNV_MBT_TS_PCL_DTL_REPAIR, which is the exact report run by the troubleshooter (a sketch for calling it directly follows below).

When the activity is successful, start Data Selection again for the aborted objects.
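
If you want to start the repair report from your own coding instead of SA38/SE38, a minimal sketch looks like this; the report displays its own selection screen so the package number can be entered manually:

* Call the repair report directly; its selection screen is shown
* for input before execution.
SUBMIT cnv_mbt_ts_pcl_dtl_repair VIA SELECTION-SCREEN AND RETURN.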

1.7       Recommendations

  • Implement SAP Note 890797 "SAP TDMS - required and recommended system settings".
  • Provide at least eight batch processes and eight dialog processes for each system (or system role) in a transfer (a sketch for checking this follows after the list).
  • In the receiver system, DB archive logging and the SAP system parameter rec/client should be disabled to improve write performance.
  • Customer-defined tables (and also SAP tables that were not selected for reduction) with a large dataset are not necessarily recognized for reduction and should therefore be checked again.
  • Monitor the utilization of the application and database servers to check the system load.
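
To check how many batch and dialog work processes a server currently provides, you can use the standard function module TH_WPINFO, as in the sketch below. The field WP_TYP of structure WPINFO is used here to classify the processes; verify the exact field names on your release before relying on it:

* Count the work processes of the current application server by type.
DATA: lt_wp  TYPE STANDARD TABLE OF wpinfo,
      ls_wp  TYPE wpinfo,
      lv_btc TYPE i,
      lv_dia TYPE i.

CALL FUNCTION 'TH_WPINFO'
  TABLES
    wplist = lt_wp.

LOOP AT lt_wp INTO ls_wp.
  CASE ls_wp-wp_typ.
    WHEN 'BTC'. lv_btc = lv_btc + 1.
    WHEN 'DIA'. lv_dia = lv_dia + 1.
  ENDCASE.
ENDLOOP.

WRITE: / 'Batch processes: ', lv_btc,
       / 'Dialog processes:', lv_dia.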