This document was originally created in January 2013 by Kosana Avinash Reddy. Since the screenshots were no longer visible, I decided to rewrite the document, extend it with comments, and include the screenshots, for reference and easier use by others who need this procedure.

Objective


The main objective of loading legacy data into SAP using EMIGALL distributed mode is to speed up the process when there is a large amount of data to import. Distributed mode also reduces the effort of scheduling jobs and makes them easy to monitor. The document is divided into the following four topics:

  1. Converting Legacy file into Binary format (SAP format)
  2. Scheduling Distributed Import Jobs
  3. Changing scheduling parameters
  4. Monitoring Distributed Import Jobs

1. Converting Legacy File

     1.1 Go to transaction EMIGALL

     1.2 Double-click on the Migration Object for which we want to process the import file

     1.3 Click Data Import

     1.4 In data import screen, go to menu: Utilities -> Convert migration file

     1.5 Enter Input directory (remember to put “\” at the end of the input directory path)

     1.6 Enter input file name (including file extension)


     Note: Keep the input directory and file name short. A long file path or file name can sometimes prevent the system from recognizing the file (an illustrative example follows at the end of this section).


     1.7 Enter output directory (remember to put “\” at the end of the output directory path)

     1.8 Enter output file name (including file extension)

     1.9 Check Migration company and migration object values

     1.10 After entering the values, go to menu: Program -> Execute in Background


     Note: Remember the output file path and file name. They will be used when scheduling the distributed import job.

     1.11 In Background Print Parameters, press OK


    

     1.12 Press Immediate

    

     1.13 After scheduling the job to start immediately, press Save

   

     1.14 Go to transaction SM37 and monitor the job until it finishes
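
     The following short example is purely illustrative (plain Python, not part of SAP or EMIGALL): the directory names, file names, and the 60-character limit are hypothetical. It only demonstrates the rules from steps 1.5 to 1.8 and the note above: directories must end with “\” and paths and file names should be kept short.

# Illustrative only: hypothetical values for the conversion screen
conversion_params = {
    "input_directory":  "D:\\MIG\\IN\\",    # trailing backslash required
    "input_file":       "DEVICE.txt",       # include the file extension
    "output_directory": "D:\\MIG\\OUT\\",   # trailing backslash required
    "output_file":      "DEVICE.dat",       # remember this for section 2
}

def check_directory(path, max_len=60):
    """Warn about the two issues mentioned in the note above (limit is an assumed value)."""
    if not path.endswith("\\"):
        print("Missing trailing backslash: " + path)
    if len(path) > max_len:
        print("Path may be too long for the system to recognize: " + path)

for key in ("input_directory", "output_directory"):
    check_directory(conversion_params[key])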

2. Schedule Distributed Import

     2.1 Use transaction EMIGMASSRUN to launch the Administration & Monitoring screen

     2.2 Press Create button

   

     2.3 Enter Company and enter Migration Object

     2.4 Press OK

    

     2.5 Enter Migration Path & file name (the output file path & file name that were used earlier to convert the input file)

     2.6 Enter Error File (with file extension “.err”)

     2.7 Enter Input file (generated) with ‘&’. The ‘&’ will be replaced by a sequence number when the file is split

     2.8 Enter Error file (generated) with ‘&’.

     2.9 Commit Interval: This should be maintained only if this field is enabled for input

     2.10 Press the Analyse button adjacent to the file name. This will list the import statistics

   

     2.11 Compare import statistics with legacy file statistics (if shared)

     2.12 Validate the number of data types (if there is a relationship). For example, in device groups for Meter and Module, every record will have two device data types.

     2.13 After Analysis, go back to main screen

     2.14 Define the number of background work processes which we intend to use

     2.15 Enter Mass Import file size

     2.16 Save the Distributed Import identification


     Note: Based on the mass import file size, the main file will be split into multiple files for parallel processing. This value is the number of master data records per file (see the illustrative sketch at the end of this section).

     2.17 Press Import Run


     2.18 Choose the option “Start Later”

     2.19 Change the Start Date and Time to a point well in the future

     2.20 Press Distributed Import

     2.21 The distributed import job will be released
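
     The splitting behaviour described in steps 2.7, 2.8 and 2.15 can be pictured with a minimal Python sketch. It is purely illustrative: EMIGALL performs the split internally, and the file name, zero-padded numbering, and record counts below are assumptions, not EMIGALL internals.

# Illustrative only: conceptual sketch of '&' expansion and file splitting
def expand_template(template, sequence):
    """Replace the '&' placeholder with a sequence number (the padding format is assumed)."""
    return template.replace("&", "%04d" % sequence)

def split_into_files(records, records_per_file, template):
    """Split the converted main file into chunks for parallel import."""
    files = []
    for start in range(0, len(records), records_per_file):
        name = expand_template(template, start // records_per_file + 1)
        files.append((name, records[start:start + records_per_file]))
    return files

# Example: 2,500 records with a mass import file size of 1,000 records gives
# three generated files, each of which can be picked up by one background
# work process.
records = ["record %d" % n for n in range(2500)]
for name, chunk in split_into_files(records, 1000, "DEVICE_&.dat"):
    print(name, len(chunk))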

3. Change Job Scheduling Parameters

     3.1 Go to transaction SM37

     3.2 List jobs which were released

    

     3.3 Choose the job whose status is “Released”    

     3.4 Go to menu: Job -> Change

     3.5 Press Step

     3.6 Place your cursor on the step and press change icon


     Note: The job is currently scheduled under the user who created the distributed import identification

     3.7 Change User to “MIGRATION” (or user id which has been setup for migration purpose)

     3.8 Press Save to save step

     3.9 Verify that the user for the job has been changed to “MIGRATION”

     3.10 Go back to main screen

     3.11 Press Start condition

     3.12 Press Immediately

     3.13 Press Save

     3.14 Press Save on job main screen

     3.15 The distributed import job is now started with the “MIGRATION” user id

4. Monitoring Job

     4.1 Use transaction EMIGMASSRUN

     4.2 Press “List of Distributed jobs” button and select job with your identification

     4.3 Go to “Data for last import” for statistics and to monitor the import job

     4.4 Go to “Statistics for import runs” for further details

     4.5 Double click on line item for further analysis
