SAP BEST PRACTICES for DATA MIGRATION

Former Member

Hi folks,

I am in the process of implementing SAP Best Practices for Data Migration to migrate data between two SAP systems. Currently my Data Services 3.2 is running on a Red Hat Linux server with Oracle 11g. I am following the quick guide (as mentioned in Note 1559332 - SAP Best Practices for Data Migration Solution V3.32) to install the Best Practices. As our DS is running on Linux/Oracle (as opposed to the Windows/MySQL platform in the guide), I am following the manual installation steps in the document. I have the following questions:

1) Is my Data Services platform (Linux + Oracle) supported for a Best Practices Data Migration implementation?

2) There is a location mentioned in the document, "C:\Migration", where all the downloaded files should be placed. The same location is also referred to as $G_Path in the global variables. Should I move the related files (except the .atl files, which can be imported through the DS Designer tool from any local Windows server) that will be used by Data Services to the Linux server where DS is running (to a location like \d32adm\BOBJ\Migration)?

3) Where will the actual connection to the source system be defined? The target system details can be defined in the datastore DS_SAP.

I am in the process of understanding the concept and configuration.

Thanks,

Sandeep

Accepted Solutions (1)

Former Member

Sandeep,

Re: the file location for the content: given that you're going to be migrating data into an already-in-production SAP system, you'll probably want to skip the load of the default AIO (All-in-One) content -- all those .TXT files you're asking about. Your target system already has all the config the customer wants, and you really don't want to muck up your LKP lookup mapping tables with default AIO content.

Re: whether you can use Linux & Oracle: I don't see why not, but note that you can get a free copy of SQL Server from Microsoft -- the BPDM kit works fine on SQL Server Express -- in case you run into problems.

Re: the source system: the BPDM kit doesn't make any assumptions about source systems. You could be migrating data from Joe's Database System, a collection of flat files, JD Edwards, or who-knows-what, so there's no canned support for any particular source system. At the beginning of each "mapping" dataflow there's an Excel stub (an Excel datasource pointing to a non-existent Excel file) feeding a query transform, and that query contains mappings for the "best practices" fields of an AIO implementation. Following that there's a query which expands the schema to all the fields in the iDoc segment (all the ETL is organized by iDoc segments, more or less).

In the implementation I'm finishing up (I've been the tech lead for a data migration at a Fortune 1000 company, SAP > SAP, using the BPDM kit), we've written lots of ETL to create "feed tables" that replace the Excel stubs. We've also generally thrown away the initial mapping query and gone right to the "all fields" query, because the customer's needs weren't at all accommodated by a stripped-down AIO set of fields. I have to say that most of the business rule encoding has gone into the ETL which creates the feed tables. The BPDM kit ETL really just assembles iDocs, which is nice as far as it goes, but it takes care of perhaps 5% of your work, unless you're satisfied with asking your end users to supply the feed tables themselves, one per iDoc segment. If you are, fine, but I imagine they'll wonder why they don't just use their old LSMW routines, given that they'll end up doing 95% of the work anyway. I wasn't satisfied with that approach, so we wrote the whole migration, source tables > iDocs, in DS ETL code.
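To make the "feed table" idea a bit more concrete, here's a minimal sketch (in Python rather than DS ETL, purely for illustration) of shaping legacy source records into one feed-table row per iDoc segment. The segment and field names are assumptions for the example, not the kit's actual definitions, and the rules shown are made up:

# Hypothetical sketch: shape legacy vendor records into a "feed table" for
# one iDoc segment. Field names are illustrative, not the real BPDM or
# iDoc definitions.
from typing import Iterable

def build_vendor_feed(source_rows: Iterable[dict]) -> list[dict]:
    """One feed-table row per source vendor; the business rules live here,
    upstream of the kit's iDoc-assembly ETL."""
    feed = []
    for row in source_rows:
        feed.append({
            "LIFNR": row["LIFNR"].lstrip("0"),      # example rule: drop leading zeros
            "NAME1": row["NAME1"].strip().upper(),  # example rule: normalize the name
            "LAND1": row["LAND1"],                  # example rule: carry over as-is
        })
    return feed

# Example: one legacy vendor record in, one feed row out.
print(build_vendor_feed([{"LIFNR": "0000104711", "NAME1": " Acme Supply ", "LAND1": "US"}]))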

A couple more things: they make a big deal, in the kit, about the lookup table maintenance application (a little .JSP-based web app) and the system of LKP lookup tables. (You'll find a slew of LKP tables in the "staging database," which, by default, is called AIO_BPDM_IDOC, a non-intuitive name if there ever was one.) In my experience, the number of fields where the business rules were simple enough to be accommodated by these simple key-mapping tables was, perhaps, one in ten. Generally, the rules needed to be implemented upstream of where the code attempts to translate values (in the "enrichment" dataflows), and we turned off the LKP table-based translations. They were, actually, more trouble than they were worth.
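For what it's worth, a simple key-mapping table only covers translations like the sketch below (Python, purely illustrative; the table contents and values are invented, not actual LKP definitions). Anything that needs more context than a single input value -- several source fields, conditions, joins -- has to be handled upstream:

# Hypothetical key-mapping (LKP-style) translation: one old value in,
# one new value out. The mappings here are invented for illustration.
LKP_ACCOUNT_GROUP = {
    "KRED": "ZVEN",
    "LIEF": "ZVEN",
}

def translate_account_group(old_value: str) -> str:
    """One-to-one value translation; pass the value through if unmapped."""
    return LKP_ACCOUNT_GROUP.get(old_value, old_value)

print(translate_account_group("KRED"))  # -> "ZVEN"
print(translate_account_group("0001"))  # unmapped -> "0001"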

Secondly: the system, out of the box, assumes you've got a blank target to migrate data into. When migrating vendor data, for instance, it will generate new numbers for your incoming vendors, based on number ranges established for certain account groups. (See the MGMT_NUMBER_RANGES and MGMT_NUMBER_ALLOCATION tables, and the ENR_ASSIGN_OBJECT_NUMBER custom function.) BUT IT HAS NO KNOWLEDGE OF THE VENDORS YOU'VE ALREADY GOT. It will, by default, blithely assign new numbers that will, most likely, collide with your existing numbers, and when you send those iDocs into your target, you'll UPDATE your existing vendors. (Ditto Customers, Materials, etc.) Out of the box, this part of the code is completely unsuitable for a migration of data into an already-in-production target. I needed to rather dramatically re-engineer these tables and all the related code to make sure that new numbers are always unused in the target. Tricky business.
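Conceptually, the rework amounts to something like the sketch below (Python, purely illustrative -- this is not the kit's MGMT_NUMBER_RANGES / ENR_ASSIGN_OBJECT_NUMBER code): allocate from the configured range, but skip any number the production target already uses:

# Hedged sketch of collision-safe number assignment: give each incoming
# legacy key the next free number in the range, skipping numbers that
# already exist in the production target.
def assign_numbers(incoming_keys, range_start, range_end, existing_numbers):
    existing = set(existing_numbers)
    allocation = {}
    candidate = range_start
    for key in incoming_keys:
        while candidate in existing:
            candidate += 1
        if candidate > range_end:
            raise RuntimeError("number range exhausted")
        allocation[key] = candidate
        existing.add(candidate)
        candidate += 1
    return allocation

# Example: the target already uses 100001 and 100002, so the incoming
# vendors get 100003 and 100004 instead of colliding.
print(assign_numbers(["LEGACY-A", "LEGACY-B"], 100001, 199999, [100001, 100002]))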

Finally: most of the work is in source data analysis and coming up with the set of data transformation rules to get one set of SAP data to work in a new SAP system. This is mostly a matter of the business deciding how to take the source data and twist and turn it to make it work given how they do business. How should we structure their vendors? How should we deal with their plants, warehouses, and storage locations? Do we want to make their BOM structure flatter or deeper? Etc. The BPDM kit per se isn't going to help you with any of that -- it's not a magic bullet. It's nice to have the ETL to make iDocs, and iDocs are a cool way to put data in. But all the heavy lifting happens outside the kit.

Best wishes,

Jeff Prenevost

Former Member

Hi Jeff,

Thanks a lot for the detailed explanations.

Regards,

Sandeep

Answers (0)