Technology Blogs by Members
former_member183249
Active Participant


I had a scenario where I needed to handle more than 10,000 values in a Value Mapping, which was a very tedious task. Entering that many values manually in the Integration Directory was not feasible; it would have taken months. I then tried the Value Mapping Replication (VMR) interface available in the Basis component, but it was not efficient either. I also tried uploading directly from NWDS by creating a CSV file for the Value Mapping, but that fails whenever a key or value contains a "," (comma).

So that option did not help me either.

We also knew that doing lookups against a database or file server hurts an interface's execution time when the number of lookups is large.

Then I thought: why not keep all of these values in the ESR itself, in some file format, and read them directly from the files? That would be much quicker than any other option. The only ESR object that accepts arbitrary file uploads is the Imported Archive. Imported Archives are normally used for Java or XSLT mappings, but they accept uploads in .zip format, which gives us a way to put any kind of file into the ESR after zipping the files together.

It was amazing when I managed to look up a key among more than 50,000 values in a few milliseconds. I am sharing this approach because I searched all of SCN and the SAP documentation for a solution to this problem and came back empty-handed.

Below is a step-by-step procedure for maintaining any number of values in key-value format in the ESR and looking them up with a small UDF.

Step 1: Create text files containing key-value pairs separated by a space or "=", as shown in the screenshot below.
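For example, one of the lookup files might look like this (the keys and descriptions here are invented for illustration; the real files hold the actual EDI code lists):

```
01=First value
02=Second value
03=Third value
```

Each line is one entry: the key, the separator, then the value that the lookup should return.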

 



Step 2: Create all the files you want to use for lookups in text format and zip them together.

(I had a requirement to transform the incoming input values of 20 EDI segments into their standard actual values, as shown in the figure above, so I created a separate file for each segment. If you prefer, you can merge everything into one text file and upload a single file.)

I created 21 files as per the requirement and zipped them together as below:
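Any archiver will do for the zipping step; as a sketch, the same thing can be done in plain Java with java.util.zip (the two file names and their contents below are illustrative stand-ins for the real 21 files):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipLookupFiles {
    public static void main(String[] args) throws IOException {
        // Illustrative lookup files; in the real scenario there were 21 of them.
        Files.write(Paths.get("DE_365.txt"), "01=First value\n02=Second value\n".getBytes("UTF-8"));
        Files.write(Paths.get("DE_366.txt"), "01=Original\n02=Copy\n".getBytes("UTF-8"));

        String[] names = {"DE_365.txt", "DE_366.txt"};
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream("ValueMapping.zip"))) {
            for (String name : names) {
                // Each text file becomes one entry in the archive, keeping its
                // name, which is what the UDF later passes to getResourceAsStream().
                zos.putNextEntry(new ZipEntry(name));
                Files.copy(Paths.get(name), zos);
                zos.closeEntry();
            }
        }
    }
}
```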



Now I have a zipped file that contains all the key-value pairs, and we can upload it into an Imported Archive without any issue :smile:

Step 3: Create an Imported Archive (I named mine ValueMapping) and import the .zip file into it.

 



Now you can see all the text files in the Imported Archive, as shown in the screenshot below.





Click on any file to see its content:



Now save and activate your Imported Archive.

Step 4: Now assign your Imported Archive to your message mapping in the Functions tab, under the used-archives section, as shown below:

 



Step 5: Now we will create a simple UDF that takes two inputs: the first is the key for which we want the description, and the second is the name of the file in which to look the key up.

(If you created only one file, you can give the UDF a single input and hard-code the file name inside it.)



Step 6: Copy the code below into your UDF:

 

//public String FileLookup(String key, String filename, Container container) throws StreamTransformationException{

// Note: add java.io.* to the UDF's import list.

String returnString = "";
try {
    // The Imported Archive is on the mapping's classpath, so the text file
    // can be read as a classloader resource.
    InputStream lookupStream = this.getClass().getClassLoader().getResourceAsStream(filename);
    BufferedReader buffer = new BufferedReader(new InputStreamReader(lookupStream));
    String read;
    while ((read = buffer.readLine()) != null) {
        // Skip lines shorter than the key to avoid a StringIndexOutOfBoundsException.
        if (read.length() <= key.length()) {
            continue;
        }
        String temp = read.substring(0, key.length());
        if (key.equals(temp)) {
            // Everything after the key and its separator is the value.
            returnString = read.substring(key.length() + 1);
            // Requirement-specific adjustment from my scenario: the values were
            // numeric codes, and everything except "00" had to be shifted by 2.
            // Note the equals() call: comparing strings with != checks object
            // references, not content. Remove this block for plain text values.
            if (!returnString.equals("00")) {
                int num = Integer.parseInt(returnString);
                num = num + 2;
                returnString = Integer.toString(num);
            }
        }
    }
    buffer.close();
} catch (Exception e) {
    returnString = e.getMessage();
}
return returnString;

//}
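The matching logic of the UDF can be tried outside PI with a small standalone sketch. It reads a local file instead of a classpath resource, requires the separator character right after the key (so that key "1" cannot accidentally match a line starting with "10"), and leaves out the requirement-specific numeric adjustment; the file name and entries are made up:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class FileLookupDemo {

    // Scan the file line by line and return the value whose key matches.
    static String lookup(String key, String filename) throws IOException {
        try (BufferedReader buffer = Files.newBufferedReader(Paths.get(filename))) {
            String read;
            while ((read = buffer.readLine()) != null) {
                // Accept "key=value" or "key value"; the separator check keeps
                // a short key from matching a longer key with the same prefix.
                if (read.length() > key.length()
                        && read.startsWith(key)
                        && (read.charAt(key.length()) == '=' || read.charAt(key.length()) == ' ')) {
                    return read.substring(key.length() + 1);
                }
            }
        }
        return "";
    }

    public static void main(String[] args) throws IOException {
        Files.write(Paths.get("demo.txt"), "01=First value\n02=Second value\n".getBytes("UTF-8"));
        System.out.println(lookup("02", "demo.txt")); // prints "Second value"
    }
}
```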

Step 7: Now we will create one more UDF to trim off the fixed extra description that always appears in the output of the lookup code.

This UDF has only one input: we pass the output of the lookup UDF into it, and it returns the actual value. If this is confusing, it will become clear once you run Display Queue on these UDFs.

Below is the code for the trimValue UDF. (Input parameter of the UDF: value)

// Guard changed from length() > 0 to length() > 19 so that short strings
// do not cause a StringIndexOutOfBoundsException.
if (value.length() > 19) {
    // Drop the fixed 19-character prefix and the trailing character.
    return value.substring(19, value.length() - 1);
} else {
    return "";
}
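As a quick sanity check of what trimValue does, here is a tiny standalone version. The 19-character prefix in the input string below is invented; in the real mapping the prefix and trailing character come from how the lookup output appears in the Display Queue:

```java
public class TrimValueDemo {

    // Trimming as in the UDF, with a guard for strings that are too short.
    static String trimValue(String value) {
        if (value.length() > 19) {
            // Drop the first 19 characters and the last one.
            return value.substring(19, value.length() - 1);
        }
        return "";
    }

    public static void main(String[] args) {
        // 19 filler characters, then the payload, then one trailing character.
        String raw = "0123456789012345678Actual value.";
        System.out.println(trimValue(raw)); // prints "Actual value"
    }
}
```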

Now our UDFs are ready for testing :smile:

Step 8: Now I will pass a key that exists in the DE_365.txt file, and the output will be the actual value stored against that key.

I have shown every input and output using Display Queue, which should make everything clear, including why I wrote the trimValue function :smile:

 



Now we can compare it with the key-value entry in our text file:

 



 

This does not hurt the performance or execution time of the message mapping, because the lookup files are maintained in the ESR as an ESR object and read directly from the mapping's classpath.

Value Mapping has a constraint on the length of the target field (it cannot exceed 300 characters), but here you can go beyond that, since the values are maintained in a text file.

 

Hopefully this solves most of the problems around maintaining large amounts of Value Mapping data. You can upload millions of entries in ZIP format without much effort.


 

 

Note: I would still suggest using VMR, or the NWDS upload via CSV file into a Value Mapping, if your data changes frequently and you have no length issues in the target field; otherwise you will have to re-import the archive into the production system every time you change even a single field.

 

 
