
Hi Hybris Marketing Community,

In this blog I just want to show how you can easily enhance the Rapid Data Load content that SAP provides in order to use it for data provisioning to two separate clients in Hybris Marketing.

As I was dealing with two separate clients in our system, I had to make some slight changes, because the pre-delivered content from SAP does NOT cover this aspect (having multiple clients) by default.

I will demonstrate the necessary changes on the Job_Google_Data_Load_ODATA job in SAP Data Services. Of course you can use other middleware / enterprise service bus solutions, but for testing purposes (and because I was kind of lazy) I relied on the SAP rapid-deployment content to get fast results.

The basic problem is that, by default, the server URL that Data Services will call and that Hybris Marketing is listening on is something like

https://<host>:<port>/sap/bc/ui5_ui5/sap/cuan_shell/index.html

For multiple clients you need to change the URL to something like this:

https://<host>:<port>/sap/bc/ui5_ui5/sap/cuan_shell/index.html?sap-client=xxx


The idea behind this enhancement is to provide parameters in Data Services that add the client-specific suffix to the URL. This leads to the question: how do we get the ?sap-client=xxx at the end of the URL?


The first try, or assumption, is to simply change the URL in the global parameters of the job, right?

This won't work. Why not? For every web service upload to Hybris Marketing you actually have to make two calls. The first one is a GET call to receive a valid token from the SAP Gateway. Right after that, you can use this token in the header of a POST call to upload the desired data.

The first call will be sent to the URL you can see in the image above, so this one will also work with the client enhancement (e.g. http://<host>:<port>/sap/opu/odata/sap/CUAN_IMPORT_SRV/?sap-client=xxx). So far, no problem.

Now, here is what happens: the POST call actually needs to be sent to http://<host>:<port>/sap/opu/odata/sap/CUAN_IMPORT_SRV/ImportHeaders. So if we just added the client enhancement in the global variable section, the second POST call would be sent to http://<host>:<port>/sap/opu/odata/sap/CUAN_IMPORT_SRV/?sap-client=xxx/ImportHeaders. This call is invalid, as the entity is not known.
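
To make the problem concrete, here is a minimal Python sketch of the two-call pattern (this is not the delivered Rapid Data Load script; it assumes the standard SAP Gateway x-csrf-token handshake and a hypothetical client 100):

import urllib2

cuan_url = "http://<host>:<port>/sap/opu/odata/sap/CUAN_IMPORT_SRV/"
client_suffix = "?sap-client=100"  # hypothetical client

# 1) GET call: fetch a token from the SAP Gateway.
# Appending the client suffix to the service root works fine here.
token_request = urllib2.Request(cuan_url + client_suffix)
token_request.add_header("x-csrf-token", "fetch")
# token_response = urllib2.urlopen(token_request)
# csrf_token = token_response.info().getheader("x-csrf-token")

# 2) POST call: the entity name has to come directly after the service root,
# so the client suffix must go at the very end of the URL.
valid_post_url = cuan_url + "ImportHeaders" + client_suffix
# Naively putting the suffix into the base URL breaks the concatenation:
broken_post_url = cuan_url + client_suffix + "/ImportHeaders"  # entity not known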

So we need a way to get the client-enhancement at the end of our call.

Prerequisites - setting the scene

First of all you need access to a Data Services server and to the respective Data Services Designer. In this blog I assume you're already using Data Services for one client, so we just want to enhance the setup for a second one.

Otherwise you first have to deal with the configuration scripts on the Rapid Data Load site: SAP Service Marketplace - rapid data load for SAP hybris Marketing

We will start at the point where you have the packages and the server, and you're already logged in to the Data Services Designer. You should see something like the picture below:

At the bottom left, in your local repository, choose the Rapid Data Load package and the job you want to enhance. The procedure is similar for all jobs you find in the SAP package; we will use the Google one.

Step 1 - Enhance global variables

We are going to provide the client data via parameters of the job, so as the first step we will add two global variables. For this purpose, open the job as in the image above, go to "Extras" and then to "Variables".

The next screen is the one with the variables. Just add two at the bottom; name them as you like, but I would recommend $G_ODATA_CLIENT_PARAM and $G_ODATA_CLIENT. In the end, we will use these variables to enhance the URL that the web service / Data Services will call.

Both variables should have values according to the image below:

Don't miss the inverted commas! $G_ODATA_CLIENT_PARAM is actually a constant, only needed for enhancing the URL. $G_ODATA_CLIENT is of course the client. Splitting these two variables just makes later use easier.
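
For illustration, assuming a hypothetical client 100, the two values would look roughly like this (the first is the quoted string constant, the second just the client number):

$G_ODATA_CLIENT_PARAM = '?sap-client='
$G_ODATA_CLIENT = 100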

Step 2 - Restrict search term selection per client

Data Services will fetch data via API from the respective medium (Facebook, Google, Twitter, etc.) and then load this data into HANA tables and the respective Hybris Marketing tables. By default, the customized search terms, communication media and so on are persisted in the main (CEI) schema in HANA, which might be called SAPABAP1, in the tables SMI_SRCHQRY and SMI_CHNLDEF.

If you open the condition CON_MAP_GOOGLE_DATA and then drill into the data flow DF_Map_Google_Data, you can see both tables as the source of truth for the whole process.

As you can see, there is no distinction per client at this point; you can see each channel with its respective search terms. Both tables are joined in the next step by the query transformation Qry_Search_Tasks. And there is the first problem: this query transformation will just use any search term, no matter which client it belongs to.

And the problem is just the following statement: double-click on the object Qry_Search_Tasks and you should see the image below.

Open the tab "Where" and you will find the first place where we need to make a simple adjustment. Enhance the where clause with the shown statement to restrict the lookup of search terms to the client we provided in our global parameters.

At this step it should also be clear why I split the variables: I just needed the pure client integer value, without any string.
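
The added condition is essentially a restriction on the client column of the search-term table. Assuming that column is the usual MANDT field (check the actual column name in your system), the statement appended to the existing where clause would look something like this:

SMI_SRCHQRY.MANDT = $G_ODATA_CLIENT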

Now we have made sure that Data Services will only use the search terms of the client we will also upload data to.

Step 3 - Prepare the upload URL

So we prepared the global parameters so that we can provide the client information, and we restricted the lookup of search terms to a certain client. Last but not least, we still need to adjust the URL that is called, so that our data ends up in the right client.

Proceed to the last condition, CON_Enrich_Google_Data. Double-click it and you will see two data flows: DF_Enrich_Google_Data_ODATA and DF_Enrich_Uncleansed_Google_Data_ODATA. The steps that follow have to be done for both data flows individually. I will just show them for the first one, as the second is analogous.


Open data flow DF_Enrich_Google_Data_ODATA


Step 3a) Expose the variables in the structures and create inbound variables for the upload script


Open Query Transformation Qry_Google_Data - you should see the next image


Right-click on any variable in the output schema (it could be any of them; I used ODATA_PASSWORD) and choose "New output column".


You can choose "Add below" or whatever you like.

Add a column for ODATA_CLIENT_PARAM and repeat the step for ODATA_CLIENT. Your result should be similar to this one ...

I used varchar(50) for the ODATA_CLIENT_PARAM column and int for the ODATA_CLIENT column.

Now we still need to assign values to the two columns; we will assign our global variables. Just select one column and then add the global variable via the assignment section below the schema editor. It should look like this:
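
In plain text, the assignment is simply the global variable as the column's mapping:

ODATA_CLIENT_PARAM    mapping: $G_ODATA_CLIENT_PARAM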


Please pay attention to the arrow in front of the column name: after you have entered or chosen the global variable, you still need to click the "Apply" button to actually assign it.

Repeat the step for the ODATA_CLIENT column as well; of course, use the global variable $G_ODATA_CLIENT for this purpose.

Now close this editor and proceed to the next query transformation, Qry_ConvertGender. Double-click it to open it as well.

Then just enhance the output schema on the right side as you did before. It should look like the image below


Then simply drag and drop the columns from the left onto the same-named ones on the right. Result ...


The last step in this section is to pass the variables on to the upload script. So the next stop is "LoadDataToSystem".


First, double-click on the user-defined function. You will see the import parameters for the scripting block.


We need to add two import variables, so that we can use our parameters in the script itself.

So add both variables as I did - you can see the result on the following image.


You have to double-click in the first empty row below the last variable, enter the import variable name and then assign the column from the schema (you can choose from the pull-down menu; you will only be able to choose your column if you didn't miss a step before 🙂).
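
In plain text, the two new rows are just the import variable name on the left and the matching column from the input schema on the right (the names are up to you, as long as they stay consistent):

ODATA_CLIENT_PARAM    =  ODATA_CLIENT_PARAM (from the input schema)
ODATA_CLIENT          =  ODATA_CLIENT (from the input schema)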


So we are prepared - just need to adjust the script a bit.


Step 3b) Adjust the upload script to enhance the OData URL

Close the variable editor for LoadDataToSystem, right-click on the object and then choose the user-defined editor.

Now you should see the script for uploading data to the system. Select "Python-Expression-Editor"

The last steps to success: we need to adjust three sections. The first one should start at about line 125, where we need to make our import parameters available in the script. Add those two lines:
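
The exact lines depend on how the delivered script reads its other input columns, but assuming it uses the record object of the User-Defined transform (GetField), the two added lines would look roughly like this (the Python variable names are just my choice):

odata_client_param = record.GetField(u'ODATA_CLIENT_PARAM');
odata_client = record.GetField(u'ODATA_CLIENT');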

Now we can use our parameters to finally enhance the URL that calls the backend OData services.

First we need to enhance the GET call. You should find it at about line 188: "response = opener.open(cuan_url);"

Adjust it like this:
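
The adjustment boils down to appending the client suffix before the call is made, roughly like this (odata_client_param and odata_client are the values read from the import parameters above; str() makes sure the client number is concatenated as text):

response = opener.open(cuan_url + odata_client_param + str(odata_client));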

Almost done. OK, as you might have thought right at the start, this part could easily have been done by just adjusting the global parameter, but the last thing to do is to enhance the following POST call in the right way.

At about line 195 you should find "import_headers_url = cuan_url + ("/" if cuan_url[-1] != "/" else "") + "ImportHeaders";"

Because it is necessary to have the client parameter at the end of the URL, and this is the line where the solution builds the final URL, we had to make all the adjustments above.

Change the line the way I did:
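
Roughly, the client suffix just gets appended after "ImportHeaders", so that it really ends up at the very end of the URL:

import_headers_url = cuan_url + ("/" if cuan_url[-1] != "/" else "") + "ImportHeaders" + odata_client_param + str(odata_client);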

I also added a line to see the actual URL that has been called in the trace.
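
A plain print of the final URL is enough for that, assuming your setup writes the script output to the trace:

print("import_headers_url: " + import_headers_url);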

Don't forget to repeat the steps for the second data flow!

Save the whole thing and give it a try. Change the client via the global parameters of the job.

Step 4 - Actually you need separate data sources

We didn't talk about separate data sources yet: in our example we are using the same data source for both clients in Data Services. I would recommend using separate data sources for each client and simply copying the job (after the adjustments). The reason is that the data will be stored in the database schema that the data source refers to, and Data Services uses this persisted data during data cleansing, so using one data store for two clients might cause inconsistencies and some misbehaviour.

So community, hope this helps anyone who's also dealing with multiple clients.

Cheers.

Timo
