
SAP CRM: Marketing


Sometimes we need to export a target group to a file. This blog introduces several ways to achieve that.

 

1. If the target group is small (generally fewer than 500 members), you can export it via the 'Export to Spreadsheet' button in the header of the target group page.


The initial value of the parameter 'No. of Members Displayed' is 100. If the target group contains more than 100 members, please increase the value based on the number of business partners; otherwise the exported file will only contain the first 100 records. Be careful: setting the value to a very large number may affect performance in the target group scenario. In general, I suggest not setting the value above 500.


2. If the target group contains a large number of business partners, we recommend exporting them to the application server. On the application server side, there is no restriction on the number of business partners.

To do this, click the 'Export to File' button on the target group page. This schedules a background job and saves the file to the application server. The file is saved in the logical file path MARKETING_FILES, which you define in Customizing. You can download it from there using report CRM_MKTTG_FEXP_SHOW_FILE.

   

Regarding how to view the downloaded file on the application server and how to maintain the file path, please check SAP Note 2085288 for details:


    2085288 - Download large Target group entries using Export to Server option in Segmentation   


You can enhance the related coding in BAdI CRM_MKTTG_SEG_MEM_EX if necessary. The detailed documentation can be found in the Customizing path: CRM -> Marketing -> Business Add-Ins (BAdIs) -> BAdI: Define Display and Export for Target Group Members List.


Some known bug-fix notes for this area (up to CRM 7.0 EHP3):


1822079 - BADI CRM_MKTTG_SEG_MEM_EX is called too many times

1710754 - BAdI for navigation to BP (different view) from Target group

1937689 - Dump when opening a target group without relationship

1880119 - MKTSEG: Dump when using BAdI CRM_MKTTG_SEG_MEM_EX


SAP Help document: Follow-Up Processing of Target Groups Without Campaign Reference


3. Sometimes end users do not have the authorization to access the folder on the application server. As a supplement, you can consider using the 'File Export' function in a campaign to extract data from the target group. The file export function enables you to create a file with a list of business partners and their corresponding specified attribute values. For this function, you need to create a 'dummy' campaign and set the communication medium to 'File Export'. After executing the campaign, the file is attached in the Attachments block.


  • The specified attribute values that you want to export can be defined in the 'File Export Form'.
  • Two file formats are supported in the standard: CSV and XML. You can define the related parameters in the following Customizing path:

 

CRM -> Marketing -> Campaign Execution -> Define Communication Medium / Define File Export Variants

 

I am not aware of any restriction on the number of business partners here, but I recommend not including more than 30,000 per target group during the campaign execution process.

  

    Some known How-to documents:


   1940722 - CSV File in Campaign does not open with Microsoft Excel

   1741781 - Campaign Execution: Technical IDs in File Export Header


    The SAP help document: File Export

I hope this document is helpful to you. Please let me know if you have any suggestions or questions.


    Thanks and best regards,


    Kevin

Recently my client asked me to set up a demo in SAP CRM Loyalty Management for the following scenario: sales reps create sales orders in the CRM Web Client (not the Interaction Center) for their customers. Based on the order value, customers enrolled in the company bonus program can earn points. If they have enough, they can use the points to buy products.

Here I share my experience setting up the demo. The scenario covers the sales part only.

 

SAP CRM Loyalty Management integrates with sales order management. The minimum requirement is EHP 1 with the business function CRM_LOY_PROD activated.

Three integration scenarios are available: earning points, redeeming points, and buying points. Buying points is more relevant for partnership management and isn't part of this blog.

 

 

Product Master

 

To start with, point information must be maintained in the product master. Enter here the point type and the point value per scenario (earn, redeem, buy) that one unit of this product is worth. Please note that 0 points are maintained for my demo products because I will calculate the points based on the order value. Only the points used in the redemption scenario are fixed.
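To picture what is maintained here, think of it as a per-scenario value per product (an illustrative plain-Python sketch, not an SAP API; the product ID and values are made up):

# Illustrative only: per-unit point information maintained on a demo product.
point_info = {
    'DEMO_PRODUCT_1': {
        'point_type': 'BONUS',
        'earn': 0,      # 0 -> accrual points come from the order-value rule instead
        'redeem': 500,  # fixed redemption "price" in points
        'buy': 0,       # buying points is not part of this demo
    },
}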

 


 

The assignment block "Point Information" is available after you have added the product set type REWARD_PRO_LOY to the product category (transaction code COMM_HIERARCHY).

 


 

 

Customizing Steps

 

SAP delivers two transaction types (LTAA and LTAR) you can use for point accrual and redemption in sales scenarios. Because I wanted to make modifications to both types, I made copies (ZLTA and ZLTR).

They are ordinary sales transaction types, except that the transaction classification A (accrual) or R (redemption), respectively, is set. Additionally, the channel GUI was added so that the transactions can be created in the Web Client.

 


 

Next, the membership ID field must be made visible in the UI of both transaction types. Use the UI configuration tool in the Web UI to add the field to your transactions' layouts. The membership ID is a standard field in the pricing set of the header. It comes with a search value help and defaults the member ID from the sold-to party.

By the way, in Interaction Center the membership ID is automatically filled when you identify an account by membership information (for example, in business role IC_LOY_AGENT).

 


 

The last configuration step is to tell the system which member activities it should create as follow-up of a sales order. Maintain the activity category and type for each of the scenarios. You can define the (system) status the sales order must have in order to trigger member activity creation. If required, you can define a processing delay (in days) that the system should wait before processing the created member activities. Leaving it empty processes the activities immediately.
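The behavior this customizing drives can be sketched roughly like this (simplified Python pseudocode reflecting my understanding, not SAP's actual implementation; the status value and field names are illustrative):

import datetime

# Sketch: create member activities as follow-up of a sales order.
TRIGGER_STATUS = 'COMPLETED'   # the (system) status maintained in Customizing
PROCESSING_DELAY_DAYS = 0      # empty in Customizing -> process immediately

def create_member_activities(order):
    if order['status'] != TRIGGER_STATUS:
        return []   # nothing happens while the order is still open
    process_on = datetime.date.today() + datetime.timedelta(days=PROCESSING_DELAY_DAYS)
    # one member activity per order item, as seen later in the demo
    return [{'category': 'PRODUCT', 'item': item, 'process_on': process_on}
            for item in order['items']]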

 


 

 

Setup of Loyalty Program and Rules

 

I have a loyalty program based on the Super Buy scenario. The program has one rule containing two rows to process activities of types PRODUCT and PRODUCT_REDEEM.

The rule for point accrual contains a calculation formula. Remember, I want the points to be defined by the order value, so I take the amount into account and multiply it by 30 (a fixed factor: 1 EUR = 30 points).
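In plain Python, the demo formula boils down to this (the factor of 30 is my own demo choice):

# Demo accrual rule: 1 EUR of order value earns 30 points (fixed factor).
def accrual_points(order_value_eur, factor=30):
    return int(order_value_eur * factor)

print accrual_points(12.50)   # -> 375 points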

 


 

The second rule row, for the redemption case, just uses the loyalty points and point type from the member activity, since these are filled by the system based on the product master information maintained above.

 


 

Create Sales Order and Earn Points

 

Enough configuration. Now let's see it in action. I create a new sales order for loyalty accrual (ZLTA) and enter my sold-to party (i.e., the member). Using the value help, I can search for and select the membership of that customer. Unfortunately, there is no automatic determination as there is for contact persons or organizational data (at least I haven't seen one).

 


 

Enter the products and quantities for that order. You can see that the columns Points and Point Type (both hidden by default) are determined from the product master. Since we maintained 0 points for the earning scenario, no points are determined.

 


 

Save the order. If you now check the customer's point account, you'll see that nothing has happened there yet. This is because the order still has the status Open, which complies with the customizing we did for the member activities previously.

Edit the order and set the status to completed.

 


 

Check the membership: 4 member activities (one per sales order item) have been created and processed.

 


 

In the point account 4 new point transactions (all from 22.07.2016 12:31) have been created.

 


 

 

Create Sales Order and Redeem Points

 

Now let's do the same for point redemption orders. In this case, the spent points are determined from the material master.

 


 

By the way, if the point balance isn't sufficient to cover the current order, the system raises an error message.

 


 

2 member activities were created after the order was set to Completed and saved. Besides the order information, the points to be redeemed were also saved in the activity.

 


 

Since the member activities were processed immediately, point transactions were created in the point account accordingly.

 


 

That's it. Feedback or opinions are welcome.

 

 

In my past implementations of ATPM & CRM analytics, working closely with the business, it was clear that a live P&L produced the biggest business value. I often toyed with the idea that a BW-IP based planning solution to calculate the LE (Latest Estimate) would be ideal. Most customers I know use a series of InfoCubes, a MultiProvider, and overnight staging to arrive at the LE.

 

Well, CBP does just that, among other things. In general, it allows you to look at the impact of promotions on the P&L from the retailer's and the manufacturer's standpoint instantly.

 

I have been working on creating an internal demo on our CBP, CRM + BW on HANA system. I was part of the testing group at Walldorf for CBP 2.0 and thought I would start a blog. This is the first in a series; I plan to publish more of these if I see interest or comments posted against this one.

 

What is CBP?

It is a real-time collaborative solution that allows account managers to increase sales and profitability. All user interaction is through the CRM UI.

 

What does it do?

Allows you to maintain (among other things):

• Targets & master data
• Manufacturer and retailer targets
• Buyers
• Planning hierarchy (freely defined), saved online to BW
• Assortment (similar to CRM listings)

 

Enables Planning

• Volume planning – baseline, sell-out, etc.
• Price planning – list price, shelf price, etc.
• Non-promotional and promotional P&L
• Roll-up of promotions into the LE

 

Track KPIs (new UI6 tiles)

• Internal targets
• External targets
• ROI, GSV %, etc.

 

 

What do you need to install?

Software

BW 7.4 (has to be BW on HANA)

CRM EHP 3

 

Add-ons

CBP-addon on CRM

ATPM-addon on CRM

CBP-addon on BW

 

What CRM objects are prerequisites for CBP?

• Account hierarchy
• ECC product hierarchy (BAdI available to use the CRM product hierarchy instead)
• Product categories
• Position
• Territory management (BAdI available to implement without territory management)

 

I will keep this one very short.

 

Author: Arvind Bhaskar has been working with CPG companies for a long time. His main focus areas are BW on HANA, ATPM, CBP, and associated BW-IP based planning applications.

 


Each day, more and more businesses seek to improve their functioning through implementation of CRM software. For those who don’t know what is being talked about here, CRM or Customer Relationship Management is the latest business model, which relies heavily on technology to improve, organize and automate client and customer interaction. The main reason for adoption of this model is the increased demand for customer satisfaction, which has become a major business trend in the 21st century.

 

How CRM integration affects your business depends entirely on the type of functions you have chosen. Hence, it is important to take several things into consideration before you make the final choice. Listed below are some of the most important things that you must know about before choosing CRM software for your business.

 

Analyze Your Business

 

The best way to know what area of your business requires CRM implementation is by analyzing your business needs. The first mistake most business owners make is choosing CRM software without understanding their business requirements. Even the most sophisticated internet tool won’t be able to help your business if your employees aren’t able to utilize it for their work-related tasks. For example, installing banking CRM software for a retail business selling footwear will certainly make no difference and will only prove to be a waste of funds and technology. Hence, knowing what your employees can use is a major factor that will affect your choice.

 

How does it benefit the customer?

 

The first and foremost reason why CRM was implemented in the first place was to provide a better quality of service to the consumer. It should already be clear that the Customer Relationship Management model always keeps the customer at the center, and thus all your business decisions should be made in the best interest of the customers. To choose the perfect CRM software for your organization, you must first analyze the problems that the consumers are facing and how CRM implementation will solve those particular problems. If your CRM implementation and the use of advanced technology don’t benefit the customer in any way, there is no reason why you should continue with it.

 

Provider’s reputation matters

 

While CRM implementation will save you money in the long run, we must also acknowledge that it doesn’t come cheap. Even before you plan a budget for CRM software, make sure that the provider you are buying it from is reputed and renowned for their services. You do not want to spend your entire CRM budget on substandard software that is totally ineffective at organizing the day-to-day activities in your company. Hence, it is highly important that you properly research the best CRM software providers for your organization.

 

Plan a budget

 

As mentioned above, CRM software isn’t cheap, and a separate budget needs to be created in order to carefully implement the technology in the areas that need the most attention. Before you think about the type of CRM software you need, you must know what area of your business needs to be improved with CRM integration. Planning your budget will not just highlight the areas of concern but will also ensure that you do not implement valuable technology in unnecessary places.

 

Scalability of the software

 

With careful implementation of the CRM model, it is certain that your business will grow. But it is also important to choose CRM software that adapts according to the changing requirements of your business. Regardless of their areas of influence, your managerial decisions such as choosing CRM software must always be made after considering their scalability and usability. There is no point investing in complex software that your existing employees cannot understand. At the same time, there is no use of buying software that won’t be able to cater to your changing business needs as you grow from a small business into a brand.

 

You certainly wouldn’t choose random employees for your organization, so why choose a CRM solution without knowing its uses and implications? Therefore, make sure that you follow the above instructions to choose the best CRM software for your organization.

Ankita Sastry

Web Crawler and Scraper

Posted by Ankita Sastry Feb 10, 2016

Use


You can scrape websites and blogs and store their content in the Social Intelligence tables using a Python script. You can then use this stored information to analyze sentiments and draw further conclusions.

 

System Details


The following details need to be added in the script:

  1. Server
  2. Port
  3. Username
  4. Password
  5. Schema
  6. Client

 

Prerequisites


  • You have installed Python 2.7 (urllib2, httplib, and urlparse are part of the Python 2 standard library, so they do not need to be installed separately).
  • You have installed the following modules using Pip:
  • google
  • pyhdb
  • validators
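Before running the script, you can quickly verify that all required modules are importable (a minimal check, matching the imports the script itself uses):

# Quick import check for the prerequisites (Python 2.7).
import urllib2, httplib, urlparse   # standard library in Python 2
import pyhdb                        # pip install pyhdb
import validators                   # pip install validators
from google import search           # pip install google
print 'All required modules are available'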

 

How the Python Script Works


When the script is run, you are asked to enter a search term. Based on the entered search term, the system returns the top three results from Google Search using the Google module. The system stores the result links in the Top_Results.txt file. These top three sites are crawled, and the data from them is scraped and stored in the SOCIALDATA table. Furthermore, the links found on these sites are also scraped and their content stored in the SOCIALDATA table.

 

Steps


     1. Copy the below script into your desired location

import urllib2
import httplib
import re
import sys
import pyhdb
import random
import datetime
import string
from google import search
from urlparse import urlparse
import validators
import os
################### To be Filled before executing the script #########################
# HANA System details
server = ''
port = 0          # enter the SQL port of your HANA system here
username_hana = ''
password_hana = ''
schema = ''
client = ''
######################################################################################
# Function to fetch the top results from google search for the passed search term. 
def top_links(searchterm):
     top_res = open('Top_Results.txt','w')
# The number of results fetched can be changed by changing the parameters in the below call.
     for url in search(searchterm, num = 3, start = 1 ,stop = 3):
          print url
          top_res.write(url)
          top_res.write('\n')
     top_res.close()
# Function to scrape the content of a specific website. This is achieved using regex functions.
def scrape(resp,searchterm):
# Check if the link is a valid one or not.
     pattern = re.compile(r'^(?:http|ftp)s?://')
     mat = re.match(pattern, resp)
     if(mat == None):
          print 'Nothing there'
     else:
          try:
               response = urllib2.urlopen(resp)
# Write the response body into a file called crawled.txt
               html = response.read()
               file1 = open('crawled.txt','w')
               file1.write(html)
               file1.close()
               f1 = open('crawled.txt','r').read()
               f2 = open('regex.txt','w')
# Since the main content of any website is stored in the body of the html, we extract and store only that part of it.
               res1 = re.search('(<body.*</body>)',f1, flags = re.DOTALL)
               if res1:
                    print 'Found'
# Further the unnecessary tags are removed, like the script and style tags.
                    scripts = re.sub(r'(<script type="text/javascript".*?</script>)|(<script type=\'text/javascript\'.*?</script>)|(<script>.*?</script>)','',res1.group(0),flags = re.DOTALL)
                    nostyle = re.sub(r'<style.*?</style>','',scripts,flags = re.DOTALL)
# Strip CSS comments and lines consisting only of JavaScript line comments.
                    nocomments = re.sub(r'/\*.*?\*/','',nostyle,flags = re.DOTALL)
                    n1 = re.sub(r'^\s*//.*$','',nocomments,flags = re.MULTILINE)
                    f2.write(n1)
                    f2.close()
                    f3 = open('regex.txt','r').read()
# Parse through the file removing html tags and other unnecessary characters and store in a file called Scraped.txt
                    f4 = open('Scraped.txt','w')
                    res3 = re.sub(r'<.*?>|</.*?>','',f3)
                    spaces = re.sub(r'\s\s+','\n',res3,flags = re.DOTALL)
                    f4.write(spaces)
                    f4.close()
# The final scraped content is stored in a file called 'Scraped_Final.txt'
                    lines = [line.rstrip('\n') for line in open('Scraped.txt')]
                    f5 = open('Scraped_Final.txt','w')
                    for i in lines:
                         if(len(i) > 10):
                              f5.write(i)
                    f5.close()
                    file_scraped = open('Scraped_Final.txt','r').read()
                    print 'Scraped'
# This content is then inserted into the Database
                    insert_into_db(file_scraped,searchterm)
               else:
                    print 'No match'
# Error Handling
          except urllib2.HTTPError as e:
               print e.code,' Skipping..'
          except urllib2.URLError as e:
               print e.reason
          except httplib.HTTPException as e:
               print 'HTTPException, skipping..'
# Function to extract the internal links in each website. 
def get_links(base_url, scheme):
     print 'Base url',base_url
     f1 = open('crawled.txt','r').read()
# All the link tags and anchor tags are found and the links are extracted from them
     links = re.findall('(<a.*?>)',f1,flags = re.DOTALL)
     links2 = re.findall('(<link.*?>)',f1,flags = re.DOTALL)
     li = open('li1.txt','w')
     tmp_list1 = []
     for j in links:
          if not j in tmp_list1:
               tmp_list1.append(j)
               li.write(j)
               li.write('\n')
     for k in links2:
          if not k in tmp_list1:
               tmp_list1.append(k)
               li.write(k)
               li.write('\n')
     li.close()
     f5 = open('li1.txt','r').read()
     links1 = re.findall('(href=\'.*?\')',f5,flags=re.DOTALL)
     links5 = re.findall('(href=".*?")',f5,flags=re.DOTALL)
     li2 = open('li2.txt','w')
     list1 = []
     list2 = []
     for i in links1:
          if not i in list1:
               list1.append(i)
               reg1 = re.search('\'.*\'',i)
               if reg1:
                    reg2 = re.sub(r'\'','',reg1.group(0))
                    list2.append(reg2)
                    li2.write(reg2)
                    li2.write('\n')
     for m in links5:
          if not m in list1:
               list1.append(m)
               reg1 = re.search('".*"',m)
               if reg1:
                    reg2 = re.sub(r'"','',reg1.group(0))
                    list2.append(reg2)
                    li2.write(reg2)
                    li2.write('\n')
     li2.close()
     print 'Opening Links'
     li4 = open('Links.txt','w')
     list3 = []
# Handle relative URLs as well by adding the base url of the website.
     with open('li2.txt','r') as f12:
          for line in f12:
               if not line in list3:
# Skip in-page anchors; expand protocol-relative and root-relative links with the scheme or base url.
                    if re.match(r'^#', line) == None:
                         rel_urls = re.sub(r'^//', scheme + '://', line, flags = re.DOTALL)
                         rel_urls = re.sub(r'^/(?!/)', base_url + '/', rel_urls, flags = re.DOTALL)
                         list3.append(rel_urls)
                         li4.write(rel_urls)
     li4.close()
     final_list = []
     li5 = open('Links_Final.txt','w')
# Check if the formed URL is valid using the python module 'Validators'.
     with open('Links.txt','r') as f:
          for line in f:
               line = line.strip()
               if not line in final_list:
                    if(validators.url(line) is True):
                         final_list.append(line)
                         li5.write(line + '\n')
                    else:
                         print 'Removing invalid urls..'
     print 'Links extracted'
# Return the list of links.
     return final_list
# Function to get the current date time and format it.
def getCreatedat():
     current_time = str(datetime.datetime.now())
     d = current_time.split()
     yymmdd = d[0].split("-")
     hhmmss = d[1].split(".")[0].split(":")
     createdat = yymmdd[0] + yymmdd[1] + yymmdd[2] + hhmmss[0] + hhmmss[1] + hhmmss[2]
     return createdat
# Function to get the UTC date time and format it.
def get_creationdatetime_utc():
     current_time = str(datetime.datetime.utcnow())
     d = current_time.split()
     yymmdd = d[0].split("-")
     hhmmss = d[1].split(".")[0].split(":")
     creationdatetime_utc = yymmdd[0] + yymmdd[1] + yymmdd[2] + hhmmss[0] + hhmmss[1] + hhmmss[2]
     return creationdatetime_utc
# Function to insert the scraped content into the FND Tables.
# Ensure that you have WRITE privileges in the HANA system.
def insert_into_db(sclpsttxt,searchterm):
     socialmediachannel = 'CR'
     dummy_createdat = '20151204'
     creationdatetime = str(datetime.datetime.now() )
     creationdatetime_utc = get_creationdatetime_utc()
# The connection to the system is made with the appropriate credentials
     connection = pyhdb.connect(host=server, port=port, user=username_hana, password=password_hana)
     cursor = connection.cursor()
     socialdatauuid = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(32))
     socialpost = ''.join(random.choice(string.digits) for _ in range(16))
     language = 'EN'
     createdbyuser = username_hana
     createdat = getCreatedat()
     sclpsttxt = sclpsttxt.decode('ascii','replace')
     sclpsttxt = sclpsttxt.replace("'","\"")
     socialposttext = sclpsttxt
     creationusername = username_hana
     socialpostactionstatus = '3'
# socialposttype = 'Blog'
     values ="'"+client+"','"+socialdatauuid+"','"+socialpost+"',\
     '"+language+"','"+socialmediachannel+"','"+createdbyuser+"',\
     '"+creationdatetime+"','"+"','"+"',\
     '"+"','"+"','"+"',\
     '"+"','"+"','"+socialpostactionstatus+"',\
     '"+"','"+creationusername+"','"+"',\
     '"+searchterm+"','"+createdat+"','"+socialposttext+"',\
     '"+creationdatetime_utc+"','"+"','"+"',\
     '"+"','"+"'"
# The SQL query is formed by entering the necessary values.
     sql = 'Insert into ' + schema + '.SOCIALDATA values(' + values + ')'
     try:
# Execute the sql query
          cursor.execute(sql)
          print 'Stored successfully\n\n'
     except Exception, e:
          print e
          pass
# Commit and close the connection
     connection.commit()
     connection.close()
def main():
     print 'Enter the search term'
     searchterm = raw_input()
# The top N results from google search are fetched for the specified searchterm.
     top_links(searchterm)
     with open('Top_Results.txt','r') as f:
          for line in f:
               line = line.rstrip()
               print 'Content',line
# The content of these links are scraped and stored in the DB
               scrape(line,searchterm)
               line_ch = line.rstrip()
               n = urlparse(line_ch)
               base_url = n.scheme + '://' + n.hostname
               scheme = n.scheme
               links = ''
# Further, the links inside each of the Top results are found and scraped similarly
               links = get_links(base_url,scheme)
               if(not links):
                    print 'No internal links found'
               else:
                    for i in links:
                         pattern = re.compile(r'^(?:http|ftp)s?://')
                         mat = re.match(pattern, i)
                         if(mat!= None):
                              print 'Link url',i
# We call the scrape function in order to scrape the internal links as well
                              scrape(i,searchterm)
     print 'Scraping done.'
# Once the scraping and storing is done, the files created internally are deleted. Only the file 'Top_Results.txt' persists, since the user can change it according to need.
     if os.path.isfile('li1.txt'):
          os.remove('li1.txt')
     if os.path.isfile('li2.txt'):
          os.remove('li2.txt')
     if os.path.isfile('Links.txt'):
          os.remove('Links.txt')
     if os.path.isfile('Links_Final.txt'):
          os.remove('Links_Final.txt')
     os.remove('crawled.txt')
     os.remove('regex.txt')
     os.remove('Scraped.txt')
     os.remove('Scraped_Final.txt')
if __name__ == '__main__':
     main()

     2. Edit the script to enter your SAP HANA system details and user credentials in the marked section at the top of the script (they are used by the function insert_into_db())

     3. Open a command prompt at that location

     4. Run the python script from the command prompt.


     5. Once the script has run, it will have inserted the data into the database as required.



Note

  • Based on your requirements, you can modify the number of results you want to receive by changing the parameters in the top_links() function, as sketched below.
  • If you want to scrape a custom list of websites, add those links to the Top_Results.txt file and comment out the call to the function top_links().
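For example, to fetch the top ten results instead of three, the loop in top_links() could be changed as follows (a sketch; the parameter values are just an example):

# In top_links(): fetch the top ten Google results instead of three.
     for url in search(searchterm, num = 10, start = 1, stop = 10):
          print url
          top_res.write(url)
          top_res.write('\n')

# In main(), to scrape a custom list instead, comment out the search call
# and maintain Top_Results.txt yourself:
#     top_links(searchterm)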

Data Mining and changing Marketing strategies:

Once, during an onsite assignment on a project for a big telecom company in the USA, I got the opportunity to interact with one of its executive directors. He was using an executive dashboard with some very nice looking charts. I discussed with him the dashboard and the information our project was meant to generate. He showed me a few sets of numbers and asked me to find a way to surface a pattern that could help his business make informed decisions.

It’s very common now to hear about KPIs (key performance indicators), the importance of metrics, their measurement, and so on. People normally use these terms freely when talking about business intelligence. This is important to businesses because they can’t run without forecasts and these measurements. Someone once summarized the business philosophy:

  • If you can’t measure something, you really don’t know much about it.
  • If you don’t know much about it, you can’t control it.
  • If you can’t control it, you are at the mercy of chance.

 

This sums up the importance of data mining and how measured data translates into information, which finally yields knowledge. This knowledge is what is used to run any business.

Over the last decade, technology has played a major role in defining marketing strategies. For example, in one SAP CRM service marketing assignment at a big auto company, we derived the target group (the audience) for an e-campaign using customers' historical transactions. The forecast that these customers would need the specific service was based purely on the data collected from their past transactions. This is the classic case of linear marketing, following best practices.

With changing times, the customer is better informed, with multiple channels available for information and collaboration on social sites. Marketing strategies are required to embrace these changes and must now work in a proactive mode, rather than reacting based on historical transactions.

Now we have to work with customer intent while customers browse our retail websites, visit our stores, or query our call centers. The requirement is to drive instant insight across lines of business, connect with business and social networks, and plug into the Internet of Things in real time.

There are many digital real-time solutions available in the market built on newer technologies. Changing marketing strategies accordingly is not just desirable but mandatory. The derived information needs to be applied in marketing in real time (at the moment), because this insight into customer intent can change buying decisions.

Our marketing strategies need to be enabled with these newer technologies, which can derive information in real time. SAP has positioned the SAP C4C solution and the HANA platform as the new generation of products that the market is embracing. SAP Hybris provides e-marketing enablement; it also offers advanced audience targeting and detection, which can realize these real-time marketing scenarios.

I am an Old School Marketer by trade but... this is completely new to me. As I look forward to the journey, I remain a little afraid, as it is out of my comfort zone. It's a good thing I enjoy learning new things. As I have read along, I understand a great deal of the general concepts but am at the same time in totally unfamiliar territory. I've been doing Marketing, Web Design, Writing, and so on since 1999! You'd think I should already know this. Trust me, we are constantly learning... and if we are adventurous enough, we occasionally walk upon uncharted territory and actually learn something that will enhance the overall long-term journey beyond what I like to call "The HTML Alphabet"!

 

I will take it "One Step At A Time". Excited to learn yet fearful at the same time is a true mix of emotions. It would even be safe to say that the next few days will be taken slowly, with extreme caution and carefulness! This is a short note, as I am eager to begin my journey with a little reading and by seeking out informative videos that will break down the unfamiliarity of this intriguing but intimidating subject matter. It'll be fun! As nervous as I may feel, it will also be exciting to move into something new. In my next post I promise to tell you all about what my discovery is and is not! Till next time.

 

Laurie Bullard (Reeal)

Newbie Student

 

P.S. I have much to learn but can Guarantee I will and improve my content and knowledge of where to and where not to Blog pertaining to a subject. Honestly as of right now I do not know the answers yet. As I do learn and receive my "Certification" I hope I get the opportunity to share what I have learned to better enhance the capabilities of others as well.

Disclaimer


This tutorial is intended as a guide for the creation of demo/test data only. The sample script provided is not intended for use in a productive system.


Purpose


The following tutorial explains a way of generating demo data for the Gigya related database tables in SAP Business Suite Foundation.

Following are the tables:

SMI_USR_ACCOUNT

SMI_USR_CRTFCT

SMI_USR_EDCTN

SMI_USR_FAVORITE

SMI_USR_IDENTITY

SMI_USR_LIKE

SMI_USR_PATENT

SMI_USR_PBLCTN

SMI_USR_PHONE

SMI_USR_PROFILE

SMI_USR_SKILL

SMI_USR_WORKEXP


The pre-installed Python Interpreter from the SAP HANA client is used to execute a Python script from SAP HANA Studio.

To run the script, you will also need to make a few customizing and configuration settings in order to use the Pydev Plugin in SAP HANA Studio.


Prerequisites


Make sure that the following prerequisites are met before you start out:

• Installation of SAP HANA Studio and SAP HANA Client
Install SAP HANA Studio and SAP HANA Client and apply for a HANA user with Read, Write and Update authorization for foundation database tables listed in the previous section.

 

Setup

1. Configuring Python in SAP HANA Studio Client
  

Python version 2.6 is already embedded in SAP HANA client, so you do not need to install Python from scratch. To configure Python API to connect to SAP HANA, proceed as follows.
       

1. Copy and paste the following files from C:\Program Files\SAP\hdbclient\hdbcli to C:\Program Files\SAP\hdbclient\Python\Lib

                a. __init__.py
                b. dbapi.py
                c. resultrow.py


2. Copy and paste the following files from C:\Program Files\SAP\hdbclient to C:\Program Files\SAP\hdbclient\Python\Lib

                a. pyhdbcli.pdb
                b. pyhdbcli.pyd

          
Note:

      

In Windows, the default installation path for a 64-bit installation of SAP HANA Studio and the SAP HANA database client is C:\Program Files\SAP\..

 

If you opted for a 32-bit installation, the default path is C:\Program Files (x86)\SAP\..
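Once the files are in place, a minimal connectivity check like the following can be run with the SAP HANA client's Python interpreter (host, port, user, and password are placeholders for your own system):

# Minimal SAP HANA connectivity check using the client's dbapi module.
import dbapi

connection = dbapi.connect('myhanahost', 30015, 'MYUSER', 'MyPassword')
cursor = connection.cursor()
cursor.execute('SELECT CURRENT_TIMESTAMP FROM DUMMY')
print cursor.fetchone()
connection.close()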


2. Setting up the Editor to run the file

2.1. Install Pydev plugin to use Python IDE for Eclipse

            

The preferred method is to use the Eclipse IDE from SAP HANA Studio. To be able to run the python script, you first need to install the Pydev plugin in SAP HANA Studio.
                  

                    a. Open SAP HANA Studio. Click HELP on menu tab and select Install New Software
                    b. Click the button Add and enter the following information

                       Name : pydev

                       Location : http://pydev.org/updates


                   c. Select the PyDev feature in the list of available software.


                   d. Press Next twice.
                   e. Accept the license agreements, then press Finish.
                   f. Restart SAP HANA Studio.


 

2.2. Configure the Python Interpreter

 


In SAP HANA studio, carry out the following steps:
     a. Select the menu entries Window -> Preferences

     b. Select PyDev -> Interpreters -> Python Interpreter

     c. Click the New button and type in an interpreter name. Enter in the field Interpreter Executable the following executable file: C:\Program Files\SAP\hdbclient\Python\Python.exe. Press OK twice.


2.3. Create a Python project


In SAP HANA Studio, carry out the following steps:

     a. Click File -> New -> Project, then select Pydev project

     b. Type in a project name, then press Finish

     c. Right-click on your project. Click New -> File, then type your file name, press Finish.


Customizing and Running the Script


1. Customizing the python script


Copy and paste the code provided below into the newly created Python file, then enter values for the following parameters in the file.

     a. server – HANA server name (Ex : lddbq7d.wdf.sap.corp)

     b. port – HANA server port

     c. username_hana – HANA server username

     d. password_hana – HANA server password

     e. schema – schema name

     f. client – client number

    g. count - number of users for which the records shall be created

 

import sys, dbapi
from time import strftime
from random import randint, choice
#Returns prefix + ndigits
def randomN(prefix, ndigits):
    range_start = 10**(ndigits-1)
    range_end = (10**ndigits)-1
    return prefix + str(randint(range_start, range_end))
def get_patent_pub_name():
    part1 = choice(['Decomposition', 'Self-focusing', 'Ground-based', 'Process', 'Method', 'System', 'Apparatus'])
    part2 = choice(['of', 'and', 'for', 'in'])
    part3 = choice(['Carbon dioxide', 'Oxygen', 'Nitrogen', 'Hydride', 'Peroxide', 'Ultraviolet radiation', 'Light' ,'molecule'])
    part4 = choice(['conversion', 'generation', 'mixture', 'container', 'dispenser'])
    return ' '.join([part1, part2, part3, part4])
# def random_date(start, end):
#     return start + timedelta(seconds=randint(0, int((end - start).total_seconds())))
server = 'lddbbfi.wdf.sap.corp'
port = 30215
username_hana = ''
password_hana = ''
schema = 'SAPBFI'
client = '001'
#This is the number of users for which records shall be created
count = 5
hdb_target = dbapi.connect(server, port, username_hana, password_hana)
cursor_target = hdb_target.cursor()
profile_sql = 'upsert ' + schema + '.SMI_USR_PROFILE(CLIENT, DATAPROVIDERNAME, USERIDINDATAPROVIDER, FIRSTNAME, LASTNAME, NICKNAME, PHOTOURL, PROFILEURL, AGE, GENDER, BIRTHDAY, BIRTHMONTH, BIRTHYEAR, COUNTRY, STATE, CITY, ADDRESS, BIO, THUMBNAILURL, ZIP, PROXYEMAIL, LANGUAGE, HONORS, PROFESSIONALHEADLINE, INDUSTRY, SPECIALITIES, RELIGION, INTERESTEDIN, RELATIONSHIPSTATUS, HOMETOWN, FOLLOWERSCOUNT, FOLLOWINGCOUNT, USERNAME, NAME, LOCALE, ISVERIFIED, USERTIMEZONE, EDUCATIONLEVEL) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) with primary key'
identity_sql = 'upsert ' + schema + '.SMI_USR_IDENTITY(CLIENT, DATAPROVIDERNAME, USERIDINDATAPROVIDER, COUNTER, SOCIALMEDIACHANNEL, SOCIALUSER, ISLOGINIDENTITY, NICKNAME, ISALLOWEDFORLOGIN, ISEXPIREDSESSION, LASTLOGINTIMESTAMP_UTC, PHOTOURL, THUMBNAILURL, FIRSTNAME, LASTNAME, GENDER, AGE, BIRTHDAY, BIRTHMONTH, BIRTHYEAR, EMAIL, COUNTRYCODE, STATE, CITY, ZIP, PROFILEURL, PROXIEDEMAIL, ADDRESS, LANGUAGES, PROFESSIONALHEADLINE, BIO, INDUSTRY, SPECIALITIES, RELIGION, POLITICALVIEW, INTERESTEDIN, RELATIONSHIPSTATUS, HOMETOWN, FOLLOWERSCOUNT, FOLLOWINGCOUNT, USERNAME, LOCALE, ISVERIFIED, USERTIMEZONE) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) with primary key'
account_sql = 'upsert ' + schema + '.SMI_USR_ACCOUNT(CLIENT, DATAPROVIDERNAME, USERIDINDATAPROVIDER, USERIDSIGNATURE, SIGNATURETIMESTAMP_UTC, SOCIALMEDIACHANNEL, ISUSERREGISTERED, USERREGSTRDTIMESTAMP_UTC, ISUSERACCOUNTVERIFIED, USERACCNTVERIFIEDTIMESTAMP_UTC, ISUSERACCNTACTIVE, ISUSERACCNTLOCKEDOUT, INFLUENCERRANK, LASTLOGINLOCATION_COUNTRYCODE, LASTLOGINLOCATION_STATE, LASTLOGINLOCATION_CITY, LASTLOGINLOCATION_LATITUDE, LASTLOGINLOCATION_LONGITUDE, OLDESTDATAUPDATEDTIMESTAMP_UTC, ACCOUNTCREATEDTIMESTAMP_UTC, REGISTRATIONSOURCE) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) with primary key'
patent_sql = 'upsert ' + schema + '.SMI_USR_PATENT(CLIENT, PATENT_UUID, TITLE, DATAPROVIDERNAME, USERIDINDATAPROVIDER, SUMMARY, PATENTNUMBER, PATENTOFFICE, STATUS, PATENTDATE, PATENTURL) values (?,?,?,?,?,?,?,?,?,?,?) with primary key'
education_sql = 'upsert ' + schema + '.SMI_USR_EDCTN(CLIENT, EDU_UUID, DATAPROVIDERNAME, USERIDINDATAPROVIDER, SCHOOL, SCHOOLTYPE, FIELDOFSTUDY, DEGREE, STARTYEAR, ENDYEAR) values (?,?,?,?,?,?,?,?,?,?) with primary key'
workexp_sql = 'upsert ' + schema + '.SMI_USR_WORKEXP(CLIENT, WORK_UUID, DATAPROVIDERNAME, USERIDINDATAPROVIDER, COMPANY, COMPANYID, WORK_TITLE, COMPANYSIZE, WORK_STARTDATE, WORK_ENDDATE, WORK_INDUSTRY, ISCURRENTCOMPANY) values (?,?,?,?,?,?,?,?,?,?,?,?) with primary key'
favorite_sql = 'upsert ' + schema + '.SMI_USR_FAVORITE(CLIENT, FAV_UUID, DATAPROVIDERNAME, USERIDINDATAPROVIDER, TYPE, NAME, CATEGORY) values (?,?,?,?,?,?,?) with primary key'
skill_sql = 'upsert ' + schema + '.SMI_USR_SKILL(CLIENT, SKILL_UUID, DATAPROVIDERNAME, USERIDINDATAPROVIDER, SKILL, SKILL_LEVEL, SKILL_YEARS) values (?,?,?,?,?,?,?) with primary key'
phone_sql = 'upsert ' + schema + '.SMI_USR_PHONE(CLIENT, PHONE_UUID, DATAPROVIDERNAME, USERIDINDATAPROVIDER, PHONETYPE, PHONENUMBER) values (?,?,?,?,?,?) with primary key'
publication_sql = 'upsert ' + schema + '.SMI_USR_PBLCTN(CLIENT, PBLCTN_UUID, DATAPROVIDERNAME, USERIDINDATAPROVIDER, PBLCTN_TITLE, PBLCTN_SUMMARY, PUBLISHER, PUBLICATIONDATE, PUBLICATIONURL) values (?,?,?,?,?,?,?,?,?) with primary key'
like_sql = 'upsert ' + schema + '.SMI_USR_LIKE(CLIENT, LIKE_UUID, DATAPROVIDERNAME, USERIDINDATAPROVIDER, NAME, CATEGORY, ID, LIKECREATIONTIMSTAMP_UTC) values (?,?,?,?,?,?,?,?) with primary key'
cert_sql = 'upsert ' + schema + '.SMI_USR_CRTFCT(CLIENT, CERT_UUID, DATAPROVIDERNAME, USERIDINDATAPROVIDER, CERT_NAME, AUTHORITY, CERT_NUMBER, CERT_STARTDATE, CERT_ENDDATE) values (?,?,?,?,?,?,?,?,?) with primary key'
channel_list = ['TW', 'FB','BLOG']
men_names = ['Mohan', 'Suresh', 'Salman', 'Nivin', 'Jayasurya', 'Vijay', 'Prabhas', 'Fahad', 'Fazil', 'Asif', 'Prithviraj', 'Muhammed', 'Shankar', 'Rajni', 'Ajith', 'Surya', 'Kamal']
women_names = ['Mamta', 'Kavya', 'Sindhu', 'Shriya', 'Trisha', 'Tabu', 'Simran', 'Meena', 'Asin', 'Kareena', 'Vidya', 'Sonakshi', 'Aiswarya', 'Preity', 'Namita', 'Sherin', 'Shamna', 'Miya' ,'Sruthy']
countrycodes = ['IN', 'DE', 'FR', 'US', 'CH', 'IT', 'RU']
professionalheadlines = ['Data mining Expert', 'Career consultant', 'Programming Guru', 'Final word in English Grammar', 'Wildlife Explorer', 'Geologist', 'Writer, Director', 'Singer, Actor', 'Expert Sculptor', 'Master in Physics', 'Astronomy Rockstar', 'Social Science Guru']
for i in range(count):
    dataprovidername = 'GIGYA'
    useridindataprovider = guid = randomN('_guid_', 29)
    counter = '1'
    socialmediachannel = choice(channel_list)
    socialuser = str(randint(111111111111111111, 999999999999999999))
    isloginidentity = 't'
    gender = choice(['1', '2'])
    if gender == '1':
        firstname = choice(men_names)
        lastname = choice(women_names)
        nickname = firstname[:3].lower() + '_' + choice(['star', 'therock', 'blazing', 'ismyelf', 'rocks', 'theking', 'kingest', 'royal', 'crazy', 'rider', 'fiery']) + str(randint(222, 999))
    else:
        firstname = choice(women_names)
        lastname = choice(men_names)
        nickname = firstname[:3].lower() + '_' + choice(['star', 'barbie', 'blazing', 'ismyelf', 'rocks', 'thequeen', 'queenest', 'royal', 'crazy', 'beauty', 'girl']) + str(randint(222, 999))
    isallowedforlogin = 't'
    isexpiredsession = 'f'
    lastlogintimestamp_utc = 0
    photourl = 'http://www.' + socialmediachannel.lower() + '.com/photo/' + socialuser
    thumbnailurl = 'http://www.' + socialmediachannel.lower() + '.com/thumbnail/' + socialuser
    age = str(randint(18, 90))
    birthday = ''
    birthmonth = ''
    birthyear = ''
    email = nickname + '@' + choice(['gmail', 'yahoo', 'mail', 'hotmail']) + '.com'
    countrycode = choice(countrycodes)
    state = 'test'
    city = 'test'
    zip = str(randint(2222222, 9999999))
    profileurl = 'http://www.' + socialmediachannel.lower() + '.com/' + socialuser
    proxiedemail = 'test'
    address = 'test'
    languages = 'test'
    professionalheadline = choice(professionalheadlines)
    bio = 'test'
    honors = 'test'
    industry = 'test'
    specialities = 'test'
    religion = 'test'
    politicalview = 'test'
    interestedin = choice(['1', '2'])
    relationshipstatus = ''
    hometown = 'test'
    followerscount = str(randint(0, 500))
    followingcount = str(randint(0, 500))
    username = firstname + socialuser
    locale = choice(['en_US', 'en_UK', 'en_IN'])
    isverified = choice(['t', 'f'])
    usertimezone = 'test'
    educationlevel = 'test'
    profile_record = (client, dataprovidername, useridindataprovider, firstname, lastname, nickname, photourl, profileurl, age, gender, birthday, birthmonth, birthyear, countrycode, state, city, address, bio, thumbnailurl, zip, proxiedemail, languages, honors, professionalheadline, industry, specialities, religion, interestedin, relationshipstatus, hometown, followerscount, followingcount, username, username, locale, isverified, usertimezone, educationlevel)
    cursor_target.execute(profile_sql, profile_record)
    identity_record = (client, dataprovidername, useridindataprovider, counter, socialmediachannel, socialuser, isloginidentity, nickname, isallowedforlogin, isexpiredsession, lastlogintimestamp_utc, photourl, thumbnailurl, firstname, lastname, gender, age, birthday, birthmonth, birthyear, email, countrycode, state, city, zip, profileurl, proxiedemail, address, languages, professionalheadline, bio, industry, specialities, religion, politicalview, interestedin, relationshipstatus, hometown, followerscount, followingcount, username, locale, isverified, usertimezone)
    cursor_target.execute(identity_sql, identity_record)
    useridsignature = 'test1'
    signaturetimestamp_utc = '123'
    isuserregistered = choice(['t', 'f'])
    userregstrdtimestamp_utc = '123'
    isuseraccountverified = choice(['t', 'f'])
    useraccntverifiedtimestamp_utc = '123'
    isuseraccntactive = choice(['t', 'f'])
    isuseraccntlockedout = choice(['t', 'f'])
    influencerrank = str(randint(0, 101))
    lastloginlocation_countrycode = countrycode
    lastloginlocation_state = 'test'
    lastloginlocation_city = 'test'
    lastloginlocation_latitude = '123'
    lastloginlocation_longitude = '123'
    oldestdataupdatedtimestamp_utc = '123'
    accountcreatedtimestamp_utc = '123'
    registrationsource = 'test'
    account_record = (client, dataprovidername, useridindataprovider, useridsignature, signaturetimestamp_utc, socialmediachannel, isuserregistered, userregstrdtimestamp_utc, isuseraccountverified, useraccntverifiedtimestamp_utc, isuseraccntactive, isuseraccntlockedout, influencerrank, lastloginlocation_countrycode, lastloginlocation_state, lastloginlocation_city, lastloginlocation_latitude, lastloginlocation_longitude, oldestdataupdatedtimestamp_utc, accountcreatedtimestamp_utc, registrationsource)
    cursor_target.execute(account_sql, account_record)
    num_of_patents = randint(1, 5)
    for i in range(num_of_patents):
        patent_uuid = randomN('patent_id', 12)
        title = get_patent_pub_name()
        summary = 'This patent is about the ' + title
        patentnumber = str(randint(222222222, 999999999))
        patentoffice = 'Patent office-' + countrycode
        status = choice(['Awarded', 'Submitted', 'Under scrutiny', 'Declined', 'Application received'])
        patentdate = ''
        patenturl = 'https://www.' + patentoffice + '.com/patents/' + patentnumber
        patent_record = (client, patent_uuid, title, dataprovidername, useridindataprovider, summary, patentnumber, patentoffice, status, patentdate, patenturl)
        cursor_target.execute(patent_sql, patent_record)
     
        pblctn_uuid = randomN('publctn_id', 12)
        pblctn_title = title
        pblctn_summary = choice(['A work on ', 'A write up on ', 'Book about ', 'Article: ', 'Book: ']) + title
        publisher = choice(['Mondadori', 'Bonnier', 'ThomsonReuters', 'Harper Collins', 'Oxford', 'Wiley', 'O\'reily', 'Shogakukan', 'Informa', 'Simon & Schuster', 'Pearson', 'Saraiva', 'Sanoma', 'Cambridge University Press'])
        publicationdate = ''
        publicationurl = 'https://www.' + publisher.replace(' ', '') + '.com/' + pblctn_title.replace(' ', '')
        publication_record = (client, pblctn_uuid, dataprovidername, useridindataprovider, pblctn_title, pblctn_summary, publisher, publicationdate, publicationurl)
        cursor_target.execute(publication_sql, publication_record)
               
        cert_uuid = randomN('cert_id', 12)
        cert_name = title
        cert_number = str(randint(23423423,345345345))
        authority = choice(['Mondadori', 'Bonnier', 'Harper Collins', 'Wiley', 'O\'reily', 'Shogakukan', 'Informa', 'Pearson', 'Saraiva', 'Sanoma', 'Cambridge University'])
        cert_startdate = ''
        cert_enddate = ''
        cert_record = (client, cert_uuid, dataprovidername, useridindataprovider, cert_name, authority, cert_number, cert_startdate, cert_enddate)
        cursor_target.execute(cert_sql, cert_record)
    edu_uuid = randomN('edu_id', 12)
    school = choice(['PES Institute of Technology', 'Bangalore University', 'IIT Madras', 'NIT Calicut', 'Government Engg college, Thrissur', 'VIT','MIT', 'MSRIT', 'RVCE', 'UVCE'])
    schooltype = choice(['Engineering', 'Technical Education', 'Higher studies', 'Advanced studies'])
    fieldofstudy = choice(['Computer Science', 'Electronics and Communication', 'Civil engineering', 'Mechanical Engineering', 'Electrical Engineering', 'Production engineering'])
    degree = choice(['B.Tech', 'MS', 'M.Tech', 'BS', 'B.Sc', 'M.Sc'])
    startyear = str(randint(2000, 2010))
    endyear = str(int(startyear) + 4)
    education_record = (client, edu_uuid, dataprovidername, useridindataprovider, school, schooltype, fieldofstudy, degree, startyear, endyear)
    cursor_target.execute(education_sql, education_record)
    work_uuid = randomN('work_id', 12)
    company = choice(['SAP Labs India', 'IBM', 'CISCO', 'Microsoft', 'Google', 'Yahoo', 'Housing', 'Wipro', 'Infosys', 'TCS'])
    companyid = randomN(company[:3], 9)
    work_title = choice(['Senior developer', 'Developer Associate', 'Programmer', 'Coder', 'Hacker', 'Software Engineer', 'Data expert', 'Web developer', 'System programmer', 'UI Expert', 'Quality Assurance', 'Knowledge Management', 'Architect', 'Team Lead'])
    companysize = str(randint(5000, 500000))
    work_startdate = ''
    work_enddate = ''
    work_industry = 'Software'
    iscurrentcompany = choice(['X', ''])
    workexp_record = (client, work_uuid, dataprovidername, useridindataprovider, company, companyid, work_title, companysize, work_startdate, work_enddate, work_industry, iscurrentcompany)
    cursor_target.execute(workexp_sql, workexp_record)
    num_of_skills = randint(1, 10)
    for i in range(num_of_skills):
        skill_uuid = randomN('skill_id', 12)
        skill = choice(['Algorithms', 'Analytics', 'Android', 'Applications', 'Blogging', 'Business', 'Business Analysis', 'Business Intelligence', 'Business Storytelling', 'Content Management', 'Content Marketing', 'Content Strategy', 'Data Analysis', 'Data Analytics', 'Data Engineering', 'Data Mining', 'Data Science', 'Data Warehousing', 'Database Administration', 'Database Management', 'Digital Marketing', 'Hospitality', 'Human Resources', 'Information Management', 'Information Security', 'Legal', 'Leadership ', 'Management', 'Marketing', 'Market Research', 'Media Planning', 'Microsoft Office Skills', 'Mobile Apps', 'Mobile Development', 'Network and Information Security', 'Newsletters', 'Online Marketing', 'Presentation', 'Project Management', 'Public  Relations', 'Recruiting', 'Relationship Management', 'Research', 'Risk Management', 'Search Engine Optimization', 'Social Media', 'Social Media Management', 'Social Networking', 'Software', 'Software Engineering', 'Software Management', 'Strategic Planning', 'Strategy', 'Technical', 'Training', 'UI / UX', 'User Testing', 'Web Content', 'Web Development', 'Web Programming', 'WordPress', 'Writing'])
        skill_level = choice(['Beginner', 'Medium', 'Advanced', 'Expert'])
        skill_level_years_dict = {'Beginner': 0, 'Medium': 4, 'Advanced': 10, 'Expert': 20}
        skill_years = skill_level_years_dict[skill_level]
        skill_record = (client, skill_uuid, dataprovidername, useridindataprovider, skill, skill_level, skill_years)
        cursor_target.execute(skill_sql, skill_record)
     
    fav_uuid = randomN('fav_id', 12)
    type = ''
    name = choice(['Eminem', 'Metallica', 'Led Zeppelin', 'Mother Jane', 'Avial', 'Lamb of God', 'Nirvana'])
    category = 'Music'
    favorite_record = (client, fav_uuid, dataprovidername, useridindataprovider, type, name, category)
    cursor_target.execute(favorite_sql, favorite_record)
    like_uuid = randomN('like_id', 12)
    type = ''
    name = choice(['Eminem', 'Metallica', 'Led Zeppelin', 'Mother Jane', 'Avial', 'Lamb of God', 'Nirvana'])
    category = 'Music'
    id = randomN('id', 7)
    likecreationtimstamp_utc = '123'
    like_record = (client, like_uuid, dataprovidername, useridindataprovider, name, category, id, likecreationtimstamp_utc)
    cursor_target.execute(like_sql, like_record)
    phone_uuid = randomN('phone_id', 12)
    phonetype = choice(['mobile', 'telephone'])
    phonenumber = str(randint(9132323154, 9947931930))
    phone_record = (client, phone_uuid, dataprovidername, useridindataprovider, phonetype, phonenumber)
    cursor_target.execute(phone_sql, phone_record)
hdb_target.commit()
print('Done pushing data for ' + str(count) + ' users into ' + server + '!')


 

2. Run the script from your editor


3. Checking the Results in the database tables.

The script randomly chooses values for various fields from a specified set of values. For example:

countrycode will be chosen randomly from the list ['IN', 'DE', 'FR', 'US', 'CH', 'IT', 'RU'].

These lists can be modified as per the requirement for the demo.
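For example, to generate fifty users and include two more countries, adjust the corresponding variables at the top of the script (the values here are arbitrary):

# Customization example: more users and additional countries.
count = 50
countrycodes = ['IN', 'DE', 'FR', 'US', 'CH', 'IT', 'RU', 'BR', 'JP']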

Over and out!


Related Blog posts:

Demo Social and Sentiment data generation using Python script

http://scn.sap.com/community/crm/marketing/blog/2015/01/12/demo-social-and-sentiment-data-generation-using-python-script

For a particular trade promotion, accruals are calculated as per the configured "accrual methods".

The Funds Management application provides accrual management capabilities, which means that accrual calculations can be done within the SAP CRM system and sent to SAP ERP, where the amounts are posted in SAP ERP Financials.

The accrual calculation job can use various reference data types, depending on what is defined in Customizing. Examples include sales volumes (SAP ERP), trade promotion management (TPM) planning data, or funds data. The accrual calculation results are stored in the accrual staging area.

 

In accrual posting, it is possible to schedule an accrual posting run in the batch processing framework to post the accrual results as fund postings, which are transferred to SAP ERP Financials as accounting documents.

 

The diagram below explains how the accrual method configuration is linked to a particular trade promotion and its spend types.

 

 

[Diagram: linkage of accrual method configuration to trade promotion and spend types]

 

Configuration Path

  1. SPRO -> Customer Relationship Management -> Funds Management -> Accruals -> Accrual Calculation Method

 


 

 

Below is an overview of the six accrual methods delivered in the SAP CRM Trade Promotion standard. However, it is possible to configure alternative accrual calculation methods as required on a project basis.

[Table: overview of the six standard accrual methods]
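To make the calculation idea concrete, here is a hypothetical sketch of one such method, accruing a planned rate per case against actual sales volume, capped at the planned amount (illustrative only; it does not reproduce any of the delivered methods exactly):

# Hypothetical accrual method: planned rate per case on actual volume,
# capped at the planned amount; returns the delta still to be posted.
def calculate_accrual(actual_volume, rate_per_case, planned_amount, already_accrued):
    accrual = min(actual_volume * rate_per_case, planned_amount)
    return accrual - already_accrued

print calculate_accrual(1000, 0.50, 800.0, 300.0)   # -> 200.0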

 

 

The accrual method information can be seen in the fund usage of a trade promotion, as shown in the screenshot below.

 

[Screenshot: accrual method shown in the fund usage of a trade promotion]

Introduction

 

'Analyze Sentiments' is a Fiori app that helps you perform Sentiment Analysis on the topics that interest you. To learn more about the app, please go check out these links:

 

 

Quick integration of Sentiment Analysis powered by Analyze Sentiments into your app

Ready to get your feet wet?!

 

Here are a few steps to add a chart control into a UI5 control that supports aggregations (like sap.m.List, etc) and to connect the OData service to this chart.

When you run the app, you will be able to see nice little charts added to each item in the aggregation, showing sentiment information.

 

Follow these steps to quickly integrate Sentiment Analysis capability into your already existing UI5 app:

 

1) Insert the chart into the appropriate location in your app. In the sample code below, the chart is embedded into a custom list item:

<List id="main_list" headerText="Vendors">
  <items>
        <CustomListItem>
            <HBox justifyContent="SpaceAround">
                  <ObjectHeader title="playstation" />
                  <viz:VizFrame vizType="bar" uiConfig="{applicationSet:'fiori'}" height="250px" width="250px"> </viz:VizFrame>
            </HBox>
        </CustomListItem>
  ...

2) In the controller code, on initialization, add the following code to fill the chart added in the previous step with data:

 

// Get a reference to the OData service
var oModel = new sap.ui.model.odata.ODataModel("http://localhost:8080/ux.fnd.snta/proxy/http/lddbvsb.wdf.sap.corp:8000/sap/hba/apps/snta/s/odata/sntmntAnlys.xsodata/", true);
// This sample assumes the SAP client was stored on the controller earlier (e.g. in onInit)
var self = this;
// Get a reference to the control in which the charts will be embedded
var oList = this.getView().byId("main_list");
// For each list item, the subject name is extracted from the title of the item;
// the chart inside that item is then filled with that subject's sentiment data
for (var i = 0; i < oList.getItems().length; i++) {
    var oChart = oList.getItems()[i].getContent()[0].getItems()[1];
    var sItemName = oList.getItems()[i].getContent()[0].getItems()[0].getTitle();
//Now we set the data for each item in the list as per the subject that we extracted from the listitem.
    oModel.read('/SrchTrmSntmntAnlysInSoclMdaChnlQry(P_SAPClient=\'' + self.sSAPClient + '\')/Results', null, ['$filter=SocialPostSearchTermText%20eq%20\'' + sItemName + "\' and " + "SocialPostCreationDate_E" + " ge datetime\'" + '2014-06-14' + '\'' + '&$select=Quarter,Year,SearchTermNetSntmntVal_E,NmbrOfNtrlSoclPostVal_E,NmbrOfNgtvSocialPostVal_E,NmbrOfPstvSocialPostVal_E'], false, function(oData, oResponse) {
        oChart.setVizProperties({
            interaction: {
                selectability: {
                    mode: "single"
                }
            },
            valueAxis: {
                label: {
                    formatString: 'u'
                }
            },
            legend: {
                title: {
                    visible: false
                }
            },
            title: {
                visible: false
            },
            plotArea: {
                dataLabel: {
                    visible: true
                },
                colorPalette: ['sapUiChartPaletteSemanticNeutral', 'sapUiChartPaletteSemanticBad', 'sapUiChartPaletteSemanticGood']
            }
        });
        var oChartDataset = new sap.viz.ui5.data.FlattenedDataset({
            measures: [{
                name: "Neutral",
                value: '{NmbrOfNtrlSoclPostVal_E}'
            }, {
                name: "Negative",
                value: '{NmbrOfNgtvSocialPostVal_E}'
            }, {
                name: "Positive",
                value: '{NmbrOfPstvSocialPostVal_E}'
            }],
            data: {
                path: "/results"
            }
        });
        oChart.setDataset(oChartDataset);
        var oDim1 = new sap.viz.ui5.data.DimensionDefinition({
            name: "Year",
            value: '{Year}'
        });
        var oDim2 = new sap.viz.ui5.data.DimensionDefinition({
            name: "Quarter",
            value: '{Quarter}'
        });
        var oDataset = oChart.getDataset();
        oDataset.addDimension(oDim1);
        oDataset.addDimension(oDim2);
        var oChartModel = new sap.ui.model.json.JSONModel(oData);
        oChart.setModel(oChartModel);
        oChart.setVizProperties({
            valueAxis: {
                title: {
                    visible: true,
                    text: "Mentions"
                }
            },
            categoryAxis: {
                title: {
                    visible: true,
                    text: "Quarter"
                }
            }
        });
        var feedValueAxis = new sap.viz.ui5.controls.common.feeds.FeedItem({
            'uid': "valueAxis",
            'type': "Measure",
            'values': ["Neutral", "Negative", "Positive"]
        });
        var feedCategoryAxis = new sap.viz.ui5.controls.common.feeds.FeedItem({
            'uid': "categoryAxis",
            'type': "Dimension",
            'values': [new sap.viz.ui5.controls.common.feeds.AnalysisObject({
                    'uid': "Year",
                    'type': "Dimension",
                    'name': "Year"
                }),
                new sap.viz.ui5.controls.common.feeds.AnalysisObject({
                    'uid': "Quarter",
                    'type': "Dimension",
                    'name': "Quarter"
                })
            ]
        });
        oChart.addFeed(feedCategoryAxis);
        oChart.addFeed(feedValueAxis);
    }, function() {
        sap.m.MessageBox.show("Odata failed", sap.m.MessageBox.Icon.ERROR, "Error", [
            sap.m.MessageBox.Action.CLOSE
        ]);
    });
}

PS: Depending on how you add the chart to your app, the above chunk of code will have to be adjusted to get the subject name and pass it to the chart.

 

In the sample code above, the chart in each custom list item is bound to its data in a loop. If you have added the chart to a similar control with an aggregation, you only need to modify the lines that get the list control, the chart reference, and the search term.

 

 

What else can you do with the Analyze Sentiments OData services?

Here's some more information on the existing OData services for Analyze Sentiments, and some ideas on how you can use them in your apps.

 

Collection / what information it gives out:

SocialMediaChannelsQuery: list of channels (code and name)

SocialPostSearchTermsQuery: list of search terms (code and name)

SrchTrmSntmntAnlysInSoclMdaChnlQry: number of mentions (total, positive, negative, and neutral) and the net sentiment value for a search term, at daily/weekly/monthly/quarterly/yearly granularity

SrchTrmSntmntAnlysSclPstDtlsQry: list of social posts for a search term in a period

SrchTrmSntmntTrendInSoclMdaChnlQry: net sentiment trend in percentage for a search term over a specified period


PS: The last three services retrieve data for all subjects when no filter is applied on search terms.
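
These collections can also be consumed outside UI5. Here is a minimal sketch in Python; the host, client, and credentials are placeholders, while the collection name, P_SAPClient parameter, and SocialPostSearchTermText filter are taken from the UI5 sample above:

import requests

# Placeholders: replace host, port, client, and credentials with your own
base = "http://<hana-host>:8000/sap/hba/apps/snta/s/odata/sntmntAnlys.xsodata"
resp = requests.get(
    base + "/SrchTrmSntmntAnlysInSoclMdaChnlQry(P_SAPClient='001')/Results",
    params={
        "$filter": "SocialPostSearchTermText eq 'playstation'",
        "$select": "Year,Quarter,SearchTermNetSntmntVal_E",
        "$format": "json",
    },
    auth=("USER", "PASSWORD"),
)
for row in resp.json()['d']['results']:
    print(row['Year'], row['Quarter'], row['SearchTermNetSntmntVal_E'])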

 

 

Calculations used:

 

Net sentiment = P - N

P = sum of the weights of positive posts; a positive post can weigh +1 (good) or +2 (very good)

N = sum of the weights of negative posts, taken as a magnitude; a negative post can weigh -1 (bad) or -2 (very bad)

 

Net sentiment trend percentage = (net sentiment in the last n days - net sentiment in the previous n days) / net sentiment in the previous n days.
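
As a worked example, here are both formulas in a minimal Python sketch (the weights follow the +1/+2 and -1/-2 scheme above; treating N as the magnitude of the negative weights is my reading of the formula):

# Worked example of the net sentiment formulas
positive_weights = [1, 2, 1]   # good, very good, good
negative_weights = [-1, -2]    # bad, very bad

P = sum(positive_weights)          # 4
N = abs(sum(negative_weights))     # 3 (magnitude of the negative weights)
net_sentiment = P - N              # 1

# Trend: last n days compared with the previous n days, as a percentage
net_last_n, net_previous_n = 12.0, 10.0
trend_pct = (net_last_n - net_previous_n) / net_previous_n * 100  # 20.0
print(net_sentiment, trend_pct)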

 

So, on the whole, we have the following information:

i) the number of positive, negative, neutral, and total mentions of a subject

ii) the net sentiment for a subject

iii) the net sentiment trend for a subject, expressed as a percentage

 

Here are some sample ways in which external apps can start using our OData assets right away:

 

Use / control that can be used / collection to be used:

Show the numbers (total, positive, negative, or neutral mentions, or net sentiment) for a subject: Label, bound to SrchTrmSntmntAnlysInSoclMdaChnlQry

Show the social posts related to a subject: Table, List, etc., bound to SrchTrmSntmntAnlysSclPstDtlsQry

Show the net sentiment trend of a subject: Label, bound to SrchTrmSntmntTrendInSoclMdaChnlQry

Show a chart/graph of the numbers over a period: Chart, bound to SrchTrmSntmntAnlysInSoclMdaChnlQry

 

 

 

Related links:

One of the most overlooked aspects of contact management is the relationship between the contacts in your database and your sales process. It has been my experience that most companies develop their marketing databases with contact information independent of, and blind to, their sales processes.

With an average of 5.6 people now reported to be involved in a purchase decision for a solution, you can't develop a good database without first understanding how you sell.

A critical first step in helping customers through the buyer's journey is to understand who you need to communicate with along the way. Understanding the roles involved and how decisions get made for specific solutions and business processes is a prerequisite for developing the right kind of marketing database. For example, if you sell complex solutions that require engagement with economic and technical buyers, then the contacts in your database need to support these types of roles.

I once marketed to a very specialized audience across a defined number of accounts that could only purchase our solutions if their companies met very specific purchasing criteria. While I was able to find resources for the specialty titles I was seeking, I was not able to meet my second objective of locating these titles within the accounts and criteria we were targeting.

In this case, I ended up developing a custom database with the help of a marketing intern, pulling contacts from an online contact repository against predefined criteria. While this custom database required some initial development effort, our program responses, leads, and opportunity conversions grew exponentially. We were now able to target and reach the roles we needed to reach, in the accounts where we needed to do business.

As you begin to evaluate future contact list purchases, do so from the perspective of addressing the white space and gaps in the roles supporting your sales processes. As you do, I'm confident you will begin to view the contacts in your marketing database in an entirely new manner while further appreciating their ultimate power.

To learn more about SAP's in-memory database, SAP HANA, and SAP solutions for Big Data, I invite you to click on the following link.

 

 

Regards,

 

 

Harry E. Blunt

Director, North America Industry Field Marketing
SAP America, Inc.

3999 West Chester Pike 
Newtown Square, PA 19073
M: 302-740-8293

E: harry.blunt@sap.com

With future innovation and sales success tied so closely to the delivery of relevant and personalized customer experiences, companies must get closer and more intelligently connected with their customers while paying greater attention to the user experience. To meet these objectives, companies must develop a holistic framework for managing customer intelligence and their different sales channels while differentiating their offerings through flexible solution delivery models.

 

Competing successfully in the digital economy requires an "always on," integrated approach for capturing and leveraging customer intelligence. Intelligence should be leveraged throughout all parts of the organization and needs to be visible and relevant at the point of a customer's transaction or engagement. By strategically combining transactional, qualitative, and social data with analytics and Big Data, companies can better understand opportunities for future innovation while engaging with customers more personally by becoming much more prescriptive around audience targeting and messaging.

 

Because customers expect personalized, relevant experiences regardless of where and how they engage, all organizations must have a holistic picture of customer engagement supported by a sound strategy focused on omni-channel commerce. Providing customers with a unified and intelligently connected user experience grows customer relationships and captures customer intelligence that previously went undetected across disconnected, non-visible, and fragmented customer experiences. Companies can dramatically improve their customers' user experience and loyalty by offering personal, intelligently connected experiences over multiple channels of engagement and commerce.

 

Product and software innovators can draw closer to their customers by moving from consumption-based purchase models to solution- and subscription-based models. While there are significant benefits to moving to solution sales and recurring revenue streams like subscriptions, there are also added complexities impacting how these new solutions need to be developed, communicated, and delivered to the market. Operationally, moving toward solution- and subscription-based business models impacts how solutions are developed, how orders get configured, and ultimately how revenue is captured and realized. To fully capitalize on these new emerging opportunities for selling solutions and subscriptions, you need operational and billing systems that accommodate and support a large degree of custom order configuration and business-requirements flexibility, extending from product development through solution delivery.

 

You can learn more about SAP solutions for Customer Engagement and Commerce, and how leading manufacturing and software companies are providing differentiated value to their customers, by accessing these complimentary resources.

 

Warmest regards,

 

Harry E. Blunt

Director, North America Industry Field  Marketing
SAP America, Inc.

3999 West Chester Pike 
Newtown Square, PA 19073
M: 302-740-8293

E: harry.blunt@sap.com

While social selling, customer experience, and buying personas grab the marketing headlines, I would like to pause and pay homage to the often ignored but equally important North American Industry Classification System (NAICS).

 

It's my assertion that NAICS codes and their predecessor, SIC codes, are among the most misunderstood and underutilized FREE resources available to marketers and business people.

 

Many of the challenges marketers face around content consumption and message relevance can be greatly addressed by doing a better job with industry and audience segmentation prior to audience engagement. Whether you are marketing to the Fortune 1000 or to an addressable market of over a million customers, those thousands or millions of customers do not all share the same business characteristics. The key to effective messaging, and to getting your message to the right people, is reaching people where they "live" by targeting and messaging based on those unique characteristics.

 

Understanding and incorporating NAICS codes into your target audience strategy is a critical first step in setting winning future audience engagement tactics. Those six-digit codes buried among all your other fields of customer data truly do matter.

 

To help you appreciate the magnitude and importance of these differences, I have attached a little light reading: 508 pages of individual NAICS code descriptions from the US Census Bureau. As you will see, sub-sectors operating within the same general industry act and behave very differently. While a chemical provider of chlorine and a paint company operate under the same general classification of chemicals, the way they manufacture and sell products is very different.

 

If you want another proof point for taking a more granular approach to audience targeting, consider that there are more than 10,000 active associations and thousands of specialty trade journals. These associations, trade publications, and their associated websites and social communities are successfully reaching audiences at the sub-sector level with very defined special interests. They continue to thrive and prosper in their niche markets because the content they provide and the issues they address, while niche, are extremely relevant to their audiences. In short, they are reaching their target audiences where they "live."

 

Marketing program returns naturally improve when content reaches and resonates with its intended audiences. Well-defined audience segmentation, aided by the NAICS, is a good first step companies can take to ensure they develop the right messages, heard and then acted upon by the right people.

 

To learn how SAP can help you engage more personally with your customers through Big Data and omni-channel commerce, please check out the following complimentary resources.



Harry E. Blunt


Director, North America Industry Field Marketing
SAP America, Inc.

3999 West Chester Pike 
Newtown Square, PA 19073
M: 302-740-8293

E: harry.blunt@sap.com

If you have ever read a murder mystery or watched a criminal investigation TV show, you know a common plot element is to start the story with a found dead body, often referred to affectionately as "John Doe." The balance of the show or book typically focuses on a protagonist working to uncover John Doe's identity and determine how, and by whom, this person met their demise.

 

The investigator rarely just jumps in trying to solve the crime; more often than not, the investigation starts with an autopsy of the body. With the additional information gleaned from the autopsy, the protagonist then begins to solve the mystery. As the protagonist gathers more information, the anonymous dead body quickly evolves into an identified person with distinct attributes, leading to the point when the crime finally gets solved.

 

For marketers, working with incomplete and unrefined responder data is a bit like working with an anonymous dead body. If all a marketer knows about a potential prospect or responder is the person's name, company, and perhaps email, there is not much the marketer can do to engage effectively with that individual. Like the initial autopsy, a marketer has to define and characterize this first responder to the best of his or her ability from the outset. Otherwise, there is no logical place from which to engage this individual in future activities.

 

Successful target marketing must begin with basic contact hygiene. Missing contact details like titles, emails, phone numbers, and industry NAICS codes with their supporting descriptions must be continuously appended and updated within databases. While it's not always possible to have complete responder contact data initially, companies must make data hygiene a priority to ensure contact information is as complete as possible prior to future use for targeting and analysis.

 

Once a customer record is defined with uniquely definable attributes, you can move forward with identifying and segmenting future audience targets by creating responder profiles based on specific responder behaviors. None of this can take place until contact data is defined and managed as uniquely identifiable attributes. Two aspects of a customer's record that make it potentially unique are the person's title and their industry NAICS code. The third aspect relates to leveraging and tracking responder behavior, but you can't move successfully to step three without first having a person's title and a correctly defined NAICS code with its supporting description. Otherwise, you run the risk of jumping to conclusions based on inaccurate or incomplete data. To illustrate the point, let me provide a fictitious example. Suppose that for the last six months you have been running a marketing campaign on Big Data. In that campaign, you always included responders from prior related activities, including those without complete contact records, and you continually pushed every Big Data-related activity to these program responders.

 

At the conclusion of the campaign, while participation has been steadily increasing, you have seen only marginal movement in responders converting to leads and in leads moving into opportunities. As a postmortem, you decide to do some additional data hygiene on the responders with incomplete contact records. With a more comprehensive responder profile, this is what you find: a large percentage of the newly defined responders came from Life Sciences, particularly medical device companies, with titles focused on quality and regulatory operations compliance. With this new insight, you research the topic of Big Data in the medical device community and discover that it is actively being showcased as an issue and opportunity. You then tweak your programming and messaging to focus more on operationally focused medical device buying centers during the second half of the year, and both leads and opportunities substantially improve.

 

While this is a fictitious example, here is the important point: until you're able to develop a more comprehensive responder profile, there is little you can do to move confidently forward with meaningful engagement and analysis. Target audiences and responders are just "anonymous dead bodies" until they can be characterized and grouped by uniquely definable attributes.

 

And while data hygiene and segmentation rarely command the same reverence as "improving customer experience," like the murder-mystery autopsy, they are critical disciplines marketers must master first to ensure relevant audience conversations and future business opportunities.

 


Harry E. Blunt

Director, North America Industry Field Marketing
SAP America, Inc.

3999 West Chester Pike 
Newtown Square, PA 19073
M: 302-740-8293

E: harry.blunt@sap.com

SAP Canada hosted two events last week focusing on the theme of running simple and innovating business processes in a complex global world. They featured fascinating success stories from SAP customers and an eye-opening presentation from TED Fellow and complexity scientist Eric Berlow on embracing complexity to come up with innovative answers to big data challenges.

Eric kicked off proceedings with his intriguing perspective on how we can leverage the explosion of data to build a ‘data-scope’ which allows us to connect the dots and see simple patterns that are invisible to the naked brain. He calls this ‘finding simple’ from complex causality and multi-dimensionality, a theory that can be applied across digital media and business strategy. He closed by saying that businesses need to focus on being intelligently simple rather than merely simplistic. In other words, IT has to be distilled down to offer real business insights, rather than simplified down to nothing at all.


Positioning SAP as the go-to intelligent business simplifier

Snehanshu Shah, Vice President, HANA Centre of Excellence, SAP Global, moved the conversation to the cost of growing complexity in business. He introduced SAP's S/4HANA business suite as the on-premise and cloud solution designed to give organizations the freedom and drive to innovate their business processes. By taking the core functionality of R/3, simplifying it, and applying the streamlined Fiori interface, S/4HANA requires less hardware at lower cost while providing faster answers. This is business intelligence and analytics at the fingertips of every line of business.

Sam Masri, Managing Principal, Industry Value Engineering, SAP Canada, continued the discussion by calling complexity business's most intractable challenge today, a view rising across many industries. While so many enterprises throw a vast portion of their IT spend at keeping the lights on, they are missing out on TCO reductions of up to 22% by failing to invest more in innovation.

Sharing our customers’ success in simplification and innovation

The event was rounded off with some valuable insights into how some of SAP's key customers are using our software to run simple. First, Albert Deileman and Jason Leo of the Healthcare of Ontario Pension Plan (HOOPP) told us of their search for a 'pixie dust' solution to the organization's complexity challenges. For them, the SAP HANA Enterprise Cloud (HEC) solution is all about simplicity with results: real-time data speed and agility, perfect replication, rapid queries and modelling, and faster time to market.

John Harrickey from CSA Group was next up, telling us how the transition from ERP to HEC fuelled the company's growth and expansion into Europe and Asia. It has enabled better employee engagement by mobilizing applications and improving insights, and better customer engagement by enhancing collaboration and responsiveness. He explained how the company has been able to build HANA upon existing SAP technology to create a seamless experience with high functionality, resulting in reduced complexity and a more productive business.

Wally Council of HP Converged Solutions spoke of the company’s need to flip IT investment from keeping the lights on to funding innovation. He brought up the point that the operational complexity challenges faced by huge and geographically-diverse modern enterprises have to be tackled head-on by a major rethink of key business processes and user experiences.

To top it off, our own Mike Golz, CIO, SAP Americas, reminded us that SAP itself runs SAP, and remains our first and best reference customer.

The well-attended talks were held at the Four Seasons Hotel in Toronto on March 12 and at Flames Central in Calgary on March 10. If you would like to find out more about SAP's Simplify to Innovate initiative and the S/4HANA business suite, please visit the event landing page and www.sap.com/HANA. The presentations from the event are available on SAP Canada's JAM page.
