
SAP CRM: Marketing


I want to provide an overview of possible decimal issues in BPS Planning as used in the CRM Marketing scenario. There are some known issues related to decimal settings in BPS planning. This blog explains the design of the decimal validation and how to set up the planning layout correctly. It also contains a collection of solutions for known issues.

 

When looking at the planning layout created for a trade promotion in CRM, we can see a key figure in the planning layout defined with 2 decimals.

planning layout .jpg

tpm planning layout2.jpg

I will take this example to explain the design.

 

General Settings

 

When setting up the planning layout, the following four levels of dependencies need to be considered.

 

1. UPX Layout Definition

2. BPS0 Customizing

3. Key Figure

4. Data Element

 

When the planning layout is rendered the first level that is considered is the UPX Layout Definition. In transaction UPX_MNTN the number of decimals can be defined:

upx_mntn bonus display.jpg

  upx_mntn kf22.jpg

The decimal places set in the UPX layout define the number of decimals displayed in the planning layout. This number is for display purposes only.

 

On the second level there is the BPS0 Customizing. This is the first level that defines how the key figures are stored. That means key figures are rounded to the number of decimals defined in BPS0 and stored as the rounded value.

bps0 dec.jpg

For data consistency reasons, the number of decimals defined in UPX_MNTN must be less than or equal to the number of decimals defined in BPS0. Otherwise an error will be raised.

 

If there are no decimals defined in BPS0 the same rule is valid for the key figure definition in RSD1.

rsd1 key fig.jpg

If there are no decimals defined in the key figure details the data element for the key figure is considered.

rsd1 key fig data element.jpg

rsd1 key figure data element2.jpg

The decimals defined in UPX_MNTN are considered for displaying the key figures, whereas the decimals defined at the BPS0 level and below are used for calculations and for storing the values. You should not have more decimals in the layout than you can actually save in the database. The general rule is the following:

 

 

No of display decimals <= No of decimals used for calculation
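
To make the four-level dependency concrete, here is a minimal Python sketch (hypothetical helper functions, not SAP code) that applies the rule in the order the system evaluates the levels:

def effective_calc_decimals(bps0=None, rsd1_key_figure=None, data_element=None):
    # The first level that defines decimals wins: BPS0 customizing,
    # then the key figure definition in RSD1, then its data element.
    for level in (bps0, rsd1_key_figure, data_element):
        if level is not None:
            return level
    return 0

def validate_layout(upx_display_decimals, **levels):
    calc = effective_calc_decimals(**levels)
    if upx_display_decimals > calc:
        raise ValueError('Display decimals (%d) exceed calculation decimals (%d)'
                         % (upx_display_decimals, calc))
    return calc

print validate_layout(2, bps0=2)     # valid: 2 display decimals, 2 stored decimals
try:
    validate_layout(3, bps0=2)       # invalid: more display than storage decimals
except ValueError as err:
    print err                        # comparable to the error raised by the layout validation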


Please refer to the following KBA for further information about the dependencies between the different levels:

 

1936500 - Enter key figure 0,000 with a valid format (2 decimal places)

 

Percentage based key figures

 

What needs to be considered for percentage based key figures?

tpm laoyut percentage.jpg

The number of displayed decimals is taken from the UPX_MNTN settings as well.

upx_mntn percentage.jpg

This is similar to any other key figure definition. The difference is the way the system stores the percentage values. Depending on the parameter UPX_KPI_KFVAL_PERC_CONV_TO_BW, the percentage value is stored divided by 100; a value of 10% is therefore stored as 0.1. This requires the percentage key figure to have two more decimals defined in BPS0 than in UPX_MNTN in order not to lose precision.
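
As a rough illustration in plain Python (not SAP code), rounding the stored fraction to too few decimals shows why the two extra decimals are needed:

# A percentage entered with two display decimals, e.g. 12.34%.
entered = 12.34
stored = entered / 100.0      # persisted value when UPX_KPI_KFVAL_PERC_CONV_TO_BW is not set

print round(stored, 2)        # 0.12   -> only two decimals in BPS0: 12.34% collapses to 12%
print round(stored, 4)        # 0.1234 -> two extra decimals keep the full precision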

bpd0 percentage.jpg

This is documented in the following SAP note:

 

1407682 - Planning services customizing for percentage key figures

 

With the parameter UPX_KPI_KFVAL_PERC_CONV_TO_BW set, the percentage key figure value is stored in BW as 10 for 10%. In that case the additional decimal setting described above is not required. Information about the UPX_KPI_KFVAL_PERC_CONV_TO_BW parameter in the UPC_DARK2 table is available in the following SAP note:

 

1867095 - Planning Services Customizing Flags in the UPC_DARK2 Table

 

There are some known issues for percentage key figures; these are solved with the following SAP notes:

 

1523793 - Wrong rounding of percentage key figures with classic render

1370566 - Rounding error for Percentage Key Figures

 

Currency key figures

 

Since most currencies use two decimals by design, there should not be any issues for most of them. However, there are known issues for exceptional currencies, that is, currencies with other than two decimal places, such as JPY. In case of issues with those currencies, the following SAP notes are required in the system:
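
As a small, purely illustrative example (simplified currency list, not SAP customizing), amounts have to be rounded according to the decimal places of the individual currency:

# Simplified excerpt: JPY uses 0 decimal places, TND uses 3.
currency_decimals = {'EUR': 2, 'USD': 2, 'JPY': 0, 'TND': 3}

def round_amount(amount, currency):
    return round(amount, currency_decimals.get(currency, 2))

print round_amount(1234.567, 'EUR')   # 1234.57
print round_amount(1234.567, 'JPY')   # 1235.0 -> no decimal places
print round_amount(1234.567, 'TND')   # 1234.567 -> three decimal places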

 

2106896 - Decimal issues in Planning Layouts when working with exceptional currencies

2099874 - Missing conversion for exceptional currencies in UPX_KPI_KF_PLEVEL_READ2
2021933 - Use decimals settings from BPS when Enh Layout is set to 0

1962963 - Planning Layout issues with exceptional currencies with more than two decimals

1535708 - Plan data for currencies without decimals

 

Using master and dependent profiles

 

When using master and dependent profiles, the decimal settings for the key figures need to be exactly the same in the master and the dependent profiles. It is the master profile that is synchronized and rendered for calculating the key figures, so the key figures hold the values with the decimals from the master profile. For display, however, the rendering happens for the displayed profile, i.e. the dependent profile. Therefore the decimal settings need to be in sync between the master and the dependent profiles.
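
A minimal sketch of such a consistency check (hypothetical key figure names, plain Python rather than anything SAP delivers):

# Decimal settings per key figure in the master and in a dependent profile.
master    = {'REVENUE': 2, 'UPLIFT_PCT': 4, 'VOLUME': 0}
dependent = {'REVENUE': 2, 'UPLIFT_PCT': 2, 'VOLUME': 0}

mismatches = [kf for kf in master if dependent.get(kf) != master[kf]]
if mismatches:
    print 'Decimal settings out of sync for: %s' % ', '.join(mismatches)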

 

Known issues

 

There are some known issues that are corrected with the following SAP notes:

 

2106896 - Decimal issues in Planning Layouts when working with exceptional currencies

2085223 - Decimals issue in Planning Layouts rendered with the class CL_UPX_LAYOUT_RENDER

2080064 - Incorrect error message for UPX key figure decimal settings

1817554 - ASSIGN_DECIMALS_TOO_HIGH when synchronizing occurs

 

The blog will be updated on a regular basis. If you find any information missing, please let me know.

Introduction


The social data harvesting connector enables harvesting posts, write-ups, and social user data from different social media channels such as Facebook, Twitter, Wikipedia, blogs and so on through DataSift.

 

In the first release of social data harvesting connector, the approach was to fetch social data from different social media channels.

 

In the latest release, SP02, the main addition is to consider the consent of the social user and take the appropriate action on the social posts during harvesting. The configuration for consent handling and the related actions is done in the SAP Business Suite system.

 

The Social User Consent Handling function is available only when the business function FND_SOMI_CM is switched on.

 

A quick list of new features and enhancements includes:

 

  • Social User Consent Handling during data harvesting, taking the appropriate action on the social posts accordingly (see the sketch after this list)

        The consent types which are supported in the connector are as follows:

                                         

             - No Consent Required, Store Anonymously

             - No Consent Required, Store Complete User Information

 

  • Enhanced DataSift Mapper files to fetch the data from Facebook Pages (Managed Source)

 

  • Updated DataSift Mapper file with fields provided by DataSift to fetch the data from channel Facebook public
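
The following minimal Python sketch is purely illustrative (the constant names and the post structure are assumptions, not connector code); it only shows the kind of action a consent type implies during harvesting:

STORE_ANONYMOUSLY = 'NO_CONSENT_STORE_ANONYMOUSLY'
STORE_COMPLETE = 'NO_CONSENT_STORE_COMPLETE'

def apply_consent(post, consent_type):
    if consent_type == STORE_ANONYMOUSLY:
        # drop the personally identifying user fields before storing the post
        return dict(post, user_name='', user_account='', user_profile_link='')
    return post

post = {'text': 'Great product!', 'user_name': 'Jane', 'user_account': '@jane',
        'user_profile_link': 'https://twitter.com/jane'}
print apply_consent(post, STORE_ANONYMOUSLY)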

 

 

Release Information


The new features of release SP02 are available from SAP Business Suite Foundation 7.47 SP 06 (SAP_BS_FND 747) onwards.

 

 

Implementation


You need the software component SAP SOMI DS CONT, which you can download from the Software Download Center in the SAP Service Marketplace. You must also have a valid license/API key from DataSift.

 


To access the Software Download Center from the SAP Service Marketplace homepage, service.sap.com, choose SAP Support Portal → Software Downloads → Software Download Center.

 

To search for the software component SAP SOMI DS CONT, proceed as follows:


  • Select Search for Software Downloads in the left navigation bar.
  • Search for the software component SAP SOMI DS CONT 1.0
  • Download the latest support package (SP02) for SAP SOMI DS CONT 1.0

 

The Installation Guide for the Social Data Harvesting connector can be found at https://websmp110.sap-ag.de/instguides -> SAP In-Memory Computing -> SAP Customer Engagement Intelligence -> Installation Guide Social Data Harvesting Connector


For detailed documentation, refer to the PDF attached to SAP Note 2079650.


Note: The updated help portal documentation is available only after the release to customer of SAP Business Suite Foundation 7.47 SP07.


SAP Trade Promotion Optimization (TPO)

Recently I was involved in an SAP TPO Proof of Concept (PoC) for a top FMCG company in the US region. I believe this project may be one of a kind in exploring SAP TPO capabilities to predict accurate volume and lift using modern trade POS data. We received the last 3 years of POS data together with account planning, promotion and sales data. I want to share some learnings and highlight a few features of SAP TPO.

 

Background: Research indicates that trade promotion related spend accounts for 8-12% of the overall turnover of a CPG company and up to 60% of CPG marketing budgets for stimulating channel demand. While trade promotion spending as a percentage of marketing budgets has increased dramatically, the inefficiency of trade promotion represents the "number-one concern" among manufacturers. Yet there is little visibility into where this spending actually goes, or how effectively it increases revenues, expands market share, or creates brand awareness among consumers. With millions of dollars being spent to stimulate demand, a marginal improvement in fund allocation and a recalibration of promotion processes could have a disproportionate impact on sales uplift and promotion ROI. SAP TPO uses advanced analytical constructs like optimization, predictive analytics and what-if analysis to provide significant visibility into the effectiveness of this trade promotion spend. The insights gained show the contributions to sales uplift and help to optimize it in the face of the many real-world constraints encountered during the fund allocation process.

What is Trade Promotion Optimization?

TPO assists CPG manufacturers in strategically optimizing trade spending across their total product portfolio. Trade Promotion Optimization is an approach that uses business rules, constraints, and goals to mathematically create a trade calendar that can meet all of these requirements. Optimization is helpful for strategic questions such as "what combination of promotional events (feature price, frequency, timing and depth of deal allowances) will meet or beat my revenue and/or profit goals and still stay within my trade promotion budget?" The right TPO models can also solve for the mix of revenue, volume and/or profitability, as well as the profit contribution for both the manufacturer and the retailer. SAP TPO enables trade marketing and sales teams to leverage advanced predictive modeling to suggest optimal price and merchandising decisions based on goals and objectives, or to assess revenue, volume and profitability.


SAP TPO: It is an SAP CRM add-on which comprises a forecasting and modeling engine. The TPO science depends on DMF. SAP TPO enables users to understand the demand baseline (sell-out baseline) prediction and predicts the regular volume, revenue, profit margin etc. for the manufacturer and the planning account for the agreed duration.

SAP CRM: Supports all processes involving direct customer contact throughout the entire customer relationship life cycle, from market segmentation, sales lead generation and opportunities to post-sales and customer service. It includes business scenarios such as account and trade promotion management.

SAP DSiM: Demand data is loaded into the DSiM system, which harmonizes the data as per the original master data system (ERP). SAP delivers several methods to harmonize syndicated (market research), POS and other external data.

SAP BW: Receives harmonized data from DSiM and sends it to the DMF system for demand modeling.

SAP DMF: The Demand Management Foundation provides predictive, demand-driven forecasts and optimization simulations for all promotion planning across channels and customer segments. In DMF you can model and forecast for a set of customers, channels and markets. Using demand data, DMF helps to forecast and optimize the predictions as required. It is a science engine which transforms historical demand data into models for forecasting and optimization. SAP TPO uses Bayesian science techniques. A forecast run is created for each call of the science system (DMF); the forecast run can be used to see the parameters and results of each prediction that is added to the what-if scenario.


Data: Historical data plays a major role in TPO. Prediction / forecasting results of SAP TPO depend on historical data. SAP TPO supports mainly POS, Syndicated (market research) or internal data that can be uploaded into DMF directly or through DSiM. DSiM harmonizes the data based on your primary data (product hierarchies in ERP).


Analytics: Historical sales and promotion data is used to build predictive models, which are then used for planning future promotions. Bayesian Hierarchical Modeling (BHM) techniques are used to build these models. BHM not only considers the behavior of the individual product and market while modeling, it also considers the learning from the category or brand sales trend. The main advantage of BHM is that it provides better accuracy even with small data sets, and the accuracy can be further improved by correctly specifying the settings for priors such as price, promotional lift, etc.

An accurate promotional uplift can be derived by correctly specifying the demand patterns of promotional sales on different days of the week.

Predictive models not only capture the impact of factors like price, holidays, distribution and sales trend, but also provide the flexibility to capture the dynamic demand behavior of products by classifying them into homogeneous groups based on their demand pattern.

SAP TPO has built-in analytics which are visible from the CRM TPO screen.


User Interface / Integration options:

  • TPO integrated in Trade promotion planning without additional assignment block
  • TPO integration assignment block
  • Promotion optimization can be created independently of any trade promotion (prediction & simulation are also available)

TPO forecast types control how the prediction is made; SAP TPO has two types of forecasts.

What-if Analysis forecast types

  • Prediction: It analyzes past promotion performance for a given price and promotional vehicles (like displays, features, price reductions, and multi-buys) and predicts one outcome in line with the trend.
  • Simulation: Most of the time the challenge is not just getting results but getting them within constraints. In addition to price and promotional vehicles, a simulation can also consider objectives like profit optimization and sales volume optimization and, more importantly, constraints like trade spending limits, and it forecasts multiple optimal scenarios. The most suitable one can be chosen after analyzing all scenarios.

What-if analysis results: SAP TPO presents forecast results in intuitive graphical dashboards, which makes it easy to view and compare different forecast outcomes in a single view. As of version TPO 2.0 it depicts forecast results in 5 dashboards, each with a different perspective. On one dashboard the user can change the trade spends and see the impact instantly. More dashboards can be added through enhancements. These dashboards present not only the data but also the insights, which reduces the strain of going through the details of each forecast scenario to make a choice.


Dashboards: The SAP TPO screen has several dashboards, such as basic analysis (key figures like volume uplift, non-promo revenue, promo revenue, retailer revenue), volume decomposition (volume uplift split into base demand, tactic lift, price lift, seasonality, holiday and cannibalization) and win-win assessment (promo margin and promo profit). The SAP TPO agreement screen has dashboards such as weekly review (baseline and total volume), price and volume decomposition, and profit and loss. A simplified illustration of the volume decomposition is sketched below.
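
As a purely illustrative sketch (hypothetical numbers, not SAP TPO code), the volume decomposition can be thought of as additive components on top of the base demand:

# Hypothetical weekly figures for one promoted product.
decomposition = {
    'base_demand':    1000.0,
    'tactic_lift':     250.0,   # displays, features
    'price_lift':      180.0,
    'seasonality':      40.0,
    'holiday':          20.0,
    'cannibalization':  -60.0,  # volume taken away from other own products
}
total_volume = sum(decomposition.values())
uplift = total_volume - decomposition['base_demand']
print 'Total volume: %.0f, promotional uplift: %.0f' % (total_volume, uplift)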

Integration with SAP TPM: SAP TPO is tightly integrated with SAP TPM. A few additional assignment blocks, fields and buttons are provided, such as the assignment blocks for promotion causals, what-if analysis, and optimization scenarios.


Learnings: Data quality is the most critical element of any forecast, as it directly influences the forecast results. It is essential to have complete and accurate data without gaps. When external data such as syndicated or research data is used, it is crucial to check whether it is a true or close representation of the retailers being used in all required locations.

One important lesson learnt from experience: do not underestimate how much effort it takes to source, clean, format and load the data.

Within SAP TPO, each forecast has a forecast confidence indicator, which represents system model confidence in the forecast and is based on past data.

I suggest running an exercise called "know your business insights", which will generate business insights for any organization. SAP DSiM on HANA can help here.


Conclusion: SAP TPO can be implemented on its own as a standalone tool, but implementing it in conjunction with SAP TPM realizes the true potential of both. SAP TPO can plan the promotion strategy and TPM can execute it smoothly through its integration with other processes, i.e. funds management and claims management.

SAP TPO requires consultants with DMF knowledge, and getting experienced people is a challenge. It is really helpful to have statisticians build and improve the models based on external factors.


Disclaimer :

 

This tutorial is intended as a guide for the creation of demo/test data only. The scripts provided in this blog are not intended for use in a productive system.

 

Purpose :

 

This blog explains harvesting of historical twitter data through GNIP. The pre-installed Python Interpreter from the SAP HANA Client is used to execute Python scripts from the SAP HANA Studio. The different scripts (discussed later) are used to harvest historical twitter data from GNIP and store the useful data into Business Suite Foundation tables SOCIALDATA and SOCIALUSERINFO.

 

 

Prerequisites :

 

Make sure the following prerequisites are met before you start.

  • Installation of SAP HANA Studio and SAP HANA Client
    Install SAP HANA Studio and SAP HANA Client and apply for a HANA user with read, write and update authorization for foundation database tables SOCIALDATA and SOCIALUSERINFO.
  • Create a GNIP account
  • Enable Historical PowerTrack API in your GNIP account
    For harvesting historical data, the Historical PowerTrack API should be enabled for your account. If it is not already enabled, contact your account manager to have it enabled.

 

Setup :

 

For the initial setup and configuration, refer to the blog http://scn.sap.com/community/crm/marketing/blog/2014/09/29/twitter-data-harvesting-adapter-using-python-script-for-gnip

 

 

Code:

 

Harvesting historical twitter data from GNIP involves 3 basic steps :

 

1. Request a Historical Job: You can request a Historical PowerTrack job by making an HTTP POST request to the API endpoint. You will need to include a fromDate, toDate, rules and some additional metadata in the JSON POST body.

 

To request a Historical PowerTrack job, send a request to the following URL with your user credentials and a POST body similar to the one in the script below.

https://historical.gnip.com/accounts/<GNIP_ACCOUNT_NAME>/jobs.json

 

create_job.py

 

import urllib2
import base64
import json
UN = '' # YOUR GNIP ACCOUNT EMAIL ID
PWD = ''
account = '' # YOUR GNIP ACCOUNT USER NAME
def get_json(data):
    return json.loads(data.strip())
def post():
    url = 'https://historical.gnip.com/accounts/' + account + '/jobs.json'
    publisher = "twitter"
    streamType = "track"
    dataFormat = "activity-streams"
    fromDate = "201410140630"
    toDate = "201410140631"
    jobTitle = "job30"
    rules = '[{"value":"","tag":""}]'
    jobString = '{"publisher":"' + publisher + '","streamType":"' + streamType + '","dataFormat":"' + dataFormat + '","fromDate":"' + fromDate + '","toDate":"' + toDate + '","title":"' + jobTitle + '","rules":' + rules + '}'
    base64string = base64.encodestring('%s:%s' % (UN, PWD)).replace('\n', '')
    req = urllib2.Request(url=url, data=jobString)
    req.add_header('Content-type', 'application/json')
    req.add_header("Authorization", "Basic %s" % base64string)
    
    proxy = urllib2.ProxyHandler({'http': 'http://proxy:8080', 'https': 'https://proxy:8080'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    try:
        response = urllib2.urlopen(req)
        the_page = response.read()
        the_page = get_json(the_page)
        print 'Job has been created.'
        print 'Job UUID : ' + the_page['jobURL'].split("/")[-1].split(".")[0]
    except urllib2.HTTPError as e:
        print e.read()
        
if __name__=='__main__':
    post()
 

The above code creates a job and receives, in a JSON response, a UUID for the created job along with other details.

The UUID is used to run the following scripts.

  • accept_reject.py
  • get_links.py
  • monitor_job.py


Note: Put the above UUID into the 'uuid' variable (used to build the 'url') in each of the above scripts before executing them.

 

2. Accept/Reject a Historical Job: After delivery of the estimate, you can accept or reject the previously requested job with an HTTP PUT request to the job URL endpoint. To accept or reject the estimate, send a request to the following URL with your user credentials and a request body (example below) updating the status to accept (or reject).

https://historical.gnip.com/accounts/<GNIP_ACCOUNT_NAME>/publishers/twitter/historical/track/jobs/<uuid>.json

 

The following is an example of a valid POST body:

{
"status": "accept"
}

 

accept_reject.py

 

import urllib2
import base64
import json
uuid = '' # UUID OF THE JOB CREATED BY create_job.py
UN = '' # YOUR GNIP ACCOUNT EMAIL ID
PWD = ''
account = '' # YOUR GNIP ACCOUNT USER NAME
url = 'https://historical.gnip.com:443/accounts/' + account + '/publishers/twitter/historical/track/jobs/' + uuid + '.json'
def get_json(data):
    return json.loads(data.strip())
def accept_reject():
    
    choice = 'accept' # Switch to 'reject' to reject the job.
    payload = '{"status":"' + choice + '"}'
    
    base64string = base64.encodestring('%s:%s' % (UN, PWD)).replace('\n', '')
    req = urllib2.Request(url=url, data=payload)
    req.add_header('Content-type', 'application/json')
    req.add_header("Authorization", "Basic %s" % base64string)
    req.get_method = lambda: 'PUT'
    
    proxy = urllib2.ProxyHandler({'http': 'http://proxy:8080', 'https': 'https://proxy:8080'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    
    try:
        response = urllib2.urlopen(req)
        the_page = response.read()
        the_page = get_json(the_page)
        uuid = the_page['jobURL'].split("/")[-1].split(".")[0]
        print 'Job has been accepted.'
        print 'UUID : ' + uuid
    except urllib2.HTTPError as e:
        print e.read()
    
if __name__=='__main__':
    accept_reject()
 

If the job has not been created successfully before running this script, you will get the following error message:

'{"status":"error","reason":"Invalid state transition: Job cannot be accepted"}'.

 

After accepting the job, you can monitor its status by sending a request to the following URL with your user credentials.

https://historical.gnip.com/accounts/<GNIP_ACCOUNT_NAME>/publishers/twitter/historical/track/jobs/<uuid>.json

 

A request to the above URL returns a JSON response containing a "percentComplete" field along with other details about the running job. This field can be used to check whether the job has completed.

 

monitor_job.py

 

import urllib2
import base64
import json
uuid = '' # UUID OF THE JOB CREATED BY create_job.py
UN = '' # YOUR GNIP ACCOUNT EMAIL ID
PWD = ''
account = '' # YOUR GNIP ACCOUNT USER NAME
url = 'https://historical.gnip.com:443/accounts/' + account + '/publishers/twitter/historical/track/jobs/' + uuid + '.json'
def get_json(data):
    return json.loads(data.strip())
class RequestWithMethod(urllib2.Request):
    def __init__(self, url, method, headers={}):
        self._method = method
        urllib2.Request.__init__(self, url, headers)
    def get_method(self):
        if self._method:
            return self._method
        else:
            return urllib2.Request.get_method(self)
def monitor():
    base64string = base64.encodestring('%s:%s' % (UN, PWD)).replace('\n', '')
    
    req = RequestWithMethod(url, 'GET')
    req.add_header('Content-type', 'application/json')
    req.add_header("Authorization", "Basic %s" % base64string)
    
    proxy = urllib2.ProxyHandler({'http': 'http://proxy:8080', 'https': 'https://proxy:8080'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    
    try:
        response = urllib2.urlopen(req)
        the_page = response.read()
        the_page = get_json(the_page)
        print the_page
        print 'Status Message : ' + str(the_page['statusMessage'])
        print 'Percent Complete : ' + str(the_page['percentComplete']) + ' %'
        if 'quote' in the_page:
            print 'Estimated File Size in Mb : ' + str(the_page['quote']['estimatedFileSizeMb'])
            print 'Expires At : ' + str(the_page['quote']['expiresAt'])      
    except urllib2.HTTPError as e:
        print e.read()
            
if __name__ == '__main__':
    print url
    monitor()
 

3. Retrieve Historical Data: When the job has completed, GNIP provides a data URL endpoint containing a list of file URLs that can be downloaded in parallel. To retrieve this list of data files, send an HTTP GET request to the following URL with your user credentials:

https://historical.gnip.com/accounts/<GNIP_ACCOUNT_NAME>/publishers/twitter/historical/track/jobs/<uuid>/results.json

 

get_links.py

 

import urllib2
import base64
import json
uuid = '' # UUID OF THE JOB CREATED BY create_job.py
UN = '' # YOUR GNIP ACCOUNT EMAIL ID
PWD = ''
account = '' # YOUR GNIP ACCOUNT USER NAME
url = 'https://historical.gnip.com/accounts/' + account + '/publishers/twitter/historical/track/jobs/' + uuid + '/results.json'
def get_json(data):
    return json.loads(data.strip())
def get_links():
    base64string = base64.encodestring('%s:%s' % (UN, PWD)).replace('\n', '')
    req = urllib2.Request(url)
    req.add_header('Content-type', 'application/json')
    req.add_header("Authorization", "Basic %s" % base64string)
    
    proxy = urllib2.ProxyHandler({'http': 'http://proxy:8080', 'https': 'https://proxy:8080'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    try:
        response = urllib2.urlopen(req)
        the_page = response.read()
        the_page = get_json(the_page)
        print 'Total URL count : ' + str(the_page['urlCount']) + '\n'
        for item in the_page['urlList']:
            print item
    except urllib2.HTTPError:
        print 'Job is not yet delivered.'
if __name__=='__main__':
    get_links()
 

The above code returns a list of URLs from which data can be downloaded. For each URL returned by get_links.py, we can run get_data.py to fetch the data from that URL and store it into the SOCIALDATA and SOCIALUSERINFO tables.

 

Note: Copy the links returned by get_links.py (one at a time) into the 'url' variable before executing get_data.py.

 

get_data.py

 

import urllib2
import zlib
import threading
from threading import Lock
import sys
import ssl
import json
from datetime import datetime
import calendar
import dbapi
from wsgiref.handlers import format_date_time
CHUNKSIZE = 4*1024
GNIPKEEPALIVE = 30
NEWLINE = '\r\n'
# GNIP ACCOUNT DETAILS
url = ''
HEADERS = { 'Accept': 'application/json',
           'Connection': 'Keep-Alive',
            'Accept-Encoding' : 'gzip' }
server = ''
port = 
username_hana = ''
password_hana = ''
schema = ''
client = ''
socialmediachannel = 'TW'
print_lock = Lock()
err_lock = Lock()
hdb_target = dbapi.connect(server, port, username_hana, password_hana)
cursor_target = hdb_target.cursor()
class procEntry(threading.Thread):
    def __init__(self, buf):
        self.buf = buf
        threading.Thread.__init__(self)
    def unicodeToAscii(self, word):
        return word.encode('ascii', 'ignore')
    
    def run(self):
        for rec in [x.strip() for x in self.buf.split(NEWLINE) if x.strip() <> '']:
            try:
                jrec = json.loads(rec.strip())
                with print_lock:
                    res = ''
                    if 'verb' in jrec:
                        verb = jrec['verb']
                        verb = self.unicodeToAscii(verb)
                        # SOCIALUSERINFO DETAILS
                        socialUser = jrec['actor']['id'].split(':')[2]
                        socialUser = self.unicodeToAscii(socialUser)
                        socialUserProfileLink = jrec['actor']['link']
                        socialUserProfileLink = self.unicodeToAscii(socialUserProfileLink)
                        socialUserAccount = jrec['actor']['preferredUsername']
                        socialUserAccount = self.unicodeToAscii(socialUserAccount)
                        friendsCount = jrec['actor']['friendsCount']
                        followersCount = jrec['actor']['followersCount']
                        postedTime = jrec['postedTime']
                        postedTime = self.unicodeToAscii(postedTime)
                        displayName = jrec['actor']['displayName']
                        displayName = self.unicodeToAscii(displayName)
                        image = jrec['actor']['image']
                        image = self.unicodeToAscii(image)
                        
                        # SOCIALDATA DETAILS
                        socialpost = jrec['id'].split(':')[2]
                        socialpost = self.unicodeToAscii(socialpost)
                        createdbyuser = socialUser
                        creationdatetime = postedTime
                        socialpostlink = jrec['link']
                        creationusername = displayName
                        socialpostsearchtermtext = jrec['gnip']['matching_rules'][0]['value']
                        socialpostsearchtermtext = self.unicodeToAscii(socialpostsearchtermtext)
                        
                        d = datetime.utcnow()
                        time = d.strftime("%Y%m%d%H%M%S")
                        
                        creationdatetime_utc = datetime.strptime(postedTime[:-5], "%Y-%m-%dT%H:%M:%S")
                        creationdatetime_utc = creationdatetime_utc.strftime(("%Y%m%d%H%M%S"))
                        
                        stamp = calendar.timegm(datetime.strptime(creationdatetime[:-5], "%Y-%m-%dT%H:%M:%S").timetuple())
                        creationdatetime = format_date_time(stamp)
                        creationdatetime = creationdatetime[:-4] + ' +0000'
                        
                        if verb == 'post':
                            socialdatauuid = jrec['object']['id'].split(':')[2]
                            socialdatauuid = self.unicodeToAscii(socialdatauuid)
                            socialposttext = jrec['object']['summary']
                            socialposttext = self.unicodeToAscii(socialposttext)
                            res = socialUser + '\t'  + socialUserAccount + '\t' + str(friendsCount) + '\t' + str(followersCount) + '\t' + postedTime + '\t' + displayName + '\t' + displayName.upper() + '\t' + socialUserProfileLink + '\t' +image
                        elif verb == 'share':
                            socialdatauuid = jrec['object']['object']['id'].split(':')[2]
                            socialdatauuid = self.unicodeToAscii(socialdatauuid)                            
                            socialposttext = jrec['object']['object']['summary']
                            socialposttext = self.unicodeToAscii(socialposttext)
                            res = socialposttext + '\t' +socialUser + '\t'  + socialUserAccount + '\t' + str(friendsCount) + '\t' + str(followersCount) + '\t' + postedTime + '\t' + displayName + '\t' + displayName.upper() + '\t' + socialUserProfileLink + '\t' +image
                        print(res)
                        sql = 'upsert ' + schema + '.SOCIALUSERINFO(CLIENT, SOCIALMEDIACHANNEL, SOCIALUSER, SOCIALUSERPROFILELINK, SOCIALUSERACCOUNT, NUMBEROFSOCIALUSERCONTACTS, SOCIALUSERINFLUENCESCOREVALUE, CREATIONDATETIME, SOCIALUSERNAME, SOCIALUSERNAME_UC, SOCIALUSERIMAGELINK, CREATEDAT) values(?,?,?,?,?,?,?,?,?,?,?,?) with primary key'
                        cursor_target.execute(sql, (client, socialmediachannel, socialUser, socialUserProfileLink, socialUserAccount, friendsCount, followersCount, creationdatetime, displayName, displayName.upper(), image, time))
                        hdb_target.commit()
                            
                        sql = 'upsert ' + schema + '.SOCIALDATA(CLIENT, SOCIALDATAUUID, SOCIALPOST, SOCIALMEDIACHANNEL, CREATEDBYUSER, CREATIONDATETIME, SOCIALPOSTLINK, CREATIONUSERNAME, SOCIALPOSTSEARCHTERMTEXT, SOCIALPOSTTEXT, CREATEDAT, CREATIONDATETIME_UTC) VALUES(?,?,?,?,?,?,?,?,?,?,?,?) WITH PRIMARY KEY'                    
                        cursor_target.execute(sql, (client, socialdatauuid, socialpost, socialmediachannel, createdbyuser, creationdatetime, socialpostlink, creationusername, socialpostsearchtermtext, socialposttext, time, creationdatetime_utc))
                        hdb_target.commit()
            except ValueError, e:
                with err_lock:
                    sys.stderr.write("Error processing JSON: %s (%s)\n"%(str(e), rec))
def getStream():
    proxy = urllib2.ProxyHandler({'http': 'http://proxy:8080', 'https': 'https://proxy:8080'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    req = urllib2.Request(url, headers=HEADERS)
    response = urllib2.urlopen(req, timeout=(1+GNIPKEEPALIVE))
    decompressor = zlib.decompressobj(16+zlib.MAX_WBITS)
    remainder = ''
    while True:
        tmp = decompressor.decompress(response.read(CHUNKSIZE))
        if tmp == '':
            return
        [records, remainder] = ''.join([remainder, tmp]).rsplit(NEWLINE,1)
        procEntry(records).start()
if __name__ == "__main__":
    print('Started...')
    try:
        getStream()
    except ssl.SSLError, e:
        with err_lock:
            sys.stderr.write("Connection failed: %s\n"%(str(e)))
  

When you run the above script, the data from the location specified in the 'url' variable is downloaded and stored into the SOCIALDATA and SOCIALUSERINFO tables.

 

References :

1. Gnip Support

2. Harvesting real time Tweets from GNIP into Social Intelligence tables using a Python Script : http://scn.sap.com/community/crm/marketing/blog/2014/09/29/twitter-data-harvesting-adapter-using-python-script-for-gnip

3. Harvesting Tweets into Social Intelligence tables using a Python Script : http://scn.sap.com/docs/DOC-53824

Disclaimer

This tutorial is intended as a guide for the creation of demo/test data only. The sample script provided is not intended for use in a productive system.

Purpose

The following tutorial explains a way of harvesting Twitter data through GNIP. The pre-installed Python interpreter from the SAP HANA client is used to execute a Python script from SAP HANA Studio. The script harvests the data from GNIP, extracts the useful details and stores them into the Business Suite Foundation database tables SOCIALDATA and SOCIALUSERINFO. Currently the script runs indefinitely; if you want to stop harvesting data, you can do so manually by stopping the execution of the script in SAP HANA Studio, or you can modify the script to run for a specific period of time. To run the script, you will also need to make a few customizing and configuration settings in order to use the PyDev plugin in SAP HANA Studio.

Prerequisites

Make sure that the following prerequisites are met before you start:

  • Installation of SAP HANA Studio and SAP HANA Client
    Install SAP HANA Studio and SAP HANA Client and apply for a HANA user with read, write and update authorization for the foundation database tables SOCIALDATA and SOCIALUSERINFO.
  • Create a GNIP account
  • Data stream configuration in your GNIP account
    Create a data stream for a source (like Twitter, Facebook, etc.) in your GNIP account. Remember, a data stream can only harvest data from a single source, so you should have different data streams for different data sources. After creating a data stream, define the rules in the 'Rules' tab to filter the data that you are getting from GNIP. For writing the rules, refer to the link: http://support.gnip.com/apis/powertrack/rules.html

Setup
1. Configuring Python in SAP HANA Studio Client
   
Python version 2.6 is already embedded in the SAP HANA client, so you do not need to install Python from scratch. To configure the Python API to connect to SAP HANA, proceed as follows.
        
1. Copy and paste the following files from C:\Program Files\SAP\hdbclient\hdbcli to C:\Program Files\SAP\hdbclient\Python\Lib
                a. __init__.py
                b. dbapi.py
                c. resultrow.py

2. Copy and paste the following files from C:\Program Files\SAP\hdbclient to C:\Program Files\SAP\hdbclient\Python\Lib
                a. pyhdbcli.pdb
                b. pyhdbcli.pyd
          
Note:
       
In Windows, the default installation path for a 64-bit installation of SAP HANA Studio and the SAP HANA Database client is C:\Program Files\SAP\..

If you opted for a 32-bit installation, the default path is C:\Program Files (x86)\SAP\..
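
Once the files are copied, a quick connectivity check can be run from the configured interpreter. The host, port and credentials below are placeholders; this snippet is only a suggestion and not part of the original tutorial:

import dbapi

# replace with your own HANA host, port, user and password
conn = dbapi.connect('myhanahost', 30015, 'MYUSER', 'MYPASSWORD')
cursor = conn.cursor()
cursor.execute('SELECT CURRENT_TIMESTAMP FROM DUMMY')
print cursor.fetchone()
conn.close()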

2. Setting up the Editor to run the file
2.1. Install Pydev plugin to use Python IDE for Eclipse
             
The preferred method is to use the Eclipse IDE from SAP HANA Studio. To be able to run the python script, you first need to install the Pydev plugin in SAP HANA Studio.
                   
                    a. Open SAP HANA Studio. Click Help on the menu bar and select Install New Software
                    b. Click the Add button and enter the following information
                    gnip1.jpg
                        Name : pydev
                        Location : http://pydev.org/updates

                    c. Select the settings as shown in this screenshot.
                    gnip2.jpg
                    d. Press Next twice
                    e. Accept the license agreements, then press Finish.
                    f. Restart SAP HANA Studio.

2.2. Configure the Python Interpreter

In SAP HANA studio, carry out the following steps:
     a. Select the menu entries Window -> Preferences
     b. Select PyDev -> Interpreters -> Python Interpreter
     c. Click the New button and type in an interpreter name. In the field Interpreter Executable, enter the following executable file: C:\Program Files\SAP\hdbclient\Python\Python.exe. Press OK twice.

2.3. Create a Python project

In SAP HANA Studio, carry out the following steps:
     a. Click File -> New -> Project, then select Pydev project
     b. Type in a project name, then press Finish
     c. Right-click on your project. Click New -> File, then type your file name, press Finish.

Customizing and Running the Script

1. Customizing the python script

Copy and paste the code provided below into the newly created Python file. Enter the values for the following parameters in the file.
     a. URL – unique url for the datastream you have created in your GNIP account
          (For ex : 'https://stream.gnip.com/accounts/<GNIP_USERNAME>/publishers/<STREAM>/streams/track/dev.json')
     b. username_gnip – your GNIP account username
     c. password_gnip – your GNIP account password
     d. server – HANA server name (Ex : lddbq7d.wdf.sap.corp)
     e. port – HANA server port
     f. username_hana – HANA server username
     g. password_hana – HANA server password
     h. schema – schema name
     i. client – client number

import urllib2
import base64
import zlib
import threading
from threading import Lock
import sys
import ssl
import json
from datetime import datetime
import calendar
import dbapi
from wsgiref.handlers import format_date_time
from time import mktime
CHUNKSIZE = 4*1024
GNIPKEEPALIVE = 30
NEWLINE = '\r\n'
URL = ''
username_gnip = ''
password_gnip = ''
HEADERS = { 'Accept': 'application/json',
            'Connection': 'Keep-Alive',
            'Accept-Encoding' : 'gzip',
            'Authorization' : 'Basic %s' % base64.encodestring('%s:%s' % (username_gnip, password_gnip))  }
server = ''
port =
username_hana = ''
password_hana = ''
schema = ''
client = ''
socialmediachannel = ''
print_lock = Lock()
err_lock = Lock()
class procEntry(threading.Thread):
    def __init__(self, buf):
        self.buf = buf
        threading.Thread.__init__(self)
    def unicodeToAscii(self, word):
        return word.encode('ascii', 'ignore')
  
    def run(self):
        for rec in [x.strip() for x in self.buf.split(NEWLINE) if x.strip() <> '']:
            try:
                jrec = json.loads(rec.strip())
                with print_lock:
                    verb = jrec['verb']
                    verb = self.unicodeToAscii(verb)
                  
                    # SOCIALUSERINFO DETAILS
                    socialUser = jrec['actor']['id'].split(':')[2]
                    socialUser = self.unicodeToAscii(socialUser)
                    socialUserProfileLink = jrec['actor']['link']
                    socialUserProfileLink = self.unicodeToAscii(socialUserProfileLink)
                    socialUserAccount = jrec['actor']['preferredUsername']
                    socialUserAccount = self.unicodeToAscii(socialUserAccount)
                    friendsCount = jrec['actor']['friendsCount']
                    followersCount = jrec['actor']['followersCount']
                    postedTime = jrec['postedTime']
                    postedTime = self.unicodeToAscii(postedTime)
                    displayName = jrec['actor']['displayName']
                    displayName = self.unicodeToAscii(displayName)
                    image = jrec['actor']['image']
                    image = self.unicodeToAscii(image)
                  
                    # SOCIALDATA DETAILS
                    socialpost = jrec['id'].split(':')[2]
                    socialpost = self.unicodeToAscii(socialpost)
                    createdbyuser = socialUser
                    creationdatetime = postedTime
                    socialpostlink = jrec['link']
                    creationusername = displayName
                    socialpostsearchtermtext = jrec['gnip']['matching_rules'][0]['value']
                    socialpostsearchtermtext = self.unicodeToAscii(socialpostsearchtermtext)
                  
                    d = datetime.utcnow()
                    time = d.strftime("%Y%m%d%H%M%S")
                  
                    creationdatetime_utc = datetime.strptime(postedTime[:-5], "%Y-%m-%dT%H:%M:%S")
                    creationdatetime_utc = creationdatetime_utc.strftime(("%Y%m%d%H%M%S"))
                  
                    stamp = calendar.timegm(datetime.strptime(creationdatetime[:-5], "%Y-%m-%dT%H:%M:%S").timetuple())
                    creationdatetime = format_date_time(stamp)
                    creationdatetime = creationdatetime[:-4] + ' +0000'
                  
                    if verb == 'post':
                        socialdatauuid = jrec['object']['id'].split(':')[2]
                        socialdatauuid = self.unicodeToAscii(socialdatauuid)
                      
                      
                        socialposttext = jrec['object']['summary']
                        socialposttext = self.unicodeToAscii(socialposttext)
                      
                        res = client + '\t' + socialmediachannel + '\t' + socialUser + '\t'  + socialUserAccount + '\t' + str(friendsCount) + '\t' + str(followersCount) + '\t' + postedTime + '\t' + displayName + '\t' + displayName.upper() + '\t' + socialUserProfileLink + '\t' +image
                      
                    elif verb == 'share':
                        socialdatauuid = jrec['object']['object']['id'].split(':')[2]
                        socialdatauuid = self.unicodeToAscii(socialdatauuid)
                      
                        socialposttext = jrec['object']['object']['summary']
                        socialposttext = self.unicodeToAscii(socialposttext)
                      
                        res = client + '\t' + socialmediachannel + '\t' + socialUser + '\t'  + socialUserAccount + '\t' + str(friendsCount) + '\t' + str(followersCount) + '\t' + postedTime + '\t' + displayName + '\t' + displayName.upper() + '\t' + socialUserProfileLink + '\t' +image
                      
                    print(res)
                    hdb_target = dbapi.connect(server, port, username_hana, password_hana)
                    cursor_target = hdb_target.cursor()
                      
                    sql = 'upsert ' + schema + '.SOCIALUSERINFO(CLIENT, SOCIALMEDIACHANNEL, SOCIALUSER, SOCIALUSERPROFILELINK, SOCIALUSERACCOUNT, NUMBEROFSOCIALUSERCONTACTS, SOCIALUSERINFLUENCESCOREVALUE, CREATIONDATETIME, SOCIALUSERNAME, SOCIALUSERNAME_UC, SOCIALUSERIMAGELINK, CREATEDAT) values(?,?,?,?,?,?,?,?,?,?,?,?) with primary key'
                    cursor_target.execute(sql, (client, socialmediachannel, socialUser, socialUserProfileLink, socialUserAccount, friendsCount, followersCount, creationdatetime, displayName, displayName.upper(), image, time))
                    hdb_target.commit()
                      
                    sql = 'upsert ' + schema + '.SOCIALDATA(CLIENT, SOCIALDATAUUID, SOCIALPOST, SOCIALMEDIACHANNEL, CREATEDBYUSER, CREATIONDATETIME, SOCIALPOSTLINK, CREATIONUSERNAME, SOCIALPOSTSEARCHTERMTEXT, SOCIALPOSTTEXT, CREATEDAT, CREATIONDATETIME_UTC) VALUES(?,?,?,?,?,?,?,?,?,?,?,?) WITH PRIMARY KEY'
                    cursor_target.execute(sql, (client, socialdatauuid, socialpost, socialmediachannel, createdbyuser, creationdatetime, socialpostlink, creationusername, socialpostsearchtermtext, socialposttext, time, creationdatetime_utc))
                    hdb_target.commit()
            except ValueError, e:
                with err_lock:
                    sys.stderr.write("Error processing JSON: %s (%s)\n"%(str(e), rec))
def getStream():
    proxy = urllib2.ProxyHandler({'http': 'http://proxy:8080', 'https': 'https://proxy:8080'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    req = urllib2.Request(URL, headers=HEADERS)
    response = urllib2.urlopen(req, timeout=(1+GNIPKEEPALIVE))
    decompressor = zlib.decompressobj(16+zlib.MAX_WBITS)
    remainder = ''
    while True:
        tmp = decompressor.decompress(response.read(CHUNKSIZE))
        if tmp == '':
            return
        [records, remainder] = ''.join([remainder, tmp]).rsplit(NEWLINE,1)
        procEntry(records).start()
if __name__ == "__main__":
    print('Started...')
    while True:
        try:
            getStream()
        except ssl.SSLError, e:
            with err_lock:
                sys.stderr.write("Connection failed: %s\n"%(str(e)))

2. Run the script from your editor


3. Check the results in the database tables SOCIALDATA and SOCIALUSERINFO.


Other blog posts on connecting Social Channels: 

 

Twitter connector to harvest tweets into Social Intelligence tables using Python script.

http://scn.sap.com/docs/DOC-53824


(If you find any mistakes or if you have any doubts in this blog please leave a comment)

I noticed very high interest in all topics that are related to marketing prospect functionality in SAP CRM.

On the other hand, I have the impression that only a few companies have actually started using marketing prospects in SAP CRM. Those that have get the chance to measure the real value of marketing prospects for their business.

 

Perhaps using marketing prospects is considered a big topic, or even a real mind shift? You move away from starting with prospect data that is already quite complete (typically having a name and at least parts of the address) towards starting with, typically, only an e-mail address.

 

I would like to encourage you to stop hesitating.

For sure you are aware of the market trend to bring your prospects onto a great "customer journey" as early as possible. Otherwise competitors could get them.

The pity is that most companies have a lot of prospect data but don't use it yet. They don't address these prospects with marketing activities. As said, such prospect data is far from complete. That's why, from my perspective, it is too often just stored but never used.

 

What about starting with a first small set of such prospect data? Bring this data into the SAP CRM system and define exactly the kind of lifecycle you want for it. After a while, measure your success, and if you are successful, go ahead with the next set of prospect data and increase the amount step by step.

 

Keeping my fingers crossed for your success!

 

Some additional information that could help to get this started:

For getting an overview of what is new in the marketing prospect area you can read SAP Note 1896854.

For understanding how to measure your success with marketing prospects you can read http://scn.sap.com/community/crm/marketing/blog/2014/03/07/how-to-measure-the-effectiveness-of-your-marketing-activities-with-prospects

The following configuration settings are required for a loyalty program to work in SAP CRM:

 

1. Create Loyalty Type

Transaction Path - IMG >> Customer Relationship Management >> Marketing >>
Loyalty Management >> Basic settings >> Define Loyalty Type

 

2. Assign Number Ranges for Loyalty Objects

Transaction Path – IMG >> Customer Relationship Management >> Marketing >> Loyalty Management
>> Basic Settings >> Assign Number Ranges

 

3. Loyalty Program Types Profile Definition

Transaction Path – IMG >> Customer Relationship Management >> Marketing >> Loyalty Management
>> Loyalty Programs >> Define Profiles for Loyalty Program Types

 

4. Partner Determination Procedure

Transaction Path – IMG >> Customer Relationship Management >> Marketing >> Loyalty Management
>> Basic Settings >> Assign Partner Determination Procedures

 

5. Date Calculation Procedure

Transaction Path – IMG >> Customer Relationship Management >> Marketing >> Loyalty Management
>> Basic Settings >> Define Date Calculation Procedures

 

6. Status Profile Assignment

Transaction Path – IMG >> Customer Relationship Management >> Marketing >> Loyalty Management
>> Basic Settings >> Define Status Profiles

 

7. Reward Rule Maintenance Templates

Transaction Path – IMG >> Customer Relationship Management >> Marketing >> Loyalty Management >> Basic Settings >> Define Templates for Reward Rule Maintenance

 

8. Condition Groups Mapping Definition

Transaction Path – IMG >> Customer Relationship Management >> Marketing >> Loyalty Management
>> Loyalty Programs >> Define Mapping for Condition Groups

 

9. Membership Settings

 

Transaction Path – IMG >> Customer Relationship Management >> Marketing >> Loyalty Management
>> Memberships >> Define Membership Settings

 

10. Point Qualification Type Definition

Transaction Path – IMG >> Customer Relationship Management >> Marketing >> Loyalty Management
>> Membership Cards >> Define Point Qualification Types


There are some known issues related to survey transformation. This blog post should give an idea about how the transformation works and contains a collection of common issues together with documented solutions.

 

The main settings and the related XML and XSLT files can be found in the survey repository in SAP GUI. This can be accessed only from the survey suite (transaction CRM_SURVEY_SUITE), using the survey repository button:

 

survey repository1.jpg

The survey repository contains the files for the style sheets (cascading style sheets), the static survey XSLTs, the parameter files (XML), as well as layout XMLs for the survey print function:

survey repository2.jpg

 

  • Cascading Style Sheets (CSS): define the survey format - colors, background, style, etc.
  • Static Survey XSLTs: required for rendering the survey from the survey XML to HTML
  • Parameter XMLs: contain parameters for the URL scenario
  • Layout XMLs: contain layouts for the print scenario

 

For the survey rendering, the CSS style sheets and the Static Survey XSLTs are required. The transformation works the following way: the survey is stored as an XML, and when the survey is presented, the XML goes through the XSLT transformation, which generates the survey HTML. This happens in two steps (a simplified illustration follows the list below):

  • Survey preparation (build time): the preparation step is done once for every survey. The first stylesheet (GenerateValues) extracts the default values from the survey and stores them in the survey values template file. The second stylesheet (GenerateTemplate) generates the survey template.
    transformation1.jpg
  • Survey execution (run time): the survey template processes the survey values template and generates the survey HTML with the default values.
    transformation2.jpg
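
As a rough illustration of the same two-step pattern outside of SAP (the file names are placeholders and this is not the CATServer implementation), the flow could be reproduced with lxml:

from lxml import etree

survey_xml = etree.parse('survey.xml')                               # stored survey definition
generate_values = etree.XSLT(etree.parse('GenerateValues.xslt'))     # build time, step 1
generate_template = etree.XSLT(etree.parse('GenerateTemplate.xslt')) # build time, step 2

values_template = etree.fromstring(str(generate_values(survey_xml)))     # survey values template
template_doc = etree.fromstring(str(generate_template(survey_xml)))      # survey template (itself an XSLT)
survey_template = etree.XSLT(template_doc)

survey_html = survey_template(values_template)                       # run time: HTML with default values
print str(survey_html)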

 

The rendering happens with the so-called CATServer. The CATServer is responsible for transforming the survey XMLs to HTML using the XSLT files. The CATServer is maintained in the Survey Repository:

cat server admin.jpg

There are two versions of the CATServer available in the system - the active one is highlighted in the CATServer administration:

 

  • Internal CATServer (ABAP-based): This version makes use of the ABAP XSLT processor. Therefore, this version of the CATServer has some limitations with respect to functionality, performance and resource consumption (especially memory). Hence the JTS is the recommended CATServer from a performance point of view; however, since the ABAP-based CATServer is much more stable, it is usually the one to recommend.
  • JTS (Java-based): The JTS version of the CATServer is Java based and provides the same functional scope as the external Java version, but requires no external installation. In addition, it is low in resource consumption and shows much better performance than the internal ABAP-based CATServer.

 

Switching between the ABAP and the JTS CATServer can be done at runtime; no system restart is required.

 

There are some known issues related to the CATServer used. If the survey can't be activated or displayed because the following error message is raised in SAP GUI, this is usually a CATServer issue:

cat server transforermfactory.jpg

 

As mentioned, the ABAP-based CATServer is much more stable; therefore it is definitely the one to recommend.

 

To understand how to switch between the CATServer versions consider the following SAP note:

 

857535 - internal CATServer: Setup Instructions

 

Using class CL_SURVEY_CATSERVER you can either execute the method SET_CATSERVER_ABAP or SET_CATSERVER_JTS. The parameter user must be left empty.

set_catserver1.jpg

setcatserver2.jpg

 

A change was made to the framework recently: the BSP scenario has been migrated to the THTMLB framework.

 

This may lead to survey transformation issues. When accessing the survey at runtime, you may get the following error:

 

bee xml2.jpg

bee xml error.jpg


This error should be solved by implementing the following SAP notes:

 

1817152 - Migration to THTMLB framework for BSP scenarios

1842704 - Error: <bsp:bee>: (BEE XML) BSP extension <:*> is unknown

 

There may be various layout issues, for example the style sheet is not considered or the radio buttons are not aligned correctly.

 

Those issues should be solved with the following SAP notes; the solution includes two steps:

 

For all corrections that affect the XSLT file, it is required to generate the affected surveys again. This can be done either manually or by using report CRM_GENERATE_SURVEYS. The report is delivered with the following SAP note:

 

1835143 - Delivery of report CRM_GENERATE_SURVEYS


As a general recommendation for any survey issues happening in the Web UI, I would suggest searching for the latest notes correcting the XSLT files. There are some known issues, and since most of them are related to the XSLT files, the issue may be solved with the latest corrections.


https://service.sap.com/sap/support/notes/1817152


I want to share some ideas and a brief overview about how the locking mechanism for the BPS Planning is supposed to work, how it can be analyzed and how to tackle possible locking conflicts. This blog covers locking conflicts only, so the BPS planning integration with CRM should already be set up properly.

 

The BPS Planning is integrated in CRM Marketing as a key figure planning application. It provides planning functionality that allows you to measure the performance of an organization, a department, a project, or similar by setting plan values for important business key figures. The planning happens in the BW system using BPS Planning.

 

Whenever a CRM Marketing object such as a marketing plan, campaign or trade promotion is edited, a lock is set on the planning cube in BW based on the defined characteristics selection. The idea of this blog post is to provide an overview of this locking mechanism.

 

Locking the planning cube is required both when accessing the planning layout and when saving the marketing plan object. As soon as the CRM Marketing object is edited, the lock on the planning is set. If the planning in BW cannot be locked, the CRM Marketing object cannot be locked either. This design protects data integrity.

 

When accessing the planning layout, it is opened in edit mode by default:

 

doc bps lock edit mode.jpg

 

Once the planning layout is opened, the planning cube is locked based on the selected characteristics. If any other user tries to access the same object, the planning layout is opened in display mode – hence no synchronization happens. The user is informed with the following error message:

 

Planning object is locked in BI; changes cannot be saved [CRM_KFP 009]

 

doc bps lock msg1.jpg

doc bps lock msg2.jpg

 

The lock entries are created for the characteristic selection only. When a user edits the planning layout for marketing object A, another user should be able to edit the planning for marketing object B.

 

There is a known error in BPS with BW Release 7.40: if a user wants to plan data for an InfoCube, no other user can access the same cube. This is caused by the fact that the system locks the entire cube, ignoring the characteristics selection. This locking conflict affects SAP_BW 7.40 from Support Package 2 to Support Package 5 inclusive. It is solved either with SP 6 or by implementing the following SAP note:


1926227 - Lock conflicts in BW-BPS from Release 7.40

 


If the RFC destination to the BW system is defined in Customizing but does not exist in SM59, the following error is raised:

 

System <RFC_DEST> cannot be reached [CRM_MKTPL 182]

system cannot be reached.jpg

In that case the marketing plan object cannot be edited. Since the connection is defined in Customizing, the system needs to access the BW system for data integrity reasons. If the BW system cannot be reached, the marketing plan object cannot be locked.

 

Please refer to the following KBA:

 

2049597 - Error Message 'System cannot be reached raised' in Marketing Plan

 

If a trade promotion is saved without buying dates assigned, it cannot be edited any more, since the planning cannot be locked in BW due to the missing dates. This issue is solved with the following notes:


1866620 - Editing Marketing Projects with empty PPG is not possible

1849922 - Cannot edit a saved promotion without dates

1827884 - Cannot edit a promotion without dates

 

How does the locking mechanism work?

 

 

When the CRM Marketing object is accessed, CRM calls the BW interface with the OBJECT_ENQUEUE command. The BW system is accessed via an RFC call to the BW function module UPX_KPI_API_XML:

doc bps lock1.jpg

CRM --> BW

doc bps lock debugging bw.jpg


If the OBJECT_ENQUEUE call successfully locks the characteristics selection in the planning cube, the following lock entries are generated. These can be tracked in transaction SM12 or via the following path in the Easy Access menu of the BW system:

 

SAP Menu => Tools => Administration => Monitor => Lock Entries

 

doc bps lock lockentries.jpg

 

The lock entries are created for the RFC user that is defined in SM59 for the BW remote connection, so this is not necessarily the CRM online user.
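If you want to check the lock entries programmatically rather than in SM12, the standard function module ENQUEUE_READ (the read API behind SM12) can be used in the BW system. A small sketch, where 'RFC_BW_USER' is a placeholder for the user of the SM59 destination:

" Read the current lock entries for the RFC user of the BW remote connection.
DATA lt_enq TYPE STANDARD TABLE OF seqg3.

CALL FUNCTION 'ENQUEUE_READ'
  EXPORTING
    gclient = sy-mandt
    guname  = 'RFC_BW_USER'   " placeholder: user defined in the SM59 destination
  TABLES
    enq     = lt_enq.

" lt_enq now holds one entry per lock argument, i.e. the characteristic
" selections currently locked by the planning sessions.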

 

There is no lock entry in SM12, but planning is still not possible due to an error – where is this coming from?

 

 

There is another known situation, where the user gets the following errors:

 

RFC error: Planning cannot be performed for real-time InfoCube 0CP_SLSCA; see long text. [CRM_KPI_PLA 205]
Planning Services Error [CRM_KPI_PLA 203]

 

doc bps lock msg 3.jpg

This is not a locking conflict but is caused by the definition of the InfoCube. In this case the cube is defined as an exclusive read cube – as long as the cube is in read mode, planning cannot be performed. The cube needs to be in write mode so that planning can be performed. This is further documented in the following SAP note:


555849 - Definition of exclusive read cube

This is one of the weaknesses in the design of the CRM planning integration, caused by the way BPS handles locking. The locks set by BPS prevent the same data from being locked in another BPS or BW-IP session. However, locks set by BPS do not prevent locking or updating of the data by core BW processes. For example, nothing prevents a BW administrator from changing the mode of the InfoCube from Planning to Loading while a user holds a lock on the same data by editing the Marketing object.

 

 

There is a lock caused by a data slice – what does this mean?

 

 

When accessing the planning layout the user gets the following errors:

 

RFC error: Data record is locked by data slice. [CRM_KPI_PLA 205]
Planning Services Error [CRM_KPI_PLA 203]

doc pbs lock data slice.jpg

 

The data slice is defined for the planning area and is used to protect data of the InfoCube against changes. If the data slice overlaps with the characteristics selection of the CRM Marketing object, planning cannot be performed. In general, data slices are not supported from the CRM planning integration perspective.

 

doc bps lock  data slice2.jpg

 

Further information about data slices can be found in the online help:


http://help.sap.com/saphelp_nw70ehp1/helpdata/en/43/1c3d3f31b70701e10000000a422035/content.htm

 

If the data slice overlaps with the planning selection, there is no other way than deactivating the data slice to enable BPS planning for the CRM Marketing object.

What if you could target each and every customer individually?

 

„Marketing today is working totally different compared to the last years“. Those were the opening words from Karina Herrmann on the SAP stage at CeBIT 2014. The SAP product manager outlined the challenge very nicely: “Many companies today have millions of customers. Capturing all interactions can lead to billions of records”. (source)

 

SAP’s answer for the future of marketing is Customer Engagement Intelligence, a real-time solution that enables companies to instantly unlock sentiment and contact insights from both social media channels and company-internal sources to better target and influence prospects and customers in a variety of ways. With SAP Customer Engagement Intelligence you can analyse those billions of records in seconds and gain powerful insights into the customer’s behaviour.

 

Let’s have a look at a concrete example. A marketing manager wants to feel the pulse of the market regarding a new idea discussed over lunch with a colleague: a hybrid SUV. By using SAP Social Contact Intelligence, our marketing manager can drill down on the interests of millions of contacts, be they customers, prospects or even anonymous contacts. Filters can be applied to limit the scope of the analysis to the US market, for example, and to the automotive industry only.

 

Segment-of-one Marketing Image1.png

 

The interests of all selected contacts are nicely displayed as a tag cloud (see the figure above). By selecting “SUV” and “Hybrid Car” we can focus our analysis only on those interests. The end result is a tailored target group that could be used in further marketing activities, like an email campaign. But we can go even further. By switching to the Sentiment Engagement tool, we can drill deeper into the qualitative dimension. We can understand what those people were doing on the social networks and on the company-owned forums. The solution allows our marketing manager to answer questions like: What is the overall sentiment on hybrid SUVs? What are people saying about hybrid cars? And the loop can be closed right here: you can pick individual posts and instantly react to them.

 

Segment-of-one Marketing Image2.png

 

But it is not only about the possibility of instantly reacting to a post.  SAP Customer Engagement Intelligence offers all the required information in order to know how to react. You can have a deep insight into each and every contact before engaging with them. Information such as contact level, company, role, activities and interests can be accessed directly within the tool.

 

Segment-of-one Marketing Image3.png

 

And it can become even more interesting: built-in predictive analytics algorithms can compute the buying propensity and assess how a target group will react to a particular product. The analysis can be run by marketing employees directly, without any support from IT.

 

If you want to learn more about SAP Customer Engagement Intelligence you can check out the solution overview page here or you can start a free cloud trial here.

 

Marketers are already familiar with dividing customers into target groups. Now it’s time for unique and individualized targeting.

Welcome to the future of marketing!

The Big Data Visualization Conundrum

This glorious age of big data is creating incredible opportunities for businesses to glean deeper and faster insights for more accurate and timely decision-making, thereby leading to improved customer experience and greater innovation.

 

Concomitant with this are several challenges. Organisations are overwhelmed by the volume, variety, and velocity (do check out Doug Laney’s original research note on the 3Vs of big data) of the data pouring into and across their operations. Businesses are barely able to store big data, let alone understand it or present it meaningfully. Traditional reporting-based BI tools are insufficient to unlock the value that big data represents, partly because they were never designed to analyse semi-structured or unstructured data in the first place.

BigDataVisualization.png

Data visualization enables organisations to assimilate raw data and present it in a way that generates the most value. I'm proposing 3Cs that good data visualization should empower viewers with - coherence, context, and cognition. (Consequently, I hope that someday I'll be as famous as Doug Laney! I also thought about correlation and causation, but there seems to be a raging debate regarding the relevance of those two). Pairing big data with data visualization discovery tools empowers business users to be self-reliant and not depend on enterprise IT to mine data, perform ad-hoc analysis, or create one-off reports, for them. Going ahead, this democratisation of BI will serve real-time insights to business users directly, leveraging the growing abundance of mobile devices, and bypassing the conventional batch-processed-reporting route.

 

For those interested in knowing more about making big data more meaningful, I would recommend these articles on Wired and Forbes magazines.


Introducing Pixelplots

Pixelplots, a data visualization technique, are high-density multivariate landscapes of big data that empower the discovery of insights, without any aggregation of data. Simple analytics (bar and pie charts) are easy-to-use (as long as one isn't using a 3-dimensional pie chart, for example, or using a format that is incongruous to the objective of the presentation) but present highly aggregated data, with a limited number of data values. Pixelplots do have a learning curve (just like Treemaps), but are invaluable when it comes to visualizing the big picture without forfeiting granularity - almost like a multi-focal lens. Their fundamental premise is to represent as many data objects as possible on an electronic display at the same time, by mapping each data object to a pixel. The number of pixels mapped is therefore the number of data objects being considered. Key attributes of a data object can be mapped to its corresponding pixel’s colour, or horizontal and vertical axis ordering.
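To make the mapping idea a bit more tangible, here is a rough sketch, in ABAP purely for illustration. All structure and field names, thresholds, and the row-wise fill are assumptions made for this example; a real Pixelplot orders both axes simultaneously and needs a far more elaborate layout algorithm:

" Illustration only: map each data object (a consumer) to exactly one pixel.
TYPES: BEGIN OF ty_consumer,
         id     TYPE string,
         clv    TYPE p LENGTH 10 DECIMALS 2, " Customer Lifetime Value -> colour shade
         days   TYPE i,                      " time in life cycle stage -> x ordering
         social TYPE i,                      " social activity score    -> y ordering
       END OF ty_consumer.
TYPES: BEGIN OF ty_pixel,
         x     TYPE i,
         y     TYPE i,
         shade TYPE i,                       " 0 (lightest) .. 9 (darkest)
       END OF ty_pixel.

DATA: lt_consumers TYPE STANDARD TABLE OF ty_consumer,
      lt_pixels    TYPE STANDARD TABLE OF ty_pixel,
      ls_pixel     TYPE ty_pixel,
      lv_width     TYPE i VALUE 960,
      lv_index     TYPE i VALUE 0.

FIELD-SYMBOLS <ls_consumer> TYPE ty_consumer.

" Order the consumers by the attributes that drive the axes, then fill the
" display row by row: the n-th object becomes pixel (n MOD width, n DIV width).
SORT lt_consumers BY days ASCENDING social DESCENDING.
LOOP AT lt_consumers ASSIGNING <ls_consumer>.
  ls_pixel-x = lv_index MOD lv_width.
  ls_pixel-y = lv_index DIV lv_width.
  IF <ls_consumer>-clv >= 1000.
    ls_pixel-shade = 9.                      " darkest shade for the highest CLV
  ELSE.
    ls_pixel-shade = <ls_consumer>-clv * 9 / 1000.
  ENDIF.
  APPEND ls_pixel TO lt_pixels.
  lv_index = lv_index + 1.
ENDLOOP.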

 

There has been some academic interest in pixel-oriented visualization techniques in the past, but I am yet to hear about an actual implementation of a Pixelplot in any commercially available data visualization / BI discovery tool. The reason for my fervent interest in this is twofold. Firstly, being a data visualization buff, I am fascinated by how much the Pixelplot actually accomplishes by visualizing a huge set - while simultaneously representing multiple attributes - of data objects. Secondly, I believe that Pixelplots perfectly complement SAP HANA, and they address big data’s “volume” problem more effectively than any other visualization technique in existence today. Keep in mind that almost all analytics on conventional dashboards aggregate, sample, or sort and selectively pick out the data they represent, and never represent the entire data set on a single screen.

 

Moreover, Pixelplots leverage the ever-increasing pixel densities of modern electronic displays. Apple’s “Retina displays”, for example, already pack up to 5 million pixels into a 15” laptop screen. On regular desktop displays, a Pixelplot measuring just 960 x 600 pixels can represent 576,000 unique data objects. Mobile device pixel densities are typically even higher than desktops, and by this virtue, the Pixelplot is mobile-ready. I am hoping that you are sensing my excitement!

 

Visualizing Consumer Engagement

To understand Pixelplots better, let's meet our primary user persona, Cari Smith. Cari is an Online Marketer with a consumer electronics company called Cool Electronics (fictitious). Do note that this use-case for the Pixelplot focuses on Marketing within CRM - based on the choice of KPIs, Pixelplots can be used in any industry or line of business.


Persona_CariSmith.png


The Consumer Life Cycle

Nate Elliot from Forrester authored this magnificent blog post on the “Marketing RaDaR”, where he presents a powerful alternative to Elias St. Elmo Lewis’ AIDA (Awareness - Interest - Desire - Action) funnel model, which has been used for years as a tool to structure an organisation's sales. He proposes a model based on a four-stage consumer life cycle (rather than a funnel) – consumers first discover a product or service, then explore it in greater detail; next they buy the product or service, and after purchase they engage with the company from which they bought, as well as with other consumers. Based on my own interactions with Marketing Analysts (through user interviews while working on next-generation consumer engagement innovations powered by SAP HANA), this resonates perfectly with their mental model and their abstracted perception of their consumer base.

 

The Top Marketing KPIs

What are the KPIs that are of interest to Cari Smith (our primary persona, just clarifying as I've been bandying around several names in this post)? While there are several interesting articles talking about the most important marketing KPIs, Avinash Kaushik’s article lists out a ladder of marketing metrics, with Customer Lifetime Value at the very top. By definition, CLV is the amount of revenue or profit a consumer generates over his or her entire lifetime. To be truly insightful, CLV should not be merely historical (summing up revenue earned from a consumer till date), but be predictive (project how much revenue can be realised from a consumer over their lifetime). As consumers become more digitally networked and businesses move towards a single system of record for all consumer data, another (orthogonal) metric that could add tremendous value is an aggregated social activity score, something like the Klout Score. Another important dimension could be the time spent – how much time have consumers been in a certain lifecycle stage?
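As a small illustration of the predictive view, a common back-of-the-envelope approximation projects CLV as average order value times purchase frequency times expected lifetime. The values and variable names below are assumptions for this sketch only:

" Illustration only: a very simple predictive CLV estimate.
DATA: lv_avg_order_value TYPE p LENGTH 10 DECIMALS 2 VALUE '120.00',
      lv_orders_per_year TYPE p LENGTH 5  DECIMALS 2 VALUE '2.5',
      lv_lifetime_years  TYPE p LENGTH 5  DECIMALS 2 VALUE '4.0',
      lv_clv             TYPE p LENGTH 12 DECIMALS 2.

lv_clv = lv_avg_order_value * lv_orders_per_year * lv_lifetime_years.
" => 1200.00: the revenue projected over the consumer's remaining lifetime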

 

Based on all that has been discussed above, here is (finally!) a mock-up of a Pixelplot:

PixelPlotBigDataVisualisationConsumerEngagement.png

 

  • At the highest level, Cari sees how many consumers are in each of the four life cycle stages. Note that these life cycle stages are customizable - this could be substituted with stages of a customer loyalty program, for example, or need not be a progression at all (simple categories).


  • Every pixel represents a unique consumer, and every consumer at any given point is in some stage of the life cycle.

 

  • The colour of every pixel lies along a monotonic (blue) range of shades, and represents Customer Lifetime Value. The darker the shade, the higher the CLV.

 

  • The pixel x-ordering maps to the time spent by consumers in that particular life cycle stage. The farther they are to the right, the more time they have spent.

 

  • The pixel y-ordering maps to the social score of the consumers - the higher they are, the more social activity they've recorded.

 

  • A compact Frequency Distribution of the Customer Lifetime Value (click on a bar in the stacked chart to filter) is available at the top, and the Conversion Rate from one life cycle stage to the other (over a default interval which can be altered) is displayed below the Pixelplot.

 

  • This is a mock-up created using Adobe Illustrator - the Pixelplot may not look this "artistic" actually!


At a glance, Cari can view her entire consumer base and see how they are divided into life cycle stages - this is the big picture. She can instantly identify consumer clusters, for example, those who have a high lifetime value and are more engaged socially - now this is all about pattern recognition, a task we Homo sapiens naturally excel at (although the machines are catching up!). This kind of insight is very relevant for businesses to satisfy the digitally connected and socially networked consumers of today.

 

Here is an example of an insight that Cari might astutely glean from the Pixelplot:

"Aha! Here is a large group of consumers who have a high Customer Lifetime Value, are significantly engaged socially, and have been in the Explore phase for a while. I should create a Facebook or Twitter promotion to get them to buy!"


 

Focus & Context

We talked about the big picture, but the USP of the Pixelplot is that it visualizes data at the atomic level (sans aggregation), in this case, value-by-value, at the individual consumer level. Cari can click on an individual pixel (since this would test anybody's psycho-motor coordination, as pixels are fast becoming invisible in modern displays, I am proposing a focus+context interaction that converts the cursor into a zoomed-in matrix of 9 x 9 pixels) to go into the details of specific consumers. Our old friend Tom Whitman (from the TechEd demo we ran in 2013) makes a reappearance in the screen below. This ability to instantly drill down to the atomic level helps Cari plan and run 1:1 marketing campaigns, or simply to understand who some of her typical consumers are:

PixelPlotBigDataVisualisationFocusContext.png

Filtering the data in the Pixelplot enables Cari to "thin" information effectively. She can choose what KPIs she wants to visualize, and also restrict the data set based on other attributes (demographics, channels, or loyalty). Altering the filters instantly reveals how many consumers match the filtered criteria:

PixelPlotFiltersSettings.png

 

Visual Segmentation

The idea of the Pixelplot isn't unique, but using it to demarcate market segments through direct manipulation, potentially is! Using algorithms like support vector machines we could automatically discover consumer segments and visualize them graphically, layered atop the Pixelplot. Alternatively, Cari could draw her own segments based on the insights she derives from the Pixelplot, either by using a pointing device or a stylus. Do note that this approach is very different from the traditional (rule-based) methods used to define segments - hence the term "visual segmentation". Cari sees her entire consumer base in one screen and is also able to identify patterns that either automatically emerge (based on the orthogonal metrics that are simultaneously visualized, like in the example above), or are arrived at by slicing and dicing (using the filters we talked about). For every segment (either suggested or defined), we could surface additional details through microcharts, all for better decision making. Cari could edit segments, or add a title/description for those that she wishes to retain. These are still early days - I am confident that the Pixelplot lends itself to several other exciting possibilities!

PixelplotVisualSegmentation.png

Conclusion

This was an introduction to Pixelplots and how they could be applied to visualize big consumer data, conduct analysis through slicing and dicing, and to define market segments visually (and directly!). Like I pointed out earlier, this was just one specific illustration and there is a lot more that can be done with them:

 

  • By incorporating panning and zooming, the Pixelplot can leverage the much-loved design principle of progressive disclosure -  zooming in reveals additional levels of detail about consumers progressively, akin to how online map applications (Google Maps, for example) work.

 

  • Using linking and brushing, selecting a certain consumer can reveal others who have similar behaviours and attributes - enabling a "look-alike" discovery of target consumers

 

A parting note - Pixelplots are not easy to implement, as pixels need to be ordered (they are not positioned absolutely as there might be instances of overlapping) in horizontal and vertical axes simultaneously, which needs a robust rendering algorithm to work efficiently behind the scenes. Also there might be performance issues at the UI layer to render such a vast data set. There is a workaround to this that I can think of - populate the pixels that are likely to correspond to higher / important values first. In the example above, this would mean that the darkest blue pixels (the consumers with the highest Customer Lifetime Value) appear first, followed subsequently by the lighter shades.
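A tiny sketch of that rendering workaround, with structure and field names assumed for illustration: sort the pixel list by shade before sending it to the UI, so the darkest pixels - the consumers with the highest Customer Lifetime Value - are rendered first.

" Illustration only: render the most important pixels first.
TYPES: BEGIN OF ty_pixel,
         x     TYPE i,
         y     TYPE i,
         shade TYPE i,   " 0 (lightest) .. 9 (darkest), derived from CLV
       END OF ty_pixel.
DATA lt_pixels TYPE STANDARD TABLE OF ty_pixel.

SORT lt_pixels BY shade DESCENDING.   " darkest (highest CLV) pixels go to the UI first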

 

Thanks for reading, and do share your feedback and comments. I'd love to hear from you!

Dealing with prospects is a real challenge:

  • You often have only fragmented data about them
  • You often don’t know their interests, or you only have a little bit of information that can easily mislead you
  • Nowadays people quickly “run away” when feeling pushed - or even when feeling watched


On the other hand you have a lot of prospect data records available:

  • Prospects show interest on your website
  • They are active on Facebook, Twitter & Co


So certainly you want to reach them with the products of your company and win them as customers.


But how do you measure the success of your marketing activities? How can you really follow up on the actions you have taken so that you can improve them further?


First of all I would recommend getting the data of your prospects into SAP CRM.

See also some blogs related to this topic:

http://scn.sap.com/community/crm/marketing/blog/2014/01/03/use-marketing-prospect-or-business-partner-for-storing-prospect-data-in-sap-crm

http://scn.sap.com/community/crm/marketing/blog/2014/02/24/why-and-when-should-i-store-data-of-my-social-contacts-in-sap-crm

 

Now I would like to explain how you can measure your success in a very simple way:


Phase 1: Set up

  1. Make sure that the responses of your prospects are tracked: Use the OData service CRM_MKT_PROSPECT_ODATA to create interactions for your prospects per response.
  2. Make sure that for each of these responses the correct marketing campaign gets referenced in the response: Use the BAdI CRM_MKT_INTERACTION_OBJECT to implement the logic for always referencing the right campaign.
  3. Make sure that for each kind of response a score value is defined: Define this score value in Customizing -> Customer Relationship Management -> Master Data -> Business Partner -> Marketing Prospects -> Define Score for Interaction Objects. – Each response will then increase the score of the prospect.
  4. Decide at which score level you want to convert a prospect to an account (BP). - This is the point at which you consider the prospect to have become a customer (see the sketch after this list).
  5. Decide also when you delete (= “give up”) a marketing prospect, for example after how much time, after how many or which marketing campaigns, or at which score level.
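To make steps 4 and 5 a bit more concrete, here is a minimal sketch of the kind of decision logic behind them. The threshold values, variable names and the way score and age are determined are assumptions for this illustration:

" Illustration only: convert or give up a marketing prospect based on its score.
CONSTANTS: lc_convert_score TYPE i VALUE 50,
           lc_max_age_days  TYPE i VALUE 180.

DATA: lv_score TYPE i,   " accumulated score from the tracked responses
      lv_age   TYPE i.   " days since the marketing prospect was created

IF lv_score >= lc_convert_score.
  " Convert the marketing prospect to an account (business partner).
ELSEIF lv_age > lc_max_age_days.
  " Give up: delete the marketing prospect.
ENDIF.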


Phase 2: Execution

  1. When evaluating the right target groups for your campaigns, also consider the interactions (including the ones that reflect the responses). By doing this you can be more specific and really address the right people in the right way, because you not only consider their more or less static master data but also their behavior and their reactions to your marketing activities.


Phase 3: Measurement

  1. Analyze the overall effectiveness based on your campaigns (now with detailed data about the number and kind of responses) and on the “prospect-to-customer” conversion rate.


More details about what is offered in the area of nurturing and targeting of marketing prospects can be found in SAP Note 1896854.

Many companies not only analyze what people say about their products, but have also started engaging with them in a more interactive way. A conversational style has evolved. Even one-to-one interactions with the persons active on the corporate Facebook, Twitter & Co pages have become normal, especially in the B2B business.

 

It is good to have a friendly atmosphere and to let your followers and social contacts communicate with you in the more modern pull mode.

 

However, you should still decide at the right point in time to get the data of your social contacts into SAP CRM. Otherwise you can neither easily track your marketing activities with them nor measure your success.

 

How should you then be able to decide if your activities with your social contacts have a measurable effect at all?

 

Of course it doesn’t make sense to upload all social contacts into SAP CRM:

  • Not all of them are relevant for you
  • You are not sure if they are open for being addressed by additional marketing activities

 

Why not use the principles you already apply on the company’s website in the social world as well?

 

Let your social contacts decide at what point of time their data gets stored in the SAP CRM system; for example as soon as they show interest in a newsletter, demo or event. When taking this active step they can verify for themselves if they actually want to provide their data to you, and if they accept that a more focused marketing will probably start.

 

With this you can keep the friendly and interactive communication style you started with.

 

In any case you can benefit from the following when storing social contact data in SAP CRM: You can connect the activities of your social contacts in the social world with the (internal) data in SAP CRM, and by doing this you can get a more holistic view on them.

 

Go ahead as follows:

  • Get the data into SAP CRM, including their social user data, for example their Facebook accounts.
Tip: You can create marketing prospects based on the social contact data, using the OData service CRM_MKT_PROSPECT_ODATA.
  • Get the social user IDs from the respective social network(s). These are technical IDs that uniquely identify your social contacts per network, and can be used to connect data from the social world with the internal CRM world.
Tip: Use the report CRM_MKT_PROSPECT_SMI_UPDATE to update the social user data in SAP CRM with the IDs from the social network(s).

Recently, in a customer project, we got the requirement that the assignment block "Membership Activities" should only be visible to certain users during a given period. The authorization for such a user must be explicitly assigned by an administrator, with the period clearly defined. For example, user XXX may only be allowed to see that assignment block between 10:00 AM and 10:30 AM this Friday.


clipboard1.png

Since the SAP standard authorization concept does not support time-based conditions, we have to do some custom development:

 

1. We create a new UI component and assign it to a new work center "Authorization Center".

clipboard2.png

We assign this new work center to business role LOY_ADMIN so that only the Loyalty administrator is able to assign or delete time-based authorizations.

 

 

The administrator can choose the user via the search help and click the Assign button to grant the authorization.

clipboard3.png


We use a custom table to store the authorization details. This is acceptable since, in the customer's company, normal users access CRM only via the WebClient UI and do not have SAPGUI installed. The authorization can be deleted by the administrator at any time if needed.

clipboard4.png

2. For UI component LOY102H_MSH, we enhance the view controller below and add a post exit on method DETACH_STATIC_OVW_VIEWS to filter the view CUMSHMA.LOY102H_MSH/MSHMemberActivities by checking the authorization. The technical implementation can be found here.

clipboard7.png

The current user and the current time are compared with the authorization details stored in the custom table. If there is no valid authorization, the assignment block is hidden and a warning message is displayed (see the sketch below).
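A minimal sketch of that check; the custom table ZLOY_AUTH_PERIOD and its field names are assumptions, and the actual view handling inside the enhancement is not shown:

" Check whether the current user holds a valid time-based authorization.
DATA: lv_count      TYPE i,
      lv_authorized TYPE abap_bool.

SELECT COUNT(*) FROM zloy_auth_period INTO lv_count
  WHERE username   =  sy-uname     " current WebClient UI user
    AND valid_date =  sy-datum     " authorized date
    AND time_from  <= sy-uzeit     " start of the authorized time window
    AND time_to    >= sy-uzeit.    " end of the authorized time window

IF lv_count > 0.
  lv_authorized = abap_true.
ELSE.
  " No valid entry: keep the view CUMSHMA.LOY102H_MSH/MSHMemberActivities
  " detached and display a warning message via the UI message service.
  lv_authorized = abap_false.
ENDIF.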

clipboard8.png

The question whether prospect data should be stored as marketing prospects or as business partners in SAP CRM is very typical, and the decision that needs to be taken is fundamental.

 

Below you find some hints that will hopefully help you take this decision:

 

A) In case you deal with prospect data that contains only a few attributes, for example the e-mail address only, it is often not allowed to create such prospects as business partners in the CRM system. The CRM system itself can store such data, and it is also possible to configure the data exchange via CRM Middleware in a way that such prospects are not replicated to a connected SAP ERP. But usually organizational guidelines exist that need to be followed for creating a business partner in the CRM system. With an e-mail address only it is often not allowed to create a business partner.

In this case I would recommend to create such prospect data as marketing prospects in SAP CRM.

 

B) Besides, it can make sense to explicitly separate the data of prospects where you don't yet know at all if you can start a real business relation with them, or if they are interested at all. They perhaps only registered on the company's web site or talked about the company's products on Facebook or Twitter. But this doesn't mean that they have a serious interest in the products, or that they would even buy some of them at some point in time. In many countries the data privacy regulations are quite strict: it is not allowed to store data about persons with whom no business relation exists.

In this case I would recommend to create such prospect data as marketing prospects in SAP CRM, and convert them to business partners as soon as a real business relation starts.

 

C) In case the company deals with prospects in a very close and extensive way from the beginning on – and does not deal with prospects that are not considered serious about the company – creating them as business partners in the CRM system can make more sense. Only business partners can be used in activities, leads, and opportunities, to mention some examples of business transactions that are based on the one-order framework of SAP CRM. So if such business transactions need to be created in SAP CRM for the prospects from the beginning on, you have to create such prospects as business partners in SAP CRM.

 

Some final remarks:

 

If the decision is taken to use marketing prospects for storing prospect data in SAP CRM, be aware of their short life-cycle: Either convert marketing prospects as soon as possible to business partners, or delete them if you can't establish a real business relation with them.

The purpose of marketing prospects is to support you in a very defined and controlled phase of time in your marketing activities.

 

I would recommend defining exactly what needs to be achieved by marketing, how it can be achieved, and in which time frame it needs to be achieved.

This helps you to always have an overview of the prospects, and the company's success with these prospects.
