
SAP CRM: Marketing


For a particular trade promotion, accruals are calculated according to the configured accrual methods.

The Funds Management application provides accrual management capabilities: accrual calculations can be performed within the SAP CRM system and the results sent to SAP ERP, where the amounts are posted in SAP ERP Financials.

The Accrual Calculation job can use various reference data types, depending on what is defined in Customizing. Examples include sales volumes (SAP ERP), trade promotion management (TPM) planning data, or funds data. The accrual calculation results are stored within the accrual staging area.

 

In Accrual Posting, you can schedule an accrual posting run in the batch processing framework to post the accrual results as fund postings, which are transferred to SAP ERP Financials as accounting documents.

 

The diagram below explains how the accrual method configuration is linked to a particular trade promotion and its spend types.

 

 

1.jpg

 

Configuration Path

  1. SPRO -> Customer Relationship Management -> Funds Management -> Accruals -> Accrual Calculation Method

 

2.jpg

3.jpg

4.jpg

 

 

Below is an overview of the six accrual methods delivered in the SAP CRM Trade Promotion standard. It is also possible to configure alternative accrual calculation methods on a project-specific basis.

4.5.JPG

 

 

The accrual method information can be seen in the fund usage of a trade promotion. Please refer to the screenshot below.

 

5.jpg

Introduction

 

'Analyze Sentiments' is a Fiori app that helps you perform Sentiment Analysis on the topics that interest you. To learn more about the app, please check out the related links.

 

 

Quick integration of Sentiment Analysis powered by Analyze Sentiments into your app

Ready to get your feet wet?!

 

Here are a few steps to add a chart control into a UI5 control that supports aggregations (such as sap.m.List) and to connect the OData service to this chart.

When you run the app, you will see small charts added to each item in the aggregation, showing sentiment information.

 

Follow these steps to quickly integrate Sentiment Analysis capability into your already existing UI5 app:

 

1) Insert the chart into the appropriate location in your app. In the sample code below, the chart is embedded into a custom list item:

<List id="main_list" headerText="Vendors">
  <items>
        <CustomListItem>
            <HBox justifyContent="SpaceAround">
                  <ObjectHeader title="playstation" />
                  <viz:VizFrame vizType="bar" uiConfig="{applicationSet:'fiori'}" height="250px" width="250px"> </viz:VizFrame>
            </HBox>
        </CustomListItem>
  ...
  </items>
</List>

2) In the controller's initialization code, add the following to fill the chart added in the previous step with data:

 

//Get the reference to the Odata service
var oModel = new sap.ui.model.odata.ODataModel("http://localhost:8080/ux.fnd.snta/proxy/http/lddbvsb.wdf.sap.corp:8000/sap/hba/apps/snta/s/odata/sntmntAnlys.xsodata/", true);
//Get the reference of the control where you want the charts embedded
var oList = this.getView().byId("main_list");
//This code gets the Subjectname from the control in which the chart is going to get embedded. You can see that the subjectname is extracted from the Title of the items in the list
for (var i = 0; i < oList.getItems().length; i++) {
    var oChart = oList.getItems()[i].getContent()[0].getItems()[1];
    var sItemName = oList.getItems()[i].getContent()[0].getItems()[0].getTitle();
//Now we set the data for each item in the list as per the subject that we extracted from the list item.
//Note: 'self' is assumed to be a reference to the controller (e.g. var self = this;) with sSAPClient holding the SAP client, both set earlier in the controller.
    oModel.read('/SrchTrmSntmntAnlysInSoclMdaChnlQry(P_SAPClient=\'' + self.sSAPClient + '\')/Results', null, ['$filter=SocialPostSearchTermText%20eq%20\'' + sItemName + "\' and " + "SocialPostCreationDate_E" + " ge datetime\'" + '2014-06-14' + '\'' + '&$select=Quarter,Year,SearchTermNetSntmntVal_E,NmbrOfNtrlSoclPostVal_E,NmbrOfNgtvSocialPostVal_E,NmbrOfPstvSocialPostVal_E'], false, function(oData, oResponse) {
        oChart.setVizProperties({
            interaction: {
                selectability: {
                    mode: "single"
                }
            },
            valueAxis: {
                label: {
                    formatString: 'u'
                }
            },
            legend: {
                title: {
                    visible: false
                }
            },
            title: {
                visible: false
            },
            plotArea: {
                dataLabel: {
                    visible: true
                },
                colorPalette: ['sapUiChartPaletteSemanticNeutral', 'sapUiChartPaletteSemanticBad', 'sapUiChartPaletteSemanticGood']
            }
        });
        var oChartDataset = new sap.viz.ui5.data.FlattenedDataset({
            measures: [{
                name: "Neutral",
                value: '{NmbrOfNtrlSoclPostVal_E}'
            }, {
                name: "Negative",
                value: '{NmbrOfNgtvSocialPostVal_E}'
            }, {
                name: "Positive",
                value: '{NmbrOfPstvSocialPostVal_E}'
            }],
            data: {
                path: "/results"
            }
        });
        oChart.setDataset(oChartDataset);
        var oDim1 = new sap.viz.ui5.data.DimensionDefinition({
            name: "Year",
            value: '{Year}'
        });
        var oDim2 = new sap.viz.ui5.data.DimensionDefinition({
            name: "Quarter",
            value: '{Quarter}'
        });
        var oDataset = oChart.getDataset();
        oDataset.addDimension(oDim1);
        oDataset.addDimension(oDim2);
        var oChartModel = new sap.ui.model.json.JSONModel(oData);
        oChart.setModel(oChartModel);
        oChart.setVizProperties({
            valueAxis: {
                title: {
                    visible: true,
                    text: "Mentions"
                }
            },
            categoryAxis: {
                title: {
                    visible: true,
                    text: "Quarter"
                }
            }
        });
        var feedValueAxis = new sap.viz.ui5.controls.common.feeds.FeedItem({
            'uid': "valueAxis",
            'type': "Measure",
            'values': ["Neutral", "Negative", "Positive"]
        });
        var feedCategoryAxis = new sap.viz.ui5.controls.common.feeds.FeedItem({
            'uid': "categoryAxis",
            'type': "Dimension",
            'values': [new sap.viz.ui5.controls.common.feeds.AnalysisObject({
                    'uid': "Year",
                    'type': "Dimension",
                    'name': "Year"
                }),
                new sap.viz.ui5.controls.common.feeds.AnalysisObject({
                    'uid': "Quarter",
                    'type': "Dimension",
                    'name': "Quarter"
                })
            ]
        });
        oChart.addFeed(feedCategoryAxis);
        oChart.addFeed(feedValueAxis);
    }, function() {
        sap.m.MessageBox.show("Odata failed", sap.m.MessageBox.Icon.ERROR, "Error", [
            sap.m.MessageBox.Action.CLOSE
        ]);
    });
}

PS: Depending on how you add the chart into your app, the above chunk of code will have to be adjusted to get the subjectname and pass it to the chart.

 

In the above sample code, the chart in each custom list item is bound to data in a loop. If you have added the chart to a similar control with an aggregation, you will have to adjust the corresponding lines above that get the list control, the chart reference, and the search term.

 

 

What else can you do with the Analyze Sentiments OData services?

Here is some more information on our existing OData services for Analyze Sentiments and some ideas on how you can use them in your apps.

 

The collections and the information they return:

• SocialMediaChannelsQuery: List of channels (code and name)

• SocialPostSearchTermsQuery: List of search terms (code and name)

• SrchTrmSntmntAnlysInSoclMdaChnlQry: Number of mentions (total, positive, negative, and neutral) and the net sentiment value for a search term, at daily/weekly/monthly/quarterly/yearly granularity

• SrchTrmSntmntAnlysSclPstDtlsQry: List of social posts for a search term in a period

• SrchTrmSntmntTrendInSoclMdaChnlQry: Net sentiment trend in percentage for a search term over a specified period


PS: The last three services retrieve data for all search terms when no filter is applied on the search term.
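
For apps outside UI5, the same collections can be consumed as plain OData HTTP requests. Below is a minimal Python sketch (using the requests library) that queries SrchTrmSntmntAnlysInSoclMdaChnlQry for one search term. The host, port, SAP client, and credentials are placeholders you must replace; the service path and the filter/select fields are taken from the UI5 snippet above.

# Minimal sketch of calling the sentiment analysis OData service outside UI5.
# Host, port, SAP client, and credentials are placeholders for your own system;
# the service path and field names mirror the UI5 sample above.
import requests

BASE_URL = ("https://<your-host>:<port>"
            "/sap/hba/apps/snta/s/odata/sntmntAnlys.xsodata")

params = {
    "$filter": ("SocialPostSearchTermText eq 'playstation' "
                "and SocialPostCreationDate_E ge datetime'2014-06-14'"),
    "$select": ("Quarter,Year,SearchTermNetSntmntVal_E,NmbrOfNtrlSoclPostVal_E,"
                "NmbrOfNgtvSocialPostVal_E,NmbrOfPstvSocialPostVal_E"),
    "$format": "json",
}

response = requests.get(
    BASE_URL + "/SrchTrmSntmntAnlysInSoclMdaChnlQry(P_SAPClient='001')/Results",
    params=params,
    auth=("<user>", "<password>"),
)
response.raise_for_status()

# OData v2 JSON responses wrap the rows in d/results
for row in response.json()["d"]["results"]:
    print(row["Year"], row["Quarter"], row["SearchTermNetSntmntVal_E"])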

 

 

Calculations used:

 

Net sentiment = P - N

P = sum of weights of positive posts. The weight can be +1 (good) or +2 (very good).

N = sum of weights of negative posts. The weight can be -1 (bad) or -2 (very bad).

Net sentiment trend percentage = (Net sentiment in the last n days - Net sentiment in the previous n days) / Net sentiment in the previous n days

 

So on the whole, we have the following information:

i) Number of positive, negative, neutral, total mentions about a Subject

ii) Net sentiment about a subject

iii) Net sentiment trend about a subject which is a percentage.
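
To make the formulas concrete, here is a small, self-contained Python sketch using made-up post weights. It interprets N as the sum of the absolute weights of the negative posts and expresses the trend as a percentage (multiplied by 100); both are my reading of the formulas above.

# Illustration of the net sentiment and trend formulas, using made-up sample weights:
# +1/+2 for positive posts, -1/-2 for negative posts, 0 for neutral posts.

def net_sentiment(post_weights):
    positive = sum(w for w in post_weights if w > 0)   # P
    negative = sum(-w for w in post_weights if w < 0)  # N, taken as an absolute sum
    return positive - negative

# Weights of posts in the previous n-day window and in the last n-day window
previous_period = [2, 1, 1, -1, 0, 0, -2]   # net sentiment = 4 - 3 = 1
last_period = [2, 2, 1, -1, 0, 1]           # net sentiment = 6 - 1 = 5

prev = net_sentiment(previous_period)
last = net_sentiment(last_period)

# Net sentiment trend percentage = (last - previous) / previous, as a percentage
trend_pct = (last - prev) / float(prev) * 100
print("Previous: %d, Last: %d, Trend: %.0f%%" % (prev, last, trend_pct))  # 400%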

 

Here are some sample ways in which external apps can start using our OData assets right away:

 

For each use, the control that can be used and the collection to call:

• Show the numbers (total, positive, negative, or neutral mentions, or net sentiment) related to a subject: Label control, using SrchTrmSntmntAnlysInSoclMdaChnlQry

• Show the social posts related to a subject: Table, List, or similar control, using SrchTrmSntmntAnlysSclPstDtlsQry

• Show the net sentiment trend of a subject: Label control, using SrchTrmSntmntTrendInSoclMdaChnlQry

• Show a chart/graph of the numbers over a period: Chart control, using SrchTrmSntmntAnlysInSoclMdaChnlQry

 

 

 


One of the most overlooked aspects of contact management is the relationship between the contacts in your database and your sales process. It has been my experience that most companies develop their marketing databases with contact information independent and blind to their sales processes.

With an average of more than 5.6 people now reported to be involved in a purchase decision for a solution, you can’t develop a good database without first understanding how you sell.

A critical first step in helping customers through the buyer's journey is to understand who you need to communicate with along the way. Understanding the roles and how decisions get made for specific solutions and business processes is a prerequisite for developing the right kind of marketing database. For example, if you sell complex solutions that require engagement with economic and technical buyers, then the contacts in your database need to support these types of roles.

I once marketed to a very specialized audience across a defined set of accounts that could only purchase our solutions if their companies met very specific purchasing criteria. While I was able to find resources for the specialty titles I was seeking, I was not able to meet my second objective of locating these titles for the accounts and criteria we were targeting.

In this case, I ended up developing a custom database using a marketing intern, leveraging an online contact repository by pulling contacts against predefined criteria. While this custom database required some initial development effort, our program responses, leads, and opportunity conversions grew exponentially.  We were now able to target and reach the roles we needed to reach, in the accounts where we needed to do business.

As you begin to evaluate future contact list purchases, do so from the perspective of addressing your white space and gaps in the roles supporting your sales processes. As you do, I'm confident you will begin to view the contacts in your marketing database in an entirely new manner while further appreciating its ultimate power.

To learn more about SAP's in-memory database, SAP HANA, and SAP solutions for Big Data, I invite you to click on the following link.

 

 

Regards,

 

 

Harry E. Blunt

Director, North America Industry Field Marketing
SAP America, Inc.

3999 West Chester Pike 
Newtown Square, PA 19073
M: 302-740-8293

E: harry.blunt@sap.com

With future innovation and sales success tied so closely to the delivery of relevant and personalized customer experiences, companies must get closer and more intelligently connected with their customers while paying greater attention to the user experience. To meet these objectives, companies must develop a holistic framework for managing customer intelligence and their different sales channels while differentiating their offerings through flexible solution delivery models.

 

Competing successfully in the digital economy requires an “always on,” integrated approach for capturing and leveraging customer intelligence. Intelligence should be leveraged throughout all parts of the organization and needs to be visible and relevant at the point of a customer's transaction or engagement. By strategically combining transactional, qualitative, and social data with analytics and Big Data, companies can better understand opportunities for future innovation while engaging with customers more personally and becoming much more prescriptive about audience targeting and messaging.

 

Because customers expect personalized, relevant experiences regardless of where and how they engage, all organizations must have a holistic picture of customer engagement supported by a sound strategy focused on Omni-channel commerce. Providing customers with a unified and intelligently connected user experience grows customer relationships and captures previously undetected customer intelligence by unifying disconnected, non-visible, and fragmented customer experiences. Companies can dramatically improve their customers’ user experience and loyalty by offering personal, intelligently connected experiences across multiple channels of engagement and commerce.

 

Product and software innovators can draw closer to their customers by moving from consumption-based purchase models to solution- and subscription-based models. While there are significant benefits to moving to solution sales and recurring revenue streams like subscriptions, there are also added complexities affecting how these new solutions need to be developed, communicated, and delivered to the market. Operationally, moving toward solution- and subscription-based business models affects how solutions are developed, how orders get configured, and ultimately how revenue is captured and realized. To fully capitalize on these emerging opportunities for selling solutions and subscriptions, you need operational and billing systems that accommodate a large degree of custom order configuration and business-requirement flexibility, extending from product development through solution delivery.

 

You can learn more about SAP solutions for Customer Engagement and Commerce, and how leading manufacturing and software companies are providing differentiated value to their customers, by accessing these complimentary resources.

 

Warmest regards,

 

Harry E. Blunt

Director, North America Industry Field  Marketing
SAP America, Inc.

3999 West Chester Pike 
Newtown Square, PA 19073
M: 302-740-8293

E: harry.blunt@sap.com

While social selling, customer experience, and buying personas grab the marketing headlines, I would like to pause and pay homage to the often ignored but equally important North American Industry Classification System (NAICS).

 

It’s my assertion that NAICS codes and their predecessor SIC codes are among the most misunderstood and underutilized free resources available to marketers and business people.

 

Many of the challenges marketers face around content consumption and messaging with relevance can be greatly addressed by doing a better job of industry and audience segmentation prior to audience engagement. Whether you are marketing to the Fortune 1000 or to an addressable market of over a million customers, those thousands or millions of customers do not all share the same business characteristics. The key to effective messaging and getting your message to the right people is reaching people where they “live” by targeting and messaging based on those unique characteristics.

 

Understanding and incorporating NAICS codes into your target audience strategy is a critical first step in setting future winning audience engagement tactics. Those six digit codes buried among all your other fields of customer data truly do matter.

 

To help you appreciate the magnitude and importance of these differences, I have attached a little light reading: 508 pages of individual NAICS code descriptions from the U.S. Census Bureau. As you will see, sub-sectors operating within the same general industry act and behave very differently. While a chemical provider of chlorine and a paint company operate under the same general classification of chemicals, the way they manufacture and sell products is very different.

 

If you want another proof point for taking a more granular approach to audience targeting, consider that there are more than 10,000 active associations and thousands of specialty trade journals. These associations, trade publications, and their associated websites and social communities successfully reach audiences at the sub-sector level with very defined special interests. They continue to thrive and prosper in their niche markets because the content they provide and the issues they address, while niche, are extremely relevant to their audiences. They are reaching their target audiences where they “live.”

 

Marketing program returns naturally improve when a company's content reaches and resonates with its intended audiences. Having well-defined audience segmentation, aided by NAICS, is a good first step companies can take to ensure they develop the right messages, heard and then acted upon by the right people.

 

To learn how SAP can help you engage more personally with your customers through Big Data and Omni-channel commerce, please check out the following complimentary resources.



Harry E. Blunt


Director, North America Industry Field Marketing
SAP America, Inc.

3999 West Chester Pike 
Newtown Square, PA 19073
M: 302-740-8293

E: harry.blunt@sap.com

If you have ever read a murder mystery or watched a criminal investigation TV show, a common plot element is to start the story with a found dead body, often referred to affectionately as “John Doe.” The balance of the show or book then typically focuses on a protagonist working to uncover John Doe’s identity and to determine how, and by whom, this person met their demise.

 

The investigator rarely just jumps in trying to solve the crime; more often than not the investigation starts with an autopsy of the body. With the additional information gleaned from the autopsy, the protagonist then begins to solve the mystery. As the protagonist gathers more information, the anonymous dead body quickly evolves into an identified person with distinct attributes, leading to the point when the crime finally gets solved.

 

For marketers, working with incomplete and unrefined responder data is a bit like working with an anonymous dead body. If all a marketer knows about a potential prospect or responder is the person’s name, company, and even email, there is not much the marketer can do to engage effectively with that individual. Like the process of an initial autopsy, a marketer has to try to define and characterize this first responder to the best of his or her ability from the outset. Otherwise, there is no logical place from which to engage with this individual in future activities.

 

Successful target marketing must begin with basic contact hygiene. Missing contact details such as titles, emails, phone numbers, industry NAICS codes, and supporting descriptions must be continuously appended and updated within databases. While it’s not always possible to have complete responder contact data initially, companies must make data hygiene a priority to ensure contact information is as complete as possible prior to future use for targeting and analysis.

 

Once a customer record is defined with uniquely definable attributes, you can then move forward with identifying and segmenting future audience targets by creating responder profiles based on specific responder behaviors. None of this can take place until contact data is defined and managed as uniquely identifiable attributes. Two aspects of a customer’s record that make it potentially unique are a person’s title and their industry NAICS code. The third aspect relates to leveraging and tracking responder behavior, but you can’t move successfully to step three without first having a person’s title and a correctly defined NAICS code and supporting description. Otherwise, you run the risk of jumping to conclusions based on inaccurate or incomplete data. To illustrate the point, let me provide a fictitious example. Suppose that for the last six months you have been running a marketing campaign on Big Data. In that campaign you always included responders from prior related activities, including those without complete contact records, and every marketing activity related to Big Data was continually pushed to those Big Data program responders.

 

At the conclusion of the campaign, while participation has been steadily increasing, you have seen only marginal movement in responders converting to leads and leads moving into opportunities. As a postmortem, you decide to do some additional data hygiene on your responders with incomplete contact records. With a more comprehensive responder profile, this is what you find: a large percentage of the newly defined responders came from Life Sciences, particularly medical device companies, with titles focused on quality and regulatory operations compliance. Having this new insight, you take some time to research the topic of Big Data in the medical device community and discover that Big Data is actively being showcased as an issue and opportunity. You then tweak your programming and messaging to focus more on operationally focused medical device buying centers during the second half of the year, and both leads and opportunities substantially improve.

 

While this is a fictitious example, here is the important point: until you’re able to develop a more comprehensive responder profile, there is little you can do to move confidently forward with meaningful future engagement and analysis. Target audiences and responders are just “anonymous dead bodies” until they can be characterized and grouped into uniquely definable attributes.

 

And while data hygiene and segmentation rarely command the same reverence as “improving customer experience,” like the “murder mystery autopsy” they are critical disciplines for marketers to master first to ensure relevant audience conversations and future business opportunities.

 


Harry E. Blunt

Director, North America Industry Field Marketing
SAP America, Inc.

3999 West Chester Pike 
Newtown Square, PA 19073
M: 302-740-8293

E: harry.blunt@sap.com

SAP Canada hosted two events last week focusing on the theme of running simple and innovating business processes in a complex global world. They featured fascinating success stories from SAP customers and an eye-opening presentation from TED fellow and Complexity Scientist Eric Berlow on embracing complexity to come up with innovative answers to big data challenges.

Eric kicked off proceedings with his intriguing perspective on how we can leverage the explosion of data to build a ‘data-scope’ which allows us to connect the dots and see simple patterns that are invisible to the naked brain. He calls this ‘finding simple’ from complex causality and multi-dimensionality, a theory that can be applied across digital media and business strategy. He closed by saying that businesses need to focus on being intelligently simple rather than merely simplistic. In other words, IT has to be distilled down to offer real business insights, rather than simplified down to nothing at all.


Positioning SAP as the go-to intelligent business simplifier

Snehanshu Shah, Vice President, HANA Centre of Excellence, SAP Global, moved the conversation to the cost of growing complexity in business. He introduced SAP’s S/4HANA business suite as the on-premise and cloud solution designed to give organizations the freedom and drive to innovate their business processes. By taking the core ERP functionality, simplifying it, and applying the streamlined Fiori interface, S/4HANA requires less hardware at lower cost while providing faster answers. This is business intelligence and analytics at the fingertips of every line of business.

Sam Masri, Managing Principal, Industry Value Engineering, SAP Canada, continued the discussion by calling complexity business’s most intractable challenge today – a rising view across many industries. While so many enterprises are throwing a vast portion of their IT spend at keeping the lights on, they’re missing out on achieving a lower TCO of up to 22% by failing to invest more in innovation.

Sharing our customers’ success in simplification and innovation

The event was rounded off with some valuable insights into how some of SAP’s key customers are using our software to run simple. First, Albert Deileman and Jason Leo of the Healthcare of Ontario Pension Plan (HOOPP) told us of their search for a ‘pixie dust’ solution to the organization’s complexity challenges. For them, the SAP HANA Enterprise Cloud (HEC) solution is all about simplicity with results; real-time data speed and agility, perfect replication, rapid queries and modelling, and faster time to market.

John Harrickey from CSA Group was next up, telling us how the transition from ERP to HEC fuelled the company’s growth and expansion into Europe and Asia. It has enabled better employee engagement by mobilizing applications and improving insights, and better customer engagement by enhancing collaboration and responsiveness. He explained how the company has been able to build HANA upon existing SAP technology to create a seamless experience with high functionality, resulting in reduced complexity and a more productive business.

Wally Council of HP Converged Solutions spoke of the company’s need to flip IT investment from keeping the lights on to funding innovation. He brought up the point that the operational complexity challenges faced by huge and geographically-diverse modern enterprises have to be tackled head-on by a major rethink of key business processes and user experiences.

To top it off, our own Mike Golz, CIO, SAP Americas, reminded us that SAP itself runs SAP, and remains our first and best reference customer.

The richly-attended talks were held in the Four Seasons Hotel in Toronto on March 12 and Flames Central in Calgary on March 10. If you would like to find out more about SAP’s Simplify to Innovate initiative and the S/4HANA business suite, please visit the event landing page and www.sap.com/HANA. The presentations from the event are available on SAP Canada’s JAM page.

Disclaimer


This tutorial is intended as a guide for the creation of demo/test data only. The sample script provided is not intended for use in a productive system.


Purpose


This tutorial explains how to create demo data for the Business Suite Foundation database tables SOCIALDATA and SMI_VOICE_CUST using a Python script. The data is saved as Excel files. You can find more information about Analyze Sentiments, a Fiori app from Social Intelligence, here - New videos on SAP Sentiment Analysis on YouTube available

That will help you get the context of this post and gain a basic idea of what Social Intelligence is about.
Prerequisites/Setup


Make sure that the following prerequisites are met before you start:

• Installation of Python 2.x for Windows

Install Python 2.x for your platform - Download Python | Python.org
PS: During installation, select the option to add Python's installation directory to the Windows PATH variable.

 

Install the required python modules: setuptools, jdcal, openpyxl, xlrd.


 

Specifying Input and Customizing the scripts


There are two variations of the script that can be used depending on the use case.


Script 1 - gen_posts_count.py


When to use: This script can be used when you have a list of search terms, the time range, and the average number of posts per week for which you want to generate the demo data. With this script you cannot control the sentiment value of the posts. Sentiment indicates whether the social user is saying something good, neutral, or bad in the social post, so this script generates posts with random sentiment.

 

Input File: post_count_per_week.xlsx in which you have to maintain the products and the corresponding number of posts per week to be generated.

See the attached screenshot - post_count_total.PNG
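
If you prefer to create the input file programmatically rather than by hand, here is a small openpyxl sketch that writes post_count_per_week.xlsx in the two-column layout the script reads: product/search term in the first column, average posts per week in the second. The product names and counts are made-up sample values (borrowed from the commented product list in the script below).

# Creates the input workbook for gen_posts_count.py: one row per product,
# column A = product/search term, column B = average number of posts per week.
# No header row, because the script treats every row as data.
from openpyxl import Workbook

rows = [
    ("Oz Automotive", 40),
    ("Samba Motors", 25),
    ("eRacer", 60),
]

wb = Workbook()
ws = wb.active
for product, posts_per_week in rows:
    ws.append([product, posts_per_week])

wb.save("post_count_per_week.xlsx")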

 

Modification to the script: The time range has to be specified at the end of the Python script. Open the script in a text editor and modify the last line to give the start date, end date (both as [day, month, year]), and the number of weeks the time span comprises, for example: gen_posts([1, 12, 2013], [28, 1, 2014], 8)

 

 

#!/bin/python
# Generates a collection of dummy social media data
from random import choice, randint, random
from time import strftime
from datetime import timedelta, datetime
from openpyxl import Workbook
import xlrd
def get_products_and_counts():
    book = xlrd.open_workbook('post_count_per_week.xlsx')
    sh = book.sheet_by_index(0)
    products = []
    counts = []
    for rownum in range(sh.nrows):
        products.append(sh.row_values(rownum)[0])
        counts.append(sh.row_values(rownum)[1])
    return products, counts
def randomN(prefix, ndigits):
    range_start = 10**(ndigits-1)
    range_end = (10**ndigits)-1
    return prefix + str(randint(range_start, range_end))
def random_date(start, end):
    return start + timedelta(
        seconds=randint(0, int((end - start).total_seconds())))
def gen_posts(s_date, e_date, no_of_weeks):
    social_filename = 'SOCIALDATA' + '.xlsx'
    voice_filename = 'SMI_VOICE_CUST' + '.xlsx'
    social_book = Workbook(optimized_write = True)
    social_sheet = social_book.create_sheet()
    voice_book = Workbook(optimized_write = True)
    voice_sheet = voice_book.create_sheet()
    start_datetime = datetime(s_date[2], s_date[1], s_date[0], 0, 0, 0)
    end_datetime = datetime(e_date[2], e_date[1], e_date[0], 0, 0, 0)
    client_list = ['005']
    user_list = ['Ashwin', 'Saiprabha', 'Anupama', 'Debasish', 'Ajalesh', 'Raghav', 'Dilip', 'Rajesh', 'Saju', 'Ranjit', 'Anindita', 'Mayank', 'Santosh', 'Kavya', 'Jithu']
    #product_list = ['Oz Automotive', 'Samba Motors', 'Smoggy Auto', 'Camenbert Cars', 'Curry Cars', 'Driftar', 'eRacer', 'Rouble Motor Company', 'MoonRider', 'Bumble']
    channel_list = ['TW', 'FB']
    adj_set = {"good" : ['good', 'zippy', 'beautiful'],
          "very_good" : ['exuberant'],
          "neutral" : ['ok'],
          "bad" : ['bad', 'annoying'],
          "very_bad" : ['awful']}
    adj_kind_from_senti = { 2 : "very_good",
                1 : "good",
                0 : "neutral",
                -1 : "bad",
                -2 : "very_bad"}
    post_templates = {"very_good" : ["Hey guys, try {0}, it is {1}! Dont miss!",
                      "People, I got the new {0} - {1}!! Brilliant performance! Give a try!",
                      "If you havent yet, try {0}. The speed is fantastic, It is {1}!",
                      "The brandnew {0} - The product quality is impressive!! Verdict - {1}",
                      "{0} is {1}. Highly recommended"],
            "good"      : ["Today I tried {0}. It is {1}.",
                            "The new {0}. Product quality is top, is {1} and worth a try",
                            "Did you checkout {0}?, {1} thing.",
                            "Latest version of {0} is {1}. Excellent performance for me!",
                            "Didnt know {0} is {1} stuff. Superb speed!. Do try it."],
            "neutral"  : ["Checked out {0}. It is {1}",
                            "The new {0} is {1}. Dont expect much.",
                            "Difficult to judge the new {0}. It is {1}.",
                            "Heard the new {0} is {1}. Any first hand info on the performance?",
                            "Anyone know how is {0}, reviews say it is {1}. Quality is what matters"],
            "very_bad"  : ["OMG!! Tried {0}. Its performance is damn too low. It is {1}",
                            "Never go for {0}, the speed is very less, {1} thing.",
                            "Oh, such a {1} thing {0} is!",
                            "Dont ever think of getting a {0}, very bad product quality. It is {1}",
                            "Why do we have {1} products like {0}? :("],
            "bad"      : ["Tried the new {0}. It is not recommended - {1}",
                            "Shouldnt have gone for the {1} {0}. Pathetic product quality.",
                            "First hand experience: {0} is {1}!",
                            "My {0} is {1}. The speed is way too less. Is it just me?!",
                            "The new {0} is {1}. Performance is disappointing. Fail!!"]}
    products, counts = get_products_and_counts()
    for j in range(len(products)):
        product = products.pop()
        count = int(counts.pop()) * no_of_weeks
        print product, count
        for k in range(count):
            sentiment = randint(-2, 2)
            sentiment_valuation = sentiment + 3 if sentiment else sentiment
            adj_kind = adj_kind_from_senti[sentiment]
            adj = choice(adj_set[adj_kind])
            client = choice(client_list)
            guid = randomN('POB', 29)
            user = choice(user_list)
            channel = choice(channel_list)
            post_template = choice(post_templates[adj_kind])
            posted_on = random_date(start_datetime, end_datetime)
            post = post_template.format(product, adj)
            social_sheet.append([client, guid, channel[:2].upper() + str(randomN('',6)), 'English', channel, user, posted_on.strftime("%a, %d %b %Y %H:%M:%S +0000"),'','','','','','','','','','','', product,'', post])
            voice_sheet.append([client, guid, 'Text Analysis', 'Sentiment', '', sentiment, sentiment_valuation,'', '', posted_on.strftime("%Y%m%d%H%M%S")])
            voice_sheet.append([client, guid, 'Text Analysis', 'PRODUCT', product, sentiment, sentiment_valuation,'', '', posted_on.strftime("%Y%m%d%H%M%S")])
    social_book.save(social_filename)
    voice_book.save(voice_filename)
    print 'Demo data saved in SOCIALDATA.xlsx, SMI_VOICE_CUST.xlsx'
#modify this line => gen_posts(start_date, end_date, no.of weeks for which data is to be generated)
gen_posts([1, 12, 2013], [28, 1, 2014], 8)

PS: You can also configure the other aspects, such as usernames, channels, countries, locations, adjectives, and post templates.

 

 

Script 2 - gen_senti_count.py


When to use: This script can be used when you have a list of search terms, the time range, and the number of positive, negative, and neutral posts to be generated for each product in that time span. With this script you can control the sentiment value of the posts.

 

Input File: senti_count_total.xlsx, in which you maintain the products and the corresponding number of positive, negative, and neutral posts to be generated over the whole time span (this is the file name the script opens). See the attached screenshot - senti_count_total.PNG

 

Modification to the script: The time range has to be specified at the end of the Python script. Open the script in a text editor and modify the last line to give the start and end dates (both as [day, month, year]), for example: gen_posts([22, 5, 2014], [5, 6, 2014])

 

 

 

#!/bin/python
# Generates a collection of dummy social media data
from random import choice, randint, random
from time import strftime
from datetime import timedelta, datetime
from openpyxl import Workbook
import xlrd
#Reads rows like "NIKE 23 14 45" from senti_count_total.xlsx, i.e. the counts of positive, negative and neutral posts to be generated for NIKE in the given period
def get_products_and_senti_num():
    book = xlrd.open_workbook('senti_count_total.xlsx')
    sh = book.sheet_by_index(0)
    products = []
    senti_num = []
    for rownum in range(sh.nrows):
        products.append(sh.row_values(rownum)[0])
        senti_num.append(sh.row_values(rownum)[1:4])
    return products, senti_num
#Returns prefix + ndigits
def randomN(prefix, ndigits):
    range_start = 10**(ndigits-1)
    range_end = (10**ndigits)-1
    return prefix + str(randint(range_start, range_end))
def random_date(start, end):
    return start + timedelta(
        seconds=randint(0, int((end - start).total_seconds())))
def gen_posts(s_date, e_date):
    social_book = Workbook(optimized_write = True)
    social_sheet = social_book.create_sheet()
    voice_book = Workbook(optimized_write = True)
    voice_sheet = voice_book.create_sheet()
    start_datetime = datetime(s_date[2], s_date[1], s_date[0], 0, 0, 0)
    end_datetime = datetime(e_date[2], e_date[1], e_date[0] + 1, 0, 0, 0)
    client_list = ['001']
    user_list = ['John', 'William', 'James', 'Jacob', 'Ryan', 'Joshua', 'Michael', 'Jayden', 'Ethan', 'Christopher', 'Samuel', 'Daniel', 'Kevin', 'Elijah']
    channel_list = ['TW', 'FB']
    countries = ['India', 'Germany', 'France', 'The United States']
    locations = {"India" : ["Bangalore", "Chennai", "Delhi", "Mumbai"],
                "Germany": ["Berlin", "Munich", "Stuttgart", "Frankfurt"],
                "France": ["Paris", "Marseille", "Lyon"],
                "The United States": ["Florida", "Washington DC", "Texas", "Dallas"]}
    country_codes = {"India": "IN",
                    "Germany" : "DE",
                    "France" : "FR",
                    "The United States": "US"}
#The adj_set has the adjectives that will be used in the posts.
    adj_set = {"good" : ['good', 'nice'],
          "very_good" : ['refreshing', 'magical'],
          "neutral" : ['ok'],
          "bad" : ['not good', 'substandard', 'unpleasant', 'poor'],
          "very_bad" : ['awful', 'horrible', 'terrible']}
    adj_kind_from_senti = { 2 : "very_good",
                1 : "good",
                0 : "neutral",
                -1 : "bad",
                -2 : "very_bad"}
    post_templates = {"very_good" : ["Hey guys, try {0}, it is {1}! Dont miss!",
                      "People, I got the new {0} - {1}!! Brilliant! Give a try!",
                      "I'm loving {0}!!",
                      "Using {0} feels great!!",
                      "{0} is {1}. My body feels so refreshing",
                      "{0} - The product quality is impressive!! Verdict - {1}",
                      "{0} is {1}. Highly recommended",
                      "{0} gives instant refreshing moisturizing effect!"],
            "good"      : ["Today I tried {0}. It is {1}.",
                            "The new {0}. Product quality is top, is {1} and worth a try",
                            "Did you checkout {0}?, {1} thing.",
                            "I like {0}. It smells nice and so soft",
                            "Didnt know {0} is {1} stuff. Superb!. Do try it."],
            "neutral"  : ["Checked out {0}. It is {1}",
                            "The new {0} is {1}. Dont expect much.",
                            "Heard the new {0} is {1}. Any first hand info on the it?",
                            "Anyone know how is {0}, reviews say it is {1}. Quality is what matters"],
            "very_bad"  : ["OMG!! Tried {0}. Its not for you. It is {1}",
                            "Never go for {0}, the quality is very less, {1} thing.",
                            "Oh, such a {1} thing {0} is!",
                            "{0} is sold out in my area - Sad!",
                            "Couldnt find {0} in my local store. Bad that I cant get that.",
                            "Local stored have sold out {0}, please send in more!!",
                            "We need more stock of {0} in here. Out of stock everywhere I check",
                            "{0} is out of stock - So sad!",
                            "Dont ever think of getting a {0}, very bad product. It is {1}",
                            "Why do we have {1} products like {0}? :("],
            "bad"      : ["Tried the new {0}. It is not recommended - {1}",
                            "Shouldnt have gone for the {1} {0}. Pathetic product quality.",
                            "First hand experience: {0} is {1}!",
                            "10 stores and no {0}. I want it desperately",
                            "Tried finding {0}. Can't find it in any stores in my area.",
                            "My {0} is {1}. The quality is way too less. Is it just me?!",
                            "The new {0} is {1}. It is disappointing. Fail!!"]}
    products, senti_num = get_products_and_senti_num()
    for j in range(len(products)):
        product = products.pop()
        senti = senti_num.pop()
        pos = int(senti[0])
        neg = int(senti[1])
        neu = int(senti[2])
        print product, "-", pos, neg, neu, " posts created."
        for k in range(pos + neg + neu):
            if pos:
                sentiment = randint(1,2)
                pos -= 1
            elif neg:
                sentiment = randint(-2,-1)
                neg -= 1
            else:
                sentiment = 0
                neu -= 1
            sentiment_valuation = sentiment + 3 if sentiment else sentiment
            adj_kind = adj_kind_from_senti[sentiment]
            adj = choice(adj_set[adj_kind])
            client = choice(client_list)
            guid = randomN('POB', 29)
            user = choice(user_list)
            channel = choice(channel_list)
            post_template = choice(post_templates[adj_kind])
            posted_on = random_date(start_datetime, end_datetime)
            post = post_template.format(product, adj)
            num_of_votes = str(randint(0, 150))
            if channel == 'TW':
                post_link = 'http://twitter.com/' + user + randomN('', 5)
            if channel == 'FB':         
                post_link = 'http://facebook.com/' + user + randomN('', 5)
            post_type = choice(['Status', 'Link', 'Photo', 'Video'])
            country = choice(countries)
            location = choice(locations[country])
            country_code = country_codes[country]
            latitude = str(randomN("", 2) + '.' + str(randint(2, 20)))
            longitude = str(randomN("", 2) + '.' + str(randint(2, 20)))
            social_sheet.append([client, guid, channel[:2].upper() + str(randomN('',6)), 'English', channel, user, posted_on.strftime("%a, %d %b %Y %H:%M:%S +0000"), post_type, post_link, num_of_votes, location, country, latitude, longitude, '3', 'Demo post', user, 'Demo User Retrieval', product, posted_on.strftime("%Y%m%d%H%M%S"), post, posted_on.strftime("%Y%m%d%H%M%S"), 'Demo Post Parent', "DemoJ", country_code, 'DS'])
            voice_sheet.append([client, guid, 'TextAnalysis', 'Sentiment', 'DEMO', sentiment, sentiment_valuation, 'J', posted_on.strftime("%Y%m%d"), posted_on.strftime("%Y%m%d%H%M%S")])
            voice_sheet.append([client, guid, 'TextAnalysis', 'PRODUCT', product, sentiment, sentiment_valuation, 'J', posted_on.strftime("%Y%m%d"), posted_on.strftime("%Y%m%d%H%M%S")])
    social_book.save('SOCIALDATA.xlsx')
    voice_book.save('SMI_VOICE_CUST.xlsx')
    print 'Demo data saved in SOCIALDATA.xlsx, SMI_VOICE_CUST.xlsx'
#modify this line => gen_posts(start_date, end_date)
gen_posts([22, 05, 2014], [05, 06, 2014])



Running the script


Both of the above scripts can be run in the following manner:


1) Save the script and input excel file in a directory.

2) Hold the Shift key and right-click inside that directory.

3) Select 'Open command window here'.

4) At the command line, type: python <scriptname>

5) Done. If everything worked as expected, you will have SOCIALDATA.xlsx and SMI_VOICE_CUST.xlsx files generated in that folder with the dummy data.

 


Tailpiece

As mentioned in the disclaimer already, these scripts should be used only for demo purposes.

 

The screenshots attached show what the input Excel files should look like.

 

If you run into any issues during the setup or execution of the script, please let me know in the comments section.


This blog highlights videos on SAP Sentiment Analysis and its usage in SAP Demand Signal Management that were recently published on YouTube.

 

 

SAP Analyze Sentiments - Introduction

This video, available at https://www.youtube.com/watch?v=HH8W7BOfL_s, gives a short and illustrative explanation of what Sentiment Analysis is about and how meaningful insights can be derived from it for your business.

Description:

More and more people are active in social networks. At the same time people are increasingly looking for products online before they make buying decisions. The Analyze Sentiments app helps you access and analyze unstructured social media content and derive meaningful insights for your business. Being an integral part of various business processes, for example, in the area of Sales and Procurement, the Analyze Sentiments app is easily accessible from the SAP Fiori Launchpad.




SAP Demand Signal Management - Supported by Analyze Sentiments

This video, available at https://www.youtube.com/watch?v=1D2nKGf1izA, describes how Sentiment Analysis can be used in Demand Signal Management. Consider this one example, as Sentiment Analysis can be used in various business processes.


Description:

SAP Demand Signal Management gives you real-time insights into market and sales shares of your own brands and products - and those of your competitors.

Sentiment Analysis is an integral part of Demand Signal Management processes. The Analyze Sentiments app helps you to analyze the latest social media sentiments and derive meaningful insights for your business. See how SAP Demand Signal Management and the Analyze Sentiments app work together to help you to make faster and better decisions for your business.


You can find more information on SAP Demand Signal Management under the link http://scn.sap.com/community/demand-signal-management/blog/2013/08/27/an-introduction-to-sap-demand-signal-management

I want to provide an overview of possible decimal issues in BPS planning as used in the CRM Marketing scenario. There are some known issues related to decimal settings in BPS planning. This blog describes the design of the decimal validation and how to set up the planning layout correctly, and it collects solutions for known issues.

 

Looking at the planning layout created for a trade promotion in CRM, we can see a key figure in the planning layout defined with 2 decimals.

planning layout .jpg

tpm planning layout2.jpg

I will take this example to explain the design.

 

General Settings

 

When setting up the planning layout, the following four-level dependency needs to be considered.

 

1. UPX Layout Definition

2. BPS0 Customizing

3. Key Figure

4. Data Element

 

When the planning layout is rendered, the first level considered is the UPX layout definition. The number of decimals can be defined in transaction UPX_MNTN:

upx_mntn bonus display.jpg

  upx_mntn kf22.jpg

The decimal places set in the UPX layout define the number of decimals displayed in the planning layout. This number is for display purposes only.

 

On the second level there is the BPS0 Customizing. This is the first level that defines how the key figures are stored. That means key figures are rounded to the number of decimals defined in BPS0 and stored as the rounded value.

bps0 dec.jpg

For data consistency reasons, the number of decimals defined in UPX_MNTN must be smaller than or equal to the number of decimals defined in BPS0. Otherwise an error is raised.

 

If no decimals are defined in BPS0, the same rule applies to the key figure definition in RSD1.

rsd1 key fig.jpg

If no decimals are defined in the key figure details, the data element of the key figure is considered.

rsd1 key fig data element.jpg

rsd1 key figure data element2.jpg

The decimals defined in UPX_MNTN are used for displaying the key figures, whereas the decimals defined in the levels from BPS0 downwards are used for calculating and storing the values. You should not have more decimals in the layout than you can actually save in the database. The general rule is the following:

 

No of display decimals <= No of decimals used for calculation
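
To make the fallback order concrete, here is a small, hypothetical Python sketch (not an SAP API, purely an illustration of the rule above): it resolves the storage decimals from the BPS0, key figure (RSD1), and data element levels, and validates them against the UPX_MNTN display decimals.

# Hypothetical illustration of the decimal resolution described above -- not an SAP API.
# Storage decimals fall back from BPS0 to the key figure (RSD1) to the data element;
# the UPX_MNTN display decimals must not exceed the storage decimals.

def storage_decimals(bps0=None, key_figure=None, data_element=None):
    """Return the number of decimals used for calculation and storage."""
    for level in (bps0, key_figure, data_element):
        if level is not None:
            return level
    raise ValueError("No decimal setting found on any level")

def validate_layout(display_decimals, bps0=None, key_figure=None, data_element=None):
    stored = storage_decimals(bps0, key_figure, data_element)
    if display_decimals > stored:
        # Mirrors the kind of error described in KBA 1936500
        raise ValueError("Display decimals (%d) exceed storage decimals (%d)"
                         % (display_decimals, stored))
    return stored

validate_layout(display_decimals=2, bps0=2)      # consistent
# validate_layout(display_decimals=3, bps0=2)    # would raise an error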


Please refer to the following KBA for further information about the dependencies between the different levels:

 

1936500 - Enter key figure 0,000 with a valid format (2 decimal places)

 

 

Zero Decimal key figures

 

For key figures defined with zero decimal places, the following needs to be considered.

 

When 0 decimal places are defined in UPX_MNTN, the system considers the BPS0 settings. To display the key figure with 0 decimals, both the UPX_MNTN and the BPS0 decimals need to be set to zero.

 

upx_mntn zero decimals.jpg
bps zero decimals.jpg

 

If UPX_MNTN defines 0 decimals but BPS0 defines 2 decimals, the settings from BPS0 are considered and the key figure is displayed with 2 decimals.

 

This design is valid for zero decimal key figures only. For further information please refer to the following note:

 

2021933 - Use decimals settings from BPS when Enh Layout is set to 0

 

 

Percentage based key figures

 

What needs to be considered for percentage based key figures?

tpm laoyut percentage.jpg

The number of displayed decimals is taken from the UPX_MNTN settings as well.

upx_mntn percentage.jpg

This is similar to any other key figure definition. The difference is the way the system stores percentage values. Depending on the parameter UPX_KPI_KFVAL_PERC_CONV_TO_BW, the percentage value is stored divided by 100; a value of 10% is therefore stored as 0,1. To avoid losing precision, the percentage key figure therefore needs 2 more decimals defined in BPS0 than in UPX_MNTN.
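
A small arithmetic illustration of this rule (plain Python, not SAP code): a percentage key figure displayed with one decimal place needs three decimals in storage once the value is divided by 100.

# Plain arithmetic illustration of the percentage storage rule -- not SAP code.
# Display: 12.5% with 1 decimal place (UPX_MNTN). Stored divided by 100 -> 0.125,
# which needs 1 + 2 = 3 decimals in BPS0 to keep full precision.
display_decimals = 1
displayed_value = 12.5                      # shown as "12.5 %"

stored_value = displayed_value / 100        # 0.125
required_storage_decimals = display_decimals + 2

print(round(stored_value, required_storage_decimals))  # 0.125 -> precision kept
print(round(stored_value, display_decimals))           # 0.1   -> precision lost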

bpd0 percentage.jpg

This is documented in the following SAP note:

 

1407682 - Planning services customizing for percentage key figures

 

With the parameter UPX_KPI_KFVAL_PERC_CONV_TO_BW set, the percentage key figure value is stored in BW as 10 for 10%. If the parameter is set, the above decimal setting is not required. Information about the UPX_KPI_KFVAL_PERC_CONV_TO_BW parameter in the UPC_DARK2 table is available in the following SAP note:

 

1867095 - Planning Services Customizing Flags in the UPC_DARK2 Table

 

There are some known issues for percentage key figures; they are solved with the following SAP notes:

 

1523793 - Wrong rounding of percentage key figures with classic render

1370566 - Rounding error for Percentage Key Figures

 

If percentage key figures need to be displayed without any decimals, the following settings are to be applied:

 

UPX_MNTN: the key figure needs to be set to 0 decimals

BPS0: the key figure needs to be set to 2 decimals

 

This fulfills the zero-decimal rule as well as the percentage key figure rule of requiring 2 more decimals than are displayed.

 

Currency key figures

 

Since most currencies use 2 decimals by design, there should not be any issues for most currencies. However, there are some known issues for exceptional currencies, that is, currencies with other than 2 decimal places, such as JPY. In case of issues with those currencies, the following SAP notes are required in the system:

 

2126484 - Correct CHECKMAN error introduced with the note 2099874

2106896 - Decimal issues in Planning Layouts when working with exceptional currencies

2099874 - Missing conversion for exceptional currencies in UPX_KPI_KF_PLEVEL_READ2
2021933 - Use decimals settings from BPS when Enh Layout is set to 0

1962963 - Planning Layout issues with exceptional currencies with more than two decimals

1535708 - Plan data for currencies without decimals


Rounding issues with Conditions Generation in CRM

 

When generating conditions in a CRM trade promotion using BI rates, the BPS key figure values are retrieved to determine the condition amounts. This may lead to rounding issues. The following note should solve them:

 

2196545 - Discounts are getting rounded while generating conditions

 

 

Using master and dependent profiles

 

When using master and dependent profiles, the decimal settings for the key figures need to be exactly the same in the master and the dependent profiles. It is the master profile that is synchronized and rendered for calculating the key figures, so the key figures hold values with the decimals of the master profile. For display, however, the rendering happens for the displayed profile, that is, the dependent profile. Therefore the decimal settings must be kept in sync between the master and the dependent profiles.

 

Campaign Cost Planning

 

2181291 - Marketing Cost Planning rounds the key figure values for currencies with less than 2 decimals


Known issues


There are some known issues that are corrected with the following SAP notes:


2119191 - Decimals getting rounded for virtual and calculated key-figures

2106896 - Decimal issues in Planning Layouts when working with exceptional currencies

2085223 - Decimals issue in Planning Layouts rendered with the class CL_UPX_LAYOUT_RENDER

2080064 - Incorrect error message for UPX key figure decimal settings

1817554 - ASSIGN_DECIMALS_TOO_HIGH when synchronizing occurs

 

This blog will be updated on a regular basis. If you find any information missing, please let me know.

Introduction


The social data harvesting connector enables harvesting posts, write-ups, and social user data from different social media channels such as Facebook, Twitter, Wikipedia, blogs, and so on through DataSift.

 

In the first release of social data harvesting connector, the approach was to fetch social data from different social media channels.

 

In the latest release, SP02, the main change is to consider the social user's consent and take the appropriate action on the social posts during harvesting. Consent handling and the related actions are configurable in the SAP Business Suite system.

 

The Social User Consent Handling function is available only when the business function FND_SOMI_CM is switched on.

 

A quick list of new features and enhancements include:

 

  • Social User Consent Handling during data harvesting, taking the appropriate action on the social posts accordingly

        The consent types which are supported in the connector are as follows:

                                         

             - No Consent Required, Store Anonymously

             - No Consent Required, Store Complete User Information

 

  • Enhanced DataSift Mapper files to fetch the data from Facebook Pages (Managed Source)

 

  • Updated DataSift Mapper file with fields provided by DataSift to fetch the data from channel Facebook public

 

 

Release Information


The new features of release SP02 are available from SAP Business Suite Foundation 7.47 SP06 (SAP_BS_FND 747) onwards.

 

 

Implementation


You need the software component SAP SOMI DS CONT, which you can download from the Software Download Center in the SAP Service Marketplace. You must also have a valid license/API key from DataSift.

 


To access the Software Download Center from the SAP Service Marketplace homepage, service.sap.com, choose SAP Support Portal → Software Downloads → Software Download Center.

 

To search for the software component SAP SOMI DS CONT, proceed as follows:


  • Select Search for Software Downloads in the left navigation bar.
  • Search for the software component SAP SOMI DS CONT 1.0
  • Download the latest SP (SP02) for SAP SOMI DS CONT 1.0

 

The Installation Guide for the Social Data Harvesting connector can be found at https://websmp110.sap-ag.de/instguides -> SAP In-Memory Computing -> SAP Customer Engagement Intelligence -> Installation Guide Social Data Harvesting Connector


For detailed documentation, refer to the PDF attached to SAP Note 2079650.


Note: The updated help portal documentation is available only after SAP Business Suite Foundation 7.47 SP07 release to customer.


SAP Trade Promotion Optimization (TPO)

Recently I was involved in an SAP TPO proof of concept (PoC) for a top FMCG company in the US region. I believe this project may be one of a kind in exploring SAP TPO's ability to predict accurate volume and lift, since it involved modern trade POS data. We received the last three years of POS data together with account planning, promotion, and sales data. I want to share the learnings and highlight a few features of SAP TPO.

 

Background: Research trends indicate that trade promotion related spend accounts for 8-12% of the overall turnover of a CPG company and up to 60% of CPG marketing budgets for stimulating channel demand. While trade promotion spending as a percentage of marketing budgets has increased dramatically, the inefficiency of trade promotion remains the "number-one concern" among manufacturers. Yet there is little visibility into where this spending actually goes, or how effectively it increases revenues, expands market share, or creates brand awareness among consumers. With millions of dollars being spent to stimulate demand, a marginal improvement in fund allocation and a recalibration of promotion processes can have a disproportionate impact on sales uplift and promotion ROI. With advanced analytical constructs such as optimization, predictive analytics, and what-if analysis, SAP TPO can provide significant visibility into the effectiveness of trade promotion spend. The insights gained show the contributions to sales uplift and help optimize it in the face of the many real-world constraints of the fund allocation process.

What is Trade Promotion Optimization?

TPO assists the CPG manufacturer in strategically optimizing trade spending across the total product portfolio. Trade promotion optimization is an approach that uses business rules, constraints, and goals to mathematically create a trade calendar that meets all of these requirements. Optimization is helpful for strategic questions such as "what combination of promotional events (feature price, frequency, timing, and depth of deal allowances) will meet or beat my revenue and/or profit goals and still stay within my trade promotion budget?" The right TPO models can also solve for the mix of revenue, volume, and/or profitability, as well as the profit contribution for both the manufacturer and the retailer. SAP TPO enables trade marketing and sales teams to leverage advanced predictive modeling to suggest optimal price and merchandising decisions based on goals and objectives, or to assess revenue, volume, and profitability.


SAP TPO: An SAP CRM add-on that comprises a forecasting and modeling engine. The TPO science depends on DMF. SAP TPO enables users to understand the demand baseline (sell-out baseline) prediction and predicts the regular volume, revenue, profit margin, etc. for the manufacturer and planning account for an agreed duration.

SAP CRM: Supports all processes involving direct customer contact throughout the entire customer relationship life cycle, from market segmentation, sales lead generation, and opportunities to post-sales and customer service. It includes business scenarios such as account and trade promotion management.

SAP DSiM: Demand data is loaded into the DSiM system, which harmonizes the data against the original master data system (ERP). SAP delivers a few methods to harmonize syndicated (market research), POS, and other external data.

SAP BW: Receives harmonized data from DSiM and sends it to the DMF system for demand modeling.

SAP DMF: Demand Management Foundation provides predictive, demand-driven forecasts and optimization simulations for all promotion planning across channels and customer segments. In DMF you can model and forecast for sets of customers, channels, and markets. Using demand data, DMF helps to forecast and to optimize the predictions as required. It is a science engine that transforms historical demand data into models for forecasting and optimization; SAP TPO uses Bayesian science techniques. A forecast run is created for each call of the science system (DMF), and the forecast run can be used to see the parameters and results of each prediction that contributes to a what-if scenario.


Data: Historical data plays a major role in TPO; the prediction and forecasting results of SAP TPO depend on it. SAP TPO mainly supports POS, syndicated (market research), or internal data, which can be uploaded into DMF directly or through DSiM. DSiM harmonizes the data based on your primary data (product hierarchies in ERP).


Analytics: Historical sales and promotion data is used to build the predictive models that are then used for planning future promotions. Bayesian Hierarchical Modeling (BHM) techniques are used to build these models. BHM does not only consider the behavior of the individual product and market while modeling; it also considers what can be learned from the category or brand sales trend. The main advantage of BHM is that it provides better accuracy even with small data sets, and the accuracy can be improved further by correctly specifying the priors for factors such as price, promotional lift, and so on.

Accurate promotional uplift can be derived by correctly specifying the demand patterns of promotional sales on the different days of the week.

Predictive models not only capture the impact of factors like price, holidays, distribution, and sales trend, but also provide the flexibility to capture the dynamic demand behavior of products by classifying them into homogeneous groups based on their demand pattern.

SAP TPO has built-in analytics that are visible from the CRM TPO screen.


User Interface / Integration options:

  • TPO integrated in Trade promotion planning without additional assignment block
  • TPO integration assignment block
  • Promotion optimization can be created independently of any trade promotion (prediction & simulation are also available)

TPO forecast types control what is predicted. SAP TPO has two types of forecasts:

What-if Analysis forecast types

  • Prediction: Analyzes the performance of past promotions for a given price and set of promotional vehicles (such as displays, features, price reductions, and multi-buys) and predicts one outcome in line with the trend.
  • Simulation: Most of the time the challenge is not just getting results but getting them within constraints, and finding the best option in that case. In addition to price and promotional vehicles, simulation can also consider objectives such as profit optimization and sales volume optimization and, more importantly, constraints such as trade spending limits, and it forecasts multiple optimal scenarios. The most suitable one can be chosen after analyzing all scenarios.

What-if analysis results: SAP TPO presents the forecasted results in intuitive graphical dashboards, which makes it easy to view and compare different forecast outcomes in a single view. As of TPO 2.0, the forecasted results are depicted in five dashboards, each with a different perspective. On one dashboard the user can change the trade spends and see the impact instantly. More dashboards can be added through enhancements. These dashboards present not only the data but also the insights, which reduces the strain of going through the details of each forecast scenario to make a choice.


Dashboards: The SAP TPO screen has several dashboards, such as basic analysis (key figures like volume uplift, non-promo revenue, promo revenue, and retailer revenue), volume decomposition (volume uplift with respect to base demand, tactic lift, price lift, seasonality, holiday, and cannibalization), and win-win assessment (promo margin and promo profit). The SAP TPO agreement screen has dashboards such as weekly review (baseline and total volume), price and volume decomposition, and profit and loss.
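
To make the idea of volume decomposition more tangible, here is a purely illustrative toy calculation. This is not SAP TPO's or DMF's actual science; it is just the arithmetic of a simple multiplicative lift model with assumed factor values.

# Toy multiplicative volume decomposition (illustration only, with assumed lift factors).
base_volume = 10000.0   # predicted baseline (non-promoted) units
price_lift = 1.25       # multiplier for the promoted price
tactic_lift = 1.40      # multiplier for display/feature tactics
seasonality = 1.10      # seasonal/holiday index for the promotion week

total_volume = base_volume * price_lift * tactic_lift * seasonality
uplift = total_volume - base_volume

print "Total promoted volume : %.0f" % total_volume            # 19250
print "Volume uplift         : %.0f" % uplift                  # 9250
print "Uplift share of total : %.1f %%" % (100.0 * uplift / total_volume)  # about 48.1 %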

Integration with SAP TPM: SAP TPO is tightly integrated with SAP TPM. A few additional assignment blocks, fields, and buttons are provided, for example the assignment blocks Promotion Causals, What-if Analysis, and Optimization Scenario.


Learnings: Data quality is the most important and critical element of any forecast, as it influences the forecast results. It is essential to have complete and accurate data without gaps. When external data such as syndicated or research data is used, it is crucial to check whether it is a true or close representation of the retailers being considered in all required locations.

One important lesson learnt from experience: do not underestimate how much effort it takes to source, clean, format, and load the data.

Within SAP TPO, each forecast has a forecast confidence indicator, which represents the system's confidence in the forecast model and is based on past data.

I suggest running an exercise called "KNOW BUSINESS INSIGHTS", which generates business insights for the organization; SAP DSiM on HANA can help you here.


Conclusion: SAP TPO can be implemented on its own as a standalone tool, but implementing it in conjunction with SAP TPM realizes the true potential of both. SAP TPO can plan the promotion strategy, and TPM can execute it smoothly thanks to its integration with other processes, i.e. funds management and claims management.

SAP TPO requires consultants with DMF knowledge, and getting experienced people is a challenge. It is really helpful to have statisticians to build the models and improve them based on external factors.


Disclaimer :

 

This tutorial is intended as a guide for the creation of demo/test data only. The scripts provided in this blog are not intended for use in a productive system.

 

Purpose :

 

This blog explains the harvesting of historical Twitter data through GNIP. The pre-installed Python interpreter from the SAP HANA Client is used to execute Python scripts from the SAP HANA Studio. The different scripts (discussed later) are used to harvest historical Twitter data from GNIP and store the useful data in the Business Suite Foundation tables SOCIALDATA and SOCIALUSERINFO.

 

 

Prerequisites :

 

Make sure the following prerequisites are met before you start.

  • Installation of SAP HANA Studio and SAP HANA Client
    Install SAP HANA Studio and SAP HANA Client and apply for a HANA user with read, write, and update authorization for the foundation database tables SOCIALDATA and SOCIALUSERINFO.
  • Create a GNIP account
  • Enable the Historical PowerTrack API in your GNIP account
    For harvesting historical data, the Historical PowerTrack API must be enabled for your account. If it is not already enabled, contact your account manager to have this done.

 

Setup :

 

For the initial setup and configuration, refer to the blog http://scn.sap.com/community/crm/marketing/blog/2014/09/29/twitter-data-harvesting-adapter-using-python-script-for-gnip

 

 

Code:

 

Harvesting historical Twitter data from GNIP involves three basic steps:

 

1. Request a Historical Job: You can request a Historical PowerTrack job by making an HTTP POST request to the API endpoint. You need to include a fromDate, toDate, rules, and some additional metadata in the JSON POST body.

 

To request a Historical PowerTrack job, send a request to the following URL with your user credentials and a POST body similar to the following example.

https://historical.gnip.com/accounts/<GNIP_ACCOUNT_NAME>/jobs.json

 

create_job.py

 

import urllib2
import base64
import json
UN = '' # YOUR GNIP ACCOUNT EMAIL ID
PWD = '' # YOUR GNIP ACCOUNT PASSWORD
account = '' # YOUR GNIP ACCOUNT USER NAME
def get_json(data):
    return json.loads(data.strip())
def post():
    url = 'https://historical.gnip.com/accounts/' + account + '/jobs.json'
    publisher = "twitter"
    streamType = "track"
    dataFormat = "activity-streams"
    fromDate = "201410140630"
    toDate = "201410140631"
    jobTitle = "job30"
    rules = '[{"value":"","tag":""}]'
    jobString = '{"publisher":"' + publisher + '","streamType":"' + streamType + '","dataFormat":"' + dataFormat + '","fromDate":"' + fromDate + '","toDate":"' + toDate + '","title":"' + jobTitle + '","rules":' + rules + '}'
    base64string = base64.encodestring('%s:%s' % (UN, PWD)).replace('\n', '')
    req = urllib2.Request(url=url, data=jobString)
    req.add_header('Content-type', 'application/json')
    req.add_header("Authorization", "Basic %s" % base64string)
    
    proxy = urllib2.ProxyHandler({'http': 'http://proxy:8080', 'https': 'https://proxy:8080'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    try:
        response = urllib2.urlopen(req)
        the_page = response.read()
        the_page = get_json(the_page)
        print 'Job has been created.'
        print 'Job UUID : ' + the_page['jobURL'].split("/")[-1].split(".")[0]
    except urllib2.HTTPError as e:
        print e.read()
        
if __name__=='__main__':
    post()

 

The above code creates a job and returns a UUID for the created job in a JSON response, along with other details.

The UUID is used to run the following scripts.

  • accept_reject.py
  • get_links.py
  • monitor_job.py


Note: Put the above UUID into the 'uuid' variable in each of the above scripts before executing them.

 

2. Accept/Reject a Historical Job: After delivery of the estimate, you can accept or reject the previously requested job with an HTTP PUT request to the job URL endpoint. To accept or reject the estimate, send a request to the following URL with your user credentials and a request body (example below) updating the status to accept (or reject).

https://historical.gnip.com/accounts/<GNIP_ACCOUNT_NAME>/publishers/twitter/historical/track/jobs/<uuid>.json

 

The following is an example of a valid request body:

{
"status": "accept"
}
 

accept_reject.py

 

import urllib2
import base64
import json
uuid = '' # UUID OF THE JOB RETURNED BY create_job.py
UN = '' # YOUR GNIP ACCOUNT EMAIL ID
PWD = '' # YOUR GNIP ACCOUNT PASSWORD
account = '' # YOUR GNIP ACCOUNT USER NAME
url = 'https://historical.gnip.com:443/accounts/' + account + '/publishers/twitter/historical/track/jobs/' + uuid + '.json'
def get_json(data):
    return json.loads(data.strip())
def accept_reject():
    
    choice = 'accept' # Switch to 'reject' to reject the job.
    payload = '{"status":"' + choice + '"}'
    
    base64string = base64.encodestring('%s:%s' % (UN, PWD)).replace('\n', '')
    req = urllib2.Request(url=url, data=payload)
    req.add_header('Content-type', 'application/json')
    req.add_header("Authorization", "Basic %s" % base64string)
    req.get_method = lambda: 'PUT'
    
    proxy = urllib2.ProxyHandler({'http': 'http://proxy:8080', 'https': 'https://proxy:8080'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    
    try:
        response = urllib2.urlopen(req)
        the_page = response.read()
        the_page = get_json(the_page)
        uuid = the_page['jobURL'].split("/")[-1].split(".")[0]
        print 'Job has been accepted.'
        print 'UUID : ' + uuid
    except urllib2.HTTPError as e:
        print e.read()
    
if __name__=='__main__':
    accept_reject()

 

If the job was not created successfully before running this script, you will get the following error message:

'{"status":"error","reason":"Invalid state transition: Job cannot be accepted"}'.

 

After accepting the job, you can monitor its status by sending a request to the following URL with your user credentials.

https://historical.gnip.com/accounts/<GNIP_ACCOUNT_NAME>/publishers/twitter/historical/track/jobs/<uuid>.json

 

A request to the above URL returns a JSON document containing a "percentComplete" field along with other details about the running job. This field can be used to check whether the job has completed.

 

monitor_job.py

 

import urllib2
import base64
import json
uuid = '' # UUID OF THE JOB RETURNED BY create_job.py
UN = '' # YOUR GNIP ACCOUNT EMAIL ID
PWD = '' # YOUR GNIP ACCOUNT PASSWORD
account = '' # YOUR GNIP ACCOUNT USER NAME
url = 'https://historical.gnip.com:443/accounts/' + account + '/publishers/twitter/historical/track/jobs/' + uuid + '.json'
def get_json(data):
    return json.loads(data.strip())
class RequestWithMethod(urllib2.Request):
    def __init__(self, url, method, headers={}):
        self._method = method
        urllib2.Request.__init__(self, url, headers)
    def get_method(self):
        if self._method:
            return self._method
        else:
            return urllib2.Request.get_method(self)
def monitor():
    base64string = base64.encodestring('%s:%s' % (UN, PWD)).replace('\n', '')
    
    req = RequestWithMethod(url, 'GET')
    req.add_header('Content-type', 'application/json')
    req.add_header("Authorization", "Basic %s" % base64string)
    
    proxy = urllib2.ProxyHandler({'http': 'http://proxy:8080', 'https': 'https://proxy:8080'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    
    try:
        response = urllib2.urlopen(req)
        the_page = response.read()
        the_page = get_json(the_page)
        print the_page
        print 'Status Message : ' + str(the_page['statusMessage'])
        print 'Percent Complete : ' + str(the_page['percentComplete']) + ' %'
        if 'quote' in the_page:
            print 'Estimated File Size in Mb : ' + str(the_page['quote']['estimatedFileSizeMb'])
            print 'Expires At : ' + str(the_page['quote']['expiresAt'])      
    except urllib2.HTTPError as e:
        print e.read()
            
if __name__ == '__main__':
    print url
    monitor()
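
If you would rather wait programmatically than re-run monitor_job.py by hand, a small polling sketch along these lines could re-check the status every few minutes until the job is complete. Assumptions: the same uuid, account, and credential placeholders as above are filled in; the hard-coded proxy handler from the scripts is omitted here, so add it back if your network requires it.

import urllib2
import base64
import json
import time

uuid = ''     # UUID OF THE JOB RETURNED BY create_job.py
account = ''  # YOUR GNIP ACCOUNT USER NAME
UN = ''       # YOUR GNIP ACCOUNT EMAIL ID
PWD = ''      # YOUR GNIP ACCOUNT PASSWORD
url = 'https://historical.gnip.com/accounts/' + account + '/publishers/twitter/historical/track/jobs/' + uuid + '.json'

def percent_complete():
    # Fetch the job status JSON and return the percentComplete value.
    auth = base64.encodestring('%s:%s' % (UN, PWD)).replace('\n', '')
    req = urllib2.Request(url)
    req.add_header('Authorization', 'Basic %s' % auth)
    return json.loads(urllib2.urlopen(req).read()).get('percentComplete', 0)

if __name__ == '__main__':
    while True:
        p = percent_complete()
        print 'Percent Complete : %s %%' % p
        if p >= 100:
            break
        time.sleep(300)  # wait five minutes before checking again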
 

3. Retrieve Historical Data: When the job has completed, GNIP provides a dataURL endpoint containing a list of file URLs that can be downloaded in parallel. To retrieve this list of data files, send an HTTP GET request to the following URL with your user credentials:

https://historical.gnip.com/accounts/<GNIP_ACCOUNT_NAME>/publishers/twitter/historical/track/jobs/<uuid>/results.json

 

get_links.py

 

import urllib2
import base64
import json
uuid = '' # UUID OF THE JOB RETURNED BY create_job.py
UN = '' # YOUR GNIP ACCOUNT EMAIL ID
PWD = '' # YOUR GNIP ACCOUNT PASSWORD
account = '' # YOUR GNIP ACCOUNT USER NAME
url = 'https://historical.gnip.com/accounts/' + account + '/publishers/twitter/historical/track/jobs/' + uuid + '/results.json'
def get_json(data):
    return json.loads(data.strip())
def get_links():
    base64string = base64.encodestring('%s:%s' % (UN, PWD)).replace('\n', '')
    req = urllib2.Request(url)
    req.add_header('Content-type', 'application/json')
    req.add_header("Authorization", "Basic %s" % base64string)
    
    proxy = urllib2.ProxyHandler({'http': 'http://proxy:8080', 'https': 'https://proxy:8080'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    try:
        response = urllib2.urlopen(req)
        the_page = response.read()
        the_page = get_json(the_page)
        print 'Total URL count : ' + str(the_page['urlCount']) + '\n'
        for item in the_page['urlList']:
            print item
    except urllib2.HTTPError:
        print 'Job is not yet delivered.'
if __name__=='__main__':
    get_links()
 

The above code returns a list of URLs from which the data can be downloaded. For each URL returned by get_links.py, we can run get_data.py to fetch the data from that URL and store it in the SOCIALDATA and SOCIALUSERINFO tables.

 

Note: Copy the links returned by get_links.py (one at a time) into the 'url' variable before executing get_data.py.

 

get_data.py

 

import urllib2
import zlib
import threading
from threading import Lock
import sys
import ssl
import json
from datetime import datetime
import calendar
import dbapi
from wsgiref.handlers import format_date_time
CHUNKSIZE = 4*1024
GNIPKEEPALIVE = 30
NEWLINE = '\r\n'
# GNIP ACCOUNT DETAILS
url = ''
HEADERS = { 'Accept': 'application/json',
           'Connection': 'Keep-Alive',
            'Accept-Encoding' : 'gzip' }
server = '' # YOUR HANA SERVER NAME
port = 0 # YOUR HANA SERVER PORT
username_hana = '' # YOUR HANA USER NAME
password_hana = '' # YOUR HANA PASSWORD
schema = '' # SCHEMA NAME
client = '' # CLIENT NUMBER
socialmediachannel = 'TW'
print_lock = Lock()
err_lock = Lock()
hdb_target = dbapi.connect(server, port, username_hana, password_hana)
cursor_target = hdb_target.cursor()
class procEntry(threading.Thread):
    def __init__(self, buf):
        self.buf = buf
        threading.Thread.__init__(self)
    def unicodeToAscii(self, word):
        return word.encode('ascii', 'ignore')
    
    def run(self):
        for rec in [x.strip() for x in self.buf.split(NEWLINE) if x.strip() <> '']:
            try:
                jrec = json.loads(rec.strip())
                with print_lock:
                    res = ''
                    if 'verb' in jrec:
                        verb = jrec['verb']
                        verb = self.unicodeToAscii(verb)
                        # SOCIALUSERINFO DETAILS
                        socialUser = jrec['actor']['id'].split(':')[2]
                        socialUser = self.unicodeToAscii(socialUser)
                        socialUserProfileLink = jrec['actor']['link']
                        socialUserProfileLink = self.unicodeToAscii(socialUserProfileLink)
                        socialUserAccount = jrec['actor']['preferredUsername']
                        socialUserAccount = self.unicodeToAscii(socialUserAccount)
                        friendsCount = jrec['actor']['friendsCount']
                        followersCount = jrec['actor']['followersCount']
                        postedTime = jrec['postedTime']
                        postedTime = self.unicodeToAscii(postedTime)
                        displayName = jrec['actor']['displayName']
                        displayName = self.unicodeToAscii(displayName)
                        image = jrec['actor']['image']
                        image = self.unicodeToAscii(image)
                        
                        # SOCIALDATA DETAILS
                        socialpost = jrec['id'].split(':')[2]
                        socialpost = self.unicodeToAscii(socialpost)
                        createdbyuser = socialUser
                        creationdatetime = postedTime
                        socialpostlink = jrec['link']
                        creationusername = displayName
                        socialpostsearchtermtext = jrec['gnip']['matching_rules'][0]['value']
                        socialpostsearchtermtext = self.unicodeToAscii(socialpostsearchtermtext)
                        
                        d = datetime.utcnow()
                        time = d.strftime("%Y%m%d%H%M%S")
                        
                        creationdatetime_utc = datetime.strptime(postedTime[:-5], "%Y-%m-%dT%H:%M:%S")
                        creationdatetime_utc = creationdatetime_utc.strftime(("%Y%m%d%H%M%S"))
                        
                        stamp = calendar.timegm(datetime.strptime(creationdatetime[:-5], "%Y-%m-%dT%H:%M:%S").timetuple())
                        creationdatetime = format_date_time(stamp)
                        creationdatetime = creationdatetime[:-4] + ' +0000'
                        
                        if verb == 'post':
                            socialdatauuid = jrec['object']['id'].split(':')[2]
                            socialdatauuid = self.unicodeToAscii(socialdatauuid)
                            socialposttext = jrec['object']['summary']
                            socialposttext = self.unicodeToAscii(socialposttext)
                            res = socialUser + '\t'  + socialUserAccount + '\t' + str(friendsCount) + '\t' + str(followersCount) + '\t' + postedTime + '\t' + displayName + '\t' + displayName.upper() + '\t' + socialUserProfileLink + '\t' +image
                        elif verb == 'share':
                            socialdatauuid = jrec['object']['object']['id'].split(':')[2]
                            socialdatauuid = self.unicodeToAscii(socialdatauuid)                            
                            socialposttext = jrec['object']['object']['summary']
                            socialposttext = self.unicodeToAscii(socialposttext)
                            res = socialposttext + '\t' +socialUser + '\t'  + socialUserAccount + '\t' + str(friendsCount) + '\t' + str(followersCount) + '\t' + postedTime + '\t' + displayName + '\t' + displayName.upper() + '\t' + socialUserProfileLink + '\t' +image
                        print(res)
                        sql = 'upsert ' + schema + '.SOCIALUSERINFO(CLIENT, SOCIALMEDIACHANNEL, SOCIALUSER, SOCIALUSERPROFILELINK, SOCIALUSERACCOUNT, NUMBEROFSOCIALUSERCONTACTS, SOCIALUSERINFLUENCESCOREVALUE, CREATIONDATETIME, SOCIALUSERNAME, SOCIALUSERNAME_UC, SOCIALUSERIMAGELINK, CREATEDAT) values(?,?,?,?,?,?,?,?,?,?,?,?) with primary key'
                        cursor_target.execute(sql, (client, socialmediachannel, socialUser, socialUserProfileLink, socialUserAccount, friendsCount, followersCount, creationdatetime, displayName, displayName.upper(), image, time))
                        hdb_target.commit()
                            
                        sql = 'upsert ' + schema + '.SOCIALDATA(CLIENT, SOCIALDATAUUID, SOCIALPOST, SOCIALMEDIACHANNEL, CREATEDBYUSER, CREATIONDATETIME, SOCIALPOSTLINK, CREATIONUSERNAME, SOCIALPOSTSEARCHTERMTEXT, SOCIALPOSTTEXT, CREATEDAT, CREATIONDATETIME_UTC) VALUES(?,?,?,?,?,?,?,?,?,?,?,?) WITH PRIMARY KEY'                    
                        cursor_target.execute(sql, (client, socialdatauuid, socialpost, socialmediachannel, createdbyuser, creationdatetime, socialpostlink, creationusername, socialpostsearchtermtext, socialposttext, time, creationdatetime_utc))
                        hdb_target.commit()
            except ValueError, e:
                with err_lock:
                    sys.stderr.write("Error processing JSON: %s (%s)\n"%(str(e), rec))
def getStream():
    proxy = urllib2.ProxyHandler({'http': 'http://proxy:8080', 'https': 'https://proxy:8080'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    req = urllib2.Request(url, headers=HEADERS)
    response = urllib2.urlopen(req, timeout=(1+GNIPKEEPALIVE))
    decompressor = zlib.decompressobj(16+zlib.MAX_WBITS)
    remainder = ''
    while True:
        tmp = decompressor.decompress(response.read(CHUNKSIZE))
        if tmp == '':
            return
        [records, remainder] = ''.join([remainder, tmp]).rsplit(NEWLINE,1)
        procEntry(records).start()
if __name__ == "__main__":
    print('Started...')
    try:
        getStream()
    except ssl.SSLError, e:
        with err_lock:
            sys.stderr.write("Connection failed: %s\n"%(str(e)))
 

When you run the above script, the data from the location specified by the 'url' variable is downloaded and stored in the SOCIALDATA and SOCIALUSERINFO tables.

 

References :

1. Gnip Support

 

 

2. Harvesting real time Tweets from GNIP into Social Intelligence tables using a Python Script : http://scn.sap.com/community/crm/marketing/blog/2014/09/29/twitter-data-harvesting-adapter-using-python-script-for-gnip

 

3. Demo Social and Sentiment data generation using Python script :

http://scn.sap.com/community/crm/marketing/blog/2015/01/12/demo-social-and-sentiment-data-generation-using-python-script

 

 

4. Harvesting Tweets into Social Intelligence tables using a Python Script : http://scn.sap.com/docs/DOC-53824

Disclaimer

This tutorial is intended as a guide for the creation of demo/test data only. The sample script provided is not intended for use in a productive system.

Purpose

The following tutorial explains a way of harvesting Twitter data through GNIP. The pre-installed Python interpreter from the SAP HANA Client is used to execute a Python script from SAP HANA Studio. The script harvests the data from GNIP, extracts the useful parts, and stores them in the Business Suite Foundation database tables SOCIALDATA and SOCIALUSERINFO. Currently the script runs indefinitely; if you want to stop harvesting data, stop the execution of the script manually in SAP HANA Studio. You can, however, modify the script to run for a specific period of time (see the sketch after the script below). To run the script, you also need to make a few customizing and configuration settings in order to use the PyDev plugin in SAP HANA Studio.

Prerequisites

Make sure that the following prerequisites are met before you start:
• Installation of SAP HANA Studio and SAP HANA Client
Install SAP HANA Studio and SAP HANA Client and apply for a HANA user with read, write, and update authorization for the foundation database tables SOCIALDATA and SOCIALUSERINFO.

• Create a GNIP account

• Data stream configuration in your GNIP account
Create a data stream for a source (such as Twitter, Facebook, etc.) in your GNIP account. Remember that with one data stream you can harvest data from only a single source, so you need different data streams for different data sources. After creating a data stream, define the rules in the 'Rules' tab to filter the data that you get from GNIP. For writing the rules, refer to http://support.gnip.com/apis/powertrack/rules.html

Setup
1. Configuring Python in SAP HANA Studio Client
   
Python version 2.6 is already embedded in the SAP HANA Client, so you do not need to install Python from scratch. To configure the Python API to connect to SAP HANA, proceed as follows.
        
1. Copy and paste the following files from C:\Program Files\SAP\hdbclient\hdbcli to C:\Program Files\SAP\hdbclient\Python\Lib
                a. __init__.py
                b. dbapi.py
                c. resultrow.py

2. Copy and paste the following files from C:\Program Files\SAP\hdbclient to C:\Program Files\SAP\hdbclient\Python\Lib
                a. pyhdbcli.pdb
                b. pyhdbcli.pyd
          
Note:
       
In Windows, for a 64-bit installation of SAP HANA Studio and the SAP HANA database client, the default installation path is C:\Program Files\SAP\..

If you opted for a 32-bit installation, the default path is C:\Program Files (x86)\SAP\..
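
Once the files are copied, a quick way to verify that Python can reach your SAP HANA system is a tiny test script like the one below. The host, port, user, and password are placeholders you must replace with your own system details; SELECT ... FROM DUMMY is used only as a harmless probe query.

# Minimal connectivity check for the SAP HANA Python client (placeholder connection values).
import dbapi  # provided by the SAP HANA Client, copied into Python\Lib as described above

conn = dbapi.connect('myhanahost', 30015, 'MYUSER', 'MyPassword')  # replace with your system details
cursor = conn.cursor()
cursor.execute('SELECT CURRENT_TIMESTAMP FROM DUMMY')
print cursor.fetchone()
conn.close()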

2. Setting up the Editor to run the file
2.1. Install Pydev plugin to use Python IDE for Eclipse
             
The preferred method is to use the Eclipse IDE from SAP HANA Studio. To be able to run the Python script, you first need to install the PyDev plugin in SAP HANA Studio.
                   
                    a. Open SAP HANA Studio. Click Help on the menu bar and select Install New Software
                    b. Click the Add button and enter the following information
               gnip1.jpg
                        Name : pydev
                        Location : http://pydev.org/updates

                    c. Select the settings as shown in this screenshot.
                    gnip2.jpg
                    d. Press Next twice
                    e. Accept the license agreements, then press Finish.
                    f. Restart SAP HANA Studio.

2.2. Configure the Python Interpreter

In SAP HANA studio, carry out the following steps:
     a. Select the menu entries Window -> Preferences
     b. Select PyDev -> Interpreters -> Python Interpreter
     c. Click the New button and type in an interpreter name. In the field Interpreter Executable, enter the following executable file: C:\Program Files\SAP\hdbclient\Python\Python.exe. Press OK twice.

2.3. Create a Python project

In SAP HANA Studio, carry out the following steps:
     a. Click File -> New -> Project, then select Pydev project
     b. Type in a project name, then press Finish
     c. Right-click on your project. Click New -> File, then type your file name, press Finish.

Customizing and Running the Script

1. Customizing the python script

Copy and paste the code provided below into the newly created Python file. Enter the values for the following parameters in the file.
     a. URL – unique url for the datastream you have created in your GNIP account
          (For ex : 'https://stream.gnip.com/accounts/<GNIP_USERNAME>/publishers/<STREAM>/streams/track/dev.json')
     b. username_gnip – your GNIP account username
     c. password_gnip – your GNIP account password
     d. server – HANA server name (Ex : lddbq7d.wdf.sap.corp)
     e. port – HANA server port
     f. username_hana – HANA server username
     g. password_hana – HANA server password
     h. schema – schema name
     i. client – client number

import urllib2
import base64
import zlib
import threading
from threading import Lock
import sys
import ssl
import json
from datetime import datetime
import calendar
import dbapi
from wsgiref.handlers import format_date_time
from time import mktime
CHUNKSIZE = 4*1024
GNIPKEEPALIVE = 30
NEWLINE = '\r\n'
URL = ''
username_gnip = ''
password_gnip = ''
HEADERS = { 'Accept': 'application/json',
            'Connection': 'Keep-Alive',
            'Accept-Encoding' : 'gzip',
            'Authorization' : 'Basic %s' % base64.encodestring('%s:%s' % (username_gnip, password_gnip))  }
server = ''
port = 0 # YOUR HANA SERVER PORT
username_hana = ''
password_hana = ''
schema = ''
client = ''
socialmediachannel = ''
print_lock = Lock()
err_lock = Lock()
class procEntry(threading.Thread):
    def __init__(self, buf):
        self.buf = buf
        threading.Thread.__init__(self)
    def unicodeToAscii(self, word):
        return word.encode('ascii', 'ignore')
    def run(self):
        for rec in [x.strip() for x in self.buf.split(NEWLINE) if x.strip() <> '']:
            try:
                jrec = json.loads(rec.strip())
                with print_lock:
                    verb = jrec['verb']
                    verb = self.unicodeToAscii(verb)
                
                    # SOCIALUSERINFO DETAILS
                    socialUser = jrec['actor']['id'].split(':')[2]
                    socialUser = self.unicodeToAscii(socialUser)
                    socialUserProfileLink = jrec['actor']['link']
                    socialUserProfileLink = self.unicodeToAscii(socialUserProfileLink)
                    socialUserAccount = jrec['actor']['preferredUsername']
                    socialUserAccount = self.unicodeToAscii(socialUserAccount)
                    friendsCount = jrec['actor']['friendsCount']
                    followersCount = jrec['actor']['followersCount']
                    postedTime = jrec['postedTime']
                    postedTime = self.unicodeToAscii(postedTime)
                    displayName = jrec['actor']['displayName']
                    displayName = self.unicodeToAscii(displayName)
                    image = jrec['actor']['image']
                    image = self.unicodeToAscii(image)
                
                    # SOCIALDATA DETAILS
                    socialpost = jrec['id'].split(':')[2]
                    socialpost = self.unicodeToAscii(socialpost)
                    createdbyuser = socialUser
                    creationdatetime = postedTime
                    socialpostlink = jrec['link']
                    creationusername = displayName
                    socialpostsearchtermtext = jrec['gnip']['matching_rules'][0]['value']
                    socialpostsearchtermtext = self.unicodeToAscii(socialpostsearchtermtext)
                
                    d = datetime.utcnow()
                    time = d.strftime("%Y%m%d%H%M%S")
                
                    creationdatetime_utc = datetime.strptime(postedTime[:-5], "%Y-%m-%dT%H:%M:%S")
                    creationdatetime_utc = creationdatetime_utc.strftime(("%Y%m%d%H%M%S"))
                
                    stamp = calendar.timegm(datetime.strptime(creationdatetime[:-5], "%Y-%m-%dT%H:%M:%S").timetuple())
                    creationdatetime = format_date_time(stamp)
                    creationdatetime = creationdatetime[:-4] + ' +0000'
                
                    if verb == 'post':
                        socialdatauuid = jrec['object']['id'].split(':')[2]
                        socialdatauuid = self.unicodeToAscii(socialdatauuid)
                    
                    
                        socialposttext = jrec['object']['summary']
                        socialposttext = self.unicodeToAscii(socialposttext)
                    
                        res = client + '\t' + socialmediachannel + '\t' + socialUser + '\t'  + socialUserAccount + '\t' + str(friendsCount) + '\t' + str(followersCount) + '\t' + postedTime + '\t' + displayName + '\t' + displayName.upper() + '\t' + socialUserProfileLink + '\t' +image
                    
                    elif verb == 'share':
                        socialdatauuid = jrec['object']['object']['id'].split(':')[2]
                        socialdatauuid = self.unicodeToAscii(socialdatauuid)
                    
                        socialposttext = jrec['object']['object']['summary']
                        socialposttext = self.unicodeToAscii(socialposttext)
                    
                        res = client + '\t' + socialmediachannel + '\t' + socialUser + '\t'  + socialUserAccount + '\t' + str(friendsCount) + '\t' + str(followersCount) + '\t' + postedTime + '\t' + displayName + '\t' + displayName.upper() + '\t' + socialUserProfileLink + '\t' +image
                    
                    print(res)
                    hdb_target = dbapi.connect(server, port, username_hana, password_hana)
                    cursor_target = hdb_target.cursor()
                    
                    sql = 'upsert ' + schema + '.SOCIALUSERINFO(CLIENT, SOCIALMEDIACHANNEL, SOCIALUSER, SOCIALUSERPROFILELINK, SOCIALUSERACCOUNT, NUMBEROFSOCIALUSERCONTACTS, SOCIALUSERINFLUENCESCOREVALUE, CREATIONDATETIME, SOCIALUSERNAME, SOCIALUSERNAME_UC, SOCIALUSERIMAGELINK, CREATEDAT) values(?,?,?,?,?,?,?,?,?,?,?,?) with primary key'
                    cursor_target.execute(sql, (client, socialmediachannel, socialUser, socialUserProfileLink, socialUserAccount, friendsCount, followersCount, creationdatetime, displayName, displayName.upper(), image, time))
                    hdb_target.commit()
                    
                    sql = 'upsert ' + schema + '.SOCIALDATA(CLIENT, SOCIALDATAUUID, SOCIALPOST, SOCIALMEDIACHANNEL, CREATEDBYUSER, CREATIONDATETIME, SOCIALPOSTLINK, CREATIONUSERNAME, SOCIALPOSTSEARCHTERMTEXT, SOCIALPOSTTEXT, CREATEDAT, CREATIONDATETIME_UTC) VALUES(?,?,?,?,?,?,?,?,?,?,?,?) WITH PRIMARY KEY'
                    cursor_target.execute(sql, (client, socialdatauuid, socialpost, socialmediachannel, createdbyuser, creationdatetime, socialpostlink, creationusername, socialpostsearchtermtext, socialposttext, time, creationdatetime_utc))
                    hdb_target.commit()
            except ValueError, e:
                with err_lock:
                    sys.stderr.write("Error processing JSON: %s (%s)\n"%(str(e), rec))
def getStream():
    proxy = urllib2.ProxyHandler({'http': 'http://proxy:8080', 'https': 'https://proxy:8080'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    req = urllib2.Request(URL, headers=HEADERS)
    response = urllib2.urlopen(req, timeout=(1+GNIPKEEPALIVE))
    decompressor = zlib.decompressobj(16+zlib.MAX_WBITS)
    remainder = ''
    while True:
        tmp = decompressor.decompress(response.read(CHUNKSIZE))
        if tmp == '':
            return
        [records, remainder] = ''.join([remainder, tmp]).rsplit(NEWLINE,1)
        procEntry(records).start()
if __name__ == "__main__":
    print('Started...')
    while True:
        try:
            getStream()
        except ssl.SSLError, e:
            with err_lock:
                sys.stderr.write("Connection failed: %s\n"%(str(e)))



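As mentioned in the Purpose section, the script above runs until you stop it manually. If you prefer a time-limited run, one option (a sketch, not part of the delivered example) is to replace the final while True loop with a deadline check, for example:

# Sketch: stop harvesting after RUN_SECONDS instead of running indefinitely.
# Reuses getStream(), ssl, sys and err_lock from the script above.
import time

RUN_SECONDS = 3600  # example value: harvest for one hour

if __name__ == "__main__":
    print('Started...')
    deadline = time.time() + RUN_SECONDS
    while time.time() < deadline:
        try:
            getStream()
        except ssl.SSLError, e:
            with err_lock:
                sys.stderr.write("Connection failed: %s\n" % (str(e)))
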
2. Run the script from your editor


3. Check the results in the database tables SOCIALDATA and SOCIALUSERINFO.


Other blog posts on connecting Social Channels: 

 

Twitter connector to harvest tweets into Social Intelligence tables using Python script

http://scn.sap.com/docs/DOC-53824

 

Historical data harvesting from GNIP using Python scripts

http://scn.sap.com/community/crm/marketing/blog/2014/10/16/historical-data-harvesting-from-gnip-using-python-scripts

 

Demo Social and Sentiment data generation using Python script

http://scn.sap.com/community/crm/marketing/blog/2015/01/12/demo-social-and-sentiment-data-generation-using-python-script

 

 

(If you find any mistakes or if you have any doubts in this blog please leave a comment)

I noticed very high interest in all topics that are related to marketing prospect functionality in SAP CRM.

On the other hand, I have the impression that only a few companies have actually started using marketing prospects in SAP CRM. Those that have get the chance to measure the real value of marketing prospects for their business.

 

Perhaps using marketing prospects is considered a big topic, or even a real mind shift? You move away from starting with prospect data that is already quite complete, in the sense of typically having a name and at least parts of the address, and instead start with, typically, only an e-mail address.

 

I would like to encourage you to stop hesitating.

For sure you are aware of the market trend to bring your prospects onto a great "customer journey" as early as possible; otherwise competitors could get them.

The pity is that most companies have a lot of prospect data but don't use it yet. They don't address these prospects with marketing activities. As said, such prospect data is far from complete; that is why, from my perspective, it is too often just stored but never used.

 

What about starting with a first small set of such prospect data? Bring this data into the SAP CRM system and define exactly the kind of lifecycle you want for these prospects. After a while, measure your success with them; if you are successful, go ahead with the next set of prospect data and increase the amount step by step.

 

Keeping my fingers crossed for your success!

 

Some additional information that could help to get this started:

For an overview of what is new in the marketing prospect area, you can read SAP Note 1896854.

For understanding how to measure your success with marketing prospects, you can read http://scn.sap.com/community/crm/marketing/blog/2014/03/07/how-to-measure-the-effectiveness-of-your-marketing-activities-with-prospects
