
SAP CRM: Marketing


Data Mining and changing Marketing strategies:

Once, during an onsite assignment on a project for a big telecom company in the USA, I got the opportunity to interact with one of its executive directors. He was using an executive dashboard with some very nice-looking charts. I discussed the dashboard with him, along with the information our project was meant to generate. He told me about a few sets of numbers and asked me to find a way to surface a pattern that could help his business make informed decisions.

It’s very common now to hear about KPIs (key performance indicators), the importance of metrics, their measurement, and so on. These terms are often overused in business intelligence, but they matter: businesses can’t run without forecasts and these measurements. Someone has summarized the business philosophy:

  • If you can’t measure something, you really don’t know much about it.
  • If you don’t know much about it, you can’t control it.
  • If you can’t control it, you are at the mercy of chance.


This sums up the importance of data mining: measured data translates into information, and information ultimately yields knowledge. This knowledge is what is used to run any business.

Over the last decade, technology has played a major role in defining marketing strategies. For example, in one SAP CRM service marketing assignment at a big auto company, we derived the target group (audience) for an e-campaign from customers' historical transactions. The forecast that these customers would need a specific service was based purely on data collected from their last transactions. This is the classic case of linear marketing, in line with established best practices.

With changing times, the customer is better informed, with multiple channels available for information and collaboration on social sites. Marketing strategies must embrace these changes and now work in a proactive mode, rather than reactively on the basis of historical transactions alone.

Now we have to work with customer intent while customers browse our retail websites, visit our stores, or query the call center. The requirement is to drive instant insight across lines of business, connect with business and social networks, and plug into the Internet of Things in real time.

There are many digital real-time solutions on the market built on newer technologies. Changing marketing strategies is no longer optional but mandatory. The information derived needs to be applied in marketing in real time (at the moment), because this insight into customer intent can change buying decisions.

Our marketing strategies need to be enabled with these newer technologies, which can derive information in real time. SAP offers the SAP C4C solution and the HANA platform as new-generation products that the market is embracing. SAP Hybris provides e-marketing enablement, along with advanced audience targeting and detection that can realize these real-time marketing scenarios.

I am an old-school marketer by trade, but this is completely new to me. I look forward to the journey, yet I remain a little afraid, as it is out of my comfort zone. It's a good thing I enjoy learning new things. As I have gone through and read along, I understand a great deal of the general concepts, but at the same time I am in totally unfamiliar territory. I've been doing marketing, web design, writing, and so on since 1999! You'd think I should already know this. Trust me, we are constantly learning... and if we are adventurous enough, we occasionally walk into uncharted territory and actually learn something that enhances the long-term journey, far beyond what I started with back then, with what I like to call "the HTML alphabet"!


I will take it one step at a time. Being excited to learn yet fearful at the same time is a true mix of emotions. It would even be safe to say that the next few days will be taken slowly, with extreme caution and care! This is a short note, as I am eager to begin my journey with a little reading and by seeking out informative videos that will break down the unfamiliarity of this intriguing but intimidating subject matter. It'll be fun! As nervous as I may feel, it will also be exciting to move into something new. In my next post, I promise to tell you all about what my discovery is and is not! Till next time.


Laurie Bullard (Reeal)

Newbie Student


P.S. I have much to learn, but I can guarantee I will, and that I will improve my content and my knowledge of where (and where not) to blog on a subject. Honestly, as of right now I do not know the answers yet. As I learn and receive my certification, I hope I get the opportunity to share what I have learned, to better enhance the capabilities of others as well.


This tutorial is intended as a guide for the creation of demo/test data only. The sample script provided is not intended for use in a productive system.


The following tutorial explains a way of generating demo data for the Gigya-related database tables in SAP Business Suite Foundation.

Following are the tables: the SMI_USR_* tables in SAP Business Suite Foundation. Those referenced explicitly in the script below are SMI_USR_FAVORITE, SMI_USR_SKILL, SMI_USR_PHONE, and SMI_USR_LIKE.
The pre-installed Python Interpreter from the SAP HANA client is used to execute a Python script from SAP HANA Studio.

To run the script, you will also need to make a few customizing and configuration settings in order to use the PyDev plugin in SAP HANA Studio.


Make sure that the following prerequisites are met before you start out:

• Installation of SAP HANA Studio and SAP HANA Client
Install SAP HANA Studio and the SAP HANA client, and request a HANA user with read, write, and update authorization for the foundation database tables listed in the previous section.



1. Configuring Python in SAP HANA Studio Client

Python version 2.6 is already embedded in the SAP HANA client, so you do not need to install Python from scratch. To configure the Python API to connect to SAP HANA, proceed as follows.

1. Copy and paste the following files from C:\Program Files\SAP\hdbclient\hdbcli to C:\Program Files\SAP\hdbclient\Python\Lib

                a. __init__.py
                b. dbapi.py
                c. resultrow.py

2. Copy and paste the following files from C:\Program Files\SAP\hdbclient to C:\Program Files\SAP\hdbclient\Python\Lib

                a. pyhdbcli.pdb
                b. pyhdbcli.pyd



On Windows, the default installation path for a 64-bit installation of SAP HANA Studio and the SAP HANA database client is C:\Program Files\SAP\..


If you opted for a 32-bit installation, the default path is C:\Program Files (x86)\SAP\..

2. Setting up the Editor to run the file

2.1. Install the PyDev plugin to use a Python IDE for Eclipse


The preferred method is to use the Eclipse IDE from SAP HANA Studio. To be able to run the Python script, you first need to install the PyDev plugin in SAP HANA Studio.

                    a. Open SAP HANA Studio. Click Help on the menu bar and select Install New Software

                    b. Click the Add button and enter the following information

                       Name : pydev

                       Location : http://pydev.org/updates

                    c. Select the settings as shown in this screenshot.

                    d. Press Next twice

                    e. Accept the license agreements, then press Finish.

                    f. Restart SAP HANA Studio.


2.2. Configure the Python Interpreter


In SAP HANA Studio, carry out the following steps:
     a. Select the menu entries Window -> Preferences

     b. Select PyDev -> Interpreters -> Python Interpreter

     c. Click the New button and type in an interpreter name. In the field Interpreter Executable, enter the following executable file: C:\Program Files\SAP\hdbclient\Python\Python.exe. Press OK twice.

2.3. Create a Python project

In SAP HANA Studio, carry out the following steps:

     a. Click File -> New -> Project, then select PyDev project

     b. Type in a project name, then press Finish

     c. Right-click on your project. Click New -> File, then type your file name and press Finish.

Customizing and Running the Script

1. Customizing the Python script

Copy and paste the code provided below into the newly created Python file, then enter the values for the following parameters:

     a. server – HANA server name (e.g. lddbq7d.wdf.sap.corp)

     b. port – HANA server port

     c. username_hana – HANA server username

     d. password_hana – HANA server password

     e. schema – schema name

     f. client – client number

     g. count – number of users for which the records shall be created


import sys, dbapi
from time import strftime
from random import randint, choice
#Returns prefix + ndigits
def randomN(prefix, ndigits):
    range_start = 10**(ndigits-1)
    range_end = (10**ndigits)-1
    return prefix + str(randint(range_start, range_end))
def get_patent_pub_name():
    part1 = choice(['Decomposition', 'Self-focusing', 'Ground-based', 'Process', 'Method', 'System', 'Apparatus'])
    part2 = choice(['of', 'and', 'for', 'in'])
    part3 = choice(['Carbon dioxide', 'Oxygen', 'Nitrogen', 'Hydride', 'Peroxide', 'Ultraviolet radiation', 'Light' ,'molecule'])
    part4 = choice(['conversion', 'generation', 'mixture', 'container', 'dispenser'])
    return ' '.join([part1, part2, part3, part4])
# def random_date(start, end):
#     return start + timedelta(seconds=randint(0, int((end - start).total_seconds())))
server = 'lddbbfi.wdf.sap.corp'
port = 30215
username_hana = ''
password_hana = ''
schema = 'SAPBFI'
client = '001'
#This is the number of users for which records shall be created
count = 5
hdb_target = dbapi.connect(server, port, username_hana, password_hana)
cursor_target = hdb_target.cursor()
# NOTE: profile_sql, identity_sql, account_sql, patent_sql, publication_sql, cert_sql,
# education_sql and workexp_sql are referenced further below but are not defined here;
# define them in the same 'upsert ... with primary key' pattern for their tables.
favorite_sql = 'upsert ' + schema + '.SMI_USR_FAVORITE(CLIENT, FAV_UUID, DATAPROVIDERNAME, USERIDINDATAPROVIDER, TYPE, NAME, CATEGORY) values (?,?,?,?,?,?,?) with primary key'
skill_sql = 'upsert ' + schema + '.SMI_USR_SKILL(CLIENT, SKILL_UUID, DATAPROVIDERNAME, USERIDINDATAPROVIDER, SKILL, SKILL_LEVEL, SKILL_YEARS) values (?,?,?,?,?,?,?) with primary key'
phone_sql = 'upsert ' + schema + '.SMI_USR_PHONE(CLIENT, PHONE_UUID, DATAPROVIDERNAME, USERIDINDATAPROVIDER, PHONETYPE, PHONENUMBER) values (?,?,?,?,?,?) with primary key'
like_sql = 'upsert ' + schema + '.SMI_USR_LIKE(CLIENT, LIKE_UUID, DATAPROVIDERNAME, USERIDINDATAPROVIDER, NAME, CATEGORY, ID, LIKECREATIONTIMSTAMP_UTC) values (?,?,?,?,?,?,?,?) with primary key'
channel_list = ['TW', 'FB','BLOG']
men_names = ['Mohan', 'Suresh', 'Salman', 'Nivin', 'Jayasurya', 'Vijay', 'Prabhas', 'Fahad', 'Fazil', 'Asif', 'Prithviraj', 'Muhammed', 'Shankar', 'Rajni', 'Ajith', 'Surya', 'Kamal']
women_names = ['Mamta', 'Kavya', 'Sindhu', 'Shriya', 'Trisha', 'Tabu', 'Simran', 'Meena', 'Asin', 'Kareena', 'Vidya', 'Sonakshi', 'Aiswarya', 'Preity', 'Namita', 'Sherin', 'Shamna', 'Miya' ,'Sruthy']
countrycodes = ['IN', 'DE', 'FR', 'US', 'CH', 'IT', 'RU']
professionalheadlines = ['Data mining Expert', 'Career consultant', 'Programming Guru', 'Final word in English Grammar', 'Wildlife Explorer', 'Geologist', 'Writer, Director', 'Singer, Actor', 'Expert Sculptor', 'Master in Physics', 'Astronomy Rockstar', 'Social Science Guru']
for i in range(count):
    dataprovidername = 'GIGYA'
    useridindataprovider = guid = randomN('_guid_', 29)
    counter = '1'
    socialmediachannel = choice(channel_list)
    socialuser = str(randint(111111111111111111, 999999999999999999))
    isloginidentity = 't'
    gender = choice(['1', '2'])
    if gender == '1':
        firstname = choice(men_names)
        lastname = choice(women_names)
        nickname = firstname[:3].lower() + '_' + choice(['star', 'therock', 'blazing', 'ismyelf', 'rocks', 'theking', 'kingest', 'royal', 'crazy', 'rider', 'fiery']) + str(randint(222, 999))
    else:
        firstname = choice(women_names)
        lastname = choice(men_names)
        nickname = firstname[:3].lower() + '_' + choice(['star', 'barbie', 'blazing', 'ismyelf', 'rocks', 'thequeen', 'queenest', 'royal', 'crazy', 'beauty', 'girl']) + str(randint(222, 999))
    isallowedforlogin = 't'
    isexpiredsession = 'f'
    lastlogintimestamp_utc = 0
    photourl = 'http://www.' + socialmediachannel.lower() + '.com/photo/' + socialuser
    thumbnailurl = 'http://www.' + socialmediachannel.lower() + '.com/thumbnail/' + socialuser
    age = str(randint(18, 90))
    birthday = ''
    birthmonth = ''
    birthyear = ''
    email = nickname + '@' + choice(['gmail', 'yahoo', 'mail', 'hotmail']) + '.com'
    countrycode = choice(countrycodes)
    state = 'test'
    city = 'test'
    zip = str(randint(2222222, 9999999))
    profileurl = 'http://www.' + socialmediachannel.lower() + '.com/' + socialuser
    proxiedemail = 'test'
    address = 'test'
    languages = 'test'
    professionalheadline = choice(professionalheadlines)
    bio = 'test'
    honors = 'test'
    industry = 'test'
    specialities = 'test'
    religion = 'test'
    politicalview = 'test'
    interestedin = choice(['1', '2'])
    relationshipstatus = ''
    hometown = 'test'
    followerscount = str(randint(0, 500))
    followingcount = str(randint(0, 500))
    username = firstname + socialuser
    locale = choice(['en_US', 'en_UK', 'en_IN'])
    isverified = choice(['t', 'f'])
    usertimezone = 'test'
    educationlevel = 'test'
    profile_record = (client, dataprovidername, useridindataprovider, firstname, lastname, nickname, photourl, profileurl, age, gender, birthday, birthmonth, birthyear, countrycode, state, city, address, bio, thumbnailurl, zip, proxiedemail, languages, honors, professionalheadline, industry, specialities, religion, interestedin, relationshipstatus, hometown, followerscount, followingcount, username, username, locale, isverified, usertimezone, educationlevel)
    cursor_target.execute(profile_sql, profile_record)
    identity_record = (client, dataprovidername, useridindataprovider, counter, socialmediachannel, socialuser, isloginidentity, nickname, isallowedforlogin, isexpiredsession, lastlogintimestamp_utc, photourl, thumbnailurl, firstname, lastname, gender, age, birthday, birthmonth, birthyear, email, countrycode, state, city, zip, profileurl, proxiedemail, address, languages, professionalheadline, bio, industry, specialities, religion, politicalview, interestedin, relationshipstatus, hometown, followerscount, followingcount, username, locale, isverified, usertimezone)
    cursor_target.execute(identity_sql, identity_record)
    useridsignature = 'test1'
    signaturetimestamp_utc = '123'
    isuserregistered = choice(['t', 'f'])
    userregstrdtimestamp_utc = '123'
    isuseraccountverified = choice(['t', 'f'])
    useraccntverifiedtimestamp_utc = '123'
    isuseraccntactive = choice(['t', 'f'])
    isuseraccntlockedout = choice(['t', 'f'])
    influencerrank = str(randint(0, 101))
    lastloginlocation_countrycode = countrycode
    lastloginlocation_state = 'test'
    lastloginlocation_city = 'test'
    lastloginlocation_latitude = '123'
    lastloginlocation_longitude = '123'
    oldestdataupdatedtimestamp_utc = '123'
    accountcreatedtimestamp_utc = '123'
    registrationsource = 'test'
    account_record = (client, dataprovidername, useridindataprovider, useridsignature, signaturetimestamp_utc, socialmediachannel, isuserregistered, userregstrdtimestamp_utc, isuseraccountverified, useraccntverifiedtimestamp_utc, isuseraccntactive, isuseraccntlockedout, influencerrank, lastloginlocation_countrycode, lastloginlocation_state, lastloginlocation_city, lastloginlocation_latitude, lastloginlocation_longitude, oldestdataupdatedtimestamp_utc, accountcreatedtimestamp_utc, registrationsource)
    cursor_target.execute(account_sql, account_record)
    num_of_patents = randint(1, 5)
    for i in range(num_of_patents):
        patent_uuid = randomN('patent_id', 12)
        title = get_patent_pub_name()
        summary = 'This patent is about the ' + title
        patentnumber = str(randint(222222222, 999999999))
        patentoffice = 'Patent office-' + countrycode
        status = choice(['Awarded', 'Submitted', 'Under scrutiny', 'Declined', 'Application received'])
        patentdate = ''
        patenturl = 'https://www.' + patentoffice + '.com/patents/' + patentnumber
        patent_record = (client, patent_uuid, title, dataprovidername, useridindataprovider, summary, patentnumber, patentoffice, status, patentdate, patenturl)
        cursor_target.execute(patent_sql, patent_record)
        pblctn_uuid = randomN('publctn_id', 12)
        pblctn_title = title
        pblctn_summary = choice(['A work on ', 'A write up on ', 'Book about ', 'Article: ', 'Book: ']) + title
        publisher = choice(['Mondadori', 'Bonnier', 'ThomsonReuters', 'Harper Collins', 'Oxford', 'Wiley', 'O\'reily', 'Shogakukan', 'Informa', 'Simon & Schuster', 'Pearson', 'Saraiva', 'Sanoma', 'Cambridge University Press'])
        publicationdate = ''
        publicationurl = 'https://www.' + publisher.replace(' ', '') + '.com/' + pblctn_title.replace(' ', '')
        publication_record = (client, pblctn_uuid, dataprovidername, useridindataprovider, pblctn_title, pblctn_summary, publisher, publicationdate, publicationurl)
        cursor_target.execute(publication_sql, publication_record)
        cert_uuid = randomN('cert_id', 12)
        cert_name = title
        cert_number = str(randint(23423423,345345345))
        authority = choice(['Mondadori', 'Bonnier', 'Harper Collins', 'Wiley', 'O\'reily', 'Shogakukan', 'Informa', 'Pearson', 'Saraiva', 'Sanoma', 'Cambridge University'])
        cert_startdate = ''
        cert_enddate = ''
        cert_record = (client, cert_uuid, dataprovidername, useridindataprovider, cert_name, authority, cert_number, cert_startdate, cert_enddate)
        cursor_target.execute(cert_sql, cert_record)
    edu_uuid = randomN('edu_id', 12)
    school = choice(['PES Institute of Technology', 'Bangalore University', 'IIT Madras', 'NIT Calicut', 'Government Engg college, Thrissur', 'VIT','MIT', 'MSRIT', 'RVCE', 'UVCE'])
    schooltype = choice(['Engineering', 'Technical Education', 'Higher studies', 'Advanced studies'])
    fieldofstudy = choice(['Computer Science', 'Electronics and Communication', 'Civil engineering', 'Mechanical Engineering', 'Electrical Engineering', 'Production engineering'])
    degree = choice(['B.Tech', 'MS', 'M.Tech', 'BS', 'B.Sc', 'M.Sc'])
    startyear = str(randint(2000, 2010))
    endyear = str(int(startyear) + 4)
    education_record = (client, edu_uuid, dataprovidername, useridindataprovider, school, schooltype, fieldofstudy, degree, startyear, endyear)
    cursor_target.execute(education_sql, education_record)
    work_uuid = randomN('work_id', 12)
    company = choice(['SAP Labs India', 'IBM', 'CISCO', 'Microsoft', 'Google', 'Yahoo', 'Housing', 'Wipro', 'Infosys', 'TCS'])
    companyid = randomN(company[:3], 9)
    work_title = choice(['Senior developer', 'Developer Associate', 'Programmer', 'Coder', 'Hacker', 'Software Engineer', 'Data expert', 'Web developer', 'System programmer', 'UI Expert', 'Quality Assurance', 'Knowledge Management', 'Architect', 'Team Lead'])
    companysize = str(randint(5000, 500000))
    work_startdate = ''
    work_enddate = ''
    work_industry = 'Software'
    iscurrentcompany = choice(['X', ''])
    workexp_record = (client, work_uuid, dataprovidername, useridindataprovider, company, companyid, work_title, companysize, work_startdate, work_enddate, work_industry, iscurrentcompany)
    cursor_target.execute(workexp_sql, workexp_record)
    num_of_skills = randint(1, 10)
    for i in range(num_of_skills):
        skill_uuid = randomN('skill_id', 12)
        skill = choice(['Algorithms', 'Analytics', 'Android', 'Applications', 'Blogging', 'Business', 'Business Analysis', 'Business Intelligence', 'Business Storytelling', 'Content Management', 'Content Marketing', 'Content Strategy', 'Data Analysis', 'Data Analytics', 'Data Engineering', 'Data Mining', 'Data Science', 'Data Warehousing', 'Database Administration', 'Database Management', 'Digital Marketing', 'Hospitality', 'Human Resources', 'Information Management', 'Information Security', 'Legal', 'Leadership ', 'Management', 'Marketing', 'Market Research', 'Media Planning', 'Microsoft Office Skills', 'Mobile Apps', 'Mobile Development', 'Network and Information Security', 'Newsletters', 'Online Marketing', 'Presentation', 'Project Management', 'Public  Relations', 'Recruiting', 'Relationship Management', 'Research', 'Risk Management', 'Search Engine Optimization', 'Social Media', 'Social Media Management', 'Social Networking', 'Software', 'Software Engineering', 'Software Management', 'Strategic Planning', 'Strategy', 'Technical', 'Training', 'UI / UX', 'User Testing', 'Web Content', 'Web Development', 'Web Programming', 'WordPress', 'Writing'])
        skill_level = choice(['Beginner', 'Medium', 'Advanced', 'Expert'])
        skill_level_years_dict = {'Beginner': 0, 'Medium': 4, 'Advanced': 10, 'Expert': 20}
        skill_years = skill_level_years_dict[skill_level]
        skill_record = (client, skill_uuid, dataprovidername, useridindataprovider, skill, skill_level, skill_years)
        cursor_target.execute(skill_sql, skill_record)
    fav_uuid = randomN('fav_id', 12)
    type = ''
    name = choice(['Eminem', 'Metallica', 'Led Zeppelin', 'Mother Jane', 'Avial', 'Lamb of God', 'Nirvana'])
    category = 'Music'
    favorite_record = (client, fav_uuid, dataprovidername, useridindataprovider, type, name, category)
    cursor_target.execute(favorite_sql, favorite_record)
    like_uuid = randomN('like_id', 12)
    type = ''
    name = choice(['Eminem', 'Metallica', 'Led Zeppelin', 'Mother Jane', 'Avial', 'Lamb of God', 'Nirvana'])
    category = 'Music'
    id = randomN('id', 7)
    likecreationtimstamp_utc = '123'
    like_record = (client, like_uuid, dataprovidername, useridindataprovider, name, category, id, likecreationtimstamp_utc)
    cursor_target.execute(like_sql, like_record)
    phone_uuid = randomN('phone_id', 12)
    phonetype = choice(['mobile', 'telephone'])
    phonenumber = str(randint(9132323154, 9947931930))
    phone_record = (client, phone_uuid, dataprovidername, useridindataprovider, phonetype, phonenumber)
    cursor_target.execute(phone_sql, phone_record)
print('Done pushing data for ' + str(count) + ' users into ' + server + '!')


2. Run the script from your editor

3. Checking the results in the database tables

The script randomly chooses values for various fields from a specified set of values. For example:

countrycode is chosen randomly from the list ['IN', 'DE', 'FR', 'US', 'CH', 'IT', 'RU'].

These lists can be modified as required for the demo.
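The list-driven randomization the script relies on is simply Python's random.choice, the same call the script itself uses. A minimal sketch (the 'JP' entry is a hypothetical addition, not part of the original script):

```python
from random import choice

# The candidate values, as defined in the demo-data script
countrycodes = ['IN', 'DE', 'FR', 'US', 'CH', 'IT', 'RU']

# Extend or trim the list to shape the generated demo data
countrycodes.append('JP')  # hypothetical extra entry

# Each generated record draws one value at random
countrycode = choice(countrycodes)
print(countrycode)
```

The same pattern applies to every list in the script (names, professional headlines, skills, and so on): edit the list and the generator's output changes accordingly.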

Over and out!

Related Blog posts:

Demo Social and Sentiment data generation using Python script


For a particular trade promotion, accruals are calculated according to the configured "accrual methods".

The Funds Management application provides accrual management capabilities, which means that accrual calculations can be done within the SAP CRM system and sent to SAP ERP, where the amounts are posted in SAP ERP Financials.

The Accrual Calculation job can use various reference data types, depending on what is defined in Customizing. Examples include sales volumes (SAP ERP), trade promotion management (TPM) planning data, or funds data. The accrual calculation results are stored within the accrual staging area.


In accrual posting, it is possible to schedule an accrual posting run in the batch processing framework to post the accrual results as fund postings, which are transferred to SAP ERP Financials as accounting documents.


The diagram below explains the configuration linkage of the accrual method for a particular trade promotion and its spend types.





Configuration Path

  1. SPRO -> Customer Relationship Management -> Funds Management -> Accruals -> Accrual Calculation Method







Below is an overview of the six accrual methods delivered as standard with SAP CRM Trade Promotion. However, it is possible to configure alternative accrual calculation methods on a project basis, as required.




The accrual method information can be seen under fund usage in the trade promotion. Please refer to the screenshot below.





'Analyze Sentiments' is a Fiori app that helps you perform sentiment analysis on the topics that interest you. To learn more about the app, please check out these links:



Quick integration of Sentiment Analysis powered by Analyze Sentiments into your app

Ready to get your feet wet?!


Here are a few steps to add a chart control into a UI5 control that supports aggregations (like sap.m.List) and to connect the OData service to this chart.

When you run the app, you will see a nice little chart added to each item in the aggregation, showing sentiment information.


Follow these steps to quickly integrate Sentiment Analysis capability into your already existing UI5 app:


1) Insert the chart into the appropriate location in your app. In the sample code below, the chart is embedded into a custom list item:

<List id="main_list" headerText="Vendors">
    <CustomListItem>
        <HBox justifyContent="SpaceAround">
            <ObjectHeader title="playstation" />
            <viz:VizFrame vizType="bar" uiConfig="{applicationSet:'fiori'}" height="250px" width="250px" />
        </HBox>
    </CustomListItem>
</List>

2) In the controller code, on initialization, add the following to fill the chart we added to the UI in the previous step with data:


//Get the reference to the OData service
var oModel = new sap.ui.model.odata.ODataModel("http://localhost:8080/ux.fnd.snta/proxy/http/lddbvsb.wdf.sap.corp:8000/sap/hba/apps/snta/s/odata/sntmntAnlys.xsodata/", true);
//Get the reference of the control where you want the charts embedded
var oList = this.getView().byId("main_list");
//The subject name is extracted from the title of each item in the list; the chart
//is the second control in the item's HBox (see the XML view above)
for (var i = 0; i < oList.getItems().length; i++) {
    var oChart = oList.getItems()[i].getContent()[0].getItems()[1];
    var sItemName = oList.getItems()[i].getContent()[0].getItems()[0].getTitle();
    //Now we read the data for each item in the list as per the subject that we
    //extracted from the list item, and bind it to the chart (synchronous read)
    oModel.read('/SrchTrmSntmntAnlysInSoclMdaChnlQry(P_SAPClient=\'' + self.sSAPClient + '\')/Results', null, ['$filter=SocialPostSearchTermText%20eq%20\'' + sItemName + "\' and " + "SocialPostCreationDate_E" + " ge datetime\'" + '2014-06-14' + '\'' + '&$select=Quarter,Year,SearchTermNetSntmntVal_E,NmbrOfNtrlSoclPostVal_E,NmbrOfNgtvSocialPostVal_E,NmbrOfPstvSocialPostVal_E'], false, function(oData, oResponse) {
        oChart.setVizProperties({
            interaction: {
                selectability: {
                    mode: "single"
                }
            },
            valueAxis: {
                label: {
                    formatString: 'u'
                }
            },
            legend: {
                title: {
                    visible: false
                }
            },
            title: {
                visible: false
            },
            plotArea: {
                dataLabel: {
                    visible: true
                },
                colorPalette: ['sapUiChartPaletteSemanticNeutral', 'sapUiChartPaletteSemanticBad', 'sapUiChartPaletteSemanticGood']
            }
        });
        var oChartDataset = new sap.viz.ui5.data.FlattenedDataset({
            measures: [{
                name: "Neutral",
                value: '{NmbrOfNtrlSoclPostVal_E}'
            }, {
                name: "Negative",
                value: '{NmbrOfNgtvSocialPostVal_E}'
            }, {
                name: "Positive",
                value: '{NmbrOfPstvSocialPostVal_E}'
            }],
            dimensions: [new sap.viz.ui5.data.DimensionDefinition({
                name: "Year",
                value: '{Year}'
            }), new sap.viz.ui5.data.DimensionDefinition({
                name: "Quarter",
                value: '{Quarter}'
            })],
            data: {
                path: "/results"
            }
        });
        oChart.setDataset(oChartDataset);
        var oChartModel = new sap.ui.model.json.JSONModel(oData);
        oChart.setModel(oChartModel);
        oChart.setVizProperties({
            valueAxis: {
                title: {
                    visible: true,
                    text: "Mentions"
                }
            },
            categoryAxis: {
                title: {
                    visible: true,
                    text: "Quarter"
                }
            }
        });
        var feedValueAxis = new sap.viz.ui5.controls.common.feeds.FeedItem({
            'uid': "valueAxis",
            'type': "Measure",
            'values': ["Neutral", "Negative", "Positive"]
        });
        var feedCategoryAxis = new sap.viz.ui5.controls.common.feeds.FeedItem({
            'uid': "categoryAxis",
            'type': "Dimension",
            'values': [new sap.viz.ui5.controls.common.feeds.AnalysisObject({
                'uid': "Year",
                'type': "Dimension",
                'name': "Year"
            }), new sap.viz.ui5.controls.common.feeds.AnalysisObject({
                'uid': "Quarter",
                'type': "Dimension",
                'name': "Quarter"
            })]
        });
        oChart.addFeed(feedValueAxis);
        oChart.addFeed(feedCategoryAxis);
    }, function() {
        sap.m.MessageBox.show("OData read failed", sap.m.MessageBox.Icon.ERROR, "Error", [sap.m.MessageBox.Action.OK]);
    });
}

PS: Depending on how you add the chart into your app, the above chunk of code will have to be adjusted to get the subject name and pass it to the chart.


In the sample code above, the chart in each custom list item is bound to its data in a loop. If you have added the chart to a similar control with an aggregation, you will have to modify the corresponding lines to get the list control, the chart reference, and the search term.



What else can you do with the Analyze Sentiments OData services?

Here’s some more information on the existing OData services for Analyze Sentiments and some ideas on how you can use them in your apps.



What information they give out


List of channels (code and name)


List of search terms (code and name)


List of (number of mentions; number of positive, negative, and neutral mentions; 'net sentiment value') for a search term, given out at daily/weekly/monthly/quarterly/yearly period granularity


List of social posts for a search term in a period


Net sentiment trend, in percent, for a search term over a specified period.

PS: The last three services retrieve data for all subjects when no filter is applied on search terms.



Calculations used:


Net sentiment = P - N

P = sum of the weights of the positive posts. A weight can be +1 (good) or +2 (very good).

N = sum of the magnitudes of the weights of the negative posts. A weight can be -1 (bad) or -2 (very bad).


Net sentiment trend percentage = (net sentiment in the last n days - net sentiment in the previous n days) / net sentiment in the previous n days.
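As a quick sketch, the two formulas above can be written out in Python (the function names are illustrative, not part of the service's API):

```python
def net_sentiment(positive_weights, negative_weights):
    # P = sum of the weights of positive posts (+1 good, +2 very good)
    # N = magnitude of the summed weights of negative posts (-1 bad, -2 very bad)
    P = sum(positive_weights)
    N = abs(sum(negative_weights))
    return P - N

def net_sentiment_trend(net_last_n, net_previous_n):
    # (net sentiment in the last n days - net sentiment in the previous n days)
    # divided by the net sentiment in the previous n days
    return (net_last_n - net_previous_n) / float(net_previous_n)

# Three positive posts (+1, +2, +1) and two negative posts (-1, -2):
# P = 4, N = 3, so the net sentiment is 1
print(net_sentiment([1, 2, 1], [-1, -2]))  # -> 1
# Net sentiment rising from 2 to 3 between consecutive windows is a +50% trend
print(net_sentiment_trend(3, 2))           # -> 0.5
```

Multiply the trend by 100 to express it as a percentage, which is how the service reports it.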


So, on the whole, we have the following information:

i) The number of positive, negative, neutral, and total mentions of a subject

ii) The net sentiment about a subject

iii) The net sentiment trend of a subject, expressed as a percentage.


Here are some sample ways in which external apps can start using these OData assets right away:



Use case - control that can be used - collection to be used:

  • Show the numbers (total, positive, negative, or neutral mentions, or net sentiment) related to a subject

  • Show the social posts related to a subject - Table, list, etc.

  • Show the net sentiment trend of a subject

  • Show a chart/graph with the numbers over a period



Related links:

One of the most overlooked aspects of contact management is the relationship between the contacts in your database and your sales process. It has been my experience that most companies develop their marketing databases with contact information independent and blind to their sales processes.

With an average of more than 5.6 people now reported to be involved in a purchase decision for a solution, you can’t develop a good database without first understanding how you sell.

A critical first step in helping customers through the buyer's journey is to understand who you need to communicate with along the way. Understanding the roles, and how decisions get made in support of specific solutions and business processes, is a prerequisite for developing the right kind of marketing database. For example, if you sell complex solutions that require engagement with economic and technical buyers, then the contacts in your database need to support these types of roles.

I once marketed to a very specialized audience within a defined number of accounts that could purchase our solutions only if their companies met very specific purchasing criteria. While I was able to find resources for the specialty titles I was seeking, I was not able to meet my second objective of locating these titles within the accounts and criteria we were targeting.

In this case, I ended up developing a custom database with the help of a marketing intern, pulling contacts from an online contact repository against predefined criteria. While this custom database required some initial development effort, our program responses, leads, and opportunity conversions grew exponentially. We were now able to target and reach the roles we needed to reach, in the accounts where we needed to do business.

As you begin to evaluate future contact list purchases, do so from the perspective of addressing your white space and gaps in the roles supporting your sales processes. As you do, I'm confident you will begin to view the contacts in your marketing database in an entirely new manner while further appreciating its ultimate power.

To learn more about SAP's in-memory database, SAP HANA, and SAP solutions for Big Data, I invite you to click on the following link.






Harry E. Blunt

Director, North America Industry Field Marketing
SAP America, Inc.

3999 West Chester Pike 
Newtown Square, PA 19073
M: 302-740-8293

E: harry.blunt@sap.com

With future innovation and sales success tied so closely to the delivery of relevant and personalized customer experiences, companies must get closer and more intelligently connected with their customers while paying greater attention to the user experience. To meet these objectives, companies must develop a holistic framework for managing customer intelligence and their different sales channels while differentiating their offerings through flexible solution delivery models.


Competing successfully in the digital economy requires an "always on," integrated approach for capturing and leveraging customer intelligence. Intelligence should be leveraged throughout all parts of the organization and needs to be visible and relevant at the point of a customer's transaction or engagement. By strategically combining transactional, qualitative, and social data with analytics and Big Data, companies can better understand opportunities for future innovation while engaging with customers more personally by becoming much more prescriptive around audience targeting and messaging.


Because customers expect personalized, relevant experiences regardless of the channels from which and how they engage, all organizations must have a holistic picture of customer engagement supported by a sound strategy focused on Omni-channel commerce. Providing customers with a unified and intelligently connected user experience grows customer relationships and captures customer intelligence that previously went undetected across disconnected, non-visible, and fragmented customer experiences. Companies can dramatically improve their customers' user experience and loyalty by offering personal, intelligently connected experiences over multiple channels of engagement and commerce.


Product and software innovators can draw closer to their customers by moving from consumption-based purchase models to solution- and subscription-based models. While there are significant benefits to moving to solution sales and recurring revenue streams like subscriptions, there are also added complexities impacting how these new solutions need to be developed, communicated, and delivered to the market. Operationally, moving toward solution- and subscription-based business models impacts how solutions are developed, how orders get configured, and ultimately how revenue is captured and realized. To fully capitalize on these emerging opportunities for selling solutions and subscriptions, you need operational and billing systems that accommodate and support a large degree of custom order configuration and business-requirements flexibility, extending from product development through solution delivery.


You can learn more about SAP solutions for Customer Engagement and Commerce and how leading manufacturing and software companies are providing differentiated value to their customers by accessing these complimentary resources.


Warmest regards,


Harry E. Blunt

Director, North America Industry Field  Marketing
SAP America, Inc.

3999 West Chester Pike 
Newtown Square, PA 19073
M: 302-740-8293

E: harry.blunt@sap.com

While social selling, customer experience, and buying personas grab the marketing headlines, I would like to pause and pay homage to the often ignored but equally important North American Industry Classification System (NAICS).


It’s my assertion that NAICS codes and their predecessor, SIC codes, are some of the most misunderstood and underutilized FREE resources available to marketers and business people.


Many of the challenges marketers face around content consumption and messaging relevance can be greatly addressed by doing a better job with industry and audience segmentation prior to audience engagement. Whether you are marketing to the Fortune 1000 or to an addressable market of over a million customers, those thousands or millions of customers do not all share the same business characteristics. The key to effective messaging and getting your message to the right people is reaching people where they “live” by targeting and messaging based on those unique characteristics.


Understanding and incorporating NAICS codes into your target audience strategy is a critical first step in setting future winning audience engagement tactics. Those six digit codes buried among all your other fields of customer data truly do matter.


To help you appreciate the magnitude and importance of these differences, I have attached a little light reading: 508 pages of individual NAICS code descriptions from the US Census Bureau. As you will see, sub-sectors operating within the same general industry act and behave very differently. While a chemical provider of chlorine and a paint company operate under the same general classification of chemicals, the way they manufacture and sell products is very different.


If you want another proof point for taking a more granular approach to audience targeting, consider that there are more than 10,000 active associations and thousands of specialty trade journals. These associations, trade publications, and their associated websites and social communities are successfully reaching audiences at the sub-sector level with very defined special interests. They continue to thrive and prosper in their niche markets because the content they provide and the issues they address, while niche, are extremely relevant to their audiences. These associations, niche trade publications, and social communities are successfully reaching their target audiences where they “live.”


Marketing program returns naturally improve when content reaches and resonates with its intended audiences. Having well-defined audience segmentation, aided by the NAICS, is a good first step companies can take to ensure they develop the right messages, heard and then acted upon by the right people.


To learn how SAP can help you engage more personally with your customers through Big Data and Omni-channel commerce, please check out the following complimentary resources.

Harry E. Blunt

Director, North America Industry Field Marketing
SAP America, Inc.

3999 West Chester Pike 
Newtown Square, PA 19073
M: 302-740-8293

E: harry.blunt@sap.com

If you have ever read a murder mystery or watched a criminal-investigation TV show, a common plot element is to start the story with a found dead body, often referred to affectionately as “John Doe.” The balance of the show or book typically then focuses on a protagonist working to uncover John Doe’s identity and how, and by whom, this person met his or her demise.


The investigator rarely just jumps in trying to solve the crime; more often than not, the investigation starts with an autopsy of the body. With additional information gleaned from the autopsy, the protagonist then begins to solve the mystery. As the protagonist gathers more information, the anonymous dead body quickly evolves into an identified person with distinct attributes, leading to the point when the crime finally gets solved.


For marketers, working with incomplete and unrefined responder data is a bit like working with an anonymous dead body. If all a marketer knows about a potential prospect or responder is the person’s name, company, and perhaps email, there is not much the marketer can do to engage effectively with that individual. Like the process of an initial autopsy, a marketer has to define and characterize this first responder to the best of his or her ability from the outset. Otherwise, there is no logical place from which to engage with this individual in future activities.


Successful target marketing must begin with basic contact hygiene. Missing contact details like titles, emails, phone numbers, and industry NAICS codes and supporting descriptions must be continuously appended and updated within databases. While it’s not always possible to have complete responder contact data initially, companies must make data hygiene a priority to ensure contact information is as complete as possible prior to future use for targeting and analysis.


Once a customer record is defined with uniquely definable attributes, you can then move forward with identifying and segmenting future audience targets by creating responder profiles based on specific responder behaviors. None of this can take place until contact data is defined and managed as uniquely identifiable attributes. Two aspects of a customer’s record that make it potentially unique are a person’s title and their industry NAICS code. The third aspect relates to leveraging and tracking responder behavior, but you can’t move successfully to step three without first having a person’s title and a correctly defined NAICS code and supporting description. Otherwise, you run the risk of jumping to conclusions based on inaccurate or incomplete data. To illustrate the point, let me provide a fictitious example. Suppose that for the last six months you have been running a marketing campaign on Big Data. In that campaign you always included responders from prior related activities, including those without complete contact records, and you continually pushed every Big Data activity to those same responders.


At the conclusion of the campaign, while participation had been steadily increasing, you saw only marginal movement in responders converting to leads and in leads moving into opportunities. As a postmortem, you decide to do some additional data hygiene on your responders with incomplete contact records. With a more comprehensive responder profile, this is what you find: a large percentage of the newly defined responders came from Life Sciences, particularly Medical Device companies, with titles focused on quality and regulatory operations compliance. With this new insight, you take some time to research the topic of Big Data in the Medical Device community and discover that Big Data is actively being showcased there as an issue and opportunity. You then tweak your programming and messaging to focus more on operationally focused Medical Device buying centers during the second half of the year, and both leads and opportunities substantially improve.


While this is a fictitious example, here is the important point: until you’re able to develop a more comprehensive responder profile, there is little you can do to move confidently forward with meaningful future engagement and analysis. Target audiences and responders are just “anonymous dead bodies” until they can be characterized and grouped into uniquely definable attributes.


And while data hygiene and segmentation rarely command the same reverence as “improving customer experience,” like the “murder mystery autopsy” they are critical disciplines marketers must first master to ensure relevant audience conversations and future business opportunities.


Harry E. Blunt

Director, North America Industry Field Marketing
SAP America, Inc.

3999 West Chester Pike 
Newtown Square, PA 19073
M: 302-740-8293

E: harry.blunt@sap.com

SAP Canada hosted two events last week focusing on the theme of running simple and innovating business processes in a complex global world. They featured fascinating success stories from SAP customers and an eye-opening presentation from TED fellow and Complexity Scientist Eric Berlow on embracing complexity to come up with innovative answers to big data challenges.

Eric kicked off proceedings with his intriguing perspective on how we can leverage the explosion of data to build a ‘data-scope’ which allows us to connect the dots and see simple patterns that are invisible to the naked brain. He calls this ‘finding simple’ from complex causality and multi-dimensionality, a theory that can be applied across digital media and business strategy. He closed by saying that businesses need to focus on being intelligently simple rather than merely simplistic. In other words, IT has to be distilled down to offer real business insights, rather than simplified down to nothing at all.


Positioning SAP as the go-to intelligent business simplifier

Snehanshu Shah, Vice President, HANA Centre of Excellence, SAP Global, moved the conversation to the cost of growing complexity in business. He introduced SAP’s S/4HANA business suite as the on-premise and cloud solution designed to give organizations the freedom and drive to innovate their business processes. By taking the core functionality of R/3, simplifying it, and applying the streamlined Fiori interface, S/4HANA requires less hardware at lower cost while providing faster answers. This is business intelligence and analytics at the fingertips of every line of business.

Sam Masri, Managing Principal, Industry Value Engineering, SAP Canada, continued the discussion by calling complexity business’s most intractable challenge today – a view rising across many industries. While so many enterprises throw a vast portion of their IT spend at keeping the lights on, they are missing out on a TCO reduction of up to 22% by failing to invest more in innovation.

Sharing our customers’ success in simplification and innovation

The event was rounded off with some valuable insights into how some of SAP’s key customers are using our software to run simple. First, Albert Deileman and Jason Leo of the Healthcare of Ontario Pension Plan (HOOPP) told us of their search for a ‘pixie dust’ solution to the organization’s complexity challenges. For them, the SAP HANA Enterprise Cloud (HEC) solution is all about simplicity with results: real-time data speed and agility, perfect replication, rapid queries and modelling, and faster time to market.

John Harrickey from CSA Group was next up, telling us how the transition from ERP to HEC fuelled the company’s growth and expansion into Europe and Asia. It has enabled better employee engagement by mobilizing applications and improving insights, and better customer engagement by enhancing collaboration and responsiveness. He explained how the company has been able to build HANA upon existing SAP technology to create a seamless experience with high functionality, resulting in reduced complexity and a more productive business.

Wally Council of HP Converged Solutions spoke of the company’s need to flip IT investment from keeping the lights on to funding innovation. He brought up the point that the operational complexity challenges faced by huge and geographically-diverse modern enterprises have to be tackled head-on by a major rethink of key business processes and user experiences.

To top it off, our own Mike Golz, CIO, SAP Americas, reminded us that SAP itself runs SAP, and remains our first and best reference customer.

The richly-attended talks were held in the Four Seasons Hotel in Toronto on March 12 and Flames Central in Calgary on March 10. If you would like to find out more about SAP’s Simplify to Innovate initiative and the S/4HANA business suite, please visit the event landing page and www.sap.com/HANA. The presentations from the event are available on SAP Canada’s JAM page.


This tutorial is intended as a guide for the creation of demo/test data only. The sample script provided is not intended for use in a productive system.


This tutorial explains how to create demo data for the Business Suite Foundation database tables SOCIALDATA and SMI_VOICE_CUST using a Python script. The data is saved as Excel files. You can find more information about Analyze Sentiment, a Fiori app from Social Intelligence, here - New videos on SAP Sentiment Analysis on YouTube available

It will help you get the context of this post and gives a basic idea of what Social Intelligence is about.

Make sure that the following prerequisites are met before you start:

• Installation of Python 2.x for Windows (the scripts below use Python 2 syntax)

Install Python 2.x  for your platform - Download Python | Python.org
PS: During installation, select the option to add Python's installation directory to Windows PATH variable.


Install the required python modules: setuptools, jdcal, openpyxl, xlrd.


Specifying Input and Customizing the scripts

There are two variations of the script that can be used depending on the use case.

Script 1 - gen_posts_count.py

When to use: Use this script when you have a list of search terms, a time range, and the average number of posts per week for which you want to generate the demo data. With this script you cannot control the sentiment value of the posts. Sentiment indicates whether the social user is expressing something positive, neutral, or negative in the social post; this script generates posts with random sentiment.


Input File: post_count_per_week.xlsx in which you have to maintain the products and the corresponding number of posts per week to be generated.

See the attached screenshot - post_count_total.PNG


Modification to the script: The time range has to be specified at the end of the Python script. Open the script in a text editor and modify this line to give the start date, the end date (each as [day, month, year]), and the number of weeks the time span comprises: gen_posts([1, 12, 2013], [29, 1, 2014], 8)
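As a side note on the date arguments: gen_posts expects each date as a [day, month, year] list. This small sketch mirrors the conversion the script performs internally, so you can see how such a list becomes a datetime:

```python
from datetime import datetime

# The scripts take dates as [day, month, year] lists; e.g. [1, 12, 2013]
# means 1 December 2013. gen_posts builds datetimes from them like this:
s_date = [1, 12, 2013]   # start date: 1 Dec 2013
e_date = [29, 1, 2014]   # end date: 29 Jan 2014

start_datetime = datetime(s_date[2], s_date[1], s_date[0], 0, 0, 0)
end_datetime = datetime(e_date[2], e_date[1], e_date[0], 0, 0, 0)

print(start_datetime.date())                 # 2013-12-01
print((end_datetime - start_datetime).days)  # 59
```

Getting the day and month order wrong here silently shifts the whole generated time range, so it is worth double-checking before running the script.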



# Generates a collection of dummy social media data
from random import choice, randint, random
from time import strftime
from datetime import timedelta, datetime
from openpyxl import Workbook
import xlrd
def get_products_and_counts():
    book = xlrd.open_workbook('post_count_per_week.xlsx')
    sh = book.sheet_by_index(0)
    products = []
    counts = []
    for rownum in range(sh.nrows):
        # each row: product name in column 0, posts per week in column 1
        products.append(sh.cell_value(rownum, 0))
        counts.append(sh.cell_value(rownum, 1))
    return products, counts
def randomN(prefix, ndigits):
    range_start = 10**(ndigits-1)
    range_end = (10**ndigits)-1
    return prefix + str(randint(range_start, range_end))
def random_date(start, end):
    return start + timedelta(
        seconds=randint(0, int((end - start).total_seconds())))
def gen_posts(s_date, e_date, no_of_weeks):
    social_filename = 'SOCIALDATA' + '.xlsx'
    voice_filename = 'SMI_VOICE_CUST' + '.xlsx'
    social_book = Workbook(optimized_write = True)
    social_sheet = social_book.create_sheet()
    voice_book = Workbook(optimized_write = True)
    voice_sheet = voice_book.create_sheet()
    start_datetime = datetime(s_date[2], s_date[1], s_date[0], 0, 0, 0)
    end_datetime = datetime(e_date[2], e_date[1], e_date[0], 0, 0, 0)
    client_list = ['005']
    user_list = ['Ashwin', 'Saiprabha', 'Anupama', 'Debasish', 'Ajalesh', 'Raghav', 'Dilip', 'Rajesh', 'Saju', 'Ranjit', 'Anindita', 'Mayank', 'Santosh', 'Kavya', 'Jithu']
    #product_list = ['Oz Automotive', 'Samba Motors', 'Smoggy Auto', 'Camenbert Cars', 'Curry Cars', 'Driftar', 'eRacer', 'Rouble Motor Company', 'MoonRider', 'Bumble']
    channel_list = ['TW', 'FB']
    adj_set = {"good" : ['good', 'zippy', 'beautiful'],
          "very_good" : ['exuberant'],
          "neutral" : ['ok'],
          "bad" : ['bad', 'annoying'],
          "very_bad" : ['awful']}
    adj_kind_from_senti = { 2 : "very_good",
                1 : "good",
                0 : "neutral",
                -1 : "bad",
                -2 : "very_bad"}
    post_templates = {"very_good" : ["Hey guys, try {0}, it is {1}! Dont miss!",
                      "People, I got the new {0} - {1}!! Brilliant performance! Give a try!",
                      "If you havent yet, try {0}. The speed is fantastic, It is {1}!",
                      "The brandnew {0} - The product quality is impressive!! Verdict - {1}",
                      "{0} is {1}. Highly recommended"],
            "good"      : ["Today I tried {0}. It is {1}.",
                            "The new {0}. Product quality is top, is {1} and worth a try",
                            "Did you checkout {0}?, {1} thing.",
                            "Latest version of {0} is {1}. Excellent performance for me!",
                            "Didnt know {0} is {1} stuff. Superb speed!. Do try it."],
            "neutral"  : ["Checked out {0}. It is {1}",
                            "The new {0} is {1}. Dont expect much.",
                            "Difficult to judge the new {0}. It is {1}.",
                            "Heard the new {0} is {1}. Any first hand info on the performance?",
                            "Anyone know how is {0}, reviews say it is {1}. Quality is what matters"],
            "very_bad"  : ["OMG!! Tried {0}. Its performance is damn too low. It is {1}",
                            "Never go for {0}, the speed is very less, {1} thing.",
                            "Oh, such a {1} thing {0} is!",
                            "Dont ever think of getting a {0}, very bad product quality. It is {1}",
                            "Why do we have {1} products like {0}? :("],
            "bad"      : ["Tried the new {0}. It is not recommended - {1}",
                            "Shouldnt have gone for the {1} {0}. Pathetic product quality.",
                            "First hand experience: {0} is {1}!",
                            "My {0} is {1}. The speed is way too less. Is it just me?!",
                            "The new {0} is {1}. Performance is disappointing. Fail!!"]}
    products, counts = get_products_and_counts()
    for j in range(len(products)):
        product = products.pop()
        count = int(counts.pop()) * no_of_weeks
        print product, count
        for k in range(count):
            sentiment = randint(-2, 2)
            sentiment_valuation = sentiment + 3 if sentiment else sentiment
            adj_kind = adj_kind_from_senti[sentiment]
            adj = choice(adj_set[adj_kind])
            client = choice(client_list)
            guid = randomN('POB', 29)
            user = choice(user_list)
            channel = choice(channel_list)
            post_template = choice(post_templates[adj_kind])
            posted_on = random_date(start_datetime, end_datetime)
            post = post_template.format(product, adj)
            social_sheet.append([client, guid, channel[:2].upper() + str(randomN('',6)), 'English', channel, user, posted_on.strftime("%a, %d %b %Y %H:%M:%S +0000"),'','','','','','','','','','','', product,'', post])
            voice_sheet.append([client, guid, 'Text Analysis', 'Sentiment', '', sentiment, sentiment_valuation,'', '', posted_on.strftime("%Y%m%d%H%M%S")])
            voice_sheet.append([client, guid, 'Text Analysis', 'PRODUCT', product, sentiment, sentiment_valuation,'', '', posted_on.strftime("%Y%m%d%H%M%S")])
    social_book.save(social_filename)
    voice_book.save(voice_filename)
    print 'Demo data saved in SOCIALDATA.xlsx, SMI_VOICE_CUST.xlsx'
#modify this line => gen_posts(start_date, end_date, no.of weeks for which data is to be generated)
gen_posts([1, 12, 2013], [28, 1, 2014], 8)

PS: You can also configure other aspects such as usernames, channels, countries, locations, adjectives, and post templates.



Script 2 - gen_senti_count.py

When to use: Use this script when you have a list of search terms, a time range, and the number of positive, negative, and neutral posts to be generated for each product in that time span. With this script you can control the sentiment value of the posts.


Input File: senti_count_total.xlsx, in which you have to maintain the products and the corresponding number of positive, negative, and neutral posts to be generated. See the attached screenshot - senti_count_total.PNG


Modification to the script: The time range has to be specified at the end of the Python script. Open the script in a text editor and modify this line to give the start and end dates (each as [day, month, year]): gen_posts([22, 05, 2014], [05, 06, 2014])




# Generates a collection of dummy social media data
from random import choice, randint, random
from time import strftime
from datetime import timedelta, datetime
from openpyxl import Workbook
import xlrd
#Reads rows like "NIKE 23 14 45" from senti_count_total.xlsx - the counts of pos, neg and neu posts to be generated for NIKE in the given period
def get_products_and_senti_num():
    book = xlrd.open_workbook('senti_count_total.xlsx')
    sh = book.sheet_by_index(0)
    products = []
    senti_num = []
    for rownum in range(sh.nrows):
        # each row: product name, then pos, neg and neu post counts
        row = sh.row_values(rownum)
        products.append(row[0])
        senti_num.append(row[1:4])
    return products, senti_num
#Returns prefix + ndigits
def randomN(prefix, ndigits):
    range_start = 10**(ndigits-1)
    range_end = (10**ndigits)-1
    return prefix + str(randint(range_start, range_end))
def random_date(start, end):
    return start + timedelta(
        seconds=randint(0, int((end - start).total_seconds())))
def gen_posts(s_date, e_date):
    social_book = Workbook(optimized_write = True)
    social_sheet = social_book.create_sheet()
    voice_book = Workbook(optimized_write = True)
    voice_sheet = voice_book.create_sheet()
    start_datetime = datetime(s_date[2], s_date[1], s_date[0], 0, 0, 0)
    end_datetime = datetime(e_date[2], e_date[1], e_date[0] + 1, 0, 0, 0)
    client_list = ['001']
    user_list = ['John', 'William', 'James', 'Jacob', 'Ryan', 'Joshua', 'Michael', 'Jayden', 'Ethan', 'Christopher', 'Samuel', 'Daniel', 'Kevin', 'Elijah']
    channel_list = ['TW', 'FB']
    countries = ['India', 'Germany', 'France', 'The United States']
    locations = {"India" : ["Bangalore", "Chennai", "Delhi", "Mumbai"],
                "Germany": ["Berlin", "Munich", "Stuttgart", "Frankfurt"],
                "France": ["Paris", "Marseille", "Lyon"],
                "The United States": ["Florida", "Washington DC", "Texas", "Dallas"]}
    country_codes = {"India": "IN",
                    "Germany" : "DE",
                    "France" : "FR",
                    "The United States": "US"}
#The adj_set has the adjectives that will be used in the posts.
    adj_set = {"good" : ['good', 'nice'],
          "very_good" : ['refreshing', 'magical'],
          "neutral" : ['ok'],
          "bad" : ['not good', 'substandard', 'unpleasant', 'poor'],
          "very_bad" : ['awful', 'horrible', 'terrible']}
    adj_kind_from_senti = { 2 : "very_good",
                1 : "good",
                0 : "neutral",
                -1 : "bad",
                -2 : "very_bad"}
    post_templates = {"very_good" : ["Hey guys, try {0}, it is {1}! Dont miss!",
                      "People, I got the new {0} - {1}!! Brilliant! Give a try!",
                      "I'm loving {0}!!",
                      "Using {0} feels great!!",
                      "{0} is {1}. My body feels so refreshing",
                      "{0} - The product quality is impressive!! Verdict - {1}",
                      "{0} is {1}. Highly recommended",
                      "{0} gives instant refreshing moisturizing effect!"],
            "good"      : ["Today I tried {0}. It is {1}.",
                            "The new {0}. Product quality is top, is {1} and worth a try",
                            "Did you checkout {0}?, {1} thing.",
                            "I like {0}. It smells nice and so soft",
                            "Didnt know {0} is {1} stuff. Superb!. Do try it."],
            "neutral"  : ["Checked out {0}. It is {1}",
                            "The new {0} is {1}. Dont expect much.",
                            "Heard the new {0} is {1}. Any first hand info on the it?",
                            "Anyone know how is {0}, reviews say it is {1}. Quality is what matters"],
            "very_bad"  : ["OMG!! Tried {0}. Its not for you. It is {1}",
                            "Never go for {0}, the quality is very less, {1} thing.",
                            "Oh, such a {1} thing {0} is!",
                            "{0} is sold out in my area - Sad!",
                            "Couldnt find {0} in my local store. Bad that I cant get that.",
                            "Local stored have sold out {0}, please send in more!!",
                            "We need more stock of {0} in here. Out of stock everywhere I check",
                            "{0} is out of stock - So sad!",
                            "Dont ever think of getting a {0}, very bad product. It is {1}",
                            "Why do we have {1} products like {0}? :("],
            "bad"      : ["Tried the new {0}. It is not recommended - {1}",
                            "Shouldnt have gone for the {1} {0}. Pathetic product quality.",
                            "First hand experience: {0} is {1}!",
                            "10 stores and no {0}. I want it desperately",
                            "Tried finding {0}. Can't find it in any stores in my area.",
                            "My {0} is {1}. The quality is way too less. Is it just me?!",
                            "The new {0} is {1}. It is disappointing. Fail!!"]}
    products, senti_num = get_products_and_senti_num()
    for j in range(len(products)):
        product = products.pop()
        senti = senti_num.pop()
        pos = int(senti[0])
        neg = int(senti[1])
        neu = int(senti[2])
        print product, "-", pos, neg, neu, " posts created."
        for k in range(pos + neg + neu):
            if pos:
                sentiment = randint(1,2)
                pos -= 1
            elif neg:
                sentiment = randint(-2,-1)
                neg -= 1
            else:
                sentiment = 0
                neu -= 1
            sentiment_valuation = sentiment + 3 if sentiment else sentiment
            adj_kind = adj_kind_from_senti[sentiment]
            adj = choice(adj_set[adj_kind])
            client = choice(client_list)
            guid = randomN('POB', 29)
            user = choice(user_list)
            channel = choice(channel_list)
            post_template = choice(post_templates[adj_kind])
            posted_on = random_date(start_datetime, end_datetime)
            post = post_template.format(product, adj)
            num_of_votes = str(randint(0, 150))
            if channel == 'TW':
                post_link = 'http://twitter.com/' + user + randomN('', 5)
            if channel == 'FB':         
                post_link = 'http://facebook.com/' + user + randomN('', 5)
            post_type = choice(['Status', 'Link', 'Photo', 'Video'])
            country = choice(countries)
            location = choice(locations[country])
            country_code = country_codes[country]
            latitude = str(randomN("", 2) + '.' + str(randint(2, 20)))
            longitude = str(randomN("", 2) + '.' + str(randint(2, 20)))
            social_sheet.append([client, guid, channel[:2].upper() + str(randomN('',6)), 'English', channel, user, posted_on.strftime("%a, %d %b %Y %H:%M:%S +0000"), post_type, post_link, num_of_votes, location, country, latitude, longitude, '3', 'Demo post', user, 'Demo User Retrieval', product, posted_on.strftime("%Y%m%d%H%M%S"), post, posted_on.strftime("%Y%m%d%H%M%S"), 'Demo Post Parent', "DemoJ", country_code, 'DS'])
            voice_sheet.append([client, guid, 'TextAnalysis', 'Sentiment', 'DEMO', sentiment, sentiment_valuation, 'J', posted_on.strftime("%Y%m%d"), posted_on.strftime("%Y%m%d%H%M%S")])
            voice_sheet.append([client, guid, 'TextAnalysis', 'PRODUCT', product, sentiment, sentiment_valuation, 'J', posted_on.strftime("%Y%m%d"), posted_on.strftime("%Y%m%d%H%M%S")])
    print 'Demo data saved in SOCIALDATA.xlsx, SMI_VOICE_CUST.xlsx'
#modify this line => gen_posts(start_date, end_date)
gen_posts([22, 05, 2014], [05, 06, 2014])

Running the script

Both of the above scripts can be run in the following manner:

1) Save the script and input excel file in a directory.

2) Hold down the Shift key and right-click inside that directory.

3) Select – ‘Open command window here’

4) At the command line, type: python <scriptname>

5) Done. If everything worked as expected, you will have SOCIALDATA.xlsx and SMI_VOICE_CUST.xlsx files generated in that folder with the dummy data.



As mentioned in the disclaimer already, these scripts should be used only for demo purposes.


The screenshots attached show what the input Excel files should look like.


If you run into any issues during the setup or execution of the script, please let me know in the comments section.

This blog highlights videos on SAP Sentiment Analysis and its usage in SAP Demand Signal Management which were published recently on YouTube.



SAP Analyze Sentiments - Introduction

This video under https://www.youtube.com/watch?v=HH8W7BOfL_s gives a short, illustrative explanation of what Sentiment Analysis is about and how meaningful insights can be derived from it for your business.


More and more people are active in social networks. At the same time people are increasingly looking for products online before they make buying decisions. The Analyze Sentiments app helps you access and analyze unstructured social media content and derive meaningful insights for your business. Being an integral part of various business processes, for example, in the area of Sales and Procurement, the Analyze Sentiments app is easily accessible from the SAP Fiori Launchpad.

SAP Demand Signal Management - Supported by Analyze Sentiments

This video under https://www.youtube.com/watch?v=1D2nKGf1izA describes how Sentiment Analysis can be used in Demand Signal Management. You can consider this to be an example, as Sentiment Analysis can be used in various business processes.


SAP Demand Signal Management gives you real-time insights into market and sales shares of your own brands and products - and those of your competitors.

Sentiment Analysis is an integral part of Demand Signal Management processes. The Analyze Sentiments app helps you to analyze the latest social media sentiments and derive meaningful insights for your business. See how SAP Demand Signal Management and the Analyze Sentiments app work together to help you to make faster and better decisions for your business.

You can find more information on SAP Demand Signal Management under the link http://scn.sap.com/community/demand-signal-management/blog/2013/08/27/an-introduction-to-sap-demand-signal-management

I want to provide an overview of possible decimal issues in BPS Planning as used in the CRM Marketing scenario. There are some known issues related to decimal settings in BPS planning. This blog explains the design of the decimal validation and how to set up the planning layout correctly. Furthermore, it contains a collection of solutions for known issues.


When looking at the planning layout created for a trade promotion in CRM, we can see a key figure defined with 2 decimals.

planning layout .jpg

tpm planning layout2.jpg

I will take this example to explain the design.


General Settings


When setting up the planning layout, the following four levels of dependencies need to be considered.


1. UPX Layout Definition

2. BPS0 Customizing

3. Key Figure

4. Data Element


When the planning layout is rendered the first level that is considered is the UPX Layout Definition. In transaction UPX_MNTN the number of decimals can be defined:

upx_mntn bonus display.jpg

  upx_mntn kf22.jpg

The decimal places set in the UPX layout define the number of decimals displayed in the planning layout. This number is for display purposes only.


On the second level there is the BPS0 Customizing. This is the first level that defines how the key figures are stored. That means key figures are rounded to the number of decimals defined in BPS0 and stored as the rounded value.

bps0 dec.jpg

For data consistency reasons, the number of decimals defined in UPX_MNTN must be smaller than or equal to the number of decimals defined in BPS0. Otherwise an error is raised.


If there are no decimals defined in BPS0, the same rule applies to the key figure definition in RSD1.

rsd1 key fig.jpg

If there are no decimals defined in the key figure details, the data element for the key figure is considered.

rsd1 key fig data element.jpg

rsd1 key figure data element2.jpg

The decimals defined in UPX_MNTN are considered for displaying the key figures, whereas the decimals defined in the levels below (BPS0 and lower) are considered for calculations and for storing the values. You should not have more decimals in the layout than you can actually save in the database. The general rule is the following:


No of display decimals <= No of decimals used for calculation
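As a minimal sketch of this four-level fallback and the display rule, the logic can be modeled as follows. The function names and structure are my own illustration, not SAP code; only the rule itself comes from the text above:

```python
def storage_decimals(bps0=None, key_figure=None, data_element=None):
    """Resolve the decimals used for calculation and storage.

    The first level that defines decimals wins: BPS0, then the
    key figure definition (RSD1), then the key figure's data element.
    """
    for level in (bps0, key_figure, data_element):
        if level is not None:
            return level
    return 0

def layout_is_consistent(upx_display_decimals, **levels):
    """Display decimals must not exceed the decimals stored in the database."""
    return upx_display_decimals <= storage_decimals(**levels)

# UPX_MNTN shows 2 decimals, BPS0 stores 2 decimals -> consistent
print(layout_is_consistent(2, bps0=2))        # True
# UPX_MNTN shows 3 decimals, but only 2 are stored -> error case
print(layout_is_consistent(3, bps0=2))        # False
# No BPS0 setting: fall back to the key figure definition in RSD1
print(layout_is_consistent(2, key_figure=3))  # True
```

The second call corresponds to the situation behind the "Enter key figure 0,000 with a valid format" error described in the KBA below.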

Please refer to the following KBA for further information about the dependencies between the different levels:


1936500 - Enter key figure 0,000 with a valid format (2 decimal places)



Zero Decimal key figures


For key figures defined with zero decimal places, the following needs to be considered.


When 0 decimal places are defined in UPX_MNTN, the system considers the BPS0 settings. To display the key figure with 0 decimals, both the UPX_MNTN and BPS0 decimals need to be set to zero.


upx_mntn zero decimals.jpg
bps zero decimals.jpg


If UPX_MNTN has 0 decimals defined but BPS0 has 2, the settings from BPS0 are considered and the key figure is displayed with 2 decimals.
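This zero-decimal fallback can be sketched in a few lines. The function name is hypothetical; the behavior follows the rule stated above (see SAP Note 2021933):

```python
def effective_display_decimals(upx_mntn, bps0):
    """Decimals actually shown in the layout for the zero-decimal case.

    When UPX_MNTN is set to 0, the system falls back to the BPS0
    setting; only when both are 0 is the key figure shown without
    decimals.
    """
    if upx_mntn == 0:
        return bps0
    return upx_mntn

print(effective_display_decimals(0, 0))  # 0 -> displayed without decimals
print(effective_display_decimals(0, 2))  # 2 -> BPS0 setting wins
```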


This design is valid for zero-decimal key figures only. For further information, please refer to the following note:


2021933 - Use decimals settings from BPS when Enh Layout is set to 0



Percentage based key figures


What needs to be considered for percentage based key figures?

tpm laoyut percentage.jpg

The number of displayed decimals is taken from the UPX_MNTN settings as well.

upx_mntn percentage.jpg

This is similar to any other key figure definition. The difference is the way the system stores the percentage values. Depending on the parameter UPX_KPI_KFVAL_PERC_CONV_TO_BW, the percentage value is stored divided by 100: a value of 10% is therefore stored as 0,1. To avoid losing precision, this requires the percentage key figure to have 2 more decimals defined in BPS0 than in UPX_MNTN.

bpd0 percentage.jpg

This is documented in the following SAP note:


1407682 - Planning services customizing for percentage key figures


With the parameter UPX_KPI_KFVAL_PERC_CONV_TO_BW set, percentage key figure values are stored in BW as 10 for 10%. If the parameter is set, the above decimal setting is not required. Information about the UPX_KPI_KFVAL_PERC_CONV_TO_BW parameter in the UPC_DARK2 table is available in the following SAP note:


1867095 - Planning Services Customizing Flags in the UPC_DARK2 Table
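The precision requirement for percentage key figures can be illustrated with a toy rounding model. The function and its name are mine, not part of SAP TPM; it only mimics the divide-by-100 storage described above:

```python
def store_percentage(display_value, bps0_decimals, conv_to_bw=False):
    """Store a percentage key figure as BPS0 would round it.

    Without UPX_KPI_KFVAL_PERC_CONV_TO_BW the value is divided by 100
    before being rounded to the BPS0 decimals; with the flag set it is
    stored as entered.
    """
    raw = display_value if conv_to_bw else display_value / 100.0
    return round(raw, bps0_decimals)

# UPX_MNTN shows 1 decimal; BPS0 holds 1 + 2 = 3 decimals
print(store_percentage(10.5, 3))                   # 0.105 -> no precision lost
# With only 1 decimal in BPS0 the stored value is truncated
print(store_percentage(10.5, 1))                   # 0.1   -> precision lost
# With the conversion flag set, 10.5% is stored as 10.5 directly
print(store_percentage(10.5, 1, conv_to_bw=True))  # 10.5
```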


There are some known issues for percentage key figures, those are solved with the following SAP notes:


1523793 - Wrong rounding of percentage key figures with classic render

1370566 - Rounding error for Percentage Key Figures


If percentage key figures need to be displayed without any decimals, the following settings are to be applied:


UPX_MNTN: the key figure needs to be set to 0 decimals

BPS0: the key figure needs to be set to 2 decimals


This fulfills the rule for zero decimals in addition to the percentage key figure rule, which requires 2 more stored decimals than displayed.


Currency key figures


Since most currencies use 2 decimals by design, there should not be any issues for most of them. However, there are known issues for exceptional currencies, that is, currencies with other than 2 decimal places, such as JPY. In case of issues with those currencies, the following SAP notes are required in the system:


2126484 - Correct CHECKMAN error introduced with the note 2099874

2106896 - Decimal issues in Planning Layouts when working with exceptional currencies

2099874 - Missing conversion for exceptional currencies in UPX_KPI_KF_PLEVEL_READ2
2021933 - Use decimals settings from BPS when Enh Layout is set to 0

1962963 - Planning Layout issues with exceptional currencies with more than two decimals

1535708 - Plan data for currencies without decimals

Rounding issues with Conditions Generation in CRM


When generating conditions in a CRM trade promotion using BI rates, the BPS key figure values are retrieved to determine the condition amounts. This may lead to rounding issues. The following note should solve those rounding issues:


2196545 - Discounts are getting rounded while generating conditions



Using master and dependent profiles


When using master and dependent profiles, the decimal settings need to be exactly the same for the key figures in the master and the dependent profiles. It is the master profile that is synchronized and rendered for calculating the key figures; therefore the key figures hold the values with the decimals from the master profile. However, for display purposes the rendering happens for the displayed profile, that is, the dependent profile. Therefore the decimal settings need to be in sync between the master and the dependent profiles.


Campaign Cost Planning


2181291 - Marketing Cost Planning rounds the key figure values for currencies with less than 2 decimals

Known issues

There are some known issues that are corrected with the following SAP notes:

2119191 - Decimals getting rounded for virtual and calculated key-figures

2106896 - Decimal issues in Planning Layouts when working with exceptional currencies

2085223 - Decimals issue in Planning Layouts rendered with the class CL_UPX_LAYOUT_RENDER

2080064 - Incorrect error message for UPX key figure decimal settings

1817554 - ASSIGN_DECIMALS_TOO_HIGH when synchronizing occurs


This blog will be updated on a regular basis. If you find any information missing, please let me know.


The social data harvesting connector enables harvesting of posts, write-ups, and social user data from different social media channels such as Facebook, Twitter, Wikipedia, and blogs, through DataSift.


In the first release of social data harvesting connector, the approach was to fetch social data from different social media channels.


In the latest release, SP02, the main approach is to consider the consent of the social user and take the appropriate action on the social posts during harvesting. The configuration for consent handling and the related actions is done in the SAP Business Suite system.


The Social User Consent Handling function is available only when the business function FND_SOMI_CM is switched on.


A quick list of new features and enhancements includes:


  • Social User Consent Handling during data harvesting, with the appropriate action taken on the social posts

        The consent types which are supported in the connector are as follows:


             - No Consent Required, Store Anonymously

             - No Consent Required, Store Complete User Information


  • Enhanced DataSift Mapper files to fetch the data from Facebook Pages (Managed Source)


  • Updated DataSift Mapper file with fields provided by DataSift to fetch the data from channel Facebook public



Release Information

The new features of release SP02 are available from release SAP Business Suite Foundation 7.47 SP06 (SAP_BS_FND 747) onwards.




You need the software component SAP SOMI DS CONT. You can download the component from the Software Download Center on the SAP Service Marketplace. You must have a valid license/API key from DataSift.


To access the Software Download Center from the SAP Service Marketplace homepage, service.sap.com, choose SAP Support Portal → Software Downloads → Software Download Center.


To search for the software component SAP SOMI DS CONT, proceed as follows:

  • Select Search for Software Downloads in the left navigation bar.
  • Search for the software component SAP SOMI DS CONT 1.0
  • Download the latest SP- SP02 for SAP SOMI DS CONT 1.0


The Installation Guide for the Social Data Harvesting connector can be found at https://websmp110.sap-ag.de/instguides -> SAP In-Memory Computing -> SAP Customer Engagement Intelligence -> Installation Guide Social Data Harvesting Connector

For detailed documentation, refer to the PDF attached to SAP Note 2079650.

Note: The updated help portal documentation is available only after SAP Business Suite Foundation 7.47 SP07 release to customer.

SAP Trade Promotion Optimization (TPO)

Recently I was involved in an SAP TPO Proof of Concept (PoC) for a top FMCG company in the US region. I believe this project may be one of its kind in exploring SAP TPO capabilities to predict accurate volume and lift using Modern Trade POS data. We received the last 3 years of POS data along with account planning, promotion, and sales data. I want to share some learnings and highlight a few features of SAP TPO.


Background: Research trends indicate that trade promotion related spend accounts for 8-12% of the overall turnover of a CPG company, and up to 60% of CPG marketing budgets go to stimulating channel demand. While trade promotion spending as a percentage of marketing budgets has increased dramatically, the inefficiency of trade promotion represents the "number-one concern" among manufacturers. Yet there is little visibility into where this spending actually goes, or how effectively it increases revenues, expands market share, or creates brand awareness among consumers. With millions of dollars being spent to stimulate demand, a marginal improvement in fund allocation and a recalibration of promotion processes could have a disproportionate impact on sales uplift and promotion ROI. SAP TPO uses advanced analytical constructs such as optimization, predictive analytics, and what-if analysis to provide significant visibility into the effectiveness of this trade promotion spend. The information gained can provide insights into sales uplift contributions and can help optimize them in the face of many real-world constraints during the fund allocation process.

What is Trade Promotion Optimization?

TPO strategically assists CPG manufacturers in optimizing trade spending across their total product portfolio. Trade Promotion Optimization is an approach that uses business rules, constraints, and goals to mathematically create a trade calendar that can meet all of these requirements. Optimization is helpful for strategic questions such as "what combination of promotional events (feature price, frequency, timing, and depth of deal allowances) will meet or beat my revenue and/or profit goals and still stay within my trade promotion budget?" The right TPO models can also solve for the mix of revenue, volume, and/or profitability, as well as the profit contribution for both the manufacturer and the retailer. SAP TPO enables trade marketing and sales teams to leverage advanced predictive modeling to suggest optimal price and merchandising decisions based on goals and objectives, or to assess revenue, volume, and profitability.

SAP TPO: An SAP CRM add-on which comprises a forecasting and modeling engine. The TPO science is based on DMF. SAP TPO enables users to understand the demand baseline (sell-out baseline) prediction, and predicts the regular volume, revenue, profit margin, and so on for the manufacturer and planning account over an agreed duration.

SAP CRM: Supports all processes involving direct customer contact throughout the entire customer relationship life cycle, from market segmentation, sales lead generation, and opportunities to post-sales and customer service. It includes business scenarios such as account and trade promotion management.

SAP DSiM: Demand data is loaded into the DSiM system, which harmonizes it against the original master data system (ERP). SAP delivers a few methods to harmonize syndicated (market research), POS, and other external data.

SAP BW: Receives harmonized data from DSiM and sends it to the DMF system for demand modeling.

SAP DMF: Demand Management Foundation provides predictive, demand-driven forecasts and optimization simulations for all promotion planning across channels and customer segments. In DMF you can model and forecast for a set of customers, channels, and markets. Using demand data, DMF helps to forecast and to optimize the predictions as required. It is a science engine that transforms historical demand data into models for forecasting and optimization; SAP TPO uses Bayesian techniques. A forecast run is created for each call to the science system (DMF) and can be used to see the parameters and results of each prediction that contributes to the what-if scenario.

Data: Historical data plays a major role in TPO; the prediction and forecasting results of SAP TPO depend on it. SAP TPO mainly supports POS, syndicated (market research), or internal data, which can be uploaded into DMF directly or through DSiM. DSiM harmonizes the data based on your primary data (product hierarchies in ERP).

Analytics: Historical sales and promotion data is used to build predictive models, which are then used for planning future promotions. Bayesian Hierarchical Modeling (BHM) techniques are used to build these models. BHM not only considers individual product and market behavior while modeling, but also incorporates learning from category- or brand-level sales trends. The main advantage of BHM is that it provides better accuracy even with small data sets, and the accuracy can be further improved by correctly specifying priors for factors like price, promotional lift, and so on.

Accurate promotional uplift can be derived by correctly specifying the demand patterns of promotional sales on different days of the week.

Predictive models not only capture the impact of factors like price, holidays, distribution, and sales trend, but also provide the flexibility to capture the dynamic demand behavior of products by classifying them into homogeneous groups based on their demand patterns.

SAP TPO has inbuilt analytics, which are visible from the CRM TPO screen.

User Interface / Integration options:

  • TPO integrated in Trade promotion planning without additional assignment block
  • TPO integration assignment block
  • Promotion optimization can be created independently of any trade promotion (prediction & simulation are also available)

The TPO forecast type controls how the system predicts. SAP TPO has two types of forecasts.

What-if Analysis forecast types

  • Prediction: Analyzes past promotion performance for a given price and set of promotional vehicles (such as displays, features, price reductions, and multi-buys) and predicts one outcome in line with the trend.
  • Simulation: Most of the time, the challenge is not just getting results but getting them within constraints; what is the best option in such a case? Simulation, in addition to price and promotional vehicles, can also consider objectives like profit optimization and sales volume optimization and, more importantly, constraints like trade spending limits, and it forecasts multiple optimal scenarios. The most suitable one can be chosen after analyzing all scenarios.
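To make the distinction concrete, here is a toy sketch; the scenario names and numbers are purely illustrative and not SAP TPO output. A prediction returns one forecast, while a simulation scores several candidate scenarios and keeps only those within a trade-spend constraint:

```python
# Illustrative candidate promotion scenarios, each with a forecast.
scenarios = [
    {"name": "deep discount", "trade_spend": 120, "predicted_profit": 300},
    {"name": "display + feature", "trade_spend": 90, "predicted_profit": 280},
    {"name": "multi-buy", "trade_spend": 60, "predicted_profit": 220},
]

def simulate(scenarios, spend_limit):
    """Keep scenarios within the spend constraint, best profit first."""
    feasible = [s for s in scenarios if s["trade_spend"] <= spend_limit]
    return sorted(feasible, key=lambda s: s["predicted_profit"], reverse=True)

# With a spend limit of 100, the deep discount is infeasible and the
# display + feature scenario wins on predicted profit.
best = simulate(scenarios, spend_limit=100)[0]
print(best["name"])  # display + feature
```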

What-if Analysis results: SAP TPO presents forecast results in intuitive graphical dashboards, which makes it easy to view and compare different forecast outcomes in a single view. As of version TPO 2.0, it depicts forecast results in 5 dashboards, each with a different perspective. On one dashboard the user has the option to change trade spends and see the impact instantly. More dashboards can be added through enhancements. These dashboards present not only data but also insights, which can reduce the strain of going through the details of each forecast scenario to make a choice.

Dashboards: The SAP TPO screen has several dashboards, such as Basic Analysis (key figures like volume uplift, non-promo revenue, promo revenue, and retailer revenue), Volume Decomposition (volume uplift with respect to base demand, tactic lift, price lift, seasonality, holiday, and cannibalization), and Win-Win Assessment (promo margin and promo profit). The SAP TPO agreement screen has dashboards such as Weekly Review (baseline and total volume), Price and Volume Decomposition, and Profit and Loss.

Integration with SAP TPM: SAP TPO is tightly integrated with SAP TPM. A few additional assignment blocks, fields, and buttons are provided, such as the Promotion Causals, What-if Analysis, and Optimization Scenario assignment blocks.

Learnings: Data quality is the most important and critical element of any forecast, as it influences the forecast results. It is essential to have complete and accurate data without gaps. When external data such as syndicated or research data is used, it is crucial to check that it is a true or close representative of the retailers in all required locations.

One important lesson learned from experience: do not underestimate how much effort it takes to source, clean, format, and load the data.

Within SAP TPO, each forecast has a forecast confidence indicator, which represents the model's confidence in the forecast and is based on past data.

I suggest an exercise called “KNOW BUSINESS INSIGHTS”, which generates business insights for any organization. SAP DSiM on HANA can help you here.

Conclusion: SAP TPO can be implemented on its own as a standalone tool, but implementing it in conjunction with SAP TPM realizes the true potential of both. SAP TPO can plan the promotion strategy, and TPM can execute it smoothly through its integration with other processes such as funds management and claims management.

SAP TPO requires consultants with DMF knowledge, and finding experienced people is a challenge. It is also really helpful to have statisticians to build the models and improve them based on external factors.

