
SAP Solution Manager


A belated Happy New Year 2015! As a belated Christmas present I want to write this blog about the new key figure content that was shipped in December 2014. On Monday, December 15, 2014 the new ST-A/PI 01R support package 1 plug-in was shipped to customers, which means that many new key figures are now available for Business Process Monitoring and Business Process Analytics in SAP Solution Manager. This blog gives a short overview of what is new. Among other things, the plug-in contains

  • New key figures related to a 3rd party sales process (often also called drop ship process)
  • New key figures where business documents (e.g. purchase requisitions, purchase orders, MM scheduling agreements) are brought together with MRP list information
  • New Automation rate key figures for WM and PM
  • New item related CRM key figures
  • New transportation lane related key figures for SCM APO


The new ST-A/PI 01R support package 1 is available for download under (SMP login required) http://service.sap.com/supporttools. A complete list/catalog of all available out-of-the-box key figures is available as an MS PowerPoint presentation (SMP login required in both cases)



  1. On slides 2, 3 and 4 (Table of Contents) you can find hyperlinks that take you directly to the respective chapter of interest.
  2. For application-related key figures, some of the listed Selection Options appear in bold letters. Those Selection Options are available as "Group by" fields in Business Process Analytics.
  3. Key figures with '€' as bullet point also support value benchmarking as part of the "Advanced Benchmarking" functionality.

Key figure news summary for selected areas

New key figures have been developed for 3rd party (drop ship) processes:
  • 3rd party sales document items without purchase requisition items
  • Overdue 3rd party purchase requisition items with sales information
  • 3rd party purchase order items overdue for goods receipt (only relevant for customers that post the statistical GR)
  • 3rd party purchase order items overdue for invoice receipt


New key figures where business documents are brought together with MRP list information to allow better insights into supply chain planning:

  • Overdue purchase requisition items with MRP list
  • Overdue purchase order schedule lines with MRP list
  • Overdue MM scheduling agreements with MRP list


New outbound delivery key figures bringing delivery and shipment information together:

  • Overdue outbound deliveries without shipment assignment
  • Lead time from outbound delivery creation --> shipment assignment


New Automation rate key figures for WM and PM:

  • Automation rate: Inbound transfer order items  (how many inbound transfer order items are created automatically vs manually)
  • Automation rate: Outbound transfer order items (how many outbound transfer order items are created automatically vs manually)
  • Automation rate: PM/CS notifications (how many PM/CS notifications are cleared automatically vs manually)
  • Automation rate: PM/CS orders (how many PM/CS orders are cleared automatically vs manually)
New CRM related key figures:
  • Sales document items in status 'open' or 'in process'
  • Service document items in status 'open' or 'in process'
  • Lead time from sales document creation --> Taking document 'in process'
  • Lead time from sales document creation --> Completing the document
  • Lead time from service document creation --> Taking document 'in process'
  • Lead time from service document creation --> Completing the document
  • Lead time from business activity/task creation --> Taking activity/task 'in process'
  • Lead time from business activity/task creation --> Completing the activity/task



New transportation lane related key figures for SCM APO:
  • Transportation lanes per product
  • Transportation lanes per location


Further reading

You can find all necessary information about Business Process Analytics in this document. Frequently Asked Questions about Business Process Monitoring and Business Process Analytics are answered under http://wiki.sdn.sap.com/wiki/display/SM/FAQ+Business+Process+Monitoring and http://wiki.sdn.sap.com/wiki/display/SM/FAQ+Business+Process+Analytics respectively. The following blogs (in chronological order) provide further details about Business Process Analytics and Business Process Monitoring functionality within SAP Solution Manager.



You are interested in integrating your SAP Solution Manager with the Signavio editor, but during the integration you run into difficulties setting up your signavioconnector properties file.


A User Guide for the integration can be found in the Signavio online help documentation, which describes in detail how to implement the link between your SAP Solution Manager and the Signavio SAP Solution Manager Connector 7.1.


My blog supplements the User Guide with additional explanations for the sections that I found unclear during my own integration effort.



Starting with section 5 of the online User Guide documentation, the screens in SP12 are different: they differ from what the online User Guide shows!


Section 5:




5. The Signavio SAP Solution Manager 7.1 Connector uses the BSI Enterprise Services (SOAP web service) to communicate with SAP Solution Manager. This web service has to be enabled and configured before the connector can work properly:


a. Log on to your SAP GUI and start transaction SE80.

b. Search for the package BSI_SERVICE_API:




In SP12 the screens for the BSI Enterprise Services are different, so please follow my blog to get your integration working properly.



If you follow the User Guide at section f. ("f. Open the tab Transportation Settings and find the URL of the service binding. Please store the URL for later usage when configuring the connector.") you will find that the URL from section f. is missing:





In SP12 the BSI Enterprise Services (SOAP web service) screen looks a bit different. To get the requested URL, go to transaction SOAMANAGER.


In SOAMANAGER follow these steps:



on the next screen you will get URL you are looking for:



The above URL is used later in your solmanconnector properties file in the Signavio HOME directory on your Solution Manager host. My example URL is the following:






To fill out your solmanconnector properties file you must derive two parameters from your WSDL URL:



solman.bsiservice.binding            =  /100/signavioconnect/binding_1

solman.bsiservice.endpoint          =   /sap/bc/srt/rfc/sap/bsiprojectdirectroyinterface 
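For orientation, the two property values can be read straight off the service's access URL. The following sketch assumes (based only on the example values above, not on any official format) that the URL ends in `/sap/bc/srt/rfc/sap/<service>/<client>/<name>/<binding>`; the host name is made up:

```python
from urllib.parse import urlparse

def split_access_url(access_url):
    """Split a SOAMANAGER access URL into the two solmanconnector
    properties.  ASSUMPTION: the path has the shape
    /sap/bc/srt/rfc/sap/<service>/<client>/<name>/<binding>."""
    path = urlparse(access_url).path.rstrip("/")
    parts = path.split("/")               # ['', 'sap', 'bc', 'srt', ...]
    endpoint = "/".join(parts[:7])        # '/sap/bc/srt/rfc/sap/<service>'
    binding = "/" + "/".join(parts[7:])   # '/<client>/<name>/<binding>'
    return binding, endpoint

# Hypothetical host, real path fragments from the example above:
binding, endpoint = split_access_url(
    "http://solman.example.com:8000/sap/bc/srt/rfc/sap/"
    "bsiprojectdirectroyinterface/100/signavioconnect/binding_1")
print("solman.bsiservice.binding  =", binding)
print("solman.bsiservice.endpoint =", endpoint)
```

This only illustrates where the two substrings sit inside the URL; always take the actual URL from your own SOAMANAGER screen.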







To find out what is wrong with your Solution Manager connector, the best place to look is the log file in the Signavio HOME installation directory. The log file is located in the log directory!





To be able to sync your Signavio editor content with your Solution Manager projects after each restart of your Solution Manager server, you have to repeat step 8) from the Signavio SAP Solution Manager Connector 7.1 User Guide!


     8. Run the setup:

          In this step the connector establishes an OAuth connection to the Signavio Process Editor and asks for a Signavio user to authenticate against the Signavio Process Editor.

Proceed as follows:



If you have questions, do not hesitate to contact me.


Boris Milosevic

There are a lot of questions and discussions around the technical architecture that should be used for Technical Monitoring, so I decided to start writing a blog post series, as requested by community members. I want to keep each blog reasonable in length, so I'll cover Technical Monitoring in parts.

A first reasonable question is, how many SAP Solution Manager systems do I need?

This blog represents my opinion. If you have a different opinion, feel free to share and discuss it with the community at large, as it can be of interest to all of us.


How many SAP Solution Manager systems do I need?


One of the first questions in terms of architecture for Technical Monitoring is “How many SAP Solution Manager systems do I need?”. The answer can differ greatly depending on what your plans are,  what you are trying to achieve and how large your landscape is.


Small size


A small customer with a small SAP landscape (one ERP system landscape) will often run a single SAP Solution Manager instance. When it comes to Technical Monitoring, it would mean that all systems get connected to this one SAP Solution Manager instance.


Having only one SAP Solution Manager system comes with the typical advantages and disadvantages you would have in a traditional ERP landscape with only a productive ERP system.


When it is time to update the SAP Solution Manager system, you could clone or copy it and run the update on the clone or copy first to test out the procedure, before doing the actual update during a weekend, for example (to minimize downtime and impact as much as possible).


Medium size



Many customers have exactly two SAP Solution Manager systems: a DEV – PRD landscape. This is a very common configuration, especially among medium-sized customers.

With two SAP Solution Manager instances the discussion starts: which SAP systems (talking ERP now) do I connect where? Do I connect all DEV and perhaps ACC systems to the DEV SAP Solution Manager and only PRD systems to the PRD SAP Solution Manager?

Well, I’ve said this before and I'm about to say it again: SAP Solution Manager wasn’t really designed for this kind of split, so I only connect SAP Solution Manager DEV to itself and to one or more sandbox ERP systems.

All other systems (DEV, ACC, PREPROD, PRD, …) get connected to SAP Solution Manager PRD in order to benefit from having all that data in one place: one alert inbox for the support team, one single source of truth for reporting purposes, and one single source of truth for specific scenarios that require data and connectivity from all the systems that belong to a specific landscape.

The advantage of having a DEV SAP Solution Manager is that you have a place where you can try out scenarios and perform support package stack updates, which makes it easier to minimize the impact on the PRD SAP Solution Manager.

Large size




You probably guessed I was going to say that large customers have three SAP Solution Manager systems, but that's not the general rule of thumb. It is most likely an SAP recommendation, but that doesn't mean it's really a necessity. It can make sense if you are going heavy on custom development and want to invest in ITSM scenarios where you really want an ACC system in place, but what I see is that many SAP customers max out at two SAP Solution Manager instances (DEV, PRD) as their SAP Solution Manager landscape.

Larger customers can have larger landscapes (potentially scaled out). Some have a split landscape where they go DEV1 – PRD1 for Technical Monitoring and DEV2 – PRD2 for IT Service Management, thereby separating those scenarios from running on the same SAP Solution Manager instance.

Why? Because they want to avoid the impact of one scenario on the other, for example so that they can patch DEV1 – PRD1 faster than DEV2 – PRD2.

The drawing above shows an example of such a split landscape. You use one SAP Solution Manager landscape for Technical Monitoring purposes while you use a second SAP Solution Manager landscape for IT Service Management purposes. In the IT Service Management landscape you don't use diagnostics agents; you only connect the managed SAP systems through RFC connections, and thus you ignore some red traffic lights in the managed system setup. That second SAP Solution Manager landscape can be monitored by the first one, where Technical Monitoring is implemented.

SAP Solution Manager – Diagnostics Agent architecture guide


I haven't gone into any detail here yet on agents or the other elements that make up the architecture. That might be content for a future blog post. In SAP note 1365123 - Installation of Diagnostics Agents there is a guide attached that provides insight into possible architectural options for Technical Monitoring. The PDF document goes through numerous possibilities and options that venture outside of what I covered in this blog post.

I prefer to keep things simple by not cross-using elements, as you can see in the "simplified" architecture schemas above. Why? Because complexity adds additional effort on multiple fronts: configuration, maintenance, support and troubleshooting, to give some examples. The split landscape option translates into more maintenance effort but lower risk, and it makes it easier to keep the SAP Solution Manager landscape that is mostly used for technical scenarios up to date, since you don't impact ITSM processes that way.

Diagnostics agent on the fly as a default


At the moment, I advise installing Diagnostics Agents on the fly (as opposed to regular, standalone Diagnostics Agents) as a default, even if you only have a single SAP system per server, because in SAP Solution Manager 7.1 SP12 it is a prerequisite for using automated reconfiguration.

Automated reconfiguration allows the system to redo certain managed system setup steps and some other steps, like automatically assigning new default SAP templates in Technical Monitoring, after the SAP product version has been updated.

If you haven’t seen or read about it yet, you can find a nice presentation on the SCN wiki: http://wiki.scn.sap.com/wiki/display/SMSETUP/Home, where you can also find the Sizing Toolkit, which can help you calculate the need for scale-outs, for example.

Under 7.1 SP12 (NEW), check out the presentation on Automatic Managed System Reconfiguration (PDF).

The Service Marketplace pages for Business Process Operations will stop being available in the very near future. This means that accessing information via these pages will no longer be possible.


Therefore, all documentation for Business Process Operations (including overview presentations and setup guides) is now accessible via the SCN Wiki Page




This page gives a general overview of Business Process Operations. Each area of BPOps is briefly explained, and links to sub-pages with more details are provided. These sub-pages per area are directly accessible via the following URLs:


In these sub-pages you have access to the existing setup guides and further documents currently available in the Service Marketplace. All future documentation will also be made available here.


In the coming weeks we will further extend our documentation in these wiki pages and we will keep you informed in case of major updates.

Part I of this blog gave some tips on how to enhance the original urgent change flow to generate transports of copies. Today I will explain how to activate this customizing in your ChaRM project. You can check out Part I at this link: How-to enable Transport of Copies on Urgent Changes Flow (PART I)


Create a new Project under transaction SOLAR_PROJECT_ADMIN or close the current Project Cycle.


Before you create the new tasklist, you should perform the following steps:




Push the button Show Available Variants for Tasklist and select the Y/ZSAP0 tasklist variant. Without this step, transport of copies for urgent changes will not work! If you don't make this change before generating the tasklist, you will receive errors in the urgent change flow when you reach status E0004.

Last month I had the pleasure of collaborating with a Brazilian food company (the world's tenth-largest food company), speaking about some of our experiences and best practices and providing on-demand consulting for their ChaRM (Change Request Management) solution.


During our conversation the customer told me about their wish to deploy transports of copies as part of the urgent change flow. As we know, Change Request Management covers a standard workflow containing the transport of copies procedure only for normal changes.


In this blog I provide some hints on how you could set it up; however, there is no guarantee and no standard SAP support for this configuration.




Urgent changes have their own tasklist (type "H") to coordinate all transport requests. The original tasklist type H does not contain the action "Create Transport of Copies". Therefore we need to enhance tasklist type H using a custom tasklist variant. After this configuration, some adjustments must be applied to control the TMS of the managed system.

Just to clarify when the ToC is generated and when the original transport request is released, I made the following pictures showing the "AS-IS" and "TO-BE" solutions. You can adapt them to your needs (e.g. by creating additional status values).


Standard Process Flow: Urgent Change (SMHF)


Enhanced Process Flow: Urgent Change (Y/ZMHF)


Configuration Procedure


Create a Tasklist Variant


          Access the IMG activity using the following navigation options:





          Push button "New Entries":




          Create the tasklist variant "Y/ZSAP0":






Define Tasks for Tasklist Variant


          Access the IMG activity using the following navigation options:





     Select all entries from Tasklist Variant "SAP0" and copy to Tasklist Variant  "Y/ZSAP0":



     Create a new record adding the task "Create Transport of Copies"  for the Project Type "H Urgent Change":






Define Header / Footer Tasks for Tasklist Variant


     Access the IMG activity using the following navigation options:






Repeat step 2 from the Define Tasks for Tasklist Variant configuration:




Register Tasklist Variant into Project Cycle


     Apply the SAP Note 927124.



Adjusting Conditions and Actions (TSCOM Tables)


     Some activities regarding the transport management system and consistency checks are triggered when the change document is assigned to a specific status value. To enable the urgent change flow to generate transports of copies, we need to change some actions and their conditions based on status value.



     Access the IMG activity using the following navigation options:




     On folder "Create Procedure Type" choose the "Y/ZMHF" transaction type and its status profile:




     On folder "Assign Actions" make the following adjustments for the User Status "E0004 - To be Tested":



     On folder "Assign Actions" make the following adjustments for the User Status "E0005 - Successfully Tested":




     On folder "Define Execution Times of Actions", make the following adjustments for the User Status "E0004 - To be Tested":



     On folder "Assign Consistency Checks", make the following adjustments for the User Status "E0004 - To be Tested":




     On folder "Assign Consistency Checks", make the following adjustments for the User Status "E0005 - Successfully Tested":







Project cycles powered by the custom tasklist variant will be able to generate Transport of Copies. In my next blog I will describe how to use this feature.

Here's what I thought before using CHARM:


Charm will:

  • Remove Conflicts between developers
  • No more missing objects when transporting to production
  • No more keeping track of transport dependencies
  • Allow to bundle transports outside of SAP
  • Keep defects with original requests
  • There will be fewer transports


The above is living in Michelle's world of what CHARM will do.  NOT what SAP or CHARM claims to do.


So here's a scenario:

I would make changes to an object.  There would be changes to an outside system.  Developer 2 makes changes to a different object that is a part of my project.  All of the previous transports/objects will be bundled in one CHARM request.  Emergency and non-emergency transports will be taken into consideration.


Dum, Dum, Dum, Da, Dum - Drum roll please.  Charm to the rescue.


See below:




So was my vision correct?


In practice:

  • A regular transport is created.   Table 1 is not changed.
  • The transport and CHARM ticket are released for an emergency change.  It is immediately moved to production. (After testing in quality)
  • The regular transport has fields removed from table 1, and the emergency transport object is changed so it no longer requires those fields.
  • The emergency change is moved to production again.
  • The regular change is moved.  Now when the programs are regenerated, the table is generated first, and then the program.  The emergency program is generated with errors, so it ends with return code 8, and the regeneration stops.


If the above confuses you, you are not alone.  It confuses me and my BASIS people.  The only solution I found was to create a new CHARM ticket with just the table.  Transport it first.  Re-transport the 2 CHARM tickets.  They will go into the system clean.
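The workaround is essentially a manual dependency ordering: the table transport must be imported before the two tickets that use the table. As a sketch (the transport names and the dependency map are hypothetical, not anything CHARM produces), the required import order could be computed like this:

```python
from graphlib import TopologicalSorter

# Hypothetical transports: the new table ticket must be imported before
# the regular and emergency tickets whose programs use the table.
deps = {
    "REGULAR_TRANSPORT":   {"TABLE_TRANSPORT"},
    "EMERGENCY_TRANSPORT": {"TABLE_TRANSPORT"},
    "TABLE_TRANSPORT":     set(),
}

# static_order() yields each node only after all its predecessors.
order = list(TopologicalSorter(deps).static_order())
print(order)  # TABLE_TRANSPORT comes first
```

CHARM does not do this ordering for you across tickets, which is exactly why the manual re-transport step is needed.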

In theory:

All transports are moved to production with the release.


In Practice:

  • Not all transports move to production.
  • The changes are backed out of the object, and the object is changed by the developer.  The developer ignores the conflict and can create the new transport request.
  • At this point the changes can't be moved without BASIS help.  Why?  Because there is a conflict.


In theory:

Only one developer works on an object at a time.  Or if more than one developer is working on it, then it's for the same project.


In Practice:

  • There can be more than one developer working on an object.  And yes, it is for two different projects.
  • So there are two options: add the object to the two different CHARM tickets, or leave the object in the CHARM ticket that already has it.   Either one will cause one CHARM ticket to be dependent on the other.  It will be a manual task to keep track of that.


In theory:

When a new table is created, all your developers will know it is new and won't use it in their objects.


In Practice:

  • Developers miss that the table was created in a different CHARM ticket.   They have no idea of the dependencies.
  • CHARM doesn't notify of the dependencies.
  • The move to production has errors.


OK - I'm done with the things CHARM doesn't do well.    There are some things that it does very well.


CHARM is amazing at:


  • Limiting the number of transports.   For a regular CHARM ticket that goes with a release, only the transport task needs to be released.   When the task is released, it moves in the background to the test system.  If there are problems, then I just create another task.  The transport request is never really moved until the move to production.
  • It is easy to create a configuration transport request and a development transport request.   Since they are both on the same CHARM ticket, they will move to production together.
  • If your CHARM ticket has been released and an error is found, it is easy to create a defect request and attach it to your CHARM ticket.  This will keep the transports together in one CHARM ticket.
  • The test environment is easily locked down when the system is moved to testing.  This will stop everything except for emergency transports from moving to the test client/system.
  • The approval process is at the front end.  A CHARM ticket is not created until the CHARM request is approved.  That means a transport request can't be created.
  • Outside objects - I'm not sure as we haven't used CHARM for that yet.


So there you have it, my personal thoughts on CHARM.  Keep in mind, like all SAP products, different companies will have CHARM configured differently.  So some (or none) of what I've written may apply to you.


Does CHARM do what it claims to do?  Yes.  Does it do what you think it should?  You be the judge of that.  Personally, I think it does make my job easier.  It's not a silver bullet.  It doesn't fix all transport issues.


Please comment with some pros and cons.  And do let me know if I'm losing my mind with some of my comments. 



Closing the current Change Cycle and open a new One

SAP highly recommends that customers close their Maintenance Cycle on a regular basis.

  • This allows a meaningful Reporting on Change Activities per Change Cycle.
  • On the other hand, closing the Maintenance Cycle regularly helps to avoid a potential performance impact in the long run.

A Change Cycle is closed by processing the Change Cycle Document to the final CRM User Status:

  • SMMN for a Maintenance Cycle with Task List Variant SAP0,
  • SMMM for a Maintenance Cycle with Task List Variant SAP1,
  • SMDV for a Project Cycle.

Take over open Change Documents to the next Change Cycle

When you close the existing Change Cycle, for instance a Maintenance Cycle and open a new one, you are not forced to close all Change Documents, which belong to this Change Cycle.

SAP Change Request Management allows you to take over open Change Documents to the next Change Cycle.

Project Completion and complete Closure of all related Change Documents

In SAP Change Request Management there is no automation available for closing Change Documents that belong to a ChaRM Project.

However, ChaRM offers the program CRM_SOCM_SERVICE_REPORT as the standard solution for closing Change Requests and Change Documents that belong to the Change Cycle.

Before utilizing program CRM_SOCM_SERVICE_REPORT, you should check the following:

  • Status Profile Customizing (for the referring Change Cycle Document: SMMN, SMDV),
  • The ChaRM condition SUB_ITEMS is defined for the Change Cycle Documents SMMN and SMDV.

With the help of program CRM_SOCM_SERVICE_REPORT you can search for any kind of open Change Document with various search criteria, such as:

  • Open Change Documents per Business Partner, Team, etc.
  • Open Change Documents per CRM User Status,
  • Different Service Process related search criteria, such as 'Transaction Type', 'Posting Date', etc.


After having made your selection, you can let the program further process the open Change Documents up to their final CRM User Status.



In addition, the program offers a 'Test run' mode.


Of course, the program CRM_SOCM_SERVICE_REPORT can also be utilized to close IT Service Management related documents such as Incidents or Problems.
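The behaviour described above can be pictured with a small sketch (illustrative Python with hypothetical document values, not the actual ABAP report, which works on CRM documents inside the Solution Manager system): select open documents by criteria, then process them to their final status, with a test run that only previews the selection.

```python
def close_open_documents(documents, transaction_type, final_status, test_run=True):
    """Return the documents that would be (or were) closed."""
    selected = [d for d in documents
                if d["type"] == transaction_type and d["status"] != final_status]
    if not test_run:
        for d in selected:
            d["status"] = final_status  # process up to the final status
    return selected

# Hypothetical open and already-closed Change Documents:
docs = [
    {"id": "8000000042", "type": "SMMN", "status": "Open"},
    {"id": "8000000043", "type": "SMMN", "status": "Completed"},
]

# Test run: nothing is changed, we only see what would be closed.
preview = close_open_documents(docs, "SMMN", "Completed", test_run=True)
print([d["id"] for d in preview])
```

Running it again with `test_run=False` would actually set the selected documents to the final status, which mirrors the real report's productive mode.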

It is essential to read this blog before you proceed further.


Since 7.1 SP10, there have been two work centers in technical monitoring, namely BI Monitoring and Job Monitoring.


The BI Monitoring application in 7.1 has some inherent design shortcomings, so SAP decided to invest in a renewed work center to overcome these deficiencies. As part of this approach, we decided to unify the collectors of Business Process Monitoring and technical monitoring so that they are based on the same infrastructure, namely MAI.


In terms of runtime and reliability, Job Monitoring offers a more robust alerting mechanism.


What does the migration report do, and when should you execute it?


If you have been actively using BI Monitoring to monitor BW process chains (BW PC), SBOP and SAP Data Services jobs (prior to SP12) and have several managed objects configured, we provide a report program to transfer these configurations to managed objects of type Job Monitoring, so that you utilize the new unified job monitoring collection mechanism and overcome the known limitations of BI Monitoring.


What objects are migrated?


If the technical scenario in BI Monitoring has managed objects of type BW PC, SBOP or SAP Data Services jobs, this migration report acts on them.


Which objects are not migrated?


BEx queries and templates are not migrated.




1. What happens to the old BI monitoring objects?


The old BI Monitoring scenario would remain as is. There are options to decide what should happen to the objects in this scenario.


A drop-down exists to execute this migration one scenario at a time.


There are two options


a) Migrate BI monitoring Objects :


We create a new Job Monitoring technical scenario; '_BIMONIT' is added as a suffix to the existing BI Monitoring scenario name. Then, in the chosen scenario, all objects of type process chain, SBOP job and SAP Data Services job are read, and the existing configuration (metrics, thresholds, notification and incident settings) is migrated by creating new job monitoring objects of the respective sub-type in the new job monitoring scenario.


In this case, BI Monitoring is still functioning and the job monitoring is now functional as well.


b) Migrate and Deactivate BI Monitoring Objects


As in option (a), we create a new Job Monitoring technical scenario with the '_BIMONIT' suffix and migrate the existing configuration of all process chain, SBOP job and SAP Data Services job objects into it.


In this case, however, these objects are deactivated from monitoring via BI Monitoring and are now available via job monitoring.


2. What happens to the old BI Monitoring scenario?


The old BI Monitoring scenario remains as is: active and functional.


3. What happens to the other parts of the BI Monitoring scenario that are not migrated?


The other parts of the BI Monitoring scenario, such as the BW BEx queries & templates and all the systems included in the scope selection in the Define Scope step 4, remain in the BI Monitoring scenario. Depending on the option chosen for migration, the objects of type BW PC, BO jobs and DS jobs are deactivated.


4. Where can you check the result of the report program that performed the migration?


Please check in transaction SLG1:


Object type: E2E_ALERTING




5. Where can you check the logs of this background job?


Because it is a time-intensive operation, this program executes in the background. In transaction SM37, search for the job name MIGRATE_BI_JOB_* with the user who triggered the migration report to find the logs and the status of the job.

6. What are the advantages of this migration?


The Job Monitoring work center has a more sophisticated collection mechanism and hence avoids grey alerts. Also, starting with SP12, BW reporting is available for metrics collected by job monitoring.


7. Which BI Monitoring setup features are not available in Job Monitoring?

There is also a compromise in this migration. The BI Monitoring work center evolved over the last 10 SPs into a feature-rich SOLMAN_SETUP. Certain functionalities that were developed in the BI Monitoring configuration typically cater to mass handling requirements: 'threshold mass maintenance', 'Job Details', 'Excel import and export' and 'take from schedule'. These features are not yet available in the Job Monitoring configuration. But there is a trade-off: the collection is robust, so the alerting and the monitoring UI are more dependable when using job monitoring.




Execute the report program AC_JOBMON_MIGRATION to migrate the BW PC, SBOP and DS jobs of BI Monitoring to Job Monitoring.

Let's continue exploring partner determination with default or dependent values, under certain conditions, to make ITSM / ChaRM procedures more flexible. There is a topic on how to set up partner determination via BRF+ and a very good blog post from Vivek. But why not think about other possibilities? Here comes the CRM framework called Rule Policies. It is mainly used in Solution Manager as a dispatch tool, but if you dig into it deeper, the true possibilities open up before your eyes.


Example Scenario: Rule policy ITSM / CHARM partner determination (NO ABAP REQUIRED).



1.    Assign the Rule Modeler to the Categorization schema of the CR

2.    Create a Rule policy of type SRQ: ZMCR_DEFAULT_BP

       Mapping: if category = CAT_1 then route developer = 11.

3.    Copy the SAP_SRQMROUTING Service Manager Profile to Z/Y

4.    Assign the policy

5.    Assign the Service Manager Profile to the Change Request transaction

       Change Request = ZCR_SRQMROUTING

6.    Test



1. Assign Rule Modeler to the categorization schema of the Change Request


First, let's go to Solman's CRM Web UI and set up the things we need.

Tcode SM_CRM – Service Operations – Categorization Schemas

Choose the schema that is assigned to the Change Request.

Add a new version, go to Application Areas, press New, and add:

Application ID – Rule Modeler

Parameter – Context

Value – Service Request Management

We need a row like the last row in the picture below:



2. Create Rule policy type Service Request


Tcode SM_CRM – Service Operations – Rule Policy; here we need to create a new rule policy.



Context – Service Request Management.

Give the rule policy a name.



This technique works for any type of Solution Manager transaction, both ITSM and ChaRM.

Now comes the most interesting part – the design!

Choose the Draft Rules row and press Subnode.


Name it as you like, e.g. Category = Partners, and hit Subnode again.



Again, give it a proper name to avoid confusion when reading the policies, and press Add Entry in the Conditions block.






Attribute – Order Category

Operator – Contains

Value – choose the category for which you wish to map the partner functions (e.g. for our popular Change Manager; in the example it is ATH)


Now in Action block press Add Entry

Choose Action – Route to a Partner; Partner Function – SDCR0002 Change Manager; Partner – whoever we need to assign as Change Manager in this case.


For example, I have set up all the needed partner functions to be filled for the category ATH; see below:

Developer, Tester, custom partner functions, etc.


Do not hurry on to the next topics; take some time here, because this is where you can model any scenario you need.

For example, you can combine multiple checks for any situation: user status, priority, or change category together with other conditions such as the category check. Conditions can be matched with AND / OR operators.
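To make this concrete, here is a minimal sketch of how such a rule policy behaves conceptually. The data structures and names are illustrative only, not the actual CRM rule engine:

```python
# Illustrative model of a rule policy: conditions combined with AND/OR,
# actions routing partner functions. Not the actual CRM implementation.
def matches(conditions, doc, combine="AND"):
    """Each condition is (attribute, operator, value); 'contains' as in the UI."""
    results = [
        value in str(doc.get(attr, "")) if op == "contains"
        else doc.get(attr) == value
        for attr, op, value in conditions
    ]
    return all(results) if combine == "AND" else any(results)

def dispatch(rules, doc):
    """Apply the partner-routing actions of every matching rule to the document."""
    for conditions, combine, actions in rules:
        if matches(conditions, doc, combine):
            for partner_function, partner in actions:
                doc.setdefault("partners", {})[partner_function] = partner
    return doc

# A single rule: category ATH routes the Change Manager partner function.
rules = [
    ([("Order Category", "contains", "ATH")], "AND",
     [("SDCR0002", "CHANGE_MANAGER_11")]),
]
```

With these rules, dispatching a document whose category is ATH fills the Change Manager partner, just as the Dispatch button does in the UI; a document with another category is left untouched.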


3. Copy SAP_SRQMROUTING Service Manager Profile to Z/Y


Tcode SPRO - Customer Relationship Management - E-Mail Response Management System - Service Manager - Define Service Manager Profiles. Choose SAP_SRQMROUTING, press the Copy button at the top, and name it e.g. ZCR_SRQMROUTING.


4. Assign Rule policy to Service Manager Profile


Stay on your ZCR_SRQMROUTING – double-click Directly Called Services – double-click Properties.

Policy = the policy you created, in our case YALM_ZMCR_2


5. Assign Service Manager Profile to Change Request transaction


Transactions - Additional Settings - Assign Dispatching Rule Profile to Transaction Types.




6. Test


Now go to SM_CRM, create or pick any Change Request, and press More – Dispatch. All partners will be filled as mapped, like here: the chosen category is ATH; press Dispatch.


All partners filled


This works even if the partner function is not empty.


Have fun!


With 7.1, the capabilities of central monitoring have immensely improved in Solution Manager. For setting up monitoring of the systems in the BI landscape and of the objects in these systems, we now have several options:
1. Via Technical Monitoring - BI Monitoring
2. Via Business Process Monitoring (BPMon) - BW process chain monitoring
3. Via Unified Job Monitoring

I would like to highlight which option is a good choice, and the pros and cons of each approach.


What is BI Monitoring in Technical Monitoring?

It provides central monitoring and alerting capabilities, integrated with guided procedures for the alert resolution path. It caters to the need of an administrator to get an overview of the health of the systems participating in the BI landscape, in addition to the objects specific to the data flows within the landscape.
Target audience:
BI administrator
BOBJ administrator
Application Support
BI operations Team


1. Overview Monitor

A single-screen overview of:
Health of all systems participating in the BI landscape
  • Availability
  • Performance
  • Exceptions
View of the health of the data-flow entities in the landscape
  • BW process chains (ETL)
  • BOBJ jobs (Reporting)
  • DS jobs (Replication)
  • BEx queries & templates (Ad-hoc reporting)

2. Detail Monitors

Provide specific information on the health of the monitored objects by monitoring, per instance of these recurring jobs, certain metrics that are representative of the health of the data flows across the systems.
Overall health of the job
  • Status
  • Error logs (managed-system login may be required)
Scheduling metrics
  • Start delay
  • Not started on time
Runtime metrics
  • Duration
  • End delay
  • Out of time window
Data integrity metrics
  • Records processed
  • Data packages processed
  • Rows read
  • Rows written (Data Services only)

Supported system types in BI Monitoring
System monitoring metrics are integrated in the BI overview monitor for the following system types:

  • SAP HANA database
  • BWA (TREX system)
  • ABAP source system
In addition to the system monitoring metrics, we can configure jobs/reports on top of these systems to be monitored:
  1. SBOP (SAP BusinessObjects platform jobs in CMC)
     • 3.x
     • 4.x (4.1 and 4.2)
  2. BW ABAP server (process chains, BEx queries, templates)


As you can see, these screens have a designated navigation flow: a single overview screen showing the health of all participating systems in the landscape, with subsequent drill-down by system type, then to the monitored objects per system, and then to the instance details!

Some key features in BI Monitoring (configuration time in SOLMAN_SETUP) cater to handling mass objects in the configuration UI:

  1. Mass maintenance of thresholds
  2. 'Take from managed system' to assist in configuring thresholds
  3. Excel upload & download
  4. The managed-object details are provided in the configuration to assist in setting threshold values for metrics such as duration, records processed, etc.
Nevertheless, this application has certain limitations.
1. The runtime (monitoring UI) is not always coloured!

The monitoring UI can report a grey rating for monitored objects that are not frequently executed in the managed system, for example a chain that runs only once a week. The collection frequency is set to 5 minutes in Solution Manager, so the status of such a chain in the monitoring UI turns grey 10 minutes after the chain ends today, and becomes coloured again only on the next execution, i.e. in the next week! However, the alert (if any) remains in the alert inbox with the history of measurements.
{Workaround: increase the collection interval. Instead of once every 5 minutes, collect once every hour, especially for longer-running chains. The monitoring then stays coloured for twice the collection interval: instead of 10 minutes, it is available for 2 hours. The flip side is a delay in alerting the bad news of a failure.}
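The timing trade-off is simple arithmetic: per the behaviour described above, the status stays coloured for roughly two collection cycles after the chain ends.

```python
def coloured_window_minutes(collection_interval_min: int) -> int:
    """The monitoring UI keeps the last measured status for about two
    collection cycles after the chain ends (per the behaviour above)."""
    return 2 * collection_interval_min

# 5-minute collection: the status goes grey 10 minutes after the chain ends.
assert coloured_window_minutes(5) == 10
# Hourly collection: the status stays coloured for 2 hours, at the price
# of up to an hour's delay before a failure is detected.
assert coloured_window_minutes(60) == 120
```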

2. The alert inbox has multiple alert groups open, for instance for the same BW process chain LOG_ID, due to:
    1. Grey metrics (collector issue, MAI extractor issue, engine design)
    2. If an open alert for a chain failure is confirmed in the alert inbox, the next collection reports this error again and a new alert is opened (owing to a fixed look-back time of 36 hours for the ST-A/PI collector)



=> SERIOUS CONSEQUENCE: multiple (duplicated) automatic email notifications

Workarounds for reducing the number of duplicate emails (this does NOT eliminate the duplicate alerts; it only reduces the duplicate emails):

1. Reduce the collection window to a small and relevant interval (to reduce the occurrence of grey metrics from the collector).

    1. However, in the advanced tab for scheduling the data collection, the managed-system time zone is not handled in ST < SP10 (UTC is assumed).
    2. If the chain is executed over midnight, it is not possible to configure this restriction at design time.

2. Increase the collection interval. Instead of once every 5 minutes, collect once every hour, especially for longer-running chains. This reduces the probability of a grey alert! The flip side is a delay in alerting the bad news of a failure.

3. Starting with SP10, note 2118848 can be implemented and the report program AC_BIMON_MASS_AUTOCONFIG executed with 'set retention time' to circumvent this problem.


3. No analytics capabilities for the collected metrics, i.e. no interactive reporting and no BW reporting!

What does it mean to monitor BW process chains via BPMon - BW process chain monitoring?

    To support the end-to-end monitoring of business processes, which can span several systems and internally comprise different entities such as interfaces, jobs, and process chains, Solution Manager offers the possibility to orchestrate a business process and set up monitoring for the participating entities. In this context, process chains can be set up for monitoring.

    => A clear business-process-driven approach to monitoring
    => Support for extended schedules, multiple not-started-on-time checks, etc.
    -  No overview of the health of the underlying technical systems
    -  No contextual navigation to the underlying system monitoring
    -  Always requires a BP solution to be orchestrated in Solution Manager to set up monitoring!

    What is Unified Job Monitoring?

    Over the last year, we have been pondering how to fix these known issues and develop an application that serves customer requirements by closing the existing gaps. Starting with SP10, we unveiled the new work center: Unified Job Monitoring.
    • A consistent approach to monitoring all types of jobs (BW process chains, ABAP jobs, SBOP jobs, SAP Data Services jobs), as well as jobs scheduled from EXTERNAL schedulers that use the SMSE interface, for instance SAP CPS (Redwood).
    • Reporting on background jobs without requiring direct access to production systems, using the collected metrics (BW analytics).
    • Powerful monitoring capabilities with factory-calendar awareness, job log content, business process context, and so on.


    We have developed a brand-new monitoring UI in the interest of transitioning to the new HTML5 technology (SAPUI5). Find below a glimpse of this monitoring UI.
    Key features
    To remove redundant collection in the managed system and to provide a persona-specific runtime view, we have unified the configuration, data persistency, collection, and monitoring UI.

    Design time:

    1. Reuse of monitoring objects from the three entry points: BP Monitoring solution, Technical Monitoring scenarios, Job Documentation.
    2. Pattern-based monitored objects are supported for ABAP, BO, and DS; a BW process chain, however, has to be specified with its fully qualified name.

    Runtime:

    1. Intermittent grey alerts are avoided.
    2. Multiple email notifications are overcome.
    3. Support for BW reporting.

    When to use what?
    1. If a need for the overview monitor exists: Technical Monitoring - BI Monitoring, with certain workarounds for the known limitations.
    2. For a pure business process context: BPMon - BW process chain monitoring.
    3. For a harmonized approach to monitoring, starting with SP12, please migrate to the BPMon- and MAI-integrated Unified Job Monitoring.
    SAP will continue to invest only in this option. Overall, Unified Job Monitoring addresses the known limitations of BI Monitoring and integrates BPMon-based process chain monitoring.
    However, there are still gaps owing to the development time required. We intend to bring the best of both worlds together in Unified Job Monitoring.
    Starting with SP12, you can migrate an existing BPMon solution to an MAI-based solution. This ensures automatic usage of Unified Job Monitoring if jobs or process chains are available in the classical solutions.
    Details: Execute the migration report R_AGS_BPM_MIGRATE_SOLU_TO_MAI via SE38. Use the F4 help to identify your solution.

    Similarly, there is also a means to migrate the relevant objects of existing Business Intelligence Monitoring scenarios to a Job Monitoring scenario.
    Details: In transaction SE38, execute AC_JOBMON_MIGRATION. It migrates or copies job-type objects (BW process chains, SBOP jobs, DS jobs) from existing BI Monitoring scenarios to a new Job Monitoring scenario, so that they use the new collection framework and monitoring UI.
    In 7.1 SP12, all three monitoring work centers co-exist. A comparison chart of features between Technical Monitoring - BI Monitoring and Unified Job Monitoring is below.

    Feature | BI Monitoring | Unified Job Monitoring
    ABAP jobs and steps | NA | X
    BW process chains and steps | X | X
    SAP BusinessObjects jobs (SBOP) | X | X
    SAP Data Services jobs | X | X
    External scheduler (SAP CPS Redwood) | NA | X
    BW BEx reports & templates | X | NA
    MAI features (notifications, incidents, alert inbox, third party) | |
    Integration with MAI System Monitoring and contextual navigation to System Monitoring | X | Planned
    Contextual navigation to managed-system analysis tools from the monitoring UI | X | Planned
    Overview monitor for viewing the overall scenario health of all landscape entities & jobs | X | Planned
    (MAI) Work mode awareness | |
    Mass handling of monitored objects & thresholds | X | Planned
    Integration with Job Documentation | NA | X
    Guided procedure for alert resolution | X | X
    Please write to me regarding:
    1. How is job monitoring done today?
    2. Which job scheduling tools are used in the landscape (embedded schedulers from managed systems, CPS, UC4, CA WLA, AutoSys, Solution Manager JSM)?
    3. What are the relevant/important job types?
    4. Is SAP Solution Manager-based job/BI monitoring used? What is the feedback? Which functionality is missing?


    Regards, Raghav, S

    Development Manager, SAP Solution Manager

    How to Configure and Troubleshoot the DBA Cockpit Configuration in Managed System Setup, Solution Manager 7.1

    In Managed System Setup - Step 4 (Enter System Parameters), highlighted below:



    Please provide all the required details for the DB parameters:


    DB Host

    Service Name

    Port Number

    TNS Name


    The user name will be your ABAP schema user (for Java: SAPSR3DB).



    Once all the required information is provided, save. You will see a log message saying:

    The DBA Cockpit connection %_******** is OK. DB extractors can be activated.




    Once this step is completed, we can activate the DB extractors in Step 8 (Configure Automatically). We can check the successful connection entry in

    transaction DBACOCKPIT. Below is a screenshot for your reference.




    Troubleshooting a Connection Error



    If the DBA Cockpit connection cannot be established, you will get the message below.





    We can see the error: cannot establish the DBA Cockpit connection.




    We can see the same connection entry in transaction DBCO. Delete these existing entries.



    Then delete all the entries in DBACOCKPIT and in the managed system setup (MSS), and check at operating-system level that the tnsnames.ora at both locations matches the managed-system entries. If not, correct the entries and verify with tnsping.
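If you want to double-check the two tnsnames.ora copies quickly, the comparison can be scripted. This is a hypothetical helper, not an SAP-delivered tool; it normalizes whitespace and case so only real differences in the connect descriptors are reported:

```python
import re

def tns_entries(text: str) -> dict:
    """Split tnsnames.ora file contents into {ALIAS: normalized descriptor}."""
    entries = {}
    # An entry starts with an alias at column 0 followed by '='.
    for m in re.finditer(r"(?m)^(\w[\w.]*)\s*=", text):
        start = m.end()
        nxt = re.search(r"(?m)^\w[\w.]*\s*=", text[start:])
        body = text[start:start + nxt.start()] if nxt else text[start:]
        # Normalize whitespace and case so formatting differences don't matter.
        entries[m.group(1).upper()] = re.sub(r"\s+", "", body).upper()
    return entries

def compare_tns(text_a: str, text_b: str) -> list:
    """Return aliases that are missing or differ between the two files."""
    a, b = tns_entries(text_a), tns_entries(text_b)
    return sorted(
        alias for alias in set(a) | set(b) if a.get(alias) != b.get(alias)
    )
```

For example, if the QAS entry points at a different host in the two copies, `compare_tns` reports `QAS` while an identical PRD entry is silently accepted; any reported alias is a candidate for a failing tnsping.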



    Then configure the DBA Cockpit in the managed system setup in the same way as shown above. Once the DB
    extractors are activated, check the connection in transaction DBACOCKPIT in the Solution Manager system:


    the connection should be
    established successfully.


    Thank You,


    The cross-system object lock functionality ensures that when an object is changed in a managed system, a lock entry is created for this object in the central SAP Solution Manager system. Depending on the selected conflict analysis scenario, this lock entry prevents changes being made to this object by any other change (transport request). This applies to all managed systems and clients for which the cross-system lock has been activated.

    Once the cross-system object lock has been activated, the system can detect conflicts between objects in transport requests that have the same production system or the same production client as their transport target.

    The purpose of the cross-system object lock function is to protect your production system from “passing developments”.


    Inside a Change Request Management maintenance project, all changes (normal, preliminary, urgent, and defect corrections) are consolidated with the project. As the import method is IMPORT_PROJECT_ALL, “passing developments” inside a project can normally not happen.


    An exception is that preliminary changes and urgent changes can pass each other within a project. Therefore the use of CSOL is necessary to protect the PRD system from downgrades.


    Also if more than one project is available for the same system landscape, CSOL can protect the PROD system from downgrades.


    The automatic categorization of objects to retrofit (Auto Import, Retrofit, and Manual) is based on the cross-system object lock entries in Solution Manager.
    If the enhanced retrofit function does not detect a cross-system object lock entry for an object of a transport request that is to be retrofitted, the object is flagged as an Auto Import object.


    A change to object A is performed in the DEV system and recorded in the CSOL table of Solution Manager. Now suppose a fix is needed in the PRD system. The fix is performed in the MAINT system and also has to change object A. As the CSOL entry blocks this second change (the fix), the only way to proceed is to delete the CSOL entry, since the fix is necessary to solve the issue in PRD.

    If the transport request in MAINT is now released and the retrofit categorization is calculated, retrofit will not detect an entry for object A and will therefore calculate a “green” case.

    If the retrofit is then performed, the version of object A in the DEV system is overwritten!
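The failure mode just described can be modelled in a few lines. This sketch uses hypothetical data structures, not the real Solution Manager lock tables, to show how deleting the lock entry turns the retrofit categorization falsely green:

```python
# Hypothetical model of CSOL entries and retrofit categorization.
csol = {("DEV_PROJECT", "OBJECT_A")}   # lock created when A was changed in DEV

def categorize_retrofit(obj, lock_table):
    """Enhanced retrofit: no CSOL entry for the object -> 'Auto Import' (green)."""
    conflicts = [(proj, o) for proj, o in lock_table if o == obj]
    return "Auto Import" if not conflicts else "Retrofit/Manual"

# The fix in MAINT also touches OBJECT_A; the lock entry flags the conflict...
assert categorize_retrofit("OBJECT_A", csol) == "Retrofit/Manual"

# ...so the admin deletes the lock entry to let the urgent fix through:
csol.discard(("DEV_PROJECT", "OBJECT_A"))

# Now retrofit sees no entry, calculates green, and the auto-import
# overwrites DEV's version of OBJECT_A.
assert categorize_retrofit("OBJECT_A", csol) == "Auto Import"
```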


    How can we avoid this behavior?


    You can customize how CSOL shall behave.



    You will find default mode and expert customizing.

    We will need to use the "expert" customizing as the default mode does not protect you 100% from the issue described above.


    The "Project Relation" customizing is key for the enhanced retrofit scenario. In default it's set to "cross" which means conflicts from different projects as well as conflicts within the same project will stop the process.

    What we want to avoid is exactly that conflicts from different projects will end in a termination of the process. Therefore the project relation has to be set to "Specific". This means that only conflicts within the same project will result in a termination and for different project will only appear as warning.

    The other settings do not influence the enhanced retrofit behavior, so Change type relation and object type can be set however you need. But it's necessary that the project relation is only set to "specific" in the case you have the enhanced retrofit scenario active in your landscape.

    One exception: if you can exclude maintenance projects in the DEV landscape for sure, urgent changes cannot be created there (they are only allowed in maintenance projects), and the default mode comes back into play.

    Also possible is the "warning only" setting, which means that all conflicts are only ever reported as warnings and the process is never terminated.

    In this case it is necessary to also activate downgrade protection (DGP). This ensures that even if you only get a warning in CSOL, you still cannot get passing developments, because DGP checks again at release and at every import.


    With these allowed settings, you will never need to delete an entry from the CSOL list because of urgent changes needing to reach PRD as fast as possible. Likewise, in any other conflict situation, you will never need to delete entries from the CSOL list to continue with your process.

    This way you will never get a wrong "green" retrofit categorization that would end in an overwrite in DEV.



    When using the enhanced retrofit in Solution Manager, the use of the cross-system object lock is mandatory for the correct behavior of the tool.
    You cannot use the enhanced retrofit without having CSOL set up and activated for the retrofit-relevant projects.
    With some of the available conflict-analysis customizing settings in the cross-system object lock, there is a danger of downgrading your implementation work.

    When using the enhanced retrofit, you should only use project relation "specific". Any "cross-project" setting is not allowed, because a terminating cross-system object conflict would require the deletion of the corresponding lock entry; but that lock entry is required for the correct analysis of the enhanced retrofit.



    When using the enhanced retrofit scenario, make sure your CSOL customizing is set to "specific" from the project-relation point of view.

    "Warning only" is also a valid setup if DGP is activated on top. The default mode can also be valid for the enhanced retrofit scenario when it is ensured that no urgent changes can ever be created in the implementation landscape (DEV).


    The looping capabilities are planned to be shipped with SAP Solution Manager 7.1 SP13.

    Alternatively, you can implement the following notes in advance:

    • 2088536 - Downport CBTA Default Components
    • 2088525 - IF and LOOP Default Components for CBTA
    • 2029868 - CBTA - Runtime Library - Fixes & improvements



    A test script may need to perform actions against an unknown number of entries in a table. The script may therefore need to:

    • Start at the first row and check whether there is an entry
    • If an entry exists, perform one or more actions on the current row
    • Continue with the next row




    Keyword: DO

    It can be used to iterate over several steps and defines where the loop starts.

    • It must be used together with the LOOP keyword, which defines where the loop ends.
    • The EXIT_DO keyword must be used as well, to determine when to stop the loop.


    The CounterName parameter provides the name of the iteration counter. This counter is incremented automatically at runtime while iterating over the included steps. The actual value of the counter can be retrieved using the regular token syntax.

    For instance, when CounterName is set to "COUNTER" its value can be reused in the subsequent steps using %COUNTER% (or $COUNTER$ for specific situations where the percent character is ambiguous).


    If you plan to use nested loops, please make sure to declare different counter names.


    Component Parameters


    CounterName: Specifies the name of the iteration counter.


    Keyword: EXIT_DO

    It must be used within a loop that has been defined using the DO and LOOP keywords. The EXIT_DO keyword interrupts the loop as soon as the condition is met.

    A typical use case is to check the value of the iteration counter that has been declared via the CounterName parameter of the DO keyword.

    For instance, when CounterName is set to "COUNTER" its value can be checked using the %COUNTER% token.


    Component Parameters


    • Specifies the value of the left operand that is to be checked.


    • Specifies the Boolean operator to use.

    The operators supported are the ones below:

      • = for "Equal to"
      • < for "Less than"
      • > for "Greater than"
      • <= for "Less than or equal to"
      • >= for "Greater than or equal to"
      • <> for "Not equal to"
      • {contains} for "Contains"
      • {startsWith} for "Starts with"
      • {endsWith} for "Ends with"

    An additional operator is supported when testing WEB applications (i.e.: applications running in the browser):

      • {matches} for checking whether the value matches a regular expression. The regular expressions are expressed using the .NET syntax.


    • Specifies the value of the right operand that is to be compared with the left operand.



    The options parameter lets you perform adaptations or conversions of both the left and right operands before comparing them.

    The supported options are:

    • /u (for uppercase) - Both values are converted to upper-case before being compared
    • /t (for trimmed) - Both values are trimmed before being compared
    • /i (integer) - Both values are converted to an integer before being compared
    • /f (float) - Both values are converted to a float (or double) before being compared
    • /b (bool) - Both values are converted to a Boolean before being compared
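As an approximation of the documented semantics (not CBTA's actual runtime code), the operators and options could be modelled like this. Note that {matches} in CBTA uses .NET regular expressions, which Python's `re` only approximates:

```python
import re

# Operator table, mirroring the documented EXIT_DO comparison operators.
OPS = {
    "=":  lambda a, b: a == b,
    "<":  lambda a, b: a < b,
    ">":  lambda a, b: a > b,
    "<=": lambda a, b: a <= b,
    ">=": lambda a, b: a >= b,
    "<>": lambda a, b: a != b,
    "{contains}":   lambda a, b: b in a,
    "{startsWith}": lambda a, b: a.startswith(b),
    "{endsWith}":   lambda a, b: a.endswith(b),
    "{matches}":    lambda a, b: re.fullmatch(b, a) is not None,  # .NET regex in CBTA
}

def check(left, op, right, options=""):
    """Apply the /u /t /i /f /b conversions to both operands, then compare."""
    left, right = str(left), str(right)
    if "/t" in options:                       # trim both values
        left, right = left.strip(), right.strip()
    if "/u" in options:                       # upper-case both values
        left, right = left.upper(), right.upper()
    if "/i" in options:                       # integer comparison
        left, right = int(left), int(right)
    elif "/f" in options:                     # float comparison
        left, right = float(left), float(right)
    elif "/b" in options:                     # Boolean comparison
        left, right = left.lower() == "true", right.lower() == "true"
    return OPS[op](left, right)
```

The options matter: as strings, "10" > "9" is false because the comparison is lexicographic, while with /i the operands become integers and the check succeeds.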


    Keyword: LOOP

    It defines the end of the loop and must be used together with the DO keyword which defines where the loop starts.




    The following script was created for transaction VA02 (Change Sales Order) to add shipping information for each line item of an existing sales order.



    With the DO keyword the loop starts and the counter is set to ‘1’.


    To be able to address the row number starting at ‘0’, we take the counter value minus ‘1’ using the CBTA_A_SETINEXECUTIONCTXT component.


    Then the script reads the value of the first row in the first column to check whether an entry exists.



    If the value is empty, we exit the loop with the EXIT_DO keyword.


    Otherwise the script performs the required actions for the current row:

    • Select row


    • Menu Goto --> Item --> Shipping
    • Enter the required shipping information using the related screen component
    • Go back to main screen

    With the LOOP keyword the script goes back to the DO keyword, increasing the counter and processing further line items of the sales order.
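The control flow of this script boils down to the following pattern. The table data here is illustrative; in reality CBTA drives the SAP GUI table control:

```python
def process_table(rows, action):
    """DO/EXIT_DO/LOOP pattern: iterate until the first column is empty."""
    counter = 1                      # DO: CounterName = COUNTER, starts at 1
    while True:
        row_index = counter - 1      # screen rows are addressed from 0
        cell = rows[row_index][0] if row_index < len(rows) else ""
        if not cell:                 # EXIT_DO: first column empty -> stop
            break
        action(rows[row_index])      # select row, enter shipping data, go back
        counter += 1                 # LOOP: back to DO with counter + 1
    return counter - 1               # number of rows processed

processed = []
process_table([["10", "MAT-1"], ["20", "MAT-2"], ["", ""]], processed.append)
# processed now holds the two filled rows; the empty row ends the loop.
```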


    MOPZ Framework 3.0

    Posted by Mateus Pedroso Nov 28, 2014

    Dear followers


    My name is Mateus Pedroso, from the MOPZ/LMDB/Solman Configuration team, and I'll start writing some posts about these topics. I would like to begin with the MOPZ framework 3.0.


    MOPZ framework 3.0 is the standard as of Solution Manager 7.1 SP12, but you can apply note 1940845 to enable MOPZ 3.0 in Solution Manager 7.1 SP05-SP11. Note 1940845 must always be implemented in its latest version, as it fixes some bugs in MOPZ 3.0; it is therefore very important to ensure that the latest version of note 1940845 is implemented even in Solman 7.1 SP12. The following points changed in MOPZ 3.0:


    - UI and performance.

    - Integration of the Maintenance Optimizer with the Landscape Planner.

    - Add-on installation procedure.


    You can check more details about MOPZ 3.0 in the PDF attached to note 1940845.


    One of the most important improvements is the add-on installation. Here's a screenshot showing that you can now apply add-ons in step 2.



    Now it's easier to apply add-ons.


    In the next posts, I'll explain some LMDB/SLD topics related to MOPZ and how to fix some well-known issues.

