There are a lot of questions and discussions around the technical architecture that should be used for Technical Monitoring, so I decided to start a blog post series as requested by community members. To keep each post reasonable in length, I'll split the series into several parts on Technical Monitoring.

A first reasonable question is, how many SAP Solution Manager systems do I need?


This blog represents my opinion. If you have a different opinion, feel free to share and discuss it with the community at large, as it can be of interest to all of us. Please feel free to comment.

 

How many SAP Solution Manager systems do I need?

howmanylicksdoesittake.png

One of the first questions in terms of architecture for Technical Monitoring is "How many SAP Solution Manager systems do I need?". The answer can differ greatly depending on your plans, what you are trying to achieve, and how large your landscape is.

 

Small size

smallsized.png

A small customer with a small SAP landscape (one ERP system landscape) will often run a single SAP Solution Manager instance. When it comes to Technical Monitoring, it would mean that all systems get connected to this one SAP Solution Manager instance.

 

Having only one SAP Solution Manager system comes with the same typical advantages and disadvantages you would face in a traditional ERP landscape that consists of only a productive ERP system.

 

When it is time to update the SAP Solution Manager system, you could clone or copy it and perform the update on the clone or copy first to test out the procedure, before doing the actual update during a weekend, for example, to avoid downtime and impact as much as possible.

 

Medium size

 

mediumsized.png

Many customers have exactly two SAP Solution Manager systems, so I often see DEV – PRD landscapes, as we have a good number of medium-sized customers. This is a very common configuration.


With two SAP Solution Manager instances, the discussion starts here: which SAP systems (talking ERP now) do I connect where? Do I connect all DEV and perhaps ACC systems to the DEV SAP Solution Manager and only PRD systems to the PRD SAP Solution Manager?


Well, I've said this before and I'm about to say it again: SAP Solution Manager wasn't really designed for this kind of split, so I only connect the DEV SAP Solution Manager to itself and to one or more sandbox ERP systems.


All other systems (DEV, ACC, PREPROD, PRD, …) get connected to the PRD SAP Solution Manager in order to benefit from having all that data in one place: one alert inbox for the support team, one single source of truth for reporting purposes, and one single source of truth for specific scenarios that require data from and connectivity to all systems belonging to a specific landscape.


The advantage of having a DEV SAP Solution Manager is that you have a place to try scenarios out and perform support package stack updates, which makes it easier to minimize the impact on the PRD SAP Solution Manager.


Large size

 

largesizepartone.png

largesizeparttwo.png

You probably guessed I was going to say that large customers have three SAP Solution Manager systems, but that's not a general rule of thumb. It is most likely an SAP recommendation, but that doesn't make it a necessity. It can make sense if you are doing heavy custom development and investing in ITSM scenarios, where you really want an ACC system in place, but what I see is that many SAP customers max out at two SAP Solution Manager instances (DEV, PRD) as their SAP Solution Manager landscape.


Larger customers can have larger landscapes (potentially scaled out). Some have a split landscape, where they decide to go DEV1 – PRD1 for Technical Monitoring and DEV2 – PRD2 for IT Service Management, thereby separating those scenarios so they don't run on the same SAP Solution Manager instance.


Why? Because they want to avoid the impact of one scenario on the other, so they can, for example, patch DEV1 – PRD1 faster than DEV2 – PRD2.


The drawing above shows an example of such a split landscape. You use one SAP Solution Manager landscape for Technical Monitoring purposes, while you use a second SAP Solution Manager landscape for IT Service Management purposes. In the IT Service Management landscape, you don't use Diagnostics Agents; you only connect the managed SAP systems through RFC connections, thus ignoring some red traffic lights in the managed system setup. That second SAP Solution Manager landscape can then be monitored by the first one, where Technical Monitoring is implemented.


SAP Solution Manager – Diagnostics Agent architecture guide

 

I haven't gone into any detail here on agents or the other elements that make up the architecture; that might be content for a future blog post. SAP Note 1365123 - Installation of Diagnostics Agents has a guide attached that provides insight into the possible architectural options for Technical Monitoring. The PDF document goes through numerous possibilities and options beyond what I covered in this blog post.


I prefer to keep things simple by not cross-using elements, as you can see in the "simplified" architecture schemas above. Why? Because complexity adds additional effort on multiple fronts: configuration, maintenance, support, and troubleshooting, to give some examples. The split landscape option translates into more maintenance effort but lower risk, and it makes it easier to keep the SAP Solution Manager landscape that is mostly used for technical scenarios up to date, since you don't impact ITSM processes that way.


Diagnostics agent on the fly as a default

 

At the moment, I advise installing Diagnostics Agents on the fly (as opposed to regular, standalone Diagnostics Agents) as a default, even if you only have a single SAP system per server, because as of SAP Solution Manager 7.1 SP12 this is a prerequisite for using automated reconfiguration.


Automated reconfiguration allows the system to redo certain managed system setup steps, plus some other steps, such as automatically assigning new default SAP templates in Technical Monitoring after the SAP product version has been updated.


If you haven't seen or read about it yet, you can find a nice presentation on the SCN wiki: http://wiki.scn.sap.com/wiki/display/SMSETUP/Home, where you can also find the Sizing Toolkit, which can help you calculate the need for scale-outs, for example.


Under 7.1 SP12 (NEW), check out the presentation on Automatic Managed System Reconfiguration (PDF).

The Service Marketplace pages for Business Process Operations will stop being available in the very near future, which means that accessing information via those pages will no longer be possible.

 

Therefore, all documentation for Business Process Operations (including overview presentations and setup guides) is now accessible via the SCN Wiki Page
http://wiki.scn.sap.com/wiki/display/SM/SAP+Solution+Manager+WIKI+-+Business+Process+Operations.

 

SCN_Wiki.jpg

 

This page gives a general overview of Business Process Operations. Each area of BPOps is briefly explained, and links to sub-pages with more details are provided. These sub-pages per area are directly accessible via the following URLs:

 

In these sub-pages you have access to the existing setup guides and further documents currently available in the Service Marketplace. All future documentation will also be made available here.

 

In the coming weeks we will further extend our documentation in these wiki pages and we will keep you informed in case of major updates.

Part I of this blog series gave some tips on how to enhance the original urgent change flow to generate Transports of Copies. Today I will explain how to activate this customizing in your ChaRM project. You can check out Part I at this link: How-to enable Transport of Copies on Urgent Changes Flow (PART I)

 

Create a new Project in transaction SOLAR_PROJECT_ADMIN, or close the current Project Cycle.

 

Before you create the new tasklist, you should perform the following steps:

 

pic01.png

 

Push the button Show Available Variants for Tasklist and select the Y/ZSAP0 tasklist variant. Without this step, Transport of Copies for urgent changes will not work! If you don't make this change before generating the tasklist, you will receive errors in the urgent change flow when you reach status E0004.

Last month, I had the pleasure of collaborating with a Brazilian food company (the world's tenth-largest food company), speaking about some of our experiences and best practices and providing on-demand consulting for their ChaRM (Change Request Management) solution.

 

During our conversation, the customer told me about their wish to deploy Transport of Copies as part of the urgent change flow. As we know, Change Request Management provides a standard workflow containing the Transport of Copies procedure only for normal changes.

 

In this blog I provide some hints on how you could set it up; however, there is no guarantee, and no standard support from SAP, for this configuration.

 

Background

 

Urgent changes have their own tasklist (type "H") to coordinate all transport requests. The original tasklist type H does not contain the action "Create Transport of Copies". Therefore we need to enhance tasklist type H using a custom tasklist variant. After this configuration, some adjustments must be applied to control the TMS of the managed system.


Just to clarify when the ToC is generated and when the original transport request is released, I made the following pictures showing the "AS-IS" and "TO-BE" solutions. You can adapt them to your needs (e.g. by creating additional statuses).

 

Standard Process Flow: Urgent Change (SMHF)


pic01.png


Enhanced Process Flow: Urgent Change (Y/ZMHF)


pic02.png

Configuration Procedure

 

Create a Tasklist Variant

 

          Access the IMG activity using the following navigation options:


               spro_img1.jpg

               pic03.jpg

         

 

          Push button "New Entries":

      pic04.jpg

        

 

          Create the tasklist variant "Y/ZSAP0":

        

           pic05.jpg   

 

 

 

Define Tasks for Tasklist Variant

 

          Access the IMG activity using the following navigation options:

 

          spro_img2.jpg

          pic06.jpg

 

     Select all entries from Tasklist Variant "SAP0" and copy to Tasklist Variant  "Y/ZSAP0":

    

     pic07.jpg


     Create a new record adding the task "Create Transport of Copies"  for the Project Type "H Urgent Change":

 

     pic08.jpg

 

 

 

Define Header / Footer Tasks for Tasklist Variant

 

     Access the IMG activity using the following navigation options:

   

     spro_img3.jpg

     pic09.jpg

 

 

Repeat step 2 from the Define Tasks for Tasklist Variant configuration:


     pic10.jpg

 

 

Register Tasklist Variant into Project Cycle

 

     Apply the SAP Note 927124.

   

 

Adjusting Conditions and Actions (TSCOM Tables)

 

     Some activities regarding the transport management system and consistency checks are triggered when the change document is assigned a specific status value. To enable the urgent change flow to generate Transports of Copies, we need to change some actions and their conditions based on the status value.

 

 

     Access the IMG activity using the following navigation options:

 

     spro_img4.jpg

 

     On the folder "Create Procedure Type", choose the "Y/ZMHF" transaction type and its status profile:

 

     pic11.png

        

     On the folder "Assign Actions", make the following adjustments for the User Status "E0004 - To be Tested":


      pic12.png

 

     On the folder "Assign Actions", make the following adjustments for the User Status "E0005 - Successfully Tested":

 

     pic13.png

 

     On the folder "Define Execution Times of Actions", make the following adjustments for the User Status "E0004 - To be Tested":


     pic14.png

 

     On the folder "Assign Consistency Checks", make the following adjustments for the User Status "E0004 - To be Tested":

 

     pic15.png

 

     On the folder "Assign Consistency Checks", make the following adjustments for the User Status "E0005 - Successfully Tested":


     pic16.png

 

 

 

Result

 

Project cycles powered by the custom tasklist variant will be able to generate Transport of Copies. In my next blog I will describe how to use this feature.

Here's what I thought before using CHARM:

 

CHARM will:

  • Remove conflicts between developers
  • Eliminate missing objects when transporting to production
  • Eliminate the need to keep track of transport dependencies
  • Allow bundling transports outside of SAP
  • Keep defects with their original requests
  • Reduce the number of transports

 

The above is Michelle's world of what CHARM will do, NOT what SAP or CHARM claims to do.

 

So here's a scenario:

I would make changes to an object. There would be changes to an outside system. Developer 2 makes changes to a different object that is part of my project. All of the previous transports/objects will be bundled in one CHARM request. Emergency and non-emergency transports will be taken into consideration.

 

Dum, Dum, Dum, Da, Dum - Drum roll please.  Charm to the rescue.

 

See below:

 

charm4.JPG

 

So was my vision correct?

 

In practice:

  • A regular transport is created. Table 1 is not changed.
  • The transport and CHARM ticket are released for an emergency change. It is immediately moved to production (after testing in quality).
  • The regular transport has fields removed from table 1, and the emergency transport object is changed so it no longer requires those fields.
  • The emergency change is moved to production again.
  • The regular change is moved. Now when the programs are regenerated, the table is generated first, and then the program. The emergency program is generated with errors, so it ends with return code 8, and the regeneration stops.

 

If the above confuses you, you are not alone. It confuses me and my BASIS people. The only solution I found was to create a new CHARM ticket with just the table, transport it first, then re-transport the two CHARM tickets. They will go into the system clean.

In theory:

All transports are moved to production with the release.

 

In Practice:

  • Not all transports move to production.
  • The changes are backed out of the object, and the object is changed by the developer. The developer ignores the conflict and creates a new transport request.
  • At this point the changes can't be moved without BASIS help. Why? Because there is a conflict.

 

In theory:

Only one developer works on an object at a time.  Or if more than one developer is working on it, then it's for the same project.

 

In Practice:

  • There can be more than one developer working on an object. And yes, it is for two different projects.
  • So there are two options: add the object to the two different CHARM tickets, or leave the object in the CHARM ticket that already has it. Either one will make one CHARM ticket dependent on the other, and it will be a manual task to keep track of that.

 

In theory:

When a new table is created, all your developers will know it is new and won't use it in their objects.

 

In Practice:

  • Developers miss that the table was created in a different CHARM ticket. They have no idea of the dependencies.
  • CHARM doesn't notify them of the dependencies.
  • The move to production has errors.

 

OK, I'm done with the things CHARM doesn't do well. There are some things that it does very well.

 

CHARM is amazing at:

 

  • Limiting the number of transports. For a regular CHARM ticket that goes with a release, only the transport task needs to be released. When the task is released, it moves in the background to the test system. If there are problems, I just create another task. The transport request is never really moved until the move to production.
  • It is easy to create a configuration transport request and a development transport request. Since they are both on the same CHARM ticket, they will move to production together.
  • If your CHARM ticket has been released and an error is found, it is easy to create a defect request and attach it to your CHARM ticket. This will keep the transports together in one CHARM ticket.
  • The test environment is easily locked down when the system is moved to testing. This will stop everything except emergency transports from moving to the test client/system.
  • The approval process is at the front end. A CHARM ticket is not created until the CHARM request is approved. That means a transport request can't be created.
  • Outside objects - I'm not sure, as we haven't used CHARM for that yet.

 

So there you have it, my personal thoughts on CHARM.  Keep in mind, like all SAP products, different companies will have CHARM configured differently.  So some (or none) of what I've written may apply to you.

 

Does CHARM do what it claims to do?  Yes.  Does it do what you think it should?  You be the judge of that.  Personally, I think it does make my job easier.  It's not a silver bullet.  It doesn't fix all transport issues.

 

Please comment with some pros and cons.  And do let me know if I'm losing my mind with some of my comments. 

 




Closing the current Change Cycle and open a new One

SAP highly recommends that customers close their Maintenance Cycles on a regular basis.

  • This allows meaningful reporting on change activities per Change Cycle.
  • On the other hand, closing the Maintenance Cycle regularly helps avoid a potential performance impact in the long run.

A Change Cycle is closed by processing the Change Cycle Document to the final CRM User Status:

  • SMMN for a Maintenance Cycle with Task List Variant SAP0,
  • SMMM for a Maintenance Cycle with Task List Variant SAP1,
  • SMDV for a Project Cycle.

Take over open Change Documents to the next Change Cycle

When you close the existing Change Cycle, for instance a Maintenance Cycle and open a new one, you are not forced to close all Change Documents, which belong to this Change Cycle.

SAP Change Request Management allows you to take over open Change Documents to the next Change Cycle.

Project Completion and complete Closure of all related Change Documents

In SAP Change Request Management, no automation is available for closing the Change Documents that belong to a ChaRM project.

However, ChaRM offers the program CRM_SOCM_SERVICE_REPORT as a standard solution for closing Change Requests and Change Documents that belong to a Change Cycle.

Before utilizing program CRM_SOCM_SERVICE_REPORT, you should check the following:

  • the status profile customizing (for the relevant Change Cycle document: SMMN, SMDV),
  • that the ChaRM condition SUB_ITEMS is defined for the Change Cycle documents SMMN and SMDV.

With the help of program CRM_SOCM_SERVICE_REPORT you can search for any kind of open Change Documents with various search criteria, such as:

  • Open Change Documents per Business Partner, Team, etc.
  • Open Change Documents per CRM User Status,
  • Different service-process-related search criteria, such as 'Transaction Type', 'Posting Date', etc.

 

After making your selection, you can let the program process the open Change Documents further, up to their final CRM user status.
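Conceptually, the report selects open documents by criteria and then, outside test-run mode, processes them to the final status. The sketch below is plain illustrative Python only, not the actual ABAP report; all field names, statuses, and structures here are hypothetical:

```python
# Toy model of the select-then-close flow described above.
# Nothing here is a real SAP API; names are illustrative only.

FINAL_STATUS = "Completed"  # hypothetical final CRM user status

def select_open_documents(documents, transaction_type=None, team=None):
    """Filter open change documents by optional search criteria."""
    result = []
    for doc in documents:
        if doc["status"] == FINAL_STATUS:
            continue  # already closed, skip
        if transaction_type and doc["type"] != transaction_type:
            continue
        if team and doc["team"] != team:
            continue
        result.append(doc)
    return result

def close_documents(documents, test_run=True):
    """In test-run mode only report the selection; otherwise close."""
    selected = select_open_documents(documents)
    for doc in selected:
        if not test_run:
            doc["status"] = FINAL_STATUS
    return selected

docs = [
    {"id": 1, "type": "SMMN", "team": "A", "status": "In Process"},
    {"id": 2, "type": "SMHF", "team": "B", "status": "Completed"},
]
# Test run: one open document is reported, nothing is changed.
report = close_documents(docs, test_run=True)
```

The test-run mode mirrors the real program's 'Test run mode' mentioned below: the same selection happens, but no status change is made.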

 

Snagit1.png

In addition, the program offers a test-run mode.

 

Of course, program CRM_SOCM_SERVICE_REPORT can also be utilized to close IT Service Management documents such as Incidents or Problems.

It is essential to read this blog before you proceed further.

 

Since 7.1 SP10, there are two work centers in Technical Monitoring, namely BI Monitoring and Job Monitoring.

 

There are some inherent design shortcomings in the BI Monitoring application in 7.1, so SAP decided to invest in a renewed work center to overcome these deficiencies. As part of this approach, we decided to unify the collectors of Business Process Monitoring and Technical Monitoring so that they are based on the same infrastructure, namely MAI.

 

From a runtime and reliability perspective, Job Monitoring offers a more robust alerting mechanism.

 

What does the migration report do, and when should you execute it?

 

So if you have actively used BI Monitoring in the past (prior to SP12) to monitor BW process chains (BW PC), SBOP and SAP Data Services jobs and have several managed objects configured, we provide a report program to transfer these configurations to managed objects of type Job Monitoring, so that you can utilize the new unified job monitoring collection mechanism and overcome the known limitations of BI Monitoring.

 

What objects are migrated?

 

If the technical scenario in BI Monitoring has managed objects of type BW PC, SBOP, or SAP Data Services jobs, this migration report acts on them.

 

Which objects are not migrated?

 

BEx queries and templates are not migrated.

 

FAQ:

 

1. What happens to the old BI monitoring objects?

 

The old BI Monitoring scenario would remain as is. There are options to decide what should happen to the objects in this scenario.

 

A drop-down exists to execute this migration one scenario at a time.

 

There are two options

 

a) Migrate BI Monitoring objects:

 

We create a new Job Monitoring technical scenario; '_BIMONIT' is added as a suffix to the existing BI Monitoring scenario name. Then all objects of type process chain, SBOP job and SAP Data Services job in the chosen scenario are read, and the existing configuration (metrics, thresholds, notification and incident settings) is migrated by creating new job monitoring objects of the respective sub-type in the new job monitoring scenario.

 

In this case, BI Monitoring is still functioning, and Job Monitoring is now functional as well.

 

b) Migrate and deactivate BI Monitoring objects:

 

The same migration as in option (a) is performed: a new Job Monitoring technical scenario with the '_BIMONIT' suffix is created, and the existing configuration is transferred to new job monitoring objects.

 

In this case, however, these objects are also deactivated in BI Monitoring and are now monitored via Job Monitoring only.
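The difference between the two options can be summed up in a small sketch. This is plain illustrative Python, not the actual report program; only the '_BIMONIT' suffix and the migrated object types come from the behavior described above, while the data structures are hypothetical:

```python
# Conceptual model of the two migration options described above.
# Only BW PC, SBOP and DS job objects are migrated; BEx queries
# and templates stay behind in the BI Monitoring scenario.

MIGRATED_TYPES = {"BW_PC", "SBOP", "DS_JOB"}

def migrate_scenario(bi_scenario, deactivate=False):
    """Option (a): deactivate=False; option (b): deactivate=True."""
    job_scenario = {
        # The new scenario reuses the old name plus the '_BIMONIT' suffix.
        "name": bi_scenario["name"] + "_BIMONIT",
        "objects": [],
    }
    for obj in bi_scenario["objects"]:
        if obj["type"] not in MIGRATED_TYPES:
            continue  # e.g. BEx queries remain untouched
        # Copy the configuration (metrics, thresholds, notifications...).
        job_scenario["objects"].append(dict(obj))
        if deactivate:
            obj["active"] = False  # option (b) only
    return job_scenario

bi = {"name": "SALES_BI", "objects": [
    {"type": "BW_PC", "name": "CHAIN_1", "active": True},
    {"type": "BEX_QUERY", "name": "Q1", "active": True},
]}
new = migrate_scenario(bi, deactivate=True)
```

With option (a) the same object ends up monitored twice (once per scenario); option (b) avoids that duplication by switching the old objects off.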

 

2. What happens to the old BI Monitoring scenario?

 

The old BI Monitoring scenario remains as is: active and functional.

 

3. What happens to the other parts of the BI Monitoring scenario that are not migrated?

 

The old BI Monitoring scenario remains as is. The other parts of the BI Monitoring scenario, such as the BW BEx queries and templates and all systems included in the scope selection of the Define Scope step 4, remain in the BI Monitoring scenario. Depending on the option chosen for the migration, the objects of type BW PC, BO jobs and DS jobs are deactivated.

 

4. Where can I check the result of the report program that performed the migration?

 

Please check in transaction SLG1:

 

Object type: E2E_ALERTING

 

Sub Type: JOB_CONFIG

 

5. Where can I check the logs of this background job?

 

Owing to this being a time-intensive operation, the program executes in the background. In transaction SM37, check the job name MIGRATE_BI_JOB_* with the user who triggered the migration report for the logs and the status of the job.


6. What are the advantages of this migration?

 

The Job Monitoring work center has a more sophisticated collection and hence avoids grey alerts. Also, starting with SP12, BW reporting is available for the metrics collected by job monitoring.

 

7. Which features of the BI Monitoring setup are not available in Job Monitoring?


There is also a compromise in this migration. The BI Monitoring work center evolved over the last 10 SPs into a feature-rich SOLMAN_SETUP. Certain functionalities developed in the BI Monitoring configuration cater to mass handling requirements: 'threshold mass maintenance', 'job details', 'Excel import and export' and 'take from schedule'. These features are not yet available in the Job Monitoring configuration. But there is a trade-off: the collection is robust, so the alerting and the monitoring UI are more dependable when using Job Monitoring.

 

Solution

 

Execute the report program AC_JOBMON_MIGRATION to migrate the BW PC, SBOP and DS job objects of BI Monitoring to Job Monitoring.

Let's continue exploring partner determination with default or dependent values, and with certain conditions, to build more flexible ITSM / ChaRM procedures. There is a topic on how to set up partner determination via BRF+ and a very good blog post from Vivek. But why not think about other possibilities? Here comes a CRM framework called Rule Policies: it is mainly used in Solman as a dispatch tool, but if you dig deeper, its true possibilities open up before your eyes.

 

Example Scenario: Rule policy ITSM / CHARM partner determination (NO ABAP REQUIRED).

 

Steps:

1.    Assign Rule modeler to Categorization schema of CR

2.    Create Rule policy type SRQ ZMCR_DEFAULT_BP

       Mapping: if category = CAT_1, then route developer = 11.

3.    Copy SAP_SRQMROUTING to Z/Y

4.    Assign policy

5.    Assign Service Manager Profile to Change Request transaction

       Change Request = ZCR_SRQMROUTING

6.    Test

 

 

1. Assign Rule Modeler to the categorization schema of the CR

 

First, let's go to Solman's CRM Web UI and set up the things we need.

Tcode SM_CRM – Service Operations – Categorization Schemas

Choose the schema that is assigned to Change Request.

Add new version, go to Application Areas and press New and add

Application ID – Rule Modeler

Parameter – Context

Value – Service Request Management

We need a row like the last row in the picture below:

Снимок1.PNG

 

2. Create Rule policy type Service Request

 

Tcode SM_CRM – Service Operations – Rule Policy; here we need to create a new rule policy.

Снимок2.PNG

 

Context – Service Request Management.

Give the rule policy a name.

 

Снимок3.PNG

This technique will work for any type of Solman transaction, both ITSM and ChaRM.

Now for the most interesting part – the design part!

Choose the Draft Rules row and press Subnode.

Снимок4.PNG

Name it as you like, e.g. Category = Partners, and hit Subnode again.

 

 

Again, give it a proper name to avoid confusion when you read the policies, and press Add Entry in the Conditions block.

 

Снимок6.PNG

Снимок7.PNG

 

Choose

Attribute – Order Category

Operator – Contains

Value – choose the category you wish to map the partner functions to, e.g. our popular Change Manager; in the example it is ATH.

Снимок8.PNG

Now in Action block press Add Entry

Choose Action – Route to a Partner, Partner Function – SDCR0002 Change Manager, and the Partner whom we need to assign as Change Manager in this case.

Снимок9.PNG

For example, I have set up all the needed partner functions to be filled for the category ATH, see below:

Developer, Tester, a custom partner function, etc.

Снимок10.PNG

Do not hurry on to the next topics; take some time here, because this is where you can build any scenario you need.

For example, you can combine multiple checks for any situation: user status, priority, or change category together with other conditions such as a category check. You can match conditions with AND / OR operators.
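As a mental model, each rule in the policy pairs a set of conditions (combined with AND / OR) with a set of routing actions. The sketch below is plain illustrative Python only; the attribute names, partner functions, and documents are hypothetical, and the real evaluation is done by SAP's CRM Rule Policy engine, not by code like this:

```python
# Toy model of a rule policy: conditions combined with AND/OR decide
# whether the routing actions fire. All names are illustrative only.

def evaluate(conditions, doc, combine="AND"):
    """Check (attribute, operator, value) triples against a document."""
    results = []
    for attribute, operator, value in conditions:
        actual = doc.get(attribute, "")
        if operator == "contains":
            results.append(value in actual)
        elif operator == "equals":
            results.append(value == actual)
    return all(results) if combine == "AND" else any(results)

def dispatch(rules, doc):
    """Apply the first matching rule's partner assignments."""
    for conditions, combine, actions in rules:
        if evaluate(conditions, doc, combine):
            for partner_function, partner in actions:
                doc[partner_function] = partner  # route to a partner
            break
    return doc

# One rule: if the category contains "ATH", fill several partner functions.
rules = [
    ([("category", "contains", "ATH")], "AND",
     [("change_manager", "J.DOE"), ("developer", "A.SMITH")]),
]
cr = dispatch(rules, {"category": "ATH"})
```

The point of the model is the shape: one rule, many conditions, many actions, so a single Dispatch can fill Change Manager, Developer, Tester and custom partner functions in one go.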

 

3. Copy SAP_SRQMROUTING Service Manager Profile to Z/Y

 

Tcode SPRO - Customer Relationship Management - E-Mail Response Management System - Service Manager - Define Service Manager Profiles. Choose SAP_SRQMROUTING, press the Copy button at the top, and name it e.g. ZCR_SRQMROUTING.

Снимок11.PNG

4. Assign Rule policy to Service Manager Profile

 

Stay on your ZCR_SRQMROUTING profile – double-click Directly Called Services – double-click Properties.

Policy = your created policy in our case YALM_ZMCR_2

Снимок12.PNG

5. Assign Service Manager Profile to Change Request transaction

 

Transactions - Additional Settings - Assign Dispatching Rule Profile to Transaction Types.

 

 

 

6. Test

 

Now go to SM_CRM and create or pick any Change Request, then press More – Dispatch. All partners will be filled as mapped; here the chosen category = ATH, and after pressing Dispatch:

Снимок14.PNG

All partners filled

Снимок15.PNG

This will work even if a partner function is not empty.

 

Have fun!

D.K.

With 7.1, the central monitoring capabilities of Solution Manager have improved immensely. For setting up monitoring of the systems in a BI landscape, and of the objects in these systems, we now have several options:
1. Via Technical Monitoring - BI Monitoring
2. Via Business Process Monitoring (BPMon) - BW process chain monitoring
3. Via Unified Job Monitoring

I would like to highlight which option is a good choice, and the pros and cons of each approach.

 

What is BI Monitoring in Technical Monitoring?

It provides central monitoring and alerting capabilities, integrated with guided procedures for an alert resolution path. This caters to the need of an administrator to get an overview of the health of the systems participating in the BI landscape, in addition to the objects specific to the data flows within the landscape.
Target audience:
  • BI administrator
  • BOBJ administrator
  • Application support
  • BI operations team
centralmontiroing_BIMON.PNG

Runtime:

1. Overview Monitor

A single-screen overview of:
Health of all systems participating in the BI landscape
  • Availability
  • Performance
  • Exceptions
View of the health of the data flow entities in the landscape
  • BW process chains (ETL)
  • BOBJ jobs (Reporting)
  • DS jobs (Replication)
  • BEx queries & templates (ad-hoc reporting)

2. Detail Monitors

Provides specific information on the health of the monitored objects by monitoring, per instance of these recurring jobs, certain metrics which are representative of the health of the data flows across the systems.
Overall health of the job
  • Status
  • Error logs (managed system login may be required)
Scheduling metrics
  • Start delay
  • Not started on time
Runtime metrics
  • Duration
  • End delay
  • Out of time window
Data integrity metrics
  • Records processed
  • Data packages processed
  • Rows read
  • Rows written (Data Services)

DETAIL_BIMON.PNG
Supported system types in BI Monitoring
System monitoring metrics are integrated into the BI overview monitor for the following system types:

  • BW JAVA
  • SAP HANA Database
  • SAP SLT
  • BWA(TREX system)
  • ABAP Source system
  • BOE WAS (Tomcat, WebSphere, SAP_J2EE)
In addition to the system monitoring metrics, we can configure jobs/reports on top of these systems to be monitored:
  1. SBOP (SAP BusinessObjects platform jobs in the CMC): 3.x and 4.x
  2. SAP Data Services (jobs): 4.1 and 4.2
  3. BW ABAP server (process chains, BEx queries, templates)

MONITORS_BI_MON.PNG

As you can see, these screens have a designated navigation flow: a single overview screen showing the health of all participating systems in the landscape, with subsequent drill-down by system type, then to the monitored objects per system, and then to their instance details!

Some key features of the BI Monitoring configuration (at configuration time in SOLMAN_SETUP) cater to handling mass objects in the configuration UI:

  1. Mass maintenance of thresholds
  2. 'Take from managed system' to assist in configuring thresholds
  3. Excel upload & download
  4. Managed object details shown in the configuration to assist in setting threshold values for metrics such as duration, records processed, etc.
Nevertheless, this application has certain limitations.
1. The runtime (monitoring UI) is not always coloured!

The monitoring UI can report a grey rating for monitored objects that are not frequently executed in the managed system, such as a chain that runs only once a week. The collection frequency in Solution Manager is set to 5 minutes, so the status of such a chain in the monitoring UI turns grey 10 minutes after the chain ends today, and only becomes coloured again at the next execution, which is next week! However, the alert (if any) remains in the alert inbox with the history of measurements.
{Increase the collection frequency. Instead of (once every) 5 mins, make it once every hour. Esp for longer running chains. This would mean the monitoring stays colourful for twice the collection frequency. so instead of 10 minutes, its available for 2 hours.  This has a flip side, "delay in alerting" bad news that of failure.}
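The arithmetic behind this workaround can be sketched in a few lines (an illustrative Python snippet, not part of Solution Manager):

```python
# Illustrative arithmetic for the grey-rating window described above:
# the monitoring UI stays coloured for roughly twice the collection
# interval after the chain's last run, then turns grey until the next run.
def coloured_window_minutes(collection_interval_min):
    return 2 * collection_interval_min

# Default 5-minute collection: status turns grey 10 minutes after the chain ends.
# Hourly collection: status stays coloured for 2 hours, at the cost of slower alerting.
```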

2. The alert inbox can have multiple alert groups open for the same BW process chain LOG_ID, for instance, due to
    1. Grey metrics (collector issue, MAI extractor issue, engine design)
    2. If an open alert for a chain failure is confirmed in the alert inbox, the next collection reports this error again and a new alert is opened (owing to the fixed look-back time of 36 hours of the ST-A/PI collector)

 

 

=> SERIOUS CONSEQUENCE: multiple (duplicated) automatic email notifications

Workaround for reducing the number of duplicate emails (this does NOT eliminate the duplicate alerts, it only reduces the duplicate emails):

1. Restrict the collection to a small and relevant time window (to reduce the occurrence of grey metrics from the collector)

    1. However, in the advanced tab of the data collection scheduling, the managed system time zone is not handled in ST < SP10 (UTC is used)
    2. If the chain runs over midnight, this restriction cannot be configured at design time

2. Increase the collection interval, e.g. from once every 5 minutes to once every hour, especially for long-running chains. This reduces the probability of a grey alert, with the flip side of a delay in alerting the bad news of a failure.

3. Starting with SP10, SAP Note 2118848 can be implemented and the report AC_BIMON_MASS_AUTOCONFIG executed with 'set retention time' to circumvent this problem.


 


3. No analytics capabilities on the collected metrics, i.e. no interactive reporting and no BW reporting


What about monitoring BW process chains via BPMon (BW process chain monitoring)?


    To support the end-to-end monitoring of business processes, which can span several systems and internally comprise different entities such as interfaces, jobs, and process chains, Solution Manager offers the possibility to orchestrate a business process and set up monitoring for the participating entities. In such a context, process chains can be set up for monitoring.
    Advantages:

    => A clear business-process-driven approach to monitoring
    => Support for extended schedules, multiple "not started on time" checks, etc.
    Limitations:
    - No overview of the health of the underlying technical systems
    - No contextual navigation to the underlying system monitoring
    - Always requires a business process solution to be orchestrated in Solution Manager to set up monitoring


    What is Unified Job Monitoring?


    Over the last year, we have been pondering ways to fix these known issues and to develop an application that serves customer requirements by closing the existing gaps. Starting with SP10, we unveiled the new work center: Unified Job Monitoring.
    • A consistent approach to monitoring all types of jobs (BW process chains, ABAP jobs, SBOP jobs, SAP Data Services jobs), including jobs scheduled from an external scheduler that uses the SMSE interface, for instance SAP CPS (Redwood)
    • Reporting on background jobs without requiring direct access to production systems, using the collected metrics (BW analytics)
    • Powerful monitoring capabilities such as factory calendar awareness, job log content, business process context, and so on

    MOTIVATION_JOB.PNG

    We have developed a brand new monitoring UI as part of the transition to the new HTML5 technology (SAPUI5). Find below a glimpse of this monitoring UI.
    jobMON-MOnUI_Sp12.PNG
    Key features
    To remove redundant collection in the managed system and to provide a persona-specific runtime view, we have unified the configuration, data persistency, collection, and monitoring UI.


    Design Time:

    1. Reuse of monitoring objects from three entry points: BP Monitoring solution, Technical Monitoring scenarios, and job documentation
    2. Pattern-based monitored objects are supported for ABAP, BO, and DS. A BW process chain, however, has to be specified with its fully qualified name

    entry_points.PNG
    Runtime:
    1. Intermittent grey alerts are avoided
    2. Multiple email notifications are overcome
    3. Support of BW reporting.
    reporting_JOB_MON_SP12.PNG

    When to use what?
    1. If you need an overview monitor: Technical Monitoring - BI Monitoring, with certain workarounds for the known limitations.
    2. For a pure business process context: BPMon - BW process chain monitoring.
    3. For a harmonized approach to monitoring: starting with SP12, please migrate to the BPMon- and MAI-integrated Unified Job Monitoring.
    SAP will continue to invest only in this option. Overall, Unified Job Monitoring addresses the known limitations of BI Monitoring and integrates the BPMon-based process chain monitoring.
    However, there are still gaps owing to the time required for development. We intend to bring the best of both worlds together in Unified Job Monitoring.
    Starting with SP12, you can migrate an existing BPMon solution to an MAI-based solution. This ensures automatic usage of Unified Job Monitoring if jobs or process chains are available in the classical solution.
    Details: execute the migration report R_AGS_BPM_MIGRATE_SOLU_TO_MAI via SE38. Use the F4 help to identify your solution.

    Similarly, there is also a means to migrate the relevant objects of existing Business Intelligence Monitoring scenarios to a Job Monitoring scenario.
    Details: in transaction SE38, run AC_JOBMON_MIGRATION. Migrate or copy job-type objects (BW process chains, SBOP jobs, DS jobs) from existing BI Monitoring scenarios to a new Job Monitoring scenario to utilize the new collection framework and monitoring UI.
    Comparison
    In 7.1 SP12, all three monitoring work centers co-exist. A comparison chart of features between Technical Monitoring - BI Monitoring and Unified Job Monitoring is below.

    Feature                                                                                    | BI Monitoring | Unified Job Monitoring
    ABAP jobs and steps                                                                        | NA            | X
    BW process chains and steps                                                                | X             | X
    SAP BusinessObjects jobs (SBOP)                                                            | X             | X
    SAP Data Services jobs                                                                     | X             | X
    External scheduler (SAP CPS REDWOOD)                                                       | NA            | X
    BW BEx reports & templates                                                                 | X             | NA
    MAI features (notifications, incidents, alert inbox, third party)                          | X             | X
    Integration with MAI System Monitoring and contextual navigation to System Monitoring     | X             | Planned
    Contextual navigation to managed system analysis tools from the monitoring UI             | X             | Planned
    Overview monitor for viewing the overall scenario health of all landscape entities & jobs | X             | Planned
    (MAI) work mode awareness                                                                  | NA            | Planned
    Mass handling of monitored objects & thresholds                                            | X             | Planned
    Integration with job documentation                                                         | NA            | X
    Guided procedure for alert resolution                                                      | X             | X
    Please write to me regarding:
    1. How is job monitoring done today?
    2. Which job scheduling tools are used in the landscape (embedded schedulers from managed systems, CPS, UC4,CA WLA, AUTOSYS Solution Manager JSM)?
    3. What are the relevant/important job types?
    4. Is SAP Solution Manager-based job/BI monitoring used? What is the feedback? Which functionality is missing?

     

    Regards, Raghav, S

    Development Manager, SAP Solution Manager


    How to Configure and Troubleshoot the DBA Cockpit Configuration in Managed System Setup, Solution Manager 7.1


    In Managed System Setup - Step 4 (Enter System Parameters) highlighted below

    Page1.png

     

    Please provide all the required Details for DB Parameters

     

    DB Host

    Service Name

    Port Number

    TNS Name

     

    The user name will be your ABAP schema user (for Java systems: SAPSR3DB)

    Page0.png

     

    Once you have provided all the required information, save. You will see a log message saying:

    The DBA cockpit connection %_******** is OK. DB Extractors can be activated

     

    image12.jpg

     

    Once this step is completed, we can activate the DB extractors in Step 8 (Configure Automatically). We can check the successful connection entry in transaction DBACOCKPIT. Below is a screenshot for your reference.

     

     

    Page77.png

    Troubleshooting Connection Errors

     

     

    If the DBA cockpit connection cannot be established, you will get the message below.

     

     

    Page33.png

     

    We can see the error "Cannot establish DBA cockpit connection".

    Page44.png

     

    page55.png

    We can see the same connection entry in transaction DBCO. Delete these existing entries.

    Page66.png

     

    Then delete all the entries in DBACOCKPIT and in the Managed System Setup, and check tnsnames.ora at the operating system level in both locations; the entries should match the managed system entries. If not, change the entries and verify with tnsping.

     

     

    Then configure the DBA cockpit in the Managed System Setup in the same way as shown above. Once the DB extractors are activated, check the connection in transaction DBACOCKPIT in the Solution Manager system; the connection should be established successfully.

     

    Thank You,

    Nahid

    The cross-system object lock functionality ensures that when an object is changed in a managed system, a lock entry is created for this object in the central SAP Solution Manager system. Depending on the selected conflict analysis scenario, this lock entry prevents changes being made to this object by any other change (transport request). This applies to all managed systems and clients for which the cross-system lock has been activated.

    Once the cross-system object lock has been activated, the system can detect conflicts between objects in transport requests that have the same production system or the same production client as their transport target.

    The purpose of the cross-system object lock function is to protect your production system from developments "passing" each other.

     

    Within a Change Request Management maintenance project, all changes (normal, preliminary, urgent, and defect corrections) are consolidated with the project. As the import method is IMPORT_PROJECT_ALL, "passing developments" inside a project can never happen.

     

    An exception to this is that preliminary changes and urgent changes can pass each other within a project. Therefore the use of CSOL is necessary to protect the PROD system from downgrades.

     

    Also if more than one project is available for the same system landscape, CSOL can protect the PROD system from downgrades.

     

    The automatic categorization of objects to retrofit (auto import, retrofit, and manual) is based on the cross-system object lock entries in Solution Manager.
    If the enhanced retrofit function does not detect a cross-system object lock entry for an object of a transport request that should be retrofitted, the object is flagged as an auto-import object.

    error.jpg

    A change to object A is performed in the DEV system. This change is recorded in the CSOL table of Solution Manager. Now a fix is needed in the PRD system. The fix is performed in the MAINT system and has to change object A as well. As the CSOL entry blocks this second change (the fix), the only way to proceed is to delete the CSOL entry, since the fix is necessary to solve the issue in PRD.

    If the transport request in MAINT is now released and the retrofit categorization is calculated, the retrofit will not detect an entry for object A and will therefore calculate a green case.

    If the retrofit is then performed, the version of object A in the DEV system is overwritten!

     

    How can we avoid this behavior?

     

    You can customize how CSOL shall behave.

    csol.jpg

    csol2.jpg

    You will find default mode and expert customizing.

    We will need to use the "expert" customizing as the default mode does not protect you 100% from the issue described above.

    csol cust.jpg

    The "Project Relation" customizing is key for the enhanced retrofit scenario. In default it's set to "cross" which means conflicts from different projects as well as conflicts within the same project will stop the process.

    What we want to avoid is exactly that conflicts from different projects end in a termination of the process. Therefore the project relation has to be set to "Specific". This means that only conflicts within the same project result in a termination, while conflicts across different projects only appear as warnings.

    The other settings do not influence the enhanced retrofit behavior, so the change type relation and object type can be set however you need. However, the project relation must be set to "specific" whenever the enhanced retrofit scenario is active in your landscape.

    One exception applies if you can rule out maintenance projects in the DEV landscape. In that case urgent changes cannot be created (they are only allowed when using maintenance projects), which means the default mode comes back into play.

    Also possible is the "warning only" setting, which means that all conflicts are only ever reported as warnings and the process is never terminated.

    In this case it is necessary to also activate downgrade protection (DGP). This ensures that even if you only get a warning in CSOL, you still cannot get passing developments, as DGP checks again at release and at every import.
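The effect of the project relation setting described above can be summarized as a simple decision rule. The following Python sketch is purely illustrative (it is not SAP code, and the verdict strings are assumptions), but it captures the behavior: "cross" terminates on any conflict, "specific" terminates only on same-project conflicts, and "warning only" never terminates.

```python
# Illustrative decision rule for the CSOL "project relation" customizing
# described above. Not SAP code; verdict strings are hypothetical labels.
def csol_verdict(project_relation, same_project):
    if project_relation == "cross":
        return "error"                       # any conflict stops the process
    if project_relation == "specific":
        # only same-project conflicts terminate; cross-project ones warn
        return "error" if same_project else "warning"
    if project_relation == "warning only":
        return "warning"                     # never terminates; pair with DGP
    raise ValueError("unknown project relation: %s" % project_relation)
```

With "specific", a cross-project conflict yields only a warning, so the lock entry never has to be deleted and the enhanced retrofit categorization stays correct.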

     

    So with these allowed settings you will never need to delete an entry from the CSOL list because urgent changes have to be implemented in PRD as fast as possible, nor will you need to delete entries from the CSOL list to proceed in any other conflict situation.

    This way you will never get a wrong "green" retrofit categorization that ends up overwriting objects in DEV.

     

    Conclusion:

    When using the enhanced retrofit in Solution Manager, the use of the cross-system object lock is mandatory for the correct behavior of the tool.
    You cannot use the enhanced retrofit without having CSOL set up and activated for the retrofit-relevant projects.
    With some of the available conflict analysis customizing settings in the cross-system object lock, there is a danger of downgrading your implementation work.

    When using the enhanced retrofit, you should only use the project relation "specific". Any "cross-project" setting is not allowed, because a terminating cross-system object conflict would require the deletion of the corresponding lock entry, and that lock entry is required for the correct analysis of the enhanced retrofit.

     

    Summary:

    When using the enhanced retrofit scenario make sure your CSOL customizing is set to "specific" from the project relation point of view.

    Also "warning only" is a valid setup if on top DGP is activated. The default mode can also be valid for the enhanced retrofit scenario when it's ensured that no Urgent changes can ever be created in the implementation landscape (DEV).

    PREREQUISITES

    The looping capability is planned to be shipped with SAP Solution Manager 7.1 SP13.

    Alternatively you can implement the following notes in advance:

    • 2088536 - Downport CBTA Default Components
    • 2088525 - IF and LOOP Default Components for CBTA
    • 2029868 - CBTA - Runtime Library - Fixes & improvements

     

    USE-CASE FOR LOOP FUNCTIONALITY:

    A test script may need to perform actions against an unknown number of entries in a table. The script may therefore need to:

    • Start at first row and check if there is an entry
    • If entry exists perform one or more actions on the current row
    • Continue with next row

     

    REQUIRED DEFAULT COMPONENTS: DO, EXIT_DO, LOOP

     

    Keyword: DO

    It can be used to iterate over several steps. It defines where the loop starts.

    • It must be used together with the LOOP keyword which defines where the loop ends.
    • The EXIT_DO keyword must be used as well to determine when to stop the loop.

     

    The CounterName parameter provides the name of the iteration counter. This counter is incremented automatically at runtime while iterating over the included steps. The current value of the counter can be retrieved using the regular token syntax.

    For instance, when CounterName is set to "COUNTER", its value can be reused in the subsequent steps using %COUNTER% (or $COUNTER$ in specific situations where the percent character is ambiguous).

     

    If you plan to use nested loops, please make sure to declare a different counter name for each loop.

     

    Component Parameters

     

    CounterName: Specifies the name of the iteration counter.

     

    Keyword: EXIT_DO

    It must be used within a loop that has been defined using the DO and the LOOP keywords. The EXIT_DO keyword interrupts the loop as soon as the condition is met.

    A typical use case is to check the value of iteration counter that has been declared via the CounterName parameter of the DO keyword.

    For instance, when CounterName is set to "COUNTER" its value can be checked using the %COUNTER% token.

     

    Component Parameters

    LeftOperand

    • Specifies the value of the left operand that is to be checked.

    Operator

    • Specifies the boolean operator to use.

    The operators supported are the ones below:

      • = for "Equal to"
      • < for "Less than"
      • > for "Greater than"
      • <= for "Less than or equal to"
      • >= for "Greater than or equal to"
      • <> for "Not equal to"
      • {contains} for "Contains"
      • {startsWith} for "Starts with"
      • {endsWith} for "Ends with"

    An additional operator is supported when testing WEB applications (i.e.: applications running in the browser):

      • {matches} for checking whether the value matches a regular expression. The regular expressions are expressed using the .NET syntax.

    RightOperand

    • Specifies the value of the right operand that is to be compared with the left operand.

     

    Options

    The options parameter lets you perform some adaptations or conversions of both the left and right operand before comparing them.

    The supported options are:

    • /u (for uppercase) - Both values are converted to upper-case before being compared
    • /t (for trimmed) - Both values are trimmed before being compared
    • /i (integer) - Both values are converted to an integer before being compared
    • /f (float) - Both values are converted to a float (or double) before being compared
    • /b (bool) - Both values are converted to a Boolean before being compared

     

    Keyword: LOOP

    It defines the end of the loop and must be used together with the DO keyword which defines where the loop starts.
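The combined control flow of DO, EXIT_DO, and LOOP can be mimicked in ordinary code. The following Python sketch is only an analogue of the semantics described above (the function and parameter names are invented for illustration), not CBTA itself:

```python
# Python analogue of the DO / EXIT_DO / LOOP control flow described above.
# has_entry plays the role of the EXIT_DO condition; actions is the loop body.
def run_do_loop(has_entry, actions, max_iterations=100):
    counter = 1                        # DO: the iteration counter starts at 1
    results = []
    while counter <= max_iterations:   # safety bound for the sketch
        if not has_entry(counter):     # EXIT_DO: leave as soon as the condition is met
            break
        results.append(actions(counter))
        counter += 1                   # LOOP: jump back to DO with counter incremented
    return results

# Process three "rows", stopping when no further entry exists.
rows = ["A", "B", "C"]
out = run_do_loop(lambda c: c <= len(rows), lambda c: rows[c - 1])
```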

     

      

    EXAMPLE – PROCESS LINE ITEMS IN SALES ORDER

    The following script was created for transaction VA02 (Change Sales Order) to add shipping information to each line item of an existing sales order.

    script.png

     

    With DO Keyword the loop starts and the counter is set to ‘1’.

    DO.png

    To be able to address the row number starting at '0', we take the counter value minus '1' using the CBTA_A_SETINEXECUTIONCTXT component.

    SETINEXECONTEXT.png

    Then the script reads the value of the first column of the current row to check whether an entry exists.

    GETCELLVALUE.png

     

    If the value is empty we exit the loop with the EXIT_DO keyword.

    EXIT_DO.png

    Otherwise the script performs the required actions for the current row:

    • Select row

    SELECT_ROW.png

    • Menu Goto --> Item --> Shipping
    • Enter the required shipping information using the related screen component
    • Go back to main screen

    With the LOOP keyword the script goes back to the DO keyword while increasing the counter and processing further line items of that sales order.
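The whole example can be condensed into a short Python sketch. It is not CBTA code, and the table data and callback are assumptions; the point is the 1-based counter versus the 0-based row index and the empty-cell exit:

```python
# Illustrative sketch of the VA02 line-item loop above. The CBTA counter is
# 1-based while the table row index is 0-based, hence row = counter - 1.
def process_line_items(table, add_shipping):
    counter = 1                                  # DO
    while True:
        row = counter - 1                        # CBTA_A_SETINEXECUTIONCTXT: 0-based index
        value = table[row][0] if row < len(table) else ""  # read first-column cell
        if not value:                            # EXIT_DO when the cell is empty
            break
        add_shipping(row)                        # select row, Goto -> Item -> Shipping, ...
        counter += 1                             # LOOP: back to DO
    return counter - 1                           # number of processed line items
```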

    Mateus Pedroso

    MOPZ Framework 3.0

    Posted by Mateus Pedroso Nov 28, 2014

    Dear followers

     

    My name is Mateus Pedroso from MOPZ/LMDB/Solman Configuration team and I'll start to write some posts about these topics. I would like to start writing about MOPZ framework 3.0.

     

    MOPZ Framework 3.0 is the standard as of Solution Manager 7.1 SP12, but you can apply note 1940845 to enable MOPZ 3.0 in Solution Manager 7.1 SP05-SP11. Note 1940845 must always be implemented in its latest version; it fixes some bugs in MOPZ 3.0, so it is very important to ensure that the latest version of note 1940845 is implemented even on Solman 7.1 SP12. The following points changed in MOPZ 3.0.

     

    - UI and performance.

    - Integration of the Maintenance Optimizer with the Landscape Planner.

    - Add-on installation procedure.

     

    You can check more details about MOPZ 3.0 in the PDF attached to note 1940845.

     

    One of the most important improvements is the add-on installation. Here is a screenshot showing that you can now apply add-ons in step 2.

    mopzaddon.png

     

    Now it's easier to apply add-ons.

    mopzaddon1.png

    In the next posts, I'll explain some LMDB/SLD topics related to MOPZ and how to fix some well-known issues.

    Part 1 is Solution Manager 7.2 Roadmap Webcast Summary Part 1

     

    The usual legal disclaimer applies that things in the future are subject to change.

     

    Cloud Adoption

    1fig.png

    Figure 1: Source: SAP

     

    When systems are on a private cloud such as SAP HEC, there is no real difference for SolMan

    For a public cloud - such as SuccessFactors or Ariba - the plan is for SolMan to provide services for public cloud solutions as well

     

    LMDB will have a new interface

    2fig.png

    Figure 2: Source: SAP

     

    Public cloud performance monitoring will be in SolMan 7.2

    It is also included in SolMan 7.1 SP12

    3fig.png

    Figure 3: Source: SAP

     

    Figure 3 shows how to register cloud service with a guided procedure

    4fig.png

    Figure 4: Source: SAP

     

    Figure 4 shows interface and connection monitoring that is “In the pipeline”

     

    Solution Manager with In Memory Technology

    5fig.png

    Figure 5: Source: SAP

     

    All customers who have a valid support contract can use Solman on HANA without additional licenses

     

    This will come with transition support, standards, and more

    6fig.png

    Figure 6: Source: SAP

     

    Figure 6 is not part of 7.2 but a strategic message for the future

     

    SolMan will have a “Fiori like experience”

     

    Notice too that the reports will come from HANA Live, not BI/BW

    7FIG.png

    Figure 7: Source: SAP

     

    Installation, registration, and upgrades will be improved with the Maintenance Planner shown in Figure 7

    8fig.png

    Figure 8: Source: SAP

     

    Figure 8 shows there is a ramp-up for maintenance planner today

     

    SAP is looking for ramp-up customers

     

    Maintenance optimizer will be gone in 7.2

    9fig.png

    Figure 9: Source: SAP

     

    Figure 9 shows there is no new CRM release in 7.2; only an enhancement package

    10fig.png

    Figure 10: Source: SAP

     

    SOLAR01 and SOLAR02 are going away

    12fig.png

    Figure 11: Source: SAP

     

    Figure 11, was already previously announced

     

    Question & Answer

    Q: Some functionality was removed from the Java stack in SolMan 7.1 - going forward, could we have SolMan without the Java stack?

    A: Not in 7.2; Java is still required

     

    Q: Business process design – 7.1 – Advanced Business Process Blueprinting – how will it work in 7.2?

    A: The first customers came back with lots of feedback; SAP checked the architecture and decided to rethink process modeling, which is why it is coming out in a new environment

     

    Advanced blueprinting will not work in 7.2; only a few customers are using it

     

    Q: What is the planned support for current 3rd Party Tools: HPQC, Redwood CPS, Wily, Productivity Pak?

    A: SAP will continue to support these interfaces in 7.1, but there will be no interface support during ramp-up. Changes are coming - the architecture is changing and SAP wants it stable before offering interfaces

     

    Q: What happens if you upgrade from 7.1 to 7.2 with open projects that have transports?

    A: Current feedback is that projects will have to be closed; this is being revisited with the development organization, as customers are unhappy and want to continue working after the upgrade

     

    Q: Will a planner Excel sheet be available, allow them to find risk and timelines?

    A: Not at this point but plan to offer the system – need to upgrade and see what the situation is

     

    Related

    Hopefully we'll learn more details at SAP Insider's BASIS & SAP Administration 2015

    The official roadmap is at https://websmp106.sap-ag.de/~sapidb/011000358700001435482012E.pdf (SMP logon is required)

     

    The usual disclaimer applies that things in the future are subject to change.

     

    Matthias Melich, SAP, provided this webcast

     

    Current release is 7.1 with maintenance commitment to 2017

     

    Solution Manager 7.2 will go into ramp-up in the middle of next year; SAP expects it to be GA by Q4 2015

     

    Then if you are on Solution Manager 7.1 you have 2 years to transition

    1fig.png

    Figure 1: Source: SAP

     

    In the past SAP made only a few investments in implementation. The next release will see a big investment - "pragmatic business process management" - and most of this presentation is on this topic

     

    Most customers have large on-premise

     

    SAP is actively driving cloud

     

    The lifecycle shown in Figure 1 means most customers will be in a hybrid situation, supporting on-premise and cloud solutions in an integrated way

     

    SAP wants customers to use SolMan for hybrid environment

     

    There is a difference between Solution Manager for HANA and Solution Manager on HANA

     

    Solman 7.2 will be available on SAP HANA

     

    SolMan 7.1 is IT and less for business

     

    SolMan 7.2 is a more business balanced SolMan

    2fig.png

    Figure 2: Source: SAP

     

    Figure 2 on the left in Development system – planned, Solution Manager 7.2 will provide “state of the art” process modeling

     

    Picture shows what is typical in modeling environments

     

    PowerDesigner, which SAP acquired with the Sybase acquisition, is being used

     

    Today in 7.1 you need a landscape to enter business process steps, so business process experts can't use it

     

    SAP wants to decouple this

     

    Use 7.2 early in a project and hand it over to business process experts; SAP wants to make it easier to use for documenting business processes

     

    SAP wants to extend diagnostic and analytics framework in Solman for managing business case

     

    SAP wants to extend framework to innovations area of SolMan  - relate business case to KPI’s

     

    SAP is investing in pre-configured solutions – have RDS’s (rapid deployment solutions)

    3fig.png

    Figure 3: Source: SAP

     

    Figure 3 shows processes will have more than 3 levels

     

    Figure 3 shows a screen shot,  non-graphical view of Solman

     

    SAP wants openness to other modeling tools

     

    SAP will have a marketplace on SCN; will allow vendors to certify interface similar to Service Desk, with a bi-directional interface

     

    This will be for business process – not full-blown UML ; for full-blown look at PowerDesigner

     

    SAP hopes to have interface added by then; but will not be part of the ramp-up scope

    4fig.png

    Figure 4: Source: SAP

     

    SAP will do away with some of the restrictions today

     

    Figure 4 shows a technical object library – transactions, reports, all objects in system only once – e.g. VA01 only once

     

    SAP will structure this library according to application component hierarchy

     

    Library is based on usage – object only goes to Technical Objects Library (TOL) if used

     

    It will generate this library automatically

     

    The Process Step Library (PSL) is also based on usage, using the application component hierarchy as a reference. The PSL is the home for documents and test cases, can have multiple occurrences of technical objects, and is available per system. It is generated automatically and is built on top of the TOL

     

    End-to-end (E2E) documentation covers business processes across systems in a business process library; this cannot be built automatically and is optional - you pull steps from the individual systems

    5fig.png

    Figure 5: Source: SAP

     

    Figure 5 shows several paths

     

    If a customer has no solution documentation today, the libraries are generated and built up to end-to-end documentation

    If you have solution documentation today, all documentation becomes read-only after the upgrade and customers migrate their projects to the new environment - not in an automated fashion

    7fig.png

    Figure 6: Source: SAP

     

    Figure 6 shows the link to the business case; once the technical implementation is done, you can look at how the implementation performs via system usage - the business view is in pink

    The IT view covers requirements, test, change, and application usage verification

     

    Part 2 of my notes is coming; focusing on the Cloud, HANA, future direction and question & answer

     

    Related

    Hopefully we'll learn more details at SAP Insider's BASIS & SAP Administration 2015
