
SAP Identity Management

Matt Pollicove

The Future of SAP IDM

Posted by Matt Pollicove Jul 21, 2014

I was recently made aware that SAP is planning some changes in how they are going forward with SAP IDM. SAP is moving IDM development from Norway to Bulgaria during the third quarter of this year. While it is not unusual for companies to make these changes, there are some questions and concerns that have been passed my way as someone closely identified with SAP IDM.


So what does this mean for IDM? Overall, SAP is planning to tie IDM more closely to the cloud and analytics, which we will see in future releases of SAP IDM. One would assume this means tighter integration with HANA, which should be well received in SAP-centric organizations.

What concerns me most is that it has proven difficult to obtain any official statement regarding the road-map or the changes to the SAP IDM team. However, Gerlinde Zibulski, Director of Product Management Security at SAP HQ in Walldorf, has been most helpful in providing information about how this will affect the product. I had a chance to speak with Gerlinde, who answered some of the questions that were on my mind and that I’ve heard from the SAP IDM community.


  • What are the plans for development of the Attestation module? – It’s out and available in IDM 7.2 via Service Pack. This module will remain accessible via REST only as an option for those customers who do not plan to integrate with GRC.
  • What are the plans for the future of the GRC integration? – SAP remains committed to the integration of IDM and GRC. The roadmap indicates that there will be tighter integration with HANA and analytics for GRC.
  • Will SAP continue to support the non-SAP connectors and the generic connector architecture that allows for connectivity to non-landscape systems? – Yes, this is a core part of SAP IDM and there are no plans to change this.
  • How do these changes affect the overall road-map? – Not at all. If you’re interested in learning more about the road-map, you can view it here. (SMP Login required)


Gerlinde pointed me towards the official SAP IDM road-map, which confirmed that HANA is indeed a large part of IDM’s future, as is the inclusion of analytics to understand all incoming security information coming from GRC. One of the other interesting things about SAP IDM’s future is its integration with the cloud. As the IT and ERP world is all about the cloud right now, it should be interesting to see what comes about.

She also confirmed to me that SAP IDM 8 is on track to be released just before d-code 2014. There are also plans in place to hire more developers in the expanded Sofia office to work on SAP IDM.


To be fair, SAP has done an excellent job with its recent CEI program for the purpose of obtaining feedback from customers and consultants so that they may better understand how SAP IDM is used in the field. I look forward to attending and participating in more of these sessions in the future. (I’d also be very interested in seeing a CEI for GRC, but more on that another time)


Finally, I would like to send my best wishes to the Trondheim Labs (formerly MaXware) team. For almost 20 years, they have been working on IDM and VDS, while assisting with the definition and execution of what Identity Management means to the SAP Landscape and the overall Business Enterprise. I have been proud to be associated with them as a co-worker and as a partner. I know whatever you work on will be vastly enhanced by your participation. I wish you all the best in your future endeavors and hope that we get to work again in the future.


If anyone has questions regarding SAP IDM, they may contact Gerlinde via email. Additionally, ideas for the future of SAP IDM can always be logged at the SAP IDM Idea Place.

Matt Pollicove

Did you know?

Posted by Matt Pollicove Jul 8, 2014

So I've found out a few things recently...


As I mentioned in my SCN Status, I'm happy to say that I'll be speaking this year at d-code in Las Vegas! If you're planning to go, please do your best to attend ITM118 - SAP Identity Management – Experience from a Customer Implementation Project. I don't know the exact schedule yet, but I'll be sure to let you all know. You can find more information here.


While the title speaks about implementation, I'll have some good information for folks in all phases of an IDM project, with best practices based on my 10 years of experience with IDM going back to the MaXware Identity Center.  As they say, the more things change, the more they stay the same, and some things about implementing IDM haven't changed much, if at all.


Also, did you know the name of our favorite SAP module has changed? Based on a post from Harald Nehring, we are now referred to as SAP Identity Management (SAP IDM).  I like it.  Much more compact and efficient.


So let's see, over the years it's been:


MaXware MetaCenter (~2003)

MaXware Identity Center (~2004)

SAP NetWeaver Identity Management (2007)

SAP Identity Management (2014)


Nice to see the product is still changing and maturing. There are some other things changing with regard to SAP IDM, and I'll be speaking more about that soon.

Taking the blog Assignment Notification task, customization and custom messages one step further, here is an explanation of how to send emails based on the message templates editor in Web Dynpro from all different kinds of tasks.


The basis for this is the chapter Sending custom messages.


There is no real need to create your own message class for your custom emails, so first, create an email message template in Web Dynpro:




Create a custom task as described in Sending custom messages. As a recommendation, put the NOTIFICATIONTASKID into the NOTIFICATION repository as a task reference, and refer to it in the job.






That's all.

So what's this blog all about?

A really important part of IDM is provisioning to other systems. And it's a part that loves to keep us on our toes, as the number of threads and blogs about this topic shows. In my time working with IDM, provisioning has stopped working quite a few times for different reasons. And every step of the way I learned a new reason and a new solution to get it going again. To keep track of all of that, and to help me solve it faster when provisioning decides to get stuck yet again, I started to write down a checklist.


Some time ago I posted the better part of my little checklist in a thread, and with some encouragement from Matt I decided to create a blog post out of it to share the whole thing with a wider audience and explain the steps a bit more. This is my first technical blog here on SCN, so I'm a little nervous and excited at the same time. ^^

Just to be clear: not every point on the checklist may apply to you, since we're on a Windows server with the IDM Management Console and use an Oracle database.





My tools

  • Access to the Management Console (MMC) of the IDM
  • Access to the Monitoring-tab via http://<portalurl:port>/idm/admin
  • SQL developer with database connection to IDM database
  • Permission to access the Windows services of the IDM-server and to start/stop the services
  • Permission to reboot the IDM-server
  • Really good connections to the database administrators




How do I know it's that time again?

There are three signs I check to see if the provisioning is really stuck:

  1. I look at the "Job log" in the MMC to see if the last entry for the provisioning dispatchers is from more than 15-20 minutes ago (even though it was triggered in the last few minutes).
  2. The provisioning queue on http://<portalurl:port>/idm/admin is only growing.
  3. The dispatchers that are assigned to do the provisioning are shown as running in the MMC under "Dispatchers > Status", and the timestamp for "Last check" is updated when I click refresh.


If all those steps come back with a "yes", I'll get...


The Checklist

  1. Check the "Status"-overview in the MMC to see if a job is showing the state "Error".
  2. Restart the provisioning dispatchers in the "Services"-menu of the server.
  3. Check for waiting tasks via the SQL developer.
  4. Check the "Windows RT conn. string" on the Database-tab of the Identity Center configuration in the MMC.
  5. Reboot the server the MMC and dispatchers are installed on.
  6. Restart the IDM database.

That's the checklist in short, if you just need a little reminder or an idea for what to look at next. I'll explain the points a bit more in detail now.


1. Check the "Status"-overview in the MMC to see if a job is showing the state "Error"


In the MMC you'll find the "Status"-overview as the first entry under "Management".


It shows all the jobs for that Identity Center connection (in this case it's named IMP). To check for jobs that have the current state "Error", just click on the column header "State" and it will be sorted by content. If you have checked the box "Show only enabled jobs" at the bottom of the page, the jobs with error-state should be shown in red font at the top or end of that list.

If you find a job that is associated with provisioning and it's in the "Error" state, right-click on it and start it with "Run now".



2. Restart the provisioning dispatchers in the "Services"-menu of the server.


Go to "Start > Administrative Tools > Services" on the IDM server and look for the dispatchers that are assigned to provisioning in that list. They should be shown as started. Right-click on them and choose "Restart".

This is what gets our provisioning going most of the time.



3. Check for waiting tasks via the SQL developer.


Open the SQL developer and work your way through the following SQL statements:

select * from mxp_provision where msg like 'Wait for%'

This checks for tasks that are waiting for the successful state of other tasks. The MSG-column gives you an auditid for the next SQL statement. It's the ID of the task that is blocking the execution of the first task.


select * from mxp_audit where auditid=<auditid>

The MSG-column now shows information about the state (e.g. failed) of the blocking task, the reason, and for which user and which assignment. With this information you can decide to leave it alone to handle itself (because it's got nothing to do with the blockage), or you can use the next SQL statement.


update mxp_audit set provstatus=1100 where auditid = <auditid>

This last statement sets the blocking task to the state "OK" (= 1100), and therefore the waiting task (the one you found with the first statement) can finally be executed.
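If you deal with many stuck entries, the auditid can be pulled out of the MSG value programmatically before building the follow-up statements. The helper below is purely illustrative (the exact MSG wording beyond the "Wait for" prefix is an assumption; in our system the id appears at the end of the message):

```javascript
// Extract the blocking auditid from an mxp_provision MSG value
// such as "Wait for 4711", then build the two follow-up statements
// from the checklist above.
function extractAuditId(msg) {
  var m = /^Wait for\D*(\d+)/.exec(msg);
  return m ? parseInt(m[1], 10) : null;
}

// Second checklist statement: inspect the blocking task.
function auditLookup(auditId) {
  return "select * from mxp_audit where auditid=" + auditId;
}

// Third checklist statement: 1100 = state "OK", which lets
// the waiting task finally execute.
function markOk(auditId) {
  return "update mxp_audit set provstatus=1100 where auditid=" + auditId;
}

var id = extractAuditId("Wait for 4711"); // 4711
var sql = auditLookup(id);
```

Run the generated update only after you have checked the blocking audit really has nothing left to do.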



4. Check the "Windows RT conn. string" on the Database-tab of the Identity Center configuration in the MMC.

When you click on the IC configuration (the "IMP" in the first screenshot), the "Database"-tab will be displayed. At the end of it you'll find the string under "Runtime engine".


Open it and test the connection on the "Connection"-tab. If it comes back as failed, correct the name of the data source and/or the logon information. Then test it again to see if it's successful now.



5. Reboot the server the MMC and dispatchers are installed on.

Well, that's pretty self-explanatory. ^^

If you don't have permission to do this yourself, have the contact data of the administrator(s) at hand, who can reboot the server for you (just like with the restart of the services in #2).



6. Restart the IDM database.

This was only necessary once (until now), but to complete my checklist I'll include it here, too. Since I don't have direct access to our Oracle database, I let our database administrators handle this one for me.




What's more to say?

Well, that's it! I hope the list can help you when your provisioning decides to take a break again. This is - of course - nowhere near a complete list, but a result of my experience with the issue.

If you have some tips of your own to add, I absolutely welcome them! As you know, "sharing is caring". So leave a comment if you have your own little checklist for this issue, or if you want to give some feedback on my blog (which I'm really looking forward to, because I am a big fan of constructive criticism and this is my first technical blog).


Thank you for your time and attention. I hope it was not wasted! *g*




In order to encrypt the communication between IDM and AS Java during the Initial Load or any other job, you may want to use HTTPS instead of HTTP for the Java server. However, if you choose the HTTPS protocol, you may get an error in the Initial Load job. The error message looks like this:


javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target


This is because the Java server's certificate is not trusted by your IDM Java program.  All you need to do is add the server's (or root) certificate to the JRE's default trust store.


The JRE's trust store is located under jre/lib/security. The file name is cacerts without extension.


Try this command:

keytool -importcert -file RootCA.crt -keystore cacerts



Re-run the job, and you will find the error is gone.


There are other ways to solve the problem, but I guess this is the easiest. The solution is also suitable for communicating with any other HTTPS server or LDAPS server.

HTML Reporting

This is an article focused on the reporting functionality of Identity Management with the main focus on HTML Reporting.  The article is applicable to both Identity Management 7.2 and 7.1 versions.

Identity Management needs to satisfy certain reporting requirements, such as:
  • What are the attributes of a given user?
  • What are the business roles assigned to a given user?
  • What systems does a given user have access to?
  • Which business roles are available in the system?
  • How many users/business roles, etc. are available in the system?
In this article I will show you how to use Identity Management’s functionality in order to create pretty, complete and useful HTML reports.  With Identity Management you can create reports using the information which is available in the Identity Center.  The core of HTML Reporting is actually very simple – SQL queries, which get values for a given entry type from the database.  Wrapping those queries with some HTML and CSS code makes for a fully customizable appearance of the generated reports, which gives flexibility and a better user experience to the end users.  Let us start this magical tour into the wonderful world of HTML reporting with the entry type itself:


The MX_REPORT entry type is the entry type for report requests and exists in Identity Management as of 7.1 SP2. The report is executed as an action task on the MX_REPORT entry type. As the report is a task, the task status indicates the progress of the report, i.e. pending, ok or error. By default, the MX_REPORT is not listed as a searchable entry type.  Note that the report task will always create an entry of type MX_REPORT, regardless of the provided entry type in the task definition.

Let’s take a look at the most important attributes of the MX_REPORT entry type:
  • DISPLAYNAME – display name of the report.
  • MX_REPORT_DATE – date on which the report was requested.
  • MX_REPORT_FORMAT – format of the report (this could be PDF, HTML, DOC, etc.)
  • MX_REPORT_RESULT – this attribute holds the full report result. It is saved as a binary in the database.
  • MX_REPORT_RESULT_REF – this attribute holds a reference to the report result, in case it is stored on a separate file server.
Reports are shown in the View Reports tab of the Identity Management User Interface.  In this tab, we have a table with 5 columns:
  • Entry – This corresponds to the value of the MX_REPORT_ENTRY attribute.
  • Status - Status of the report task (pending, OK or Error). This status is calculated based on the status of the task execution, taken from the MCMV_AUDIT view.
  • Report Date - This corresponds to the value of the MX_REPORT_DATE attribute.
  • Report Name - The name of the report. If DISPLAYNAME has a value, this value is stored, otherwise the value of MSKEYVALUE is stored here.
  • Report Result - This corresponds either to the MX_REPORT_RESULT or to the MX_REPORT_RESULT_REF attribute.
In order for a report to be shown in the View Reports tab, it must have a value for either MX_REPORT_RESULT or MX_REPORT_RESULT_REF. Otherwise, the report will be stored in the database, but it won't be visible in the User Interface, as it has no result file attached to it.
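The visibility rule can be summarized as a one-line predicate; the sketch below is a toy illustration in plain JavaScript (the object shapes are made up for the example, not an IdM API):

```javascript
// Toy predicate for the rule above: a report entry appears in
// View Reports only if it carries a result, either inline
// (MX_REPORT_RESULT) or as a reference (MX_REPORT_RESULT_REF).
function isVisibleInViewReports(entry) {
  return Boolean(entry.MX_REPORT_RESULT || entry.MX_REPORT_RESULT_REF);
}

var stored = { DISPLAYNAME: "Superman Report" };                    // stored in DB, not shown
var inline = { DISPLAYNAME: "R1", MX_REPORT_RESULT: "{HEX}3C68746D6C3E" };
var byRef  = { DISPLAYNAME: "R2", MX_REPORT_RESULT_REF: "\\\\fileserver\\r2.html" };
// isVisibleInViewReports(stored) === false
// isVisibleInViewReports(inline) === true
```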
We’ve seen the entry type for reporting, now let’s get into some real action.  Let’s create a report task, which returns all the assigned privileges and roles for a given user. To make things more beautiful, we will wrap it with some CSS and HTML, to create a good-looking report.  Let’s begin…

HTML Reporting Task
We start by creating the task – let’s name it “Create Report”.  We make it a UI task, and on the Attributes tab we mark “Report task”.  This will automatically display all the attributes for the MX_REPORT entry type and create a new MX_REPORT entry, regardless of what we have selected as the entry type of the task itself.  We select the task entry type to be MX_PERSON (which means it will be executed on a person, but will create a report) and select some attributes to be displayed (check the screenshot below).  The last step is to add access control to the task – let’s say we want only the administrator user to be able to create reports for entries. So we give access type “logged-in user”, with value “Administrator”, and an “on behalf of everybody” relation.  Here is how the task definition should look:

So far we have only a UI task that can be executed by an Administrator and creates a new entry of type MX_REPORT.  As there is no report result yet, we don’t have an actual report ready. In order to create the report, we attach a new empty job with one To Identity Store pass.  In the pass Destination tab, we add MX_REPORT as an entry type, because the pass will make modifications to the newly created MX_REPORT entry.  The idea here is that after we create the report in the UI, we receive the value of the MX_REPORT_ENTRY attribute, which is actually the mskey of the user we are running the Create Report task on, and use this value as MSKEY in our queries, which will return the assigned privileges and roles. We will need MSKEYVALUE, to point to the created report; MX_REPORT_FORMAT, which will have the value “HTML” (in any other format the report will not open in the browser as needed when opened in the UI); and MX_REPORT_RESULT, which will get its value from a custom script. In the script, we will use the magic of HTML5 + SQL queries to create the report itself. Let’s create the script.

HTML Reporting Script

We create a new local script for the action task. Let’s name it HTMLReport. The script will take as input parameter the value of MX_REPORT_ENTRY (the mskey of the user we are reporting on) and will return a binary representation of the report, which will be stored in the Identity Center.  I will explain the process of creating the script:
    1. We create a header and a footer, which are just static CSS and HTML code at the beginning and at the end of the HTML file. We do this to separate the static parts of the HTML code, so that they won’t interfere with our main logic.  The header contains the opening HTML tags and the CSS used, and the footer just contains the closing HTML tags.  For the sake of simplicity, I won’t add the real CSS code I used in this example. If you want to use the same CSS, you can find a reference to the file at the end of the article.  Nevertheless, your header should look like this:

       <html>
       <style type="text/css">
       <..........CSS here..........>
       </style>

       And your footer, which closes the table, the surrounding div and the document, should be like this:

       </table></div>
       </html>

    2. Now, let’s create the script itself. So far we have the HTML opening tags and the CSS, and the HTML closing tags along with the table closing tag. Now, let’s populate the table itself.  We will create 2 rows and 3 columns. The first row will contain the table headings: User Name, Assigned Privileges and Assigned Roles. The second row will contain the username, privileges and roles, all extracted from the Identity Center.

       Privileges are extracted with the following query:

       select mcothermskeyvalue from idmv_link_ext2 where mcAttrName='MXREF_MX_PRIVILEGE' and mcthismskey="+Par+"

       For roles, the query is:

       select mcothermskeyvalue from idmv_link_ext2 where mcAttrName='MXREF_MX_ROLE' and mcthismskey="+Par+"

       Since Par (which is the value of MX_REPORT_ENTRY) contains the mskey of the entry, we have to extract the mskeyvalue of that user to be shown under User Name. This is done via the following query:

       select mcmskeyvalue from idmv_entry_simple where mcmskey="+Par+"

       Getting the results of those queries as values is done via the uSelect() function. We store the results of those queries in variables, add them to the table elements, and store this in a variable (oHTML), which represents the table body. The last thing left to do is to return the binary representation (hex code). This is done via the uToHex() function. We also need to add “{HEX}” as a prefix.
This is how our example script looks in the end:

Main function: HTMLReport

function HTMLReport(Par){
    var oHeader = uFromFile("C:\\Reporting\\Template\\header.html","-1","false");
    var oFooter = uFromFile("C:\\Reporting\\Template\\footer.html","-1","false");

    // Assigned privileges of the entry
    var AssignedPrivileges = uSelect("select mcothermskeyvalue from idmv_link_ext2 where mcAttrName='MXREF_MX_PRIVILEGE' and mcthismskey='"+Par+"'");
    var oList = "";
    var oArray = AssignedPrivileges.split("!!");
    for (var i = 0; i < oArray.length; i++){
        oList = oList + oArray[i] + '<br>';
    }

    // Assigned roles of the entry
    var AssignedRoles = uSelect("select mcothermskeyvalue from idmv_link_ext2 where mcAttrName='MXREF_MX_ROLE' and mcthismskey='"+Par+"'");
    var oList2 = "";
    var oArray2 = AssignedRoles.split("!!");
    for (var j = 0; j < oArray2.length; j++){
        oList2 = oList2 + oArray2[j] + '<br>';
    }

    // MSKEYVALUE of the entry, shown under User Name
    var userName = uSelect("select mcmskeyvalue from idmv_entry_simple where mcmskey="+Par);

    // Table body: heading row plus the data row
    var oHTML = '<div class="HTMLReport"><table><tr><td>User Name</td><td>Assigned Privileges</td><td>Assigned Roles</td></tr>';
    oHTML = oHTML + '<tr><td>' + userName + '</td><td>' + oList + '</td><td>' + oList2 + '</td></tr>';

    // Return the binary representation with the {HEX} prefix
    var oHex = "{HEX}" + uToHex(oHeader + oHTML + oFooter);
    return oHex;
}

We save and go back to the To Identity Store pass definition.
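The string handling in the script can also be exercised outside the Identity Center. The sketch below re-implements the "!!"-splitting of uSelect() results and the uToHex()/{HEX} step in plain Node.js (toHex, toCell and buildReportRow are illustrative stand-ins, not IdM functions):

```javascript
// Stand-in for uToHex(): hex-encode a string, as used for MX_REPORT_RESULT.
function toHex(s) {
  return Buffer.from(s, "utf8").toString("hex").toUpperCase();
}

// uSelect() returns multi-row results as a "!!"-separated string;
// turn that into the <br>-separated cell content used in the report table.
function toCell(uSelectResult) {
  return uSelectResult.split("!!").map(function (v) { return v + "<br>"; }).join("");
}

// Build the data row of the report table from the three query results.
function buildReportRow(userName, privs, roles) {
  return "<tr><td>" + userName + "</td><td>" + toCell(privs) +
         "</td><td>" + toCell(roles) + "</td></tr>";
}

var row = buildReportRow("CKENT", "PRIV:ABC:ONLY!!PRIV:ABC:GROUP:ADMINS", "MANAGER");
var result = "{HEX}" + toHex(row); // value stored in MX_REPORT_RESULT
```

This makes it easy to check the generated table markup before wiring the script into the pass.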

    3. As a final step, we add the value of MX_REPORT_RESULT to be calculated via the HTMLReport script, with MX_REPORT_ENTRY as input parameter. The pass definition should look like this:



    4. Save the task and log on to the User Interface with the “Administrator” user.


Starting the task via the User Interface


Let’s execute the task on an entry. We go to the Manage tab and search for a person.  We select the person, go to “Choose task” and select the “Create Report” task.


The “Create Report” task is opened and we fill the attributes needed, then click “Save”.

This will save a new entry of type MX_REPORT, with MSKEYVALUE and Display Name – Superman Report, MX_REPORT_DATE – 25.04.2014, and values for DESCRIPTION and MX_REPORT_ERROR.  Saving this will execute the action task and the To Identity Store pass, which will set HTML as value to the MX_REPORT_FORMAT attribute, and will calculate the value of MX_REPORT_RESULT, using the Script we created.  To see our result, we will go to the View Reports tab:


We can see our newly generated report.  We can see that it is for the Entry Clark Kent, the report task is successfully executed (Status – “OK”), the report name and the report result. If we click on the Result, we will see our report in the browser:



Pretty, isn’t it?  It contains the UserName of the user and all the assigned privileges and business roles.



You can find the header and footer files, which I used with the abovementioned script to create this example report, attached to the article.  If you like them, they are available for free usage.




Yours truly,



A consultant asked me about the inner workings of the group assignment functionality in the SAP Provisioning Framework 7.2, why we are using the MX_GROUP + MX_PRIVILEGE coupling, and so on.


So I decided it might be worth sharing it with a bigger audience and putting it in a blog; after all, it is non-trivial stuff and perhaps one of the more complex parts of the SAP Provisioning Framework in 7.2.


Anyway, it has been quite some time since I took on the task of starting the SAP Provisioning Framework 7.2, and it has progressed quite a bit since then, but the main concepts still remain pretty much the same.


Many of the concept changes between the 7.1 framework and 7.2 actually stem from the early Notes framework.


Both the way of driving events by privilege assignments and the handling of modify events and groups were first tested out in the Notes framework. Other concepts, such as validation tasks, have their origin in the GRC framework, where the need for checking assignments ahead of provisioning was introduced.


In the 7.1 SAP Provisioning Framework, as you might know, the add member task was used for this and for approvals, while provisioning was done by provision tasks. This meant dealing with rollback in case of an erroneous assignment, and there would also be a time slot where you had the privilege before it was assigned in the backend.


There were already new features in the product for dealing with these issues, so basically the 7.1 framework was growing old and was not closely enough integrated with development to adopt new upcoming features early. So a new 7.2 SAP Provisioning Framework was called for.


We considered the following benefits the motivating factors for the 7.2 SAP Provisioning Framework:



Main benefits

  • Proper privilege handling (you don't get privileges before you are entitled to them)

                      7.1: rollback on privilege events

                 vs 7.2: pending privileges

  • Universal framework that handles the complicated wait-for logic for you
  • Hook-ins give framework implementers a foundation to build on.
  • Better visualization of what goes on.

                7.1: used entry-type events to trigger firing, and static scripts that hard-set attributes

                7.2: uses a layered task structure, avoiding the hidden events in scripts.

  • Maintainability: fewer dependencies between frameworks, with underlying 3rd-party frameworks controlled by plugin points
  • Less hardwired: provisioning triggering relates to privileges, not entry types and attributes.

        This makes it friendly to co-exist with other independent provisioning solutions that do not require entry-type or attribute triggering.


Of course there have been many significant features down the road since then, such as group of privileges, which was particularly important to avoid the bottleneck of assignment provisioning in 7.1, and today's context concept, attestation and advanced approvals, just to mention a few.

Group assignments

But today I wanted to focus on the group assignments.

In AS Java and also the Notes connector (now part of the SAP Provisioning Framework), we use the MX_GROUP object, which is coupled with a group privilege.


Basically, the nature of MX_GROUP is that it does not have modify events; it is considered a container object.


If you are somewhat familiar with the SAP Provisioning Framework 7.2, you know that there are account privileges (PRIV:<REP>:ONLY) for each repository, and perhaps you are also aware that there is an internal privilege PRIV:SYSTEM:<rep> that is assigned as a result of the account assignment.

The account privilege controls provisioning and deprovisioning to the repository, while the system privilege controls modify events.

The event tasks are by default inherited from the repository, so all privileges except system privileges disable the modify event by explicitly setting MX_MODIFY_TASK to -1. Likewise, PRIV:SYSTEM:<rep> disables all event tasks except the modify task (which is inherited, hence not specified on the privilege).


The same applies to MX_GROUPs: you assign an account privilege to a group, and it gets created in the repository and obtains the system privilege.

In addition, however, it is assigned a group privilege (PRIV:<REP>:GROUP:<IDENTIFIER>) that controls the assignment of membership.


So, given that you have an MX_GROUP with the account and system privileges assigned, there are two ways of assigning members to the group:

1) You assign the user as a member of the group, or
2) You assign the group privilege to the user.



So let’s take it step by step.


1. You assign a member to a group

This causes the MXREF_MX_GROUP attribute to be updated on the user (with the mskey of the group).

Since the user has the PRIV:SYSTEM:<REP> privilege, and this privilege has the modify attribute MXREF_MX_GROUP enabled, this triggers a modify event in the provisioning framework.




2. The modify task fires.

In newer versions of SAP Provisioning Framework 7.2, "User context variables" is checked.

This enables the generation of MX_EVENT* context variables for the sessions, a prerequisite in the newer framework for detecting modification events.

(Basically, you can halt the execution by deselecting a job on a task there, pick up the audit id from the provisioning queue, and see these audit variables.)

While we previously used SAP_CHANGENUMBER to track these changes, this is a much safer and faster approach.


3. Update of the group privilege.


The task "Process modify type" sets context variables for the different operations, and the conditionals evaluate them.

In this case, since MXREF_MX_GROUP was changed, the operation flag will be set and the 'Update group privilege' task will run.

The 'Update group privilege' task will, for each member of the group, assign the group privilege.

If you have a bunch of users in the group, they will already have this privilege, so nothing will happen; for the newly added group members, however, the group privilege assignment will fire off a provision event.




4. Provisioning by group privilege.

If it had been a nested group assignment (AS Java), you would now go into the MX_GROUP switch, but since this is a person and not an account privilege assignment, you proceed to the plugin execution in 4. Exec plugin – Assignment User Membership, which fires the plugin task of the repository.

Basically, how that works is that the task "inherits" the repository from the (group) privilege, and the execute plugin task obtains the plugin task through the repository hook variable as a runtime resolution, then executes a basic provision on the hook.

The execute plugin task and hook task utilize the wait-for/after concept to halt further execution in the task tree until the plugin task completes.



After the respective backend operation has been executed, the assignment type is checked.

Since we are dealing with a group, it has to ensure that MX_GROUP and MX_PRIVILEGE are in sync on the user. So it will update the MX_GROUP with the members, meaning MXREF_MX_GROUP is updated on the user.

Since this is a multi-value reference attribute, this re-triggers the modify task from step 3. However, the user will already have the group privilege at this point, so provisioning will not be re-triggered.
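As a toy illustration (plain JavaScript, not IdM code), the convergence described above can be modeled: adding a member triggers a modify event, the modify handler assigns the group privilege only where it is missing, and the sync-back re-triggers the handler without firing a second provision event:

```javascript
// Minimal model of the group-assignment flow: group membership
// (MXREF_MX_GROUP) and the group privilege converge, and the
// sync-back does not fire a second provision event.
var provisionEvents = [];

function makeUser(name) {
  return { name: name, groups: [], privileges: [] };
}

// Step 3: 'Update group privilege' - assign the group privilege
// to members that do not have it yet.
function onModify(user, groupPriv) {
  if (user.privileges.indexOf(groupPriv) === -1) {
    user.privileges.push(groupPriv);
    // Step 4: the privilege assignment fires a provision event.
    provisionEvents.push({ user: user.name, priv: groupPriv });
    // Sync-back: membership is aligned with the privilege, which
    // re-triggers the modify handler once more.
    onModify(user, groupPriv);
  }
  // Privilege already present: nothing happens, no re-trigger.
}

// Step 1: assigning a member updates MXREF_MX_GROUP -> modify event.
function assignMember(user, group, groupPriv) {
  if (user.groups.indexOf(group) === -1) user.groups.push(group);
  onModify(user, groupPriv);
}

var u = makeUser("CKENT");
assignMember(u, "ADMINS", "PRIV:REP:GROUP:ADMINS");
assignMember(u, "ADMINS", "PRIV:REP:GROUP:ADMINS"); // idempotent
// provisionEvents.length === 1
```

The second call shows the idempotence: membership and privilege are already in sync, so nothing fires.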



If you assign a group privilege to a user directly, you basically start the process at step 4.


SAP Provisioning Framework 7.2 v2

The process is the same, but since v2 is embedded in Java (with the advantage of improved speed), the detection of MXREF_MX_GROUP in modify happens inside the Java code.




And the assignments evaluation happens in the Split conditional and you are redirected to the MX_PERSON_ASSIGNMENT.

And the conditional : Person after Assignment provisioning deals with the assignment type evaluation etc.



SAP NetWeaver IdM REST API UI - calling POST method/example



I had a problem executing a POST method after a new security requirement was added (the Virus Scan Interface was enabled) within IdM REST Interface Version 2 to prevent XSRF attacks. I first had to execute a non-modifying request (GET, HEAD, OPTIONS) in which the X-CSRF-Token header field has the value Fetch. Once I had the token value from that first call in the X-CSRF-Token header field, I was able to execute a modifying request (POST, ...). Here is an example of how I do that:


var xsrfTokenValue = "";

var myData = new Object();

// Non-modifying request first, to fetch the CSRF token
$.ajax({
                type: "GET",
                url: "http://host:port/idmrest/v72alpha/entries/{MSKEY}/tasks/{TASKID}",
                dataType: "json",
                async: false,
                contentType: "application/x-www-form-urlencoded",
                headers: {
                               "X-CSRF-Token": "Fetch",
                               "X-Requested-With": "XMLHttpRequest"
                },
                success: function(res, status, xhr){
                     xsrfTokenValue = xhr.getResponseHeader("X-CSRF-Token");
                }
});

// Modifying request, sending the fetched token back
$.ajax({
                type: "POST",
                url: "http://host:port/idmrest/v72alpha/entries/{MSKEY}/tasks/{TASKID}",
                dataType: "json",
                async: false,
                contentType: "application/x-www-form-urlencoded",
                data: myData,
                headers: {
                               "X-CSRF-Token": xsrfTokenValue,
                               "X-Requested-With": "XMLHttpRequest"
                },
                success: function(data){
                     // process the response here
                }
});

  • The value for the X-CSRF-Token header (from the GET request) is stored in the xsrfTokenValue variable.
  • The headers object contains all the required IdM headers.
  • In myData (in my POST request) you can dynamically build the object with the data to send back to IdM.
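For reference, myData in the POST request might be built like this. The attribute names below are purely illustrative; use whatever attributes your IdM task actually expects:

```javascript
// Sketch: build the POST payload dynamically. The attribute names
// here are examples only, not fixed IdM names.
var myData = {};
myData["MX_FIRSTNAME"] = "John";
myData["MX_LASTNAME"] = "Doe";

// With the form-urlencoded Content-type used above, jQuery
// serializes this object as MX_FIRSTNAME=John&MX_LASTNAME=Doe.
```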

As always, we start with a quote:


Captain Spock: All things being equal, Mr. Scott, I would agree with you. However, all things are not equal.


My latest project has me working with DB2 as the IDM backend. We’ve faced several challenges along the way, many in the area of performance and some in general development. I’ll be addressing some of the performance issues in a future post (I need more information from DBAs and other research first).

One of the first things I learned about DB2, is how IDM recognizes it. I’m not talking about the Oracle emulation layer, but rather what code IDM uses to determine the database type. Consider the following code from the SAP IDM RDS Solution, which I have since modified:

// Main function: sapc_prepareSQLStatement

// 12MAR2014 - MGP - Added DB2 Support, some cleanup

function sapc_prepareSQLStatement(Par){

    var dbType = "%$ddm.databasetype%";
    var script = "sapc_prepareSQLStatement";
    var returnValue = "";
    // uWarning("dbType: " + dbType);

    // Processing
    if ( dbType == 1 ) { // MS-SQL
        // uWarning("Database Type is MS-SQL.");
        // uWarning("Par: " + Par);
        return Par;
    }
    else if ( dbType == 2 ) { // ORACLE
        returnValue = uReplaceString(Par, "AS", "");
        // uWarning("Database Type is Oracle.");
        // uWarning("returnValue: " + returnValue);
        return returnValue;
    }
    else if ( dbType == 5 ) { // DB2
        returnValue = uReplaceString(Par, "AS", "");
        // uWarning("Database Type is DB2.");
        // uWarning("returnValue: " + returnValue);
        return returnValue;
    }
    else {
        uErrMsg(2, script + " SQL Task: invalid database type: " + dbType);
        // return error message and empty result
        return "";
    }
}

Note that while SQL Server has a value of 1 and Oracle has a value of 2, DB2 has a value of 5. Makes you wonder what happened to 3 and 4… (Sybase and MySQL, maybe?)
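To see what the AS-stripping branches above actually do, here is a standalone illustration. I'm stubbing uReplaceString here with a plain global substring replacement, which is an assumption on my part; in IDM it is a built-in function:

```javascript
// Stub of IDM's built-in uReplaceString for illustration only:
// assumed to be a plain global substring replacement.
function uReplaceString(str, from, to) {
  return str.split(from).join(to);
}

var stmt = "SELECT mskey AS id FROM idmv_value_basic";
var stripped = uReplaceString(stmt, "AS", "");
// "SELECT mskey  id FROM idmv_value_basic" - note the double space,
// and note that any identifier containing the letters "AS" would be
// mangled too; something to watch for in your own statements.
```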


The other thing that I discovered is that when writing values back to a table in DB2, certain values are not welcome. I needed to write a multi-value entry back to the database and kept receiving error messages. I was finally able to get a useful error message by changing the properties of the To Database pass I was using so that it would do SQL updating.

To Database.png

When I did that, I received the following message (data has been changed to protect the innocent):


SQL Update failed. SQL:INSERT into recon_roleassign_EPD values (AAA__00000,AAA__00000,BBB__00000) com.ibm.db2.jcc.am.SqlSyntaxErrorException: An unexpected token "!" was found following "PD values (NL__HR005". Expected tokens may include: "!!".. SQLCODE=-104, SQLSTATE=42601, DRIVER=3.66.46


I was somewhat confused since this is the “standard” delimiter, at least as far as IDM is concerned. However, after checking with a knowledgeable DB2 DBA, he confirmed that not only is ‘!!’ illegal to write, but so is ‘||’. He also mentioned that there is a list of these out on the web somewhere, but I was not able to find it. If anyone can find the list of illegal characters, please comment on this entry and I will update it. I wound up using ‘;;’ as a delimiter.


As an aside, I have been asked over the years, why use ‘!!’, ‘||’, or even ‘;;’ as a delimiter? The answer as I understand it, is this:


Any single character you use as a delimiter has a chance of appearing as part of the data string itself. That's fairly obvious for common delimiters such as comma, colon, or dash, and still possible for characters such as pipe, slash, or pound. However, when you use two like characters as a delimiter, the chance that the pair is supposed to be part of the actual data string is greatly reduced.
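A quick illustration of the collision problem, in plain JavaScript with invented example values:

```javascript
// A single-character delimiter breaks when the data itself contains it:
var values = ["Smith, John", "Doe, Jane"];
var joined = values.join(",");        // "Smith, John,Doe, Jane"
var broken = joined.split(",");       // 4 pieces instead of 2

// A doubled delimiter is far less likely to appear in real data:
var safe = values.join(";;").split(";;");  // round-trips cleanly
```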


So there you have it, DB2 = 5, and you can’t write ‘!!’ or ‘||’ to a table. What other DB2 tips do you have to share?

Matt Pollicove


Posted by Matt Pollicove May 6, 2014

I had the opportunity to attend SAP’s first CEI program session for IDM. These sessions are designed to give SAP the chance to get some information on not only what customers are looking for from SAP IDM but also what they need from it. I think this is a fantastic initiative on SAP’s part for a few reasons.


  1. For the first time since SAP acquired MaXware almost seven years ago, they are going out to seek information from a variety of customers about how they would use the product. Previous CEI initiatives were aimed at a very small group of SAP IDM customers. From what I understand, this new group, mostly volunteers, is about 10 times larger! During this first session, I heard a few ideas come out that I don’t think SAP had thought of, but whose value they saw almost immediately.
  2. SAP is actively engaging customers, partners, and consultants for feedback. The first session centered on the integration of Success Factors HR with IDM and what customers would be able to expect via an enhanced “ramp up” program.
  3. There are other sessions planned to cover a variety of topics in the coming months showing a considerable amount of initiative from SAP. I’m looking forward to exploring it in more detail.


Based on what I saw in this presentation and from things I’ve seen at past TechEd sessions, the forums, and general research, I’m seeing the following trends at SAP regarding Identity Management. These opinions are entirely my own, although comments from SAP are, as always, welcome.


  1. IDM is being positioned as the key workflow engine for managing identities throughout the SAP Landscape. If information is coming from any resource involving people, IDM is where it should be processed. HCM, Success Factors, PI, XI, AD, LDAP, no matter what the source, IDM should be handling the flow of information to other systems in the landscape and applications in the Enterprise. Unlike some other “people centric” systems like GRC and CUA, IDM is uniquely designed to work in a heterogeneous environment.
  2. It appears that SAP is poised to start a new phase of growth for IDM. This isn’t just the basic, release the latest iteration of the connector list and publish a new service pack. It’s time to add some new functionality and breathe some new life into SAP IDM!


One of the other benefits of this call-in session was getting to hear the voices of several people I have met in my travels and others that I have only met through SCN. Hope we all get to meet up soon! (TechEd, I mean d-code, anyone?)


I’m looking forward to seeing what SAP has planned for IDM. Here’s to the new IDM frontier in the Landscape and the Enterprise beyond!

Dear community,


as I often get the question of how to build a simple example using the SAP Identity Management REST API, I am writing a small blog post about it.


  • Attached, you will find a text file. Download it to a directory of your choice and rename it to index.html.
  • Download the jQuery library from the following location and save it to the same directory as jquery-1.9.1.min.js:


  • Edit the index.html file with a plain text editor (e.g. Sublime Text) and change the host definition so that it fits your environment:

var baseUrl = 'http://localhost:50000/idmrest/v1';

  • After saving the file, open it with a browser (e.g. Google Chrome) and execute the "Search for Users" functionality. You will be prompted for a username/password. As a result, you should see a list of MX_PERSON entries.
  • Afterwards, execute the "Get Data from Display Task" functionality for a valid MSKEY, and you will see the attributes of this entry.


With the Google Chrome Developer Tools, you can easily look behind the scenes and see which data is moved over the wire.


I recommend the following pages and tutorials for further information:







In addition, see the following blog posts:

Write your own UIs using the ID Mgmt REST API

SAPUI5 and ID Mgmt - A Perfect Combination



Maybe some of you have experienced this problem and maybe not. Maybe you just knew the answer, but I couldn't find it on here anywhere, so when I figured it out, I figured I'd share.


In the current environment I'm working in, when a new account is entered into IDM, be it through IDM directly or via the HR system, the first 6 characters of the last name and a couple of characters from the first name or nickname are used to build the MSKEYVALUE, which in turn becomes the user's Windows and SAP login IDs. We call this the 6+2 unique ID. The problem was that if the person had spaces in their last name, each space counted as a character. It would get squeezed out when the actual MSKEYVALUE was created, but that left the ID in a 5+2 state.


For example, a name of "Jodi Van Camp", "Van Camp" being the MX_LASTNAME, would turn out an MSKEYVALUE of "VanCaJo" when it should be "VanCamJo".
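A rough sketch of the 6+2 construction (my own illustration, not the actual production code) shows where the space sneaks in:

```javascript
// Naive 6+2: take the first six characters of the last name as-is,
// then squeeze the spaces out afterwards.
function naiveId(lastName, firstName) {
  var raw = lastName.substring(0, 6) + firstName.substring(0, 2);
  return raw.split(" ").join("");
}

// "Van Camp".substring(0, 6) is "Van Ca" - the space counts as one of
// the six characters, so after removing it we are left with 5+2:
naiveId("Van Camp", "Jodi");  // "VanCaJo" instead of "VanCamJo"
```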


The bottom line was, we needed to eliminate those spaces in the last name for the purpose of creating the MSKEYVALUE.


I thought it would be a simple replace using a script. Maybe something like this:


function z_eliminateWhitespace(Par){
  var result = Par.replace(/\s+/g, "");
  return result;
}

Or maybe this:


function z_eliminateWhitespace(Par){
  var result = Par.replace(/\s/g, "");
  return result;
}

Or this:


function z_eliminateWhitespace(Par){
  var result = Par.replace(/ /g, "");
  return result;
}

Or lastly, this:


function z_eliminateWhitespace(Par){
  var result = Par.replace(" ", "");
  return result;
}

None of this seemed to work. It has happened to me way too many times that a SQL query or JavaScript won't work in IDM exactly the way it does in other environments, so this wasn't a total surprise, but now what? Finally, I happened on the idea of splitting the string on the spaces and rejoining it without them. This is the script I eventually came up with, and it seems to work:


function z_eliminateWhitespace(Par){
  var result = Par.split(" ").join("");
  return result;
}

The final script had an IF line before the split/join checking Par to make sure it wasn't empty or a NULL value, but you get the general idea. Hope this helps someone out there someday.
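Putting it together with the null/empty check mentioned above, the final version might look something like this (the guard is my reconstruction of what the post describes, not the exact production script):

```javascript
function z_eliminateWhitespace(Par) {
  // Guard against empty or null input before touching the string
  if (Par == null || Par === "") {
    return Par;
  }
  // Split on the spaces and rejoin without them
  return Par.split(" ").join("");
}

z_eliminateWhitespace("Van Camp");  // "VanCamp"
```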

Hi All,


I want to share a simple example with you to demonstrate how you can utilize SAP IdM to invoke a local PowerShell script.

In my scenario I am using the Quest ActiveRoles Server Management Shell for Active Directory, but this should work with the Windows AD cmdlets as well.


In my Plugins folder I have replaced the standard To LDAP directory pass with a new Shell execute pass.

Screen Shot 2014-04-03 at 22.53.01.png

In the Destination tab you should disable the option "Wait for execution" and insert the following command with your arguments.


cmd /c powershell.exe -Command "c://scripts//ProcessQADUser.ps1" %$rep.QARS_HOST% %$rep.QARS_PASSWORD% %MSKEYVALUE% $FUNCTION.cce_core_descryptPassword(%MX_ENCRYPTED_PASSWORD%)$$ "'%Z_ADS_PARENT_CONTAINER%'" %MX_FIRSTNAME% "'%MX_LASTNAME%'"

Screen Shot 2014-04-03 at 22.57.50.png

Please remember to separate the arguments using whitespace, as PowerShell will remove commas and convert the arguments into an array.


Hope this helps.





The enhanced approval mechanism was introduced with SAP NetWeaver Identity Management 7.2 SP4. The purpose was to add more functionality as well as improve performance.


This post will attempt to clarify how the basic approvals are handled when the 7.2 approvals are enabled. It will explain why you won't always see the approvers for the basic approvals in the "Approval Management" of Identity Management Administration User Interface.


Defining the basic approvers

For basic approvals, the approvers are defined on the task, using the same mechanism as access control. This may include using an SQL filter to determine who is allowed to approve. This gives you a really powerful way of defining the approvers, but it also has some drawbacks.


In the following example, I've defined a role called ROLE:APPROVER. A user with this role is allowed to approve, but only for users within the same cost center, i.e. users whose MX_COSTCENTER attribute has the same value.


The approver definition looks like this:


The filter to select users within the same cost center may look like this (on Microsoft SQL Server):



SELECT mskey
  FROM idmv_value_basic with (NOLOCK)
 WHERE (mskey IN (SELECT mskey
                    FROM idmv_value_basic
                   WHERE AttrName = 'MX_COSTCENTER'
                     AND SearchValue = (SELECT aValue
                                          FROM idmv_value_basic
                                         WHERE AttrName = 'MX_COSTCENTER'
                                           AND mskey = %ADMINMSKEY%)))



During execution, the %ADMINMSKEY% will be replaced by the MSKEY of the approver.


Determining the approvers

To determine the approvals for a given user, each and every pending approval must be checked. This evaluation is done when the To Do tab is opened. So, for everyone who is a member of ROLE:APPROVER, the system has to check all the pending approvals to see if the target of each pending approval is in the same cost center as the logged-in user.


It is not possible to "reverse" the statement to get all the approvals for a given user (%ADMINMSKEY%).


As a side note: determining approvers for assignment approvals is simpler, as this will always be a list of users, privileges or roles, which can be expanded immediately.


Performance improvement for basic approvals

A major performance improvement was made in the handling of basic approvals: the approver information is saved, which means that each approver only needs to run the above check once for each new approval.


Whenever an approver is calculated, this approver is added to the internal approval data structure, which makes subsequent listings of approvals very fast compared to having to recalculate this every time the user lists the approvals.


The MX_APPROVALS attribute

The MX_APPROVALS attribute is (as before) written to entries where an approval is pending, but it is not used during the approval process. Therefore, if you have code which manually changes this attribute, it will have no effect on the pending approval.


Approval management

With the 7.2 approvals, we also added approval administration, both for managers and for administrators. This works fine for assignment approvals (which are always expanded), but for basic approvals you will only see approvers who have actually listed their approvals in the "To do" tab and, as a result, been added to the mxi_approver table.



Because filters can be used to define the approvers for basic approvals, it is not possible to expand the approvers up front, and thus not possible to send notification messages. In addition, these approvers will not be shown in the approval management for the manager until they have been expanded.

Single Sign-On versus Password Synchronization solutions.

How do you know which one is right for you?


This blog, co-authored with Benjamin GOURDON, is based on several customers’ experiences.


The purpose of this blog is to perform a quick comparison and provide an overview of the pros and cons of Single Sign-On and Password Synchronization solutions. Both are designed to greatly reduce the number of support calls and improve user comfort, and both can deliver a return on investment in under 3 months, as proven by many customer implementations.

Single Sign-On: SAP NetWeaver Single Sign-On


SAP NetWeaver Single Sign-On enables users to access all their applications through a single authentication event. From an end-user perspective, there is no longer a need to provide credentials for connecting to each application.


The overall solution is subdivided into three sub-solutions:


  • Secure Login, which enables SSO to SAP systems using SAP GUI and to other web applications in the same domain. Based on Kerberos tickets or X.509 certificates.
  • Identity Provider, which enables SSO to any web application or web service with identity federation. Based on SAML 2.0.
  • Password Manager, which enables SSO to applications that do not support any standard protocol and require login/password information (previously recorded locally).


Depending on the system landscape, three different implementation scenarios are suitable, and the scenario will determine the identification protocol:

  • Homogeneous landscape: only SAP applications in the same domain
  • Heterogeneous landscape: SAP and non-SAP applications in the same domain
  • Heterogeneous landscape and inter-domain ("on cloud" applications)


Password synchronization: SAP NetWeaver Identity Management


SAP NetWeaver Identity Management allows you to synchronize passwords throughout your IT landscape so that users can access any application with the same password. Each password change in SAP IDM or in Microsoft Active Directory is automatically replicated to all other integrated or supported systems as a productive password (optional). To secure this solution, the provisioned password must be transmitted over secure channels (using SNC for SAP ABAP systems, or SSL for web applications, including SAP Java systems and directories).

From an end-user perspective, this means using the same password for every application you want to log on to.

For additional information about this solution, I strongly recommend reading this blog written by Jérémy Baars:



Determine the solution that best balances cost, security, user comfort, and adaptability according to your criteria.


The table below compares Password Synchronization and Single Sign-On by analyzing their respective strengths and weaknesses.



So let's consider several criteria for choosing the most appropriate solution:


User Friendliness

As you can see above, SAP NetWeaver Single Sign-On offers a better end-user experience, as it reduces the number of times a user must type an ID and password to access an application. This also contributes to raising user productivity.


Evolution perspectives

SAP Identity Management optimizes the user lifecycle and simplifies user management. It replaces SAP Central User Administration (CUA), which will no longer be developed by SAP. As such, it could be interesting to choose the password synchronization approach if you plan to implement an Identity & Access Management solution in the near future.



Security

If security is an important criterion for your choice, implementing SAP NetWeaver Single Sign-On will guarantee strong authentication by blocking traditional access to each application concerned.



Costs

From a financial point of view, there is not much difference in implementation costs. The choice should be guided more by the policy and strategy of the enterprise.

