Maybe some of you have experienced this problem, and maybe not. Maybe you already knew the answer, but I couldn't find it anywhere on here, so when I figured it out I thought I'd share.


In the current environment I'm working in, when a new account is entered into IDM, whether through IDM directly or via the HR system, the first 6 characters of the last name and a couple of characters from the first name or nickname are used to build the MSKEYVALUE, which in turn becomes the user's Windows and SAP login IDs. We call this the 6+2 unique ID. The problem was that if a person had spaces in their last name, each space counted as a character. The space would get squeezed out when the actual MSKEYVALUE was created, but that left the ID in a 5+2 state.


For example, for the name "Jodi Van Camp", with "Van Camp" as the MX_LASTNAME, the generated MSKEYVALUE would come out as "VanCaJo" when it should be "VanCamJo".
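The 6+2 rule described above can be sketched roughly like this. This is a hypothetical illustration, not the site's actual generation code; the function name and exact truncation details are my own:

```javascript
// Hypothetical sketch of the 6+2 rule: first 6 characters of the last name
// (with spaces removed first) plus the first 2 characters of the first name.
function z_buildUniqueId(lastName, firstName) {
  var cleanLast = lastName.split(" ").join("");   // squeeze out spaces first
  return cleanLast.substring(0, 6) + firstName.substring(0, 2);
}
```

With this order of operations, z_buildUniqueId("Van Camp", "Jodi") gives "VanCamJo"; truncating to 6 characters before removing the space is what produces the broken "VanCaJo".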


The bottom line was, we needed to eliminate those spaces in the last name for the purpose of creating the MSKEYVALUE.


I thought it would be a simple replace using a script. Maybe something like this:


function z_eliminateWhitespace(Par){
  var result = Par.replace(/\s+/g, "");
  return result;
}

Or maybe this:


function z_eliminateWhitespace(Par){
  var result = Par.replace(/\s/g, "");
  return result;
}

Or this:


function z_eliminateWhitespace(Par){
  var result = Par.replace(/ /g, "");
  return result;
}

Or lastly, this:


function z_eliminateWhitespace(Par){
  var result = Par.replace(" ", "");
  return result;
}

None of this seemed to work. I've seen it happen way too many times that a SQL query or JavaScript won't work in IDM exactly the way it does in other environments, so this wasn't a total surprise. But now what? Finally, I hit on the idea of splitting the string on the spaces and rejoining it without them. This is the script I eventually came up with, and it seems to work:


function z_eliminateWhitespace(Par){
  var result = Par.split(" ").join("");
  return result;
}

The final script had an IF check before the split / join to make sure Par wasn't empty or a NULL value, but you get the general idea. Hope this helps someone out there someday.
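For completeness, here is a sketch of what that final version might look like with the guard in place. The exact check in the production script may differ; this is just one way to write it:

```javascript
// Sketch of the final script: guard against empty or null input
// before splitting, then remove spaces via split/join.
function z_eliminateWhitespace(Par){
  if (Par == null || Par == "") {
    return Par;
  }
  return Par.split(" ").join("");
}
```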

Hi All,


I want to share a simple example with you to demonstrate how you can utilize SAP IdM to invoke a local PowerShell script.

In my scenario I am using Quest ActiveRoles Server Management Shell for Active Directory but this should work with Windows AD cmdlets as well.


In my Plugins folder I have replaced the standard To LDAP directory pass with a new Shell execute pass.


In the Destination tab you should disable the option "Wait for execution" and insert the following command with your arguments.


cmd /c powershell.exe -Command "c://scripts//ProcessQADUser.ps1" %$rep.QARS_HOST% %$rep.QARS_PASSWORD% %MSKEYVALUE% $FUNCTION.cce_core_descryptPassword(%MX_ENCRYPTED_PASSWORD%)$$ "'%Z_ADS_PARENT_CONTAINER%'" %MX_FIRSTNAME% "'%MX_LASTNAME%'"


Please remember to separate the attributes using whitespace, as PowerShell strips commas and converts a comma-separated list into a single array argument.


Hope this helps.





The enhanced approval mechanism was introduced with SAP NetWeaver Identity Management 7.2 SP4. The purpose was to add more functionality as well as improve performance.


This post will attempt to clarify how the basic approvals are handled when the 7.2 approvals are enabled. It will explain why you won't always see the approvers for the basic approvals in the "Approval Management" section of the Identity Management Administration user interface.


Defining the basic approvers

For basic approvals, the approvers are defined on the task, and it uses the same mechanism as the access control. This may include using an SQL filter, to determine who is allowed to approve. This gives you a really powerful way of defining the approvers, but also has some drawbacks.


In the following example, I've defined a role called ROLE:APPROVER. A user with this role is allowed to approve, but is only allowed to approve users within the same cost center, i.e. with the same value for the attribute MX_COSTCENTER.


The approver definition looks like this:


The filter to select users within the same cost center may look like this (on Microsoft SQL Server):



    SELECT mskey
      FROM idmv_value_basic WITH (NOLOCK)
     WHERE AttrName = 'MX_COSTCENTER'
       AND mskey IN (SELECT mskey
                       FROM idmv_value_basic
                      WHERE AttrName = 'MX_COSTCENTER'
                        AND SearchValue = (SELECT aValue
                                             FROM idmv_value_basic
                                            WHERE AttrName = 'MX_COSTCENTER'
                                              AND mskey = %ADMINMSKEY%))


During execution, the %ADMINMSKEY% will be replaced by the MSKEY of the approver.


Determining the approvers

To determine the approvals for a given user, each and every pending approval must be checked. This evaluation is done when the To Do tab is opened. So for everyone who is a member of ROLE:APPROVER, the system has to check all the pending approvals to see if the target of each pending approval is in the same cost center as the logged-in user.
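Conceptually, the evaluation works per pending approval, something like the sketch below. This is illustrative only (IdM does this in SQL, not JavaScript), with the filter reduced to a same-cost-center check:

```javascript
// For each pending approval, evaluate the approver filter with the
// logged-in approver substituted for %ADMINMSKEY%. Only approvals whose
// target passes the filter become visible to this approver.
function listApprovalsFor(approver, pendingApprovals) {
  var visible = [];
  for (var i = 0; i < pendingApprovals.length; i++) {
    var target = pendingApprovals[i];
    if (target.costCenter === approver.costCenter) {
      visible.push(target);
    }
  }
  return visible;
}
```

This also makes clear why the statement cannot be "reversed": you only learn the answer by iterating every pending approval for one concrete approver at a time.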


It is not possible to "reverse" the statement to get all the approvals for a given user (%ADMINMSKEY%).


As a side note: determining approvers for assignment approvals is simpler, as this will always be a list of users, privileges or roles, which can be expanded immediately.


Performance improvement for basic approvals

A major performance improvement was made in the handling of basic approvals: the approver information is saved, which means that each approver only needs to run the above check once for each new approval.


Whenever an approver is calculated, this approver is added to the internal approval data structure, which means that subsequent listing of approvals is very fast, compared to having to calculate this every time the user lists the approvals.


The MX_APPROVALS attribute

The MX_APPROVALS attribute is (as before) written to entries where an approval is pending, but it is not used during the approval process. Therefore, if you have code that manually changes this attribute, the change will have no effect on the pending approval.


Approval management

With the 7.2 approvals, we also added approval administration, both for managers and for administrators. This works fine for the assignment approvals (which are always expanded), but for basic approvals you will only see approvers who have actually listed their approvals in the "To do" tab and, as a result, been added to the mxi_approver table.



Because filters can be used to define approvers for basic approvals, the approvers cannot be expanded up front, and thus notification messages cannot be sent. In addition, these approvers will not be shown in the approval management for the manager until they have been expanded.

Single Sign-On versus Password Synchronization solutions.

How do you know which one is right for you?


This blog co-authored with Benjamin GOURDON is based on several customers’ experiences.


The purpose of this blog is to perform a quick comparison and to provide an overview of the pros and cons of Single Sign-On and Password Synchronization solutions. Both are designed to greatly reduce the number of support calls and improve user comfort, and both provide a return on investment in less than 3 months, as proven by many customer implementations.

Single Sign-On: SAP NetWeaver Single Sign-On


SAP NetWeaver Single Sign-On enables users to access all their applications through a single authentication event. From an end-user perspective, there is no longer a need to provide credentials for connecting to each application.


The overall solution is subdivided into three sub-solutions:


  • Secure Login, which enables SSO to SAP systems using SAP GUI and to other web applications in the same domain. Based
    on Kerberos tickets or X.509 certificates.
  • Identity Provider, which enables SSO to any web application or web service with identity federation. Based on SAML 2.0.
  • Password Manager, which enables SSO to applications that do not support any standard protocol and require
    login/password information (previously recorded locally).


Depending on the system landscape, 3 different implementation scenarios are suitable and will determine the identification protocol: 

  • Homogeneous landscape: only SAP applications in the same domain
  • Heterogeneous landscape: SAP and non-SAP applications in the same domain
  • Heterogeneous landscape and inter-domain (cloud applications)


Password synchronization: SAP NetWeaver Identity Management


SAP NetWeaver Identity Management allows you to synchronize passwords throughout your IT landscape so that users can access any application with the same password. Each password change in SAP IDM or in Microsoft Active Directory is automatically replicated to all other integrated or supported systems as a productive password (optionally). To secure this solution, the provisioned password must be transmitted over secure channels (using SNC for SAP ABAP systems, or SSL for web applications including SAP Java systems and directories).

From an end-user perspective, this means using the same password for every application where you want to log on.

For additional information about this solution, I strongly recommend reading this blog by Jérémy Baars:


Determine the solution that best balances cost, security, user comfort, and adaptability according to your criteria.


The table below compares Password Synchronization and Single Sign-On by analyzing their respective strengths and weaknesses.




So let's consider several criteria to choose the most appropriate solution:


User Friendliness

As you can see above, SAP NetWeaver Single Sign-On offers a better end-user experience, as this solution reduces the number of times a user must type an ID and password to access an application. This also contributes to raising user productivity.


Evolution perspectives

SAP Identity Management allows you to optimize the user lifecycle and simplify user management. It replaces SAP Central User Administration (CUA), which will no longer be developed by SAP. As such, it could be interesting to choose the password synchronization method if you plan to implement an Identity & Access Management solution in the near future.



If security is an important criterion for your choice, implementing SAP NetWeaver Single Sign-On will guarantee strong authentication by blocking traditional access to each application concerned.



From a financial point of view, there is not much difference in implementation costs. The choice should be driven more by the policy and strategy of the enterprise.

This blog presents a method for designing SAP HR integration with SAP IDM. It also gives you a broad understanding of HR use cases and some tips for succeeding with this implementation. Here are some thoughts summarized from several customer experiences with SAP HR implementation with IDM.




HR possible use cases related to IDM scenarios


Combinations of Personnel Administration (PA) "tasks" (example: leaving) and "reasons" (example: firing) can quickly take on different meanings and generate many changes in an employee's PA file. Below is a description of the most common PA tasks that define an employee's lifecycle:


  • Hiring: hiring a new employee can happen under different contract types or employment categories, for instance as a permanent employee or as a trainee.


  • Rehiring: reentering an employee into the company after a long period of absence, for example after maternity protection leave (same as suspension of contract).


  • Organizational reassignment: occurs when the employee changes position or cost center, or is moved to another subsidiary.

"Promotion" is an essential case to consider in IDM design, as it can be directly related to automatic role calculation, such as ESS/MSS roles.


  • Country Reassignment: refers to an employee being assigned to an organizational unit in another country, in other words, the employee is being expatriated to a different country.


  • Basic employee information modification: changing an employee’s last name in case of marriage, for example. This case can be especially significant when login IDs are based on the user’s last name.


  • Early retirement: as with any other "early event", this is when information for future events is updated in SAP HR (same as extension of contract).

In these use cases, validity dates must be managed carefully during the IDM design phase.


  • Leaving: when an employee leaves the company.




Identity Lifecycle regarding HR Business processes


Key steps for a successful design of SAP IDM scenarios derived from HR use cases


     1. Dig into how the customer deals with every HR process

Essential SAP HR personnel administration tasks are defined and performed differently from one customer to another.

Prepare a set of questions to ask about every process during the design phase. Example questions:

    • How do HR operators deal with expatriations?
    • Is it a leaving task followed by a rehiring?
    • Is it only an organizational change?


    2. Think big … start small

When implementing HR with IDM, we tend to automate account management following predefined rules.

Automatic rules can't fit 100% of a company's employees; that's why it's important to limit the HR scope to a small "population" at first, then extend it gradually.


    3. Make it simple

SAP IDM provides a set of good utilities to manage rules on roles, such as RBAC, dynamic groups for automatic calculation, and inheritance between business role layers.

When designing the role model, keep it as simple as possible and avoid combining many IDM utilities; the structure quickly gets messy, and it's always a pain to explain to IDM end users.


    4. Spot relevant information

Pick out the relevant information you need to build SAP IDM workflows, and translate what you understood from the HR process into IDM workflows in a basic way.

From the IDM side, everything is about creation, modification, and deletion.


     5. Summarize and focus on SAP IDM fundamentals

Sort out the collected information and focus on what you really need to know to build IDM workflows. Below is an example of an easy way to recap:


HR process                  | IDM workflow (standard or specific* modification)  | Relevant information for IDM
----------------------------|----------------------------------------------------|-----------------------------------------------------
Hiring a trainee            |                                                    | New PA*
Hiring a permanent employee |                                                    | New PA*
                            |                                                    | New PA*
Marriage / divorce          |                                                    | Last name modification
                            |                                                    | Personnel area / Country / Organization modification
Company transfer            |                                                    | Organization modification
                            |                                                    | Contract type / country modification


*Specific modification: implementing a triggered modification workflow based on event tasks in IDM to respond to a customer's specific business requirements.

*PA: personnel administration




What you need to know about customization


Here are some tips that you will probably have to anticipate:

  • Query result: if you see many records for the same employee, you will probably have to ask your developer to consolidate them into one.
  • HCM write-back: if you choose to write information back to HR, think about deselecting the corresponding "communication" data from the SAP query, since you set SAP IDM as the master of the "communication" infoset.
  • Future events in HR, such as future departures, usually require a modification of the standard query selection.


Driving SAP IDM processes from SAP HR events proves to be a good way to cut support costs.


Feel free to try these tips, and leave us a comment to let us know if they prove efficient for your projects too :-)

The latest support package, SP9, for SAP NetWeaver Identity Management, contains important integration enhancements for the SAP and GRC provisioning frameworks, and an updated SAP HANA connector. You can learn how to set up the connector in Penka Tatarova's new video.


SP9 also offers a new feature: You can now benefit from attestation capabilities. Attestation, also known as re-certification, means that managers or administrators periodically check and "attest" that a person only has those access rights he or she should have.


Need more information? Take a look at our overview presentation, and read the detailed release notes on the SAP Help Portal.
Ready to download? Get the support package on the SAP Service Marketplace.

Network Security: Don’t Leave Your Virtual Doors and Windows Open


Imagine designing a new home. It’s likely you’d focus on the overall layout first and then move on to the layout of each room. From there, you’d incorporate important features, like your heating and air conditioning systems, plumbing, and maybe a surround sound system. Maybe you’d start selecting appliances. And of course, you’d want input on the design and décor of your floors, walls, and ceilings.



But what if your contractor forgot to include locks on your doors? Or used easily shattered glass for your windows? What about installing a security system or screens to keep out pests? No matter how functional or beautiful your home is, your investment isn’t worth much if it’s vulnerable to outside threats.


But that’s often the case for many organizations that build out their networks. They design an efficient, state-of-the-art solution with an attractive interface, but they forget a key component: network security. In effect, they’re leaving their doors and windows open to the internet equivalents of home burglars and pests: the hackers, cyber terrorists, worms, and moles.



Network Security Shouldn’t Be an Afterthought

Often, security is added retroactively, when the damage is already done. Many companies don’t recognize that they have a problem until after their digital walls have been breached. And what’s even more dangerous is that some may not even realize that an attack has occurred at all. Often, the attacks are designed to be surreptitious. The longer an attack goes undetected, the more information can be stolen.


A single cyber-attack can tear down what a company has spent years building, resulting in:

  • The loss of intellectual property and proprietary data
  • Disruption to services for days, weeks, or months
  • Permanent damage to your brand loyalty and reputation
  • Legal costs associated with compensating customers for loss or identity theft
  • Compensation related to delays in meeting contractual obligations
  • Loss of customers to competitors
  • An increase in insurance premiums


So just how common is cybercrime? Both small businesses and corporations are at risk. In my next post, I’ll talk numbers.

Assignment Notification Customization and Standalone Notifications


When the new approval mechanism was introduced in SP4, we also added a new notification script. This was designed to be fairly flexible and usable for other notifications as well, so you can use it in your workflows to send notifications about anything. I will do two things in this blog:


1) Add additional/custom replacement strings to existing template

2) Use the Assignment Notification task to send a message as part of a regular (non-assignment) workflow




Common steps


Importing the Assignment Notification task and Notification Templates job folder


Start by importing the job folder and task and configuring the notification repository as outlined in the documentation. This is the quick version:

The templates are located in "\usr\sap\IdM\Identity Center\Templates\Identity Center\Provisioning\Notifications"


Right click on a provisioning group of your choice (or create a new one like I did with "Notification blog thingy") and import Assignment Notification_Task.mcc

Right click on a job folder of your choice or the top node (I used the default Job Folder) and import Notification Templates_jobFolder.mcc



The first thing to do is check whether you need to fix a mistake we made. The Assignment Notification task was not made public in the template provided with some versions. So open the task and verify that Public is checked; if it isn't, check it and save:


If this is not done, any attempts to start this task using uProvision will be rejected.


Configure the notification repository values


You also need a notification repository with these values set:


Import the standard notification templates


Next we need to update the templates in the database, so run the job named Import notification templates and verify there are no errors.



Create a basic approval workflow


Ordered task
  Action with a To Identity Store pass

  Approval Task


To Identity Store pass in Add approvers to PVO

Configure the approval task

- use the PVO to get approvers (MX_APPROVERS attribute).

- use the Assignment notification task as the ... notification task.

- use the Approval Initial Notification template as the initial message.


Create a test privilege


The privilege for this does not need anything more than a name and the Validate Add task pointed to the Assignment Approval Workflow that we created.



Creating users and repeatedly testing


It's very useful to create a small job that just creates a new user and performs a few test operations on it; this sample can be used in a multitude of scenarios. In this job I have two very simple passes. The first just sets an email address on the user I've decided to use as an approver:


The next pass creates a new user prefixed with the %ddm.job% constant which is a counter that increases every time the job is run.


This means that every time I run this job, a user with the name USER.TEST.BLOGNOTIFICATION.<number> will be created and assigned the privilege that has the approval task, and I can rerun this and test the approval process with new users as many times as I need until I get all the bugs sorted out of my configuration.


At this point you should be able to run the job and get a notification in your mailbox when the assignment hits the approval task, more or less like this one if you're using the same release as me. Not perfect, but enough for this purpose:


End of Common


Customizing existing message with new strings


This is a fairly simple operation, and I will use the Initial Approval Notification sent to approvers to demonstrate it. The notification script looks at the context variables for most of its data. All the template files use a syntax of PAR_<VARIABLENAME> for text replacement strings. These are passed to the notification script as context variables named MSG_PAR_<VARIABLENAME>. This means that to add a new value, all you have to do is:


1) Add the new string replacement variable to the template file

2) Add the context variable using the uSetContextVar function
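To illustrate the PAR_/MSG_PAR_ convention, this sketch shows how a template's PAR_<NAME> placeholders could be filled from MSG_PAR_<NAME> variables. It is my own approximation of the idea, not the actual notification script:

```javascript
// Replace each PAR_<NAME> placeholder in the template with the value of
// the corresponding MSG_PAR_<NAME> context variable.
function applyTemplateVars(template, ctx) {
  var out = template;
  for (var key in ctx) {
    if (key.indexOf("MSG_PAR_") === 0) {
      // e.g. MSG_PAR_QOTD in the context fills PAR_QOTD in the template
      var placeholder = "PAR_" + key.substring("MSG_PAR_".length);
      out = out.split(placeholder).join(ctx[key]);
    }
  }
  return out;
}
```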


Editing the template


Locate the file AssignmentRequestApprovalinitialnotification_EN.html in \usr\sap\IdM\Identity Center\Templates\Identity Center\Provisioning\Notifications. Open it in a UTF-8 compatible editor and start editing. For this example I've modified the end of the file, added PAR_QOTD where the copyright notice used to be, and inserted a shopping-list reminder above the URL:


Next we need to update the templates in the database, so run the job named Import notification templates as outlined in the common section


Setting the additional variables in the workflow


First we need an Assignment Approval workflow. Again, I keep most of this short and simple (it's well documented in tutorials etc.) and focus on the new part: Add custom message variables. The basic layout of the task is described in the Common section.


Insert a new action with a To Generic pass after the Add approvers to PVO action.
Add custom message variables is a To Generic pass with a fairly simple entry script, listed below. It takes the array and splits it first on !!, then writes each %1=%2 combination as an audit variable named #MSG_%1 with value %2. See the Additional Data section for how it actually looks in the table.


// Main function: setCTXVARS
function setCTXVARS(Par){
  var tmp = Par.get("MSGVARS");
  var ctxVars = uSplitString(tmp, "!!");
  for (var ctxvar = ctxVars.iterator(); ctxvar.hasNext(); ) {
    tmp =;
    var vals = tmp.split("=");
    var ctxVarToSet = "#MSG_" + vals[0];
    var ctxValToSet = vals[1];
    var OutString = uSetContextVar(ctxVarToSet, ctxValToSet);
  }
}

The QOTD script returns a random string. I'll attach it to the end of the blog for the curious.


With the templates up to date in the system, it's time to test; to do that, we just run the test job from the Common section again. My result (the red outlines around the changes are added by me):



End of Customizing existing message with new strings!



Sending custom messages


This is a bit more complex. First we need to create a new template. The default location for the template files is

\usr\sap\IdM\Identity Center\Templates\Identity Center\Provisioning\Notifications


Creating a new template

Make sure you use a text editor that can save the file as UTF-8 without BOM (Byte Order Mark, which produces two garbage characters at the beginning of the message if included) when performing the next steps.


We're creating a new message template text file named MyCustomMessage_EN.html with the following content:


<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">




<p>Something PAR_DESCRIPTION has happened!</p>




Next we need to add a row describing this template to the index file, AssignmentNotificationsList.txt:


Here's the text for those who like things easy.

Custom Workflow Message;EN;999;Its custom;Message for you sir;CHARENC=UTF8;MyCustomMessage_EN.html;CUSTOM


Importing the template


Next we need to get this template into the mc_templates table, which is what the Import Notification Templates job we imported earlier does. Run it and check the log; there should be no warnings or errors. You can also verify the database contents using this query:


select * from mc_templates where mcClass = 'CUSTOM'

The result should be something like this:


Starting the notification from a workflow


Now we're ready to trigger the notification. It requires some fixed values to be set to work, and these are:

#MSG_TEMPLATE         Should match the template name we used: Custom Workflow Message

#MSG_TYPE             Should match the message class used: CUSTOM

#MSG_RECIPIENTS       The MSKEY of the user receiving the message

#MSG_PAR_<variables>  Whatever else we feel like saying
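If you build the MSGVARS string in code rather than typing it by hand, a tiny helper like this keeps it readable. It is hypothetical, but it matches the %1=%2 pairs joined by !! that the setCTXVARS entry script expects:

```javascript
// Hypothetical helper: build the !!-separated MSGVARS string that the
// setCTXVARS entry script splits back into #MSG_ context variables.
function buildMsgVars(vars) {
  var parts = [];
  for (var name in vars) {
    parts.push(name + "=" + vars[name]);
  }
  return parts.join("!!");
}
```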


I already made a script that sets a range of context variables earlier, so I reuse it here with a small modification. This can be part of pretty much any workflow; you decide where it makes sense for you. My test task works as a UI task when started using test provisioning, and should give you some ideas on how to use it, nothing more:



// Main function: setCTXVARS
function setCTXVARS(Par){
  var tmp = Par.get("MSGVARS");
  var ctxVars = uSplitString(tmp, "!!");
  for (var ctxvar = ctxVars.iterator(); ctxvar.hasNext(); ) {
    tmp =;
    var vals = tmp.split("=");
    var ctxVarToSet = "#MSG_" + vals[0];
    var ctxValToSet = vals[1];
    var OutString = uSetContextVar(ctxVarToSet, ctxValToSet);
  }
  var mskey = Par.get("MSKEY");
  var taskid = Par.get("NOTIFICATIONTASKID");
  var AuditID = uGetAuditID();
  var OutString = uProvision(mskey, taskid, AuditID, 0, "does this work?", 0);
}

The key change is that in addition to setting the #MSG_ context variables, it also calls the notification task. The taskid is brute-force hard-coded in the action, there is no error checking, and the code is not pretty. So go forth and improve it.


Anyway, the result in my case was a very simple but satisfying email in my inbox:


And that's it.


Additional data


What do all these context variables look like, anyway?


At the time the Assignment Notification action runs, the mc_audit_variables table can look like this:


And for the custom notification:



Quote of the Day script

Not well designed or thought out, but for this sample it does what it needs to:


// Main function: qotd
function qotd(Par){
  var qn = Math.floor((Math.random()*5)+1);
  if (qn == 1) {
    return "A day without sunshine is like, night";
  } else if (qn == 2) {
    return "A penny saved is a government oversight";
  } else if (qn == 3) {
    return "When everything comes your way you're in the wrong lane";
  } else if (qn == 4) {
    return "Silence is golden, duct tape is silver";
  } else if (qn == 5) {
    return "The road to success is always under construction";
  }
  return "You should not be here?! No quote for you!";
}

So it seems that finding out what happened to an entry/assignment, and why, is also a common issue. Here are some hints on how to do it.


Please note that these are not official table definitions. Views that start with idmv and mxpv are public and stable, but the tables themselves can change, and occasionally have.




All provisioning tasks are audited; the combination of mskey, actionid, and auditid is unique in the system. Any task that is started using the IdM UI, REST interface, attribute events, privilege events, etc. will get a unique AuditId. If the workflow triggers additional tasks or events, they get their own audits, linked back to the parent via refaudit.



The main audit table is mxp_audit. This contains a single row for each task/event that has been triggered for your entries. Some notable columns:


auditid      Unique identifier
taskid       Taskid of the root task of this audit (such as change identity, provision, deprovision, modify in the provisioning framework)
mskey        Entry for which this audit was started
auditroot    Root audit or itself. The root audit points back beyond the parent audit (refaudit) to the very first audit in case of deeply nested events
posteddate   Datetime when the task was initiated and put into the provisioning queue
statusdate   Datetime when the task or any subtask of this audit was last updated or evaluated
provstatus   See mxp_provstatus. Key values: 0=Initiated OK, 1000=Task OK, 1001=Task Failed, 1100=OK, 1101=FAILED
LastAction   Id of the last subtask that has been executed or attempted. Will be the task that failed in case of provstatus 1001 or 1101
refaudit     Null or the parent audit
MSG          Error message or just an informational message, depending on provStatus

userid       Somewhat fuzzy data blob, but there is a structure to it:

If it is simply a number, it should be the mskey of an entry, and the task was started by a user in the IdM UI.


When the value starts with #, this column indicates a privilege/attribute event that caused the task to start. The format is #<attrid>:<operation>:<checksum>:<oldvalueid>

Example: #395:INSERT;3285570;0

395 is the attribute ID (MXREF_MX_PRIVILEGE), 3285570 is the checksum of the value that triggered the task (mxi_values.bCheckSum=<checksum>), and oldvalueid = 0 means this is a new value, not a replacement/modify; otherwise the old value can be found in mxi_old_values.old_id=<oldvalueid>


When starting with +, the value indicates an On event, such as On (chain) OK or On (chain) Fail. The format is +<taskid>:<eventname>

Example: +1002083:On Chain OK task


When starting with *, the value indicates that the task was started by an entry event (defined on the entry type). The format is *<entryid>:<operation>, where operation is Insert, Modify, or Delete.
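The prefix conventions above lend themselves to a small classifier. This is a hypothetical sketch, not an official parser, and it only covers the formats described here:

```javascript
// Classify an mxp_audit.userid value by the prefix conventions described above.
function classifyAuditUserid(value) {
  if (/^\d+$/.test(value)) {
    // A plain number: mskey of an entry, task started by a user in the IdM UI
    return { type: "ui", mskey: parseInt(value, 10) };
  }
  if (value.charAt(0) === "#") {
    // Privilege/attribute event: #<attrid>:<operation>:<checksum>:<oldvalueid>
    return { type: "attribute-event", detail: value.substring(1) };
  }
  if (value.charAt(0) === "+") {
    // On event: +<taskid>:<eventname>
    var p = value.substring(1).split(":");
    return { type: "on-event", taskid: parseInt(p[0], 10), eventname: p[1] };
  }
  if (value.charAt(0) === "*") {
    // Entry event: *<entryid>:<operation>
    var e = value.substring(1).split(":");
    return { type: "entry-event", entryid: parseInt(e[0], 10), operation: e[1] };
  }
  return { type: "unknown", raw: value };
}
```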

startedBy    Only valid when the task is started from the Workflow UI/REST; contains the mskey of the logged-in user that ran the task


Views: mxpv_audit


A fun/useful thing to do with the audit is checking the average execution time of a task from start to end over time. Change taskid=X to taskid in (x,y,z) to get more tasks, or extend this SQL with a join to mxp_tasks to use task names, and be creative. I suggest keeping it limited to a few tasks, or the results might become difficult to interpret.


SQL Server:

select taskid,convert(varchar(10),posteddate,20) Date,count(auditid) as numExecs,avg(datediff(ss,A.posteddate,A.statusdate)) AvgTimeToComplete
from mxp_audit A with(nolock)
where taskid = 1 and posteddate > '2014-02-01' and ProvStatus >999
group by taskid,convert(varchar(10),posteddate,20)
order by taskid,convert(varchar(10),posteddate,20)

Oracle:


Oracle:

select taskid,to_char(posteddate,'YYYY-MM-DD') "date",count(auditid) "numExecs",AVG(round(statusdate-posteddate,2)*24*60*60) "avgTimeToComplete"
from mxp_audit
where taskid = 20 and postedDate > to_date('2014-02-01','YYYY-MM-DD') and provstatus > 999
group by taskid,to_char(posteddate,'YYYY-MM-DD')
order by taskid,to_char(posteddate,'YYYY-MM-DD')

This calculates the average time between start-time and end-time of the task with id=1 (SQL Server) and 20 (Oracle); I suggest using the taskid for Provision or Modify to test this. ProvStatus >= 1000 means completed; entries still running have no statusdate worth using in this case.

On SQL Server, changing the length of the convert statement to 7 characters groups per month, and 4 per year. On Oracle, change the to_char conversions to YYYY-MM, or just YYYY.
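The grouping trick is simply truncation of the formatted date string. A hedged JavaScript illustration of the same bucketing (the function name is my own):

```javascript
// Truncate an ISO-style timestamp ("YYYY-MM-DD hh:mm:ss") to a grouping key,
// mirroring convert(varchar(10|7|4), ...) on SQL Server and progressively
// shorter to_char format masks on Oracle.
function groupKey(isoTimestamp, granularity) {
  var lengths = { day: 10, month: 7, year: 4 }; // "YYYY-MM-DD", "YYYY-MM", "YYYY"
  return isoTimestamp.slice(0, lengths[granularity]);
}
```

For example, groupKey("2014-02-01 13:37:00", "month") returns "2014-02".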

You can also query for posteddate between two dates and much more. This is useful to spot negative trends over time, but you must consider the overall load of the system. It's also handy during testing and tuning to verify whether improvements you make have any impact on a full workflow execution.


List all tasks that have been executed on a user (SQL Server and Oracle):

select A.auditid, A.AuditRoot, A.RefAudit auditParent, A.userid, A.StartedBy, A.taskid, T.taskname, A.mskey, A.PostedDate, A.StatusDate, A.provstatus, A.LastAction, A.msg
from MXP_AUDIT A, MXP_Tasks T where A.TaskId = T.TaskID
and A.msKey = (select mcmskey from idmv_entry_simple where mcMskeyValue = 'ADMINISTRATOR')
order by auditroot,RefAudit

mxp_audit.taskid can be linked to mxp_tasks.taskid to get the taskname when accessing the mxp_audit table instead of the view (which has an unfortunate top 1000 limit).



The extended audit is stored in mxp_ext_audit. This contains a single row entry for each task/action executed within an audit and is enabled by checking the "Enable trace" checkbox.


Aud_ref: Audit Id
Aud_Task: Task Id
Aud_datetime: Datetime when the ext_audit record was created
Aud_Approver: Mskey of approver. You should use mxp_link_audit for this when getting link approval records

Aud_info: Generic information. If the audited task is a switch or conditional, this column will contain the result of the evaluation: TRUE or FALSE for conditionals, and the value returned by the SQL statement for switches.


Aud_StartedBy: Reason for the task starting. Another fuzzy data blob. Some of the common value formats:





TASK:<taskid>:<task operation. 0=inittask, 1=OnError, 2=OnOk, 3=OnChainError, 4=OnChainOk>

PRIV:<priv mskey>:<entryoperation>

ROLE:<role mskey>:<entryoperation>

OTHER:<other info> (typical for tasks started using uProvision)


Operation values: 1=Modify, 2=Delete, 3=Insert

Entryoperation values: 0=Provision, 1=Deprovision, 2=Modify
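A sketch of a decoder for these reason formats (names and result shape are my own; the value lists are taken from above):

```javascript
// Hypothetical decoder for the Aud_StartedBy "reason" blob formats listed above.
var TASK_OPERATIONS = { 0: "inittask", 1: "OnError", 2: "OnOk",
                        3: "OnChainError", 4: "OnChainOk" };
var ENTRY_OPERATIONS = { 0: "Provision", 1: "Deprovision", 2: "Modify" };

function parseReason(reason) {
  var parts = reason.split(":");
  switch (parts[0]) {
    case "TASK":
      return { source: "task", taskId: parseInt(parts[1], 10),
               operation: TASK_OPERATIONS[parseInt(parts[2], 10)] };
    case "PRIV":
    case "ROLE":
      return { source: parts[0].toLowerCase(), mskey: parseInt(parts[1], 10),
               entryOperation: ENTRY_OPERATIONS[parseInt(parts[2], 10)] };
    case "OTHER": // typical for tasks started using uProvision
      return { source: "other", info: parts.slice(1).join(":") };
    default:
      return { source: "unknown", raw: reason };
  }
}
```

For example, parseReason("PRIV:395:0") reports a privilege assignment with entry operation "Provision".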


Views: mxpv_ext_audit


The extended audit is useful for when you need to see what happened in subtasks, what conditional or switch statements returned or find out where a workflow stopped for a user. This query lists all tasks started by and including auditid 1307812, but can easily be modified to filter on aud_onEntry (mskey), and dates.

SQL Server and Oracle:

select t.taskname,A.aud_ref,aud_Datetime,aud_info,Aud_StartedBy
from mxp_ext_audit A, mxp_tasks T
where T.taskid = A.aud_task and aud_ref
in (select auditid from mxp_audit where AuditRoot=1307812)
order by aud_datetime

The red arrows show a child audit being started, in this case by a uProvision call in a script, and the green arrows show where the child audit completes, allowing the parent audit to continue from its wait-for-events state.



Link audits, mxi_link_audit


Audits related to evaluation and processing of reference values (role/privilege assignments, manager and other references) have information stored in mxi_link_audit (also see mxi_link_audit_operations). This has a lot of columns and I suggest you look at the views and see what is there. Some of the key columns are:


mcLinkid/linkId: Reference to mxi_link.mcUniqueId
mcauditid/auditid: Reference to mxp_audit.auditid
mcDate/date: Date of entry
mcOperation/operation: Reference to mxi_link_audit_operations
mcReason/reason: Request/approve/decline reasons
mcMSKEYUser: Mskey of user
mcMSKEYAssignment: Mskey of the assigned entry (privilege, role, manager etc.)


Views: idmv_linkaudit_<basic/ext/ext2/simple>


Example data from the idmv_linkaudit_ext2 view for an audit in which a role was added to a person, which caused two inherited privileges to be assigned. Later the role was removed.


SQL Server and Oracle:

select linkid,auditid,auditDate,userMSKEYVALUE,AssignmentMSKEYVALUE,OperationText,AdditionalInfo
from idmv_linkaudit_ext2
where userMskey = 23
order by auditdate


Note that a new audit is created only when an event task executes. The privilege in my example only had a del-member event, and this event got a new audit (944072); the rest shared the add-audit of the role they were inherited from.


Useful variation with tasknames (SQL Server and Oracle):

select LA.userMSKEYVALUE, LA.auditDate, LA.AssignmentMSKEYVALUE, LA.operationtext, LA.auditid, A.taskid, T.taskname
from idmv_linkaudit_ext LA
  left outer join mxp_audit A on A.AuditID = LA.auditid
  left outer join mxp_tasks T on T.taskid = A.TaskId
order by LA.auditDate




There are additional logs in jobs and actions that are stored in base64 blobs in the database. From SP8 we've added a new log, the execution log, which now stores messages from the runtime logged with uInfo/uWarning/uError.


Job and action logs, mc_logs


This contains the logs of all jobs and actions as well as other useful values. Some columns I find useful are:


JobId: Id of the job. An action is linked to a job configuration on mxp_tasks.jobguid = mc_jobs.jobguid
TimeUsed: The number of seconds the action/job used to complete for this log entry
TotalEntries: The total number of entries processed in this log entry
Num_Adds: Number of add operations performed
Num_Mods: Number of modify operations performed
Num_Del: Number of delete operations performed
Num_Warnings: Number of warnings reported
Num_Errors: Number of errors reported


Views: mcv_logall,  mcv_logwarn, mcv_logerr


One of the things this can be used for is to calculate how many entries per second an action/job processes.

SQL Server:

select jobname,JobId,sum(TotalEntries) totalEntries,sum(TimeUsed) totalTime,
round(cast(sum(TotalEntries) as float)/cast(sum(TimeUsed) as float),2) entriesPerSecond
from mcv_logall group by jobname,jobid
order by round(cast(sum(TotalEntries) as float)/cast(sum(TimeUsed) as float),2)  asc


Oracle:

select jobname,JobId,sum(TotalEntries) totalEntries,sum(TimeUsed) totalTime,Round(sum(TotalEntries)/sum(TimeUsed),2) entriesPerSecond
from mcv_logall
group by jobname,jobid
order by entriesPerSecond


This can give valuable hints about actions or jobs that are slow and will cause problems at some point in time. In this case my "Test something true: False" task is slow and needs a look. You can also reverse this by calculating totalTime/totalEntries to get time used per entry. This can be used in combination with the threshold log when running mass-update/performance tests in dev/qa cycles to detect potential issues before they cause downtime in production.
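The same entries-per-second and time-per-entry arithmetic, with a guard against division by zero for jobs that complete in under a second, can be sketched as (function and field names are my own, modeled loosely on the mcv_logall columns):

```javascript
// Aggregate log rows into entries-per-second and seconds-per-entry,
// guarding against division by zero when totals are 0.
function throughput(rows) { // rows: [{ totalEntries, timeUsed }, ...]
  var entries = 0, time = 0;
  rows.forEach(function (r) { entries += r.totalEntries; time += r.timeUsed; });
  return {
    totalEntries: entries,
    totalTime: time,
    entriesPerSecond: time > 0 ? Math.round((entries / time) * 100) / 100 : null,
    secondsPerEntry: entries > 0 ? Math.round((time / entries) * 100) / 100 : null
  };
}
```

For example, 200 entries processed in 50 seconds gives 4 entries per second and 0.25 seconds per entry.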


Execution log


View: mcv_executionlog_list


This is a new log that has been in hiding for a while as it needed a UI. It still doesn't have one outside the MMC, but it is very useful. This log contains all the messages from the runtimes that would usually be locked inside the big blob of mc_logs or the dse.log file on the file system. So in short, this means that messages like this one:


Are now also logged individually and linkable to the user and auditids. My root audit was 1316231 so this query will find all related audits and list runtime messages reported during the processing of these audits:


select mcAuditId,mcTaskId,mcTaskName,mcMskey,mcMsg,mcLogLevel,mcLogTime From mcv_executionlog_list where mcAuditId in
(select auditid from mxp_audit where AuditRoot = 1316231) order by mcUniqueId

This output would usually be "hidden" inside the logfiles associated with each individual job:


There is a lot more to the execution-log though, so have a look at it when you get your hands on a version supporting it.


Pulling it all together


To summarize:

  • One audit per task executed on an entry
    • One extended audit entry per sub-task
      • 0 to many execution log entries per action

And combining all this information can be done using:

select AT.Name as type, T.taskname taskname, EA.aud_ref auditid, ea.aud_datetime logtime,
  '' loglevel, ea.Aud_Info info, ea.Aud_StartedBy startedby
from mxp_tasks T, mxp_actiontype AT, MXP_Ext_Audit EA
where T.taskid=EA.Aud_task and T.actiontype = AT.actType and
  EA.Aud_ref in (select auditid from mxp_audit where AuditRoot = 1316231)
union all
select 'Action' as type, mcTaskName taskname, mcAuditId auditid, mcLogTime logtime,
  case
    when mcLogLevel = 0 then 'Info'
    when mcLogLevel = 1 then 'Warning'
    when mcLogLevel = 2 then 'Error'
    else cast(mcLogLevel as varchar)
  end loglevel, mcMsg info, '' startedby
from mcv_executionlog_list
where mcAuditId in (select auditid from mxp_audit where AuditRoot = 1316231)
order by logtime

This can give something like this:




And that I believe is all the detail one could look for on the processing of a specific task "dispatcher test #1.0.0" through two child tasks and back for a user all in a single view. I'm sure there'll be an admin UI for this later, but for now I expect this to be most useful in the development and QA cycle.

This is part 2 of a blog in 3 parts (at the moment) on how IdM manages queue processing and the audits and logs created during processing

Part 1: Tables, views and processing logic:

Part 2: Viewing the current queues:

Part 3: Post execution audits and logs:

Feedback is most welcome, and additions as well as corrections can be expected.

Edit 20140224: Oracle versions of queries added.


Getting an overview of the queues


One of the most important things to do in case of a productive stand-still or issue is to get an overview of what's in the different queues.

Link evaluations, approvals and workflows have separate queues and processing of them is done by different threads in the dispatcher(s).

Jobs are simply set to state=1 and scheduletime < now in the mc_jobs table.


Jobs and actions


As mentioned above, jobs do not really have a queue. They are scheduled to run by having scheduletime set and state set to 1. The dispatcher will start runtime(s) to process jobs if the mc_dispatcher_check procedure returns 1 or more standard jobs to run. The java runtime will use the procedure mc_getx_job to reserve a job from the available jobs. Once running the state in mc_jobs changes to 2.


Just to clarify, a job sits outside the Identity Store(s) in a job folder, usually works with bulk processing, and contains 1 or many passes. Actions are inside the workflow of an Identity Store and can only contain 1 pass and process 1 entry at a time. To slightly confuse the matter, the configuration of an action task is a job, in the mc_jobs table, and the logs it creates are stored in the mc_logs table. There's a link from the task in mxp_tasks to mc_jobs on mxp_tasks.jobguid = mc_jobs.jobguid.


With this knowledge a query listing jobs and provisioning actions that are running can look like this:


SQL Server:

select name,case when provision=1 then 'Action' else 'Job' end type, CurrentEntry, Current_Machine from mc_jobs with(nolock) where state = 2


Oracle:

select name,case when provision=1 then 'Action' else 'Job' end type, CurrentEntry, Current_Machine from mc_jobs where state = 2;

This produces output like this:


Note that the CurrentEntry column in mc_jobs is updated every 100 entries, or every 30 seconds, by the runtimes.
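That update policy can be sketched as a throttle: write progress when the entry count hits a multiple of 100, or when 30 seconds have passed since the last write (a sketch with hypothetical names, not IdM runtime code):

```javascript
// Sketch of the progress-update policy described above: persist CurrentEntry
// only every 100 entries or every 30 seconds, whichever comes first.
// writeFn persists the value (e.g. UPDATE mc_jobs SET CurrentEntry = ?);
// nowFn returns the current time in milliseconds (injectable for testing).
function makeProgressReporter(writeFn, nowFn) {
  var lastWrite = nowFn();
  return function report(currentEntry) {
    var now = nowFn();
    if (currentEntry % 100 === 0 || now - lastWrite >= 30000) {
      writeFn(currentEntry);
      lastWrite = now;
    }
  };
}
```

This keeps database writes cheap while still giving a monitoring query a reasonably fresh view of progress.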



The provisioning queue & semaphores


The provisioning queue is based on the table mxp_provision. To process parts of the queue, a dispatcher must first set a semaphore indicating that other dispatchers should keep away from processing the same type of task. This is done by writing a semaphore (basically its own Id as owner, along with a timestamp) in the mc_semaphore table. The timestamp is updated as the thread processes entries, and a semaphore whose timestamp is older than 300 seconds is considered dead. This means that if you have conditional statements taking so long to run that the dispatcher thread cannot update the timestamp within 300 seconds, the semaphore is released and another dispatcher will start processing conditional statements as well. That means trouble, because the two threads risk running the same conditional mskey,action,audit combination!
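The liveness rule can be sketched like this (function and field names are my own; the 300-second timeout comes from the text above):

```javascript
// A semaphore whose timestamp has not been refreshed within 300 seconds is
// considered dead and may be taken over by another dispatcher.
var SEMAPHORE_TIMEOUT_MS = 300 * 1000;

function isSemaphoreDead(semaphore, nowMs) {
  return nowMs - semaphore.timestampMs > SEMAPHORE_TIMEOUT_MS;
}

function canAcquire(semaphore, dispatcherId, nowMs) {
  // Free, already owned by this dispatcher, or held by one that stopped
  // refreshing its timestamp.
  return semaphore.owner === null
      || semaphore.owner === dispatcherId
      || isSemaphoreDead(semaphore, nowMs);
}
```

The takeover branch is exactly what makes long-running conditionals dangerous: a healthy but slow owner looks identical to a dead one.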


The provisioning queue is divided into views according to the threads in the dispatcher: mxpv_grouptasks_ordered, mxpv_grouptasks_unordered, mxpv_grouptasks_conditional, mxpv_grouptasks_switch and mxpv_grouptasks_approval.



These views will at most contain 1000 entries due to a Top 1000 limiter. As mentioned in part #1, actions that are to be processed by runtime engines are picked up by a procedure and have no view.


The link evaluation queue


This queue contains assignments that need to be evaluated. Any mxi_link entry with mcCheckLink < now is in this queue. This includes role/privilege assignments and entry references such as manager.


The dispatcher processes this from the view mxpv_links. This view will contain 0 entries in normal situations, up to 1000 under load. To get the real number of links that need evaluations you can run:


SQL Server:

SELECT count(mcUniqueId) FROM mxi_link with(NOLOCK) WHERE (mcCheckLink < getdate()) AND (mcLinkState IN (0,1))


Oracle:

SELECT count(mcUniqueId) FROM mxi_link WHERE (mcCheckLink < sysdate) AND (mcLinkState IN (0,1))


To see if a specific user has privileges that are queued for evaluation, or if a privilege has entries whose state is still to be evaluated:


SQL Server:

-- Assignments to evaluate for 'User Tony Zarlenga'
SELECT count(mcUniqueId) FROM mxi_link with(NOLOCK) WHERE (mcCheckLink < getdate()) AND (mcLinkState IN (0,1)) and
mcThisMskey in (select mcmskey from idmv_entry_simple where mcMskeyValue = 'User Tony Zarlenga')
-- User assignments to evaluate for privilege 'PRIV.WITH.APPROVAL'
SELECT count(mcUniqueId) FROM mxi_link with(NOLOCK) WHERE (mcCheckLink < getdate()) AND (mcLinkState IN (0,1)) and
mcOtherMskey in (select mcmskey from idmv_entry_simple where mcMskeyValue = 'PRIV.WITH.APPROVAL')


Oracle:

-- Assignments to evaluate for 'User Tony Zarlenga'
SELECT count(mcUniqueId) FROM mxi_link WHERE (mcCheckLink < sysdate) AND (mcLinkState IN (0,1)) and
mcThisMskey in (select mcmskey from idmv_entry_simple where mcMskeyValue = 'User Tony Zarlenga')
-- User assignments to evaluate for privilege 'PRIV.WITH.APPROVAL'
SELECT count(mcUniqueId) FROM mxi_link WHERE (mcCheckLink < sysdate) AND (mcLinkState IN (0,1)) and
mcOtherMskey in (select mcmskey from idmv_entry_simple where mcMskeyValue = 'PRIV.WITH.APPROVAL')



Listing actions ready to be run by runtime engines


Runtime actions are listed in the provisioning queue with actiontype=0. Combined with state=2 (ready to run) and exectime < now the entry is ready to be processed by a runtime. A very basic query listing number of entries ready for processing by different actions is:


SQL Server:

select count(P.mskey) numEntries,P.actionid, t.taskname from mxp_provision P with(NOLOCK), mxp_tasks T with(NOLOCK)
where P.ActionType=0 and T.taskid = P.ActionID
group by p.ActionID,t.taskname


Oracle:

select count(P.mskey) numEntries,P.actionid, t.taskname from mxp_provision P, mxp_tasks T
where P.ActionType=0 and T.taskid = P.ActionID
group by p.ActionID,t.taskname

Unless you have a lot of actions with a delay before start configured, actions will usually have an exectime in the past. This query will create a simple result showing the entries that can be processed by runtimes:


Listing actions ready to be run by runtime engines and the state of the job


In most cases this is only part of the full picture. You really want to know if a runtime is actually working on those entries as well. Let's add mc_jobs and mc_job_state to the query to get a bit more detail:


SQL Server:

select count(P.mskey) numEntries,P.actionid, t.taskname, JS.Name as jobState
from mxp_provision P with(NOLOCK)
inner join mxp_tasks T with(NOLOCK) on T.taskid = P.ActionID
left outer join mc_jobs J with(NOLOCK) on J.JobGuid = T.JobGuid
left outer join mc_job_state JS with(NOLOCK) on j.State = JS.State
where P.ActionType=0 and P.state=2
group by p.ActionID,t.taskname,JS.Name


Oracle:

select count(P.mskey) numEntries,P.actionid, t.taskname, JS.Name as jobState
from mxp_provision P
inner join mxp_tasks T on T.taskid = P.ActionID
left outer join mc_jobs J on J.JobGuid = T.JobGuid
left outer join mc_job_state JS on j.State = JS.State
where P.ActionType=0 and P.state=2
group by p.ActionID,t.taskname,JS.Name


The current reason for my system not processing anything is getting clearer:


No actions are running, so something is blocking or stopping the runtimes from starting, and I know to look at the dispatcher. Since I've manually stopped it, it's no big surprise and troubleshooting is simple.


Just a few of the actions/jobs are running


If you think that not enough runtimes are being started and see situations like this:


You should look at item 5 in the checklist below, and also have a look at the properties and policy of the dispatcher.


Dispatcher properties and policies




Max rt engines to start determines how many runtimes a dispatcher starts when it finds X actions ready to run in the queue. In this case, even if 100 are ready to run it will only start 1 in this check interval (see the picture to the right).



Max concurrent rt engines controls how many runtimes a dispatcher will have active at the same time. 1 is always reserved for the Windows Runtime though. So my system is now limited to a single active java runtime at any time.


Max loops for rt engine is also a very useful setting. Starting a java runtime process and loading all the classes can often take a second or three, and in a low-load scenario this can be the slowest operation in the system. This setting tells the runtime that once it's done with an action/job it should enter a small loop with a 1-second delay to check for additional actions that are available. This also increases performance as it is independent of the check interval (see two pictures below).

Also notice the global setting for max concurrent rt engines. If you have 3 dispatchers that can run 50 simultaneous runtimes each, you can still limit the total active runtime count to 100, for instance.


The check interval controls how frequently the dispatcher connects to the database to check for available tasks, actions and jobs.


A general recommendation is to increase this in systems with multiple dispatchers so that the average interval is around 5 seconds. So when running 2 dispatchers, the check interval is 10 on both; with 4 dispatchers the interval is 20, and so on.
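The rule of thumb amounts to interval = number of dispatchers × 5 seconds (a trivial helper; the function name is my own):

```javascript
// Pick each dispatcher's check interval (in seconds) so that the average
// time between any dispatcher polling the database stays around 5 seconds.
function recommendedCheckInterval(numDispatchers, targetAvgSeconds) {
  var target = targetAvgSeconds || 5;
  return numDispatchers * target; // 2 dispatchers -> 10s, 4 -> 20s, ...
}
```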


By increasing the Max concurrent rt engines setting to 10, and max rt engines to start to 3 the situation is quickly changed to the point where it's difficult to create a screenshot:



Troubleshooting actions/jobs not starting


A quick checklist for troubleshooting:


  1. Check that the dispatcher process (mxservice.exe on windows) is running
  2. Check how many java processes you have in task manager (or using ps, 'ps -ef | grep java', or similar on Unix. Also see Andreas Trinks suggestions in the comments section)
    • If there are no or just a few java processes then the runtimes are most likely not started by the dispatcher
      • Check prelog.log for startup issues
    • If you have lots of java processes but no actions running, then the runtime is probably having problems connecting to the db or reserving a job
      • Check prelog.log
  3. Check that the dispatchers are allowed to run the jobs that are queued
  4. A job listed with state ERROR will not run, and has to be forced to restart. Check its logs for errors though; they end up in error state for a reason (most of the time)
  5. Check the database activity monitor, reports, or the queries from IDM SQL Basics #2: Locating problem queries to see whether:
    • If the procedure mc_dispatcher_check is running for a long time the dispatcher is unable to perform the check on the queue to see how many actions are ready for processing and java runtimes will not be started.
    • If the procedure mxp_getx_provision is running for a long time the result is many java processes in the system but they are unable to allocate jobs


Listing the number of items and their state in the queue


I would usually start off with the following query that lists the number of entries per task, per state and including the state of the linked Job for action tasks.


SQL Server:

select
     count(P.mskey) numEntries,t.taskid,t.taskname,A.Name ActionType,S.Name StateName,ISNULL(JS.Name,'not an action task') JobState
from mxp_provision P with(nolock)
  inner join mxp_Tasks T with(nolock) on T.taskid = P.actionid
  inner join mxp_state S with(nolock) on S.StatID = P.state
  inner join MXP_ActionType A with(nolock) on A.ActType=P.ActionType
  left outer join mc_jobs J with(nolock) on J.JobGuid = T.JobGuid
  left outer join mc_job_state JS with(nolock) on j.State = JS.State
group by
     t.taskid,T.taskname,A.Name,S.Name,JS.Name
order by t.taskname,S.Name


Oracle:

select
     count(P.mskey) numEntries,t.taskid,t.taskname,A.Name ActionType,S.Name StateName,NVL(JS.Name,'not an action task') JobState
from mxp_provision P
  inner join mxp_Tasks T on T.taskid = P.actionid
  inner join mxp_state S on S.StatID = P.state
  inner join MXP_ActionType A on A.ActType=P.ActionType
  left outer join mc_jobs J on J.JobGuid = T.JobGuid
  left outer join mc_job_state JS on j.State = JS.State
group by
     t.taskid,T.taskname,A.Name,S.Name,JS.Name
order by t.taskname,S.Name


I've started my dispatcher test task described in Part #1 for 1000 entries. The query above gives me a result like this during the processing:




A quick explanation of some of the type/state combinations and what would process them


Action Task/Ready To Run: Action that is ready to be processed by a runtime

+ JobStatus: The state of the job linked to the Action Task. If it's Idle it means a runtime has not picked this up yet.


Conditional, Switch and (un)Ordered Tasks are processed by dispatchers that have a policy that allows Handle Tasks.

Ready to run for a conditional or switch task means it's ready for evaluation

Ready to run for an Ordered/Unordered task means the workflow can be expanded into the queue

Expanded OK means the workflow at this level is expanded

Waiting generally means that it's waiting for a sub-process or child event to finish


The final view of the provisioning queue, with current entry count for actions


Since the mc_jobs table contains a column named CurrentEntry we can also see how many entries running actions have processed using:

SQL Server:

select
  count(P.mskey) numEntries,t.taskid,t.taskname,A.Name ActionType,S.Name StateName,
  case when JS.Name = 'Running' then 'Running, processed:'+cast(ISNULL(J.CurrentEntry,0) as varchar) else JS.Name end state
from mxp_provision P with(nolock)
  inner join mxp_Tasks T with(nolock) on T.taskid = P.actionid
  inner join mxp_state S with(nolock) on S.StatID = P.state
  inner join MXP_ActionType A with(nolock) on A.ActType=P.ActionType
  left outer join mc_jobs J with(nolock) on J.JobGuid = T.JobGuid
  left outer join mc_job_state JS with(nolock) on j.State = JS.State
group by
  t.taskid,T.taskname,A.Name,S.Name,case when JS.Name = 'Running' then 'Running, processed:'+cast(ISNULL(J.CurrentEntry,0) as varchar) else JS.Name end
order by t.taskname,S.Name


Oracle:

select
  count(P.mskey) numEntries,t.taskid,t.taskname,A.Name ActionType,S.Name StateName,
  case when JS.Name = 'Running' then 'Running, processed:'||to_char(NVL(J.CurrentEntry,0)) else JS.Name end as Jstate
from mxp_provision P
  inner join mxp_Tasks T on T.taskid = P.actionid
  inner join mxp_state S on S.StatID = P.state
  inner join MXP_ActionType A on A.ActType=P.ActionType
  left outer join mc_jobs J on J.JobGuid = T.JobGuid
  left outer join mc_job_state JS on j.State = JS.State
group by
  t.taskid,T.taskname,A.Name,S.Name,case when JS.Name = 'Running' then 'Running, processed:'||to_char(NVL(J.CurrentEntry,0)) else JS.Name end
order by t.taskname,S.Name


The result is quite useful, as it's now possible to see how many entries the running actions have processed so far:



This will have to do for revision one. If I get time to add more this week I will, but there are patches and SPs to work on as well.

I really just wanted to archive this somewhere else than in my mailbox where it keeps getting lost even though I'm asked for it every 2 years or so :-)

2014-04-09: Updated with tested Active Directory errorcodes


Sometimes actions fail, but the reason is that everything is OK. Such as adding a member to a group when the member is already a member of the group. (Always wanted to write that!). Or you just don't care that the action failed, you want the workflow to continue anyway and not end up in the On Fail event just yet.


If that's the case, the Call script in case of error option is just what you need. This example is from 2010 but I believe it should still work. I don't have an LDAP server to test it on at the moment, so please let me know if it's broken. It accesses some specific objects to get the actual error, so it's quite nice to have around. You don't need to make it this advanced though. The only things you really need are:


- Check the error

- If you want the workflow to go on, execute uSkip(1,1);

- If you want to end the workflow and go to whatever On Error/Chain Error events exist, just exit the script, or make it explicit using uSkip(1,2);


uSkip sets the exit state, first parameter is 1 for entry, 2 for pass (use in jobs only, not provision actions). The second parameter is state where 1 is OK, 2 is FAILED.




// Main function: myLdapErrorHandler  
// Some LDAP servers report an ERROR if a multivalue add or del operation tries to add an existing or delete a non-existing value  
// This occurs for uniquemember, memberof and a few other multivalue attributes  
// Because this is reported as an error the workflow will stop...  
// This script checks if the reported LDAP error is 
// ADS ADD operation:
//   Indicates that the add operation attempted to add an entry that already exists, or that the modify operation attempted to 
//   rename an entry to the name of an entry that already exists.
// Example: Original mod exceptionjavax.naming.NameAlreadyBoundException: [LDAP: error code 68 - 00000562: ....
// SUN ADD operation:
//    Indicates that the attribute value specified in a modify or add operation already exists as a value for that attribute.
// ADS DEL operation:
//    Indicates that the LDAP server cannot process the request because of server-defined restrictions.
// Example: Exception from Modify operation:javax.naming.OperationNotSupportedException: [LDAP: error code 53 - 00000561: ...
// SUN DEL operation:
//    "LDAP: error code 16" 
// and if the errorcode matches we set provision status OK so that the workflow can continue. An error has already been
// logged so the runtime logfile will still have the errorcount increased and a red entry in the UI.
// This script must be run as On Error in a To DSA pass  
function myLDAPerrorhandler(Par){
   entry = uGetErrorInfo();
   if (entry != null) {
      UserFunc.uErrMsg(0,"myLDAPerrorhandler: Got data from errorInfo");
      attr = entry.firstAttr();
      LdapEntry = entry;
      if (entry.containsKey("err_ModException")) {
         var exc = entry.get("err_ModException");
         var orig = exc.getOriginalException();
         if (orig != null) {
            UserFunc.uErrMsg(0, "myLDAPerrorhandler: Original mod exception" + orig);
            addSUNPos = Instr(1,orig,"LDAP: error code 20",1);
            addADSPos = Instr(1,orig,"LDAP: error code 68",1);
            delSUNPos = Instr(1,orig,"LDAP: error code 16",1);
            delADSPos = Instr(1,orig,"LDAP: error code 53",1);
            if (addSUNPos > 0 || addADSPos > 0) {
               UserFunc.uErrMsg(0, "myLDAPerrorhandler: Error on multivalue add for existing value detected, setting provision OK");
               uSkip(1,1);
            }
            if (delSUNPos > 0 || delADSPos > 0) {
               UserFunc.uErrMsg(0, "myLDAPerrorhandler: Error on multivalue delete of nonexisting value detected, setting provision OK");
               uSkip(1,1);
            }
         }
      }
   }
}
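The matching itself is just substring checks. In portable JavaScript (outside the IdM runtime, which provides Instr) the same classification could be sketched as:

```javascript
// Classify an LDAP exception text into "safe to ignore" categories using
// plain indexOf substring checks instead of the runtime's Instr helper.
var IGNORABLE_LDAP_ERRORS = {
  add: ["LDAP: error code 20", "LDAP: error code 68"], // SUN / ADS: add of existing value
  del: ["LDAP: error code 16", "LDAP: error code 53"]  // SUN / ADS: delete of missing value
};

function classifyLdapError(exceptionText) {
  function matches(codes) {
    return codes.some(function (c) { return exceptionText.indexOf(c) >= 0; });
  }
  if (matches(IGNORABLE_LDAP_ERRORS.add)) return "ignorable-add";
  if (matches(IGNORABLE_LDAP_ERRORS.del)) return "ignorable-del";
  return "real-error";
}
```

Anything classified as ignorable would be the case where the handler above calls uSkip(1,1); a "real-error" would fall through to the On Error handling.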

Sample output from the Runtime Logs testing this with an Active Directory server:


Fail during ADD to member attribute in ADS because the person is already a member of the group:

09.04.2014 15:06:58 :I:initPass ToDSADirect: Test Add Person To Group

09.04.2014 15:06:58 :E:Failed storing CN=temporaryGroup3,CN=Groups,dc=enormo,dc=inc

09.04.2014 15:06:58 :E:Exception from Mod operation:ToDSADirect.modEntry CN=temporaryGroup3,CN=Groups,dc=enormo,dc=inc failed with NamingException. (LDAP error: The object already exists)

Explanation: [LDAP: error code 68 - 00000562: UpdErr: DSID-031A119B, problem 6005 (ENTRY_EXISTS), data 0


Remaining name: CN=temporaryGroup3,CN=Groups,dc=enormo,dc=inc

Resolved name:  - javax.naming.NameAlreadyBoundException: [LDAP: error code 68 - 00000562: UpdErr: DSID-031A119B, problem 6005 (ENTRY_EXISTS), data 0

]; remaining name 'CN=temporaryGroup3,CN=Groups,dc=enormo,dc=inc'

09.04.2014 15:06:58 :I:myLDAPerrorhandler: Got data from errorInfo

09.04.2014 15:06:58 :I:myLDAPerrorhandler: Original mod exceptionjavax.naming.NameAlreadyBoundException: [LDAP: error code 68 - 00000562: UpdErr: DSID-031A119B, problem 6005 (ENTRY_EXISTS), data 0

]; remaining name 'CN=temporaryGroup3,CN=Groups,dc=enormo,dc=inc'

09.04.2014 15:06:58 :I:myLDAPerrorhandler: Error on multivalue add for existing value detected, setting provision OK

09.04.2014 15:07:03 :I:exit ToDSADirect

09.04.2014 15:07:03 :I:ToDSA Direct pass completed in 5.363 seconds.


Fail during DEL from member attribute in ADS because the person is not a member of the group:


09.04.2014 15:12:21 :I:initPass ToDSADirect: Test Add Person To Group

09.04.2014 15:12:21 :E:Failed storing CN=temporaryGroup3,CN=Groups,dc=enormo,dc=inc

09.04.2014 15:12:21 :E:Exception from Mod operation:ToDSADirect.modEntry CN=temporaryGroup3,CN=Groups,dc=enormo,dc=inc failed with NamingException. (LDAP error: The server does not handle directory requests)

Explanation: [LDAP: error code 53 - 00000561: SvcErr: DSID-031A120C, problem 5003 (WILL_NOT_PERFORM), data 0


Remaining name: CN=temporaryGroup3,CN=Groups,dc=enormo,dc=inc

Resolved name:  - javax.naming.OperationNotSupportedException: [LDAP: error code 53 - 00000561: SvcErr: DSID-031A120C, problem 5003 (WILL_NOT_PERFORM), data 0

]; remaining name 'CN=temporaryGroup3,CN=Groups,dc=enormo,dc=inc'

09.04.2014 15:12:21 :I:myLDAPerrorhandler: Got data from errorInfo

09.04.2014 15:12:21 :I:myLDAPerrorhandler: Original mod exceptionjavax.naming.OperationNotSupportedException: [LDAP: error code 53 - 00000561: SvcErr: DSID-031A120C, problem 5003 (WILL_NOT_PERFORM), data 0

]; remaining name 'CN=temporaryGroup3,CN=Groups,dc=enormo,dc=inc'

09.04.2014 15:12:21 :I:myLDAPerrorhandler: Error on multivalue delete of nonexisting value detected, setting provision OK

09.04.2014 15:12:26 :I:exit ToDSADirect

09.04.2014 15:12:26 :I:ToDSA Direct pass completed in 5.373 seconds.



This is part 1 of a blog in 3 parts (at the moment) on how IdM manages queue processing and the audits and logs created during processing

Part 1: Tables, views and processing logic:

Part 2: Viewing the current queues:

Part 3: Post execution audits and logs:


Though this post will be focused on the solution as it is from 72SP7 I'll try to point out the differences to earlier versions.


Feedback is most welcome, and additions as well as corrections can be expected. So, with publish freshly clicked, some errors are to be expected :-)




A common issue we see in support is messages about processing of tasks and workflows stopping. Frequently the message to us is "the dispatcher has stopped". In many cases it's not stopped, but rather has found something else to do than what you expected. So I've decided to try to document a few things about queue processing, the dispatcher, troubleshooting processing halts, and to provide some useful queries.


The dispatcher is the key component for processing workflows in IdM. It processes task expansions, conditional and switch task evaluations, approvals/attestation, executes link evaluation of assignments, and it's also responsible for starting the runtimes that execute actions and jobs. This is quite a lot to do for a single component, so let's look at how it does this.


To process all of this the dispatcher runs multiple threads, each with their own database connection. This also allows us to give you control over what task(s) each dispatcher can process, meaning you can let a dispatcher running on or very near the database host do database-intensive work such as task/approval/link processing, while a dispatcher closer to a data source processes jobs and actions dealing with the target system(s). The reason that I include some of the procedure names in here is that using some of the queries I've documented previously you might recognize what is stuck.



Tables and Views



The mxp_provision table is the main table for all workflow processing. Any workflow that is initiated by assignment events, users through a UI, uProvision calls in scripts, etc. ends up making an initial entry in this table. The main columns are mskey, actionid and auditid, which together form a unique key. There are more columns as well that we'll get to later. It's important to know that when processing a workflow (or process) the unique identifier in the system is mskey, auditid, actionid. If the same task is executed several times on a user it will always have a different auditid. This is also why it's not possible to have the same task linked twice within an ordered task in a workflow. If you do, you get constraint violations in the database and messages like "Error expanding task" in the audit. The provision table is dynamic and there is no history.
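As a toy illustration of that composite key (plain JavaScript, nothing IdM-specific; the function and return values are made up), expanding the same task twice within one audit collides on the key, while a new auditid does not:

```javascript
// Hedged sketch: mxp_provision rows are unique on (mskey, auditid, actionid).
// Expanding the same task twice within one audit collides on that key.
function tryInsert(queue, mskey, auditid, actionid) {
  var key = mskey + "|" + auditid + "|" + actionid;
  if (queue[key]) {
    return "constraint violation";   // same task twice in one ordered task
  }
  queue[key] = true;
  return "inserted";
}

var queue = {};
tryInsert(queue, 100, 5000, 42);     // "inserted"
tryInsert(queue, 100, 5000, 42);     // "constraint violation"
tryInsert(queue, 100, 5001, 42);     // new auditid, so "inserted"
```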



The dispatcher(s) uses these views to get the next bulk of entries to process. By default these will list the first 1000 entries in the queue of each task type, as indicated by their name. Older versions of IdM would list everything in one view, mxpv_grouptasks, and the dispatcher would process all the different task types in a single thread. For a couple of service packs this could be controlled by setting a global constant, MX_DISPATCHER_POLICY, which would switch between using the joint view or the separate views. I can't say for sure in which release this approach was abandoned, but I believe it's to blame for the excessive amount of dispatchers we see in use in productive systems. Now the dispatcher creates an independent thread per view, and running many dispatchers on a single host has less of a positive effect.



Any workflow that is initiated also gets a corresponding entry in mxp_audit where the overall state of processing of this entry/task/audit combo is kept. The audit information is kept forever.



If you enable the somewhat mislabeled Trace, the system will create an entry per task/action in the mxp_ext_audit table. This will contain the result of conditional/switch tasks and other useful information. This is also kept forever.




Update 2014-03-06: This is a new table/view that is included in SP9. It was ready from SP8 but was not included due to missing UIs. It contains messages that you usually would find in the runtime log files as well as messages from the dispatcher when processing entries. This is really fantastic and will get its own blog post.


A small test scenario


Let's look at it in action using a setup I've used for performance and function testing



My example workflow called Dispatcher test #1.0.0 has an ordered task with multiple task types below it.


It starts with a simple ordered task with an action containing a To Generic pass that sets CTX variable


Next is a conditional task that always goes into True


Then a conditional task that always goes into False


Followed by a new ordered task that expands into

A switch task with cases for the last digit of the mskey (0..9), each contains an action


Then an ordered task with "Wait for Events" containing a single action executing another ordered task


And finishing off with an ordered task containing an action that logs that it's complete
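The switch step in the list above routes on the last digit of the mskey (0..9); a minimal sketch of that case selection in plain JavaScript (not actual IdM configuration):

```javascript
// Hedged sketch: derive the switch case from the last digit of the mskey.
function switchCase(mskey) {
  return String(mskey).slice(-1);   // "0".."9", one case per digit
}

switchCase(1273983);   // "3"
switchCase(40);        // "0"
```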




0 - "Dispatcher test #1.0.0" task initiated


Let's see what happens when this is executed. In this example I've just executed it using the "test provision" function on my "administrator" user




This is the Initial state. uProvision/event/assignment/test_provision/something else has initiated the task. At this point the top level task is in the queue, ready to run.




This task is also visible to the dispatcher in the mxpv_grouptasks_ordered view, which contains the first 1000 ordered tasks ready for processing from the provisioning queue. One ordered task entry is available for processing. Ordered tasks have one operation and that is expanding the tasks/actions they contain which we see in the next step.




The audit shows that the task has been initiated.


1 - "Dispatcher test #1.0.0" task expansion


A dispatcher task thread will now pick up 1/Dispatcher test #1.0.0 from the view mxpv_grouptasks_ordered and expand the task.






Now the ordered task 2892/Update ctx emu is ready for processing indicated by state=2.




State of the audit is now officially Running


2 - "Update CTX emu" task expansion




Now the ordered task 2892/Update ctx emu is expanded and this adds our first action task to the queue.



Actions can only be processed by a runtime, so at this point the mxpv_grouptasks_ordered view is empty as there are no more ordered tasks to process at the moment.




The audit shows that the last completed action is now 2892/Update ctx emu.


3 - Processing the action task

At this point another thread in the dispatcher looking for actions takes over. This runs a procedure called mc_dispatcher_check whose only task is to let the dispatcher know if there are, and if so, how many, jobs or provisioning actions available for it to run. This check (*) requires a lot of joins on lots of tables, and as a result this procedure is sometimes seen to take a few seconds when the queue reaches around 1 million rows in pre-SP7 releases.


In this case it will return 0 windows jobs, 0 java jobs, 0 windows provisioning actions, 1 java provisioning action.


From SP7 this procedure will generate a cache table to avoid rerunning the check too frequently, as it would start slowing down systems when the queue got to about 1 million rows. This table, mc_taskjob_queue, will contain a list of available actions that no runtime has yet picked up. It refreshes as it nears empty.


So with this result the dispatcher will now know there is 1 action ready to run, and a runtime started by dispatcher with id=1 will have an action/job to run.


If there were more than 1 returned, it would look at its "Max rt engines to start" value to see if it should start more than one runtime at this moment.

It also checks how many it already has started that have not ended and compares this to the "Max concurrent rt engines" setting.

And then checks against the global "Max concurrent rt engines" to see that it's not exceeding this.
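Taken together, the three checks can be sketched as a min() over the limits (the parameter names are mine; the real settings are the dispatcher options quoted above):

```javascript
// Hedged sketch of the dispatcher's throttling decision.
function runtimesToStart(available, maxRtEnginesToStart,
                         started, maxConcurrent,
                         globalStarted, globalMaxConcurrent) {
  return Math.max(0, Math.min(
    available,                           // actions waiting for a runtime
    maxRtEnginesToStart,                 // "Max rt engines to start"
    maxConcurrent - started,             // local "Max concurrent rt engines"
    globalMaxConcurrent - globalStarted  // global "Max concurrent rt engines"
  ));
}

runtimesToStart(5, 3, 1, 4, 10, 12);   // → 2 (global headroom is the limiter)
```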


So, if all is OK, the dispatcher will now start a java runtime process to run a job.


4 - The runtime executes the action task

At this point the dispatcher has started the runtime by initiating a java.exe with lots of parameters such as the database connection string and classpath extensions. It's important to note that the dispatcher does not tell the java runtime process which job it wants it to start. It just starts the runtime process and lets it pick something from the queue by itself. The runtime does this using the procedure mc_getx_provision, which in pre-SP7 releases would run a somewhat complex query looking for an action to run, basically the same query the dispatcher had already run(*). If this started to take more than 5 seconds (or whatever you configured your dispatcher check interval to), the dispatcher would see that the jobs were not picked up and start more runtimes, which got stuck in the same procedure. From SP7 we do a quick lookup in the cache table mc_taskjob_queue to avoid this problem.

As the runtime engine initializes it will log to a file called /usr/sap/IdM/Identity Center/prelog.log. This file can be useful to check since it contains messages that occur before the runtime can connect to the database, especially if it's not able to connect to the database at all. Once the runtime has run mc_getx_provision it will download the job/action configuration into /usr/sap/IdM/Identity Center/Jobs/<folder with GUID of job/action> where it will keep its temporary files from now on. This folder contains the last versions of the text-formatted .log and .xml log-files. The full path of this folder is listed in each log in the management console as well. The text log is very useful in cases where there are so many messages that they can't all be uploaded to the IdM database.


Anyway, in most cases the runtime is able to get the configuration and start processing entries. Each entry is processed by itself and after each entry the runtime will update the provisioning queue mskey/actionid/auditid combination using either the mxp_set_ok or mxp_set_fail procedure depending on success/failure of the operation.
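That per-entry bookkeeping can be sketched like this (setOk/setFail stand in for the mxp_set_ok and mxp_set_fail procedures; the callback shape is an assumption for illustration):

```javascript
// Hedged sketch: the runtime updates the queue after every single entry,
// so one failing entry does not stop the rest of the queue.
function processQueue(entries, processEntry, setOk, setFail) {
  entries.forEach(function (entry) {
    try {
      processEntry(entry);
      setOk(entry);     // mxp_set_ok for this mskey/actionid/auditid
    } catch (e) {
      setFail(entry);   // mxp_set_fail, the workflow can branch on error
    }
  });
}
```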


5 - Test something true


According to the workflow the next step to process should be "Test something true" which is a conditional task and will as such be listed in the mxpv_grouptasks_conditional view.




And "Test something true" is now in the queue, ready to run.




Also notice that the SQL statement for the conditional operation is part of this view.




Our task is still a work in progress.

The dispatcher does a parameter replacement on %% values in the MXP_SQL and runs the statement, then evaluates the result. Depending on the result being 0 (false) or higher (true) it will run the mxp_set_FALSE or mxp_set_TRUE procedure for the mskey, actionid, auditid combination and the procedures will expand to the next step.
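A sketch of that replace-and-evaluate step (the %MSKEY% placeholder style and the executeScalar callback are illustrative assumptions, not the dispatcher's actual API):

```javascript
// Hedged sketch: substitute %...% placeholders, run the SQL,
// then treat 0 as FALSE and anything higher as TRUE.
function evaluateConditional(mxpSql, entry, executeScalar) {
  var sql = mxpSql.replace(/%(\w+)%/g, function (m, name) {
    return entry[name];
  });
  return executeScalar(sql) > 0;   // true → mxp_set_TRUE, false → mxp_set_FALSE
}

// Example: a condition counting rows for the current entry, against a fake DB
var result = evaluateConditional(
  "select count(*) from mxi_values where mskey = %MSKEY%",
  { MSKEY: 100 },
  function (sql) { return sql.indexOf("100") > -1 ? 1 : 0; }
);
// result === true
```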


6 - Test something true action, and so on...


As a result of the previous evaluation ending in a True result the action in the True node has been expanded into the queue. Also notice how the mxpv_provision view includes the result of the conditional statement. This also occurs with switches. This information is stored in the extended audits if enabled which is really useful for tracing problems.




At this point the processing should start to be clear and this document is already too long before I've even started on the troubleshooting part :-) Now the action will trigger a runtime to start, then the Test something false process will be expanded and so on through my test scenario.

The dispatcher picks up the entry from the queue, evaluates it and runs the procedure to continue the workflow. Nothing interesting happens until the Exec external and wait for task is started, which has the Wait for event tasks results option checked.



This is used in various places in the provisioning framework such as in the Provision workflow where a user needs to get an account created in the target repository before any assignments are added.


7 - Exec external and wait for

In this case I've halted my event task so that I can see what the queue looks like.



The MSG column shows that audit 1273983 is waiting for audit 1273984 before it continues. In this case I've stopped the dispatcher capable of running the action of task 45, so it's temporarily stuck here. Once that dispatcher is started again it will run the external task and eventually continue and finish the workflow.

8 - Suddenly all done, but wait, what happened?

To close this off and get on to the troubleshooting I just wanted to mention the extended audit table. With it I can get a complete picture of the events for my two audits:

As a new feature from SP8 on, I can even get all the messages from the runtimes as well by looking at the mcv_executionlog_list view.

Additional notes and curiosities


How does the runtime process entries?


As mentioned previously the runtime uses mc_getx_provision to reserve an action to run. With the actionid it also gets a repository ID, and then it retrieves the job definition linked to the action and prepares to process all entries in the queue for the task it's been handed for the given repository. So it will process the queue for the task one repository at a time (by default). This is nice when connecting/disconnecting to repositories takes a long time. Not so nice during bulk loads when you have 20.000 entries queued for your least important ABAP system that somehow got priority over the important one. Anyway, the queue it will process is found using:


(it's using a prepared statement so @P0 and @P1 are input parameters)


I've once more created a small scenario to test and demonstrate this:


For this task I've queued 9 entries targeted to 3 different repositories, and as a result mxp_provision contains 9 entries for repositories GENREP1, GENREP2 and GENREP3. (The query that lists this is in part #2):


This is also what ends up in the cache (mc_taskjob_queue) table, and what the query in the procedure of older versions resolves:


With a somewhat recent release you should also see that the logs for the job are uploaded per repository, and that they appear to be processed in order:



So how do I optimize the runtime processing then?


Glad I asked. Some time ago an option called Allow parallel provisioning was added to tasks. This option allows IdM to create clones of the original job that can run in parallel. The clones are empty in the sense that they don't have a configuration of their own, just a reference to the shared master. With this enabled the timing in the log changes completely (enabled in green, not enabled in red):


If I'm quick I can even catch it in the status screen, and the action will also reflect it:





Basically what happens is that the dispatcher has started 3 runtimes at the same time to process each repository in parallel. This also requires that the Max rt engines to start setting is bigger than 1 in my demo since my queue is too small for it to have any effect otherwise. This is done behind the scenes by the procedures that the runtime calls so no action is required by you when adding new repositories.


"This is so awesome! Why didn't you make this the default?!?!" you might ask. This works best when used selectively. Imagine you have hundreds of ABAP repositories (some actually do). If your system could handle 50-70 runtimes in parallel, you run the risk of them all being busy updating your ABAP repositories while nothing else happens.



[edit, some spelling fixes]


This editor is getting a bit slow and perhaps unstable, so I'll continue this in part #2.


Platform Choices for IdM

Posted by Ian Daniel Jan 27, 2014



It looks like SAP IdM is getting a bit more interest now, particularly based on the number of new "faces" on the forum, which is a very exciting time for the product and for those of us that have been working on it for a while. With that in mind, I thought I would share some observations on the platform options available for IdM, as there are a few things that are not as obvious at first sight.


AS Java

The Platform Availability Matrix (PAM) for IdM, available at states that for IdM 7.2, the following are supported platforms and are discussed in more detail in the installation guide:


SAP enhancement package 1 for SAP NetWeaver 7.0

SAP enhancement package 1 for SAP NetWeaver 7.3

SAP enhancement package 1 for SAP NetWeaver Composition Environment 7.1

SAP enhancement package 2 for SAP NetWeaver 7.0

SAP enhancement package 3 for SAP NetWeaver 7.0

SAP NetWeaver 7.0

SAP NetWeaver 7.3

SAP NetWeaver Composition Environment 7.2


What is not clear is that AS Java 7.0 is only in "Maintenance Mode" support from SAP. I've yet to find where this is written down, but I've been told it very clearly by the UI teams. What this means is they will fix anything that breaks, but not put any new features on to it. This means it is effectively locked with the UI features from IdM 7.2 SP4. So for me, if you are starting from scratch, you should start on AS Java 7.3. It looks better, and supports all the new features, which are well worth having.



Again, from the PAM, IdM 7.2 is currently supported on SQL Server, Oracle and DB2. As with everything SAP these days, I'm sure it is only a matter of time before HANA is included, and, as IdM is already a very database centric product using stored procedures, this is a very natural fit. This will of course transform IdM, delivering sub micro-second responses, in-memory, at the speed of thought, while making the tea...


But back in the real work, I'm going to focus on the current offering....


My experience has been predominantly on a single large IdM deployment on Oracle and so first off, I'm going to ignore DB2, as I have no experience of it, for IdM or anything else in SAP, and so don't think it is fair to comment.


As for IdM on Oracle, it is fair to say it has not been smooth sailing. I think the development of IdM by the product team is done on SQL Server, and then converted on to Oracle by some means that I'm not clear on. This process is not always smooth and we have been shipped tools and code containing SQL Server syntax that was not picked up. We've also got some outstanding performance problems, and some strange "features" appearing occasionally as our database table statistics are updated.


Based on these facts, again, if I was starting from scratch, I would deploy IdM on SQL Server as the database, even though there is still a fair bit of bias, mainly historical in my opinion, about its robustness as a database platform generally.


Runtime and Design Time

We have both Windows and Linux environments for our runtime and have had no problems with either, with the Windows ones being slightly easier to administer, as you can manage the start and stop directly from an MMC design time installation on the same servers. So, again, if I was starting from scratch, I would go for Windows design time and runtime, putting both on each of the runtime servers required, assuming one is not sufficient.



So, based on the above, if I had to pick a platform to deploy IdM 7.2 on, ignoring any other factors such as existing IT department skills, organisational preference, snobbery about UNIX over windows, it would be


  • Design Time and Runtime - Windows Server
  • Database - MS SQL Server
  • UI - AS Java 7.3 - any O/S and database


I would of course be delighted to hear what others have experienced and think about platform choices, and if I've made any glaring omissions, please let me know.

This blog post is about calling a remote REST Service, e.g. some 3rd Party Application, which is publishing its data via a REST API.

This could be done with the VDS, executing a HTTP Request against this REST Service.

It is also possible to perform this inside a JavaScript, which will be executed by the IdM runtime directly, without the need to set up a VDS inside your landscape.

Unfortunately, the Rhino JavaScript engine used inside IdM is not able to perform AJAX calls directly, so we have to do this via Java (thanks Kai Ullrich for the hint with "Scripting Java inside JavaScript").


Below you find some example code.


Cheers, Jannis



// Main function: doTheAjax


function doTheAjax(Par){


    // import all needed Java classes (Rhino importClass)

    importClass(Packages.java.net.URL);

    importClass(Packages.java.io.DataOutputStream);

    importClass(Packages.java.io.InputStreamReader);

    importClass(Packages.java.io.BufferedReader);

    // variables used for the connection, best to import them via the table in a ToGeneric Pass

    var urlString = "http://host:port/rest_api";

    var urlParameters = "attribute=value";

    var httpMethod = "POST"; //or GET

    var username = "administrator";

    var password = "abcd1234";

    var encoding = uToBase64(username + ":" + password);


    // In case of GET, the url parameters have to be added to the URL

    if (httpMethod == "GET"){

        var url = new URL(urlString + "?" + urlParameters);

        var connection = url.openConnection();

        connection.setRequestProperty("Authorization", "Basic " + encoding);
    }


    // In case of POST, the url parameters have to be transfered inside the body

    if (httpMethod == "POST"){

        // open the connection

        var url = new URL(urlString);

        var connection = url.openConnection();

        connection.setRequestProperty("Authorization", "Basic " + encoding);




        connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        connection.setRequestProperty("charset", "utf-8");

        connection.setRequestProperty("X-Requested-With", "XMLHttpRequest");

        //connection.setRequestProperty("Content-Length", "" + Integer.toString(urlParameters.getBytes().length));


        var os = new DataOutputStream(connection.getOutputStream());

        // write the body and close the stream
        os.writeBytes(urlParameters);

        os.flush();

        os.close();
    }

    //get the result and print it out

    var responseCode = connection.getResponseCode();


    var is = connection.getInputStream();

    var isr = new InputStreamReader(is);

    var br = new BufferedReader(isr);

    var response = new StringBuffer();

    var line;

    while ((line = br.readLine()) != null) {

        response.append(line);
    }

    br.close();


    uWarning("Sending " + httpMethod + " Request to URL: " + urlString);

    uWarning("Response Code: " + responseCode);

    uWarning("Response: " + response.toString());
}




During the later part of last year I had to develop a set of quick & dirty reports in a proof of concept project where the ETL capabilities of IdM were demonstrated. I used the Initialization/Termination/Entry scripts and it gave me the idea to write a blog about using them. I am not sure how the basic IdM training course by SAP addresses these topics, but since not all new IdM'ers attend the training, maybe this helps someone get started or gives some ideas.


In IdM passes you can define the following types of custom scripts to be triggered upon execution of the pass:

  1. Initialization Script
  2. Termination Script
  3. Entry Script


The toGeneric pass has also three additional types of custom scripts:

  1. Open destination
  2. Next data entry
  3. Close destination

Consider the following dummy toGeneric pass as an example. It runs a dummy SQL-statement in the Source-tab that returns static text as a result set, which gets passed to the Destination-tab. The passed entries are "processed" by the scripts in the Destination-tab.



The Source-tab’s SQL-statement returns two dummy rows (or records, or entries, depending on how you want to see them) to the Destination-tab.



The Destination-tab calls all 3 types of scripts possible in a toGeneric-pass, plus a getCount-script in the attribute mapping. Whatever is returned by the getCount-script gets inserted into its place in the table cell and is passed to the Next Data Entry Script among the other attributes defined in the mapping.


Execution Order
Let’s examine the job log and see the execution order plus what the output was. All the scripts in the example output their name plus what was passed as parameter.


So the execution order is:

  1. Initialization Script
  2. Open Destination Script
  3. Entry Script
  4. Next Data Entry Script
  5. Close Destination Script
  6. Termination Script


Initialization Script


The Initialization Script was called first, and from the output it’s visible that while it received the parameters in the Par-object, none of the macros were executed. All the values appear as they were typed in the Destination-tab.


In the example we have one custom variable called “theCount”, and as it is introduced outside the function definition it becomes a global variable and can be used in any other script in the pass, as long as the other script also declares the same global variable. The variable theCount is set to the initial value 0 in the Initialization Script.
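A minimal sketch of that pattern (function names are mine): the variable is declared outside the functions, so every script in the pass shares it:

```javascript
// Hedged sketch: "theCount" lives outside any function, so the
// Initialization Script, Entry Script and getCount all see the same value.
var theCount = 0;

function z_init(Par) {
  theCount = 0;            // Initialization Script resets it
}

function z_getCount(Par) {
  theCount = theCount + 1; // Entry/getCount scripts grow it per row
  return theCount;
}
```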


I’ve used Initialization Script mostly in two ways:

  1. Setting the initial values for attributes in pass (or in whole job if the pass is before where the value is later used)
  2. When using delta-functionality in a way that entries no longer present in the source are automatically deleted in the IdStore. Here the Initialization Script is handy for checking if the data source has a smaller number of rows than expected. For example, if some data transfer has failed and the source is empty, using delta to mark entries to be deleted could be fatal without any checks.

Like the name suggests and output displays the Initialization Script is called once.


Open Destination Script

The Open Destination Script is called next; based on the output, it does not even get the Par-hash table. Open Destination is typically used like the name suggests, for opening a connection, for example a JCo-connection in a provisioning task. In a JCo-call scenario the Next Data Entry Script could do the actual call and Close Destination could close the opened connection. Based on the output, Open Destination got called once.


Entry Script

The Entry Script in the Source-tab is called next. Based on the output, the hash table "Par" has its elements fully translated to data contents. The example uses the Entry Script to grow the counter variable by one and store the new value in Par to be used in the Destination-tab. (BTW, for some reason having an underscore in the element name, for example “THE_COUNT”, crashed the pass.)

The Entry Script is called as many times as the Source-tab definition returns rows.


Next data entry Script


The Next Data Entry Script in the Destination-tab is called next, and again it is called as many times as the Source-definition returns records. It receives the full Par-hash table and its correct values, along with the values we just manipulated in the Entry Script.

Close destination Script


The Close Destination Script was called second to last.

Termination Script


The Termination Script was the last script to be called.


HTML-file generation using Initialization/Termination/Entry scripts

The example job has two passes; first one that loads data from CSV-file to temp-table and another pass that reads the contents of the table and writes them to HTML-file.


Populating temp table





The destination has just file vs. table attribute mapping. I always name the columns in files after the IdM-attributes so it simplifies the “interface” and it sort of documents itself.


Writing HTML


Writing a plain text file or a file with CSV-structure is pretty easy from IdM, as all that needs to be done for the formatting is the column/attribute-mapping and defining the CSV-delimiter, headings etc.


HTML is slightly trickier, as all that's possible to output in the Destination-tab are the repeatable elements, meaning just table cells. The start and end of the HTML document plus the start and end of the table must come from somewhere, and this is where the Termination Script is handy.




The source SQL reads the entries from the temp table. Note that it’s possible to have a JavaScript in the SQL-statement.


The JavaScript is executed first and whatever is returned by mySQLFilter gets embedded into the SQL. A good example of how to use JavaScript within SQL can be found in the BW-interface and how IdM sends the data to SAP BW via LDAP push.
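As a sketch of the idea (the filter condition and column name are made up), the script simply returns a string that gets spliced into the statement:

```javascript
// Hedged sketch: whatever mySQLFilter returns is embedded into the
// pass's SQL statement before it is executed.
function mySQLFilter() {
  // e.g. restrict the result set; "ChangeDate" is a hypothetical column
  return "ChangeDate > '2014-01-01'";
}

// Conceptually the pass then runs something like:
var sql = "select * from tmpTable where " + mySQLFilter();
```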


Initialization Script


The Initialization Script is used to generate a somewhat unique filename from a timestamp. The name of the file is stored in a global variable so that it is accessible to the other scripts. The colons are removed from the file name with the uReplaceString-function. The Initialization Script also sets the counter that is used in counting the rows to zero.
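A sketch of such an Initialization Script in plain JavaScript (the path is hypothetical, and a plain replace() stands in for uReplaceString):

```javascript
// Hedged sketch: build a roughly unique HTML file name from a timestamp
// and stash it in a global so other scripts in the pass can read it.
var htmlFileName = "";

function z_initHtmlReport(Par) {
  var stamp = new Date().toISOString();  // e.g. "2014-03-06T12:30:45.123Z"
  stamp = stamp.replace(/:/g, "");       // colons are illegal in file names
  htmlFileName = "C:/reports/report_" + stamp + ".html";
  return htmlFileName;
}
```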


Entry Script

Entry Script just grows the counter like in previous example.


Termination Script


As the previous example showed, the Termination Script is called at the end, after the table cells are written into the file. Here it reads the table cells into a string and adds the start and end of the HTML-page around them.
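The wrap-up can be sketched like this (file reading/writing is elided; the function only shows the string assembly):

```javascript
// Hedged sketch: the Termination Script wraps the already-written
// table rows in the table, body and html start/end tags.
function z_wrapHtml(tableRows, styleSheet) {
  return "<html><head><style>" + styleSheet + "</style></head>" +
         "<body><table>" + tableRows + "</table></body></html>";
}

z_wrapHtml("<tr><td>row 1</td></tr>", "td { padding: 2px; }");
```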


The HTML-page uses a style sheet that is returned from script just to demonstrate that there can be more than one script in the script “file”.




The destination has simple attribute mapping that writes the HTML-table rows to the file.


The output filename is returned by a script getHtmlFileName, which just returns the value from the global variable.


The result


