
SAP Identity Management


Queries and other useful stuff, infodump


See The Future of SAP IDM by Matt Pollicove for more information, but in short: IdM is moving from my office in Trondheim to Sofia. This entry is simply an infodump of some of the IdM-related and non-IdM-related queries I've thought about using in future blogs, have used in support cases, or have found elsewhere online and thought would be useful at a later time. It might be a bit heavy on execution-plan stuff and might not be useful for "plain" IdM solution implementation, but I've seen some of you guys getting pretty advanced so...


Please excuse the lack of (or in some cases misleading) descriptions, formatting, testing and organization. I've tried to keep this limited to harmless queries, but don't use them if you don't understand what they do, or at least run them in a non-critical system while figuring them out.


Thank you all for the feedback and the great learning experience this community has offered!


Best regards, thanks and goodbye,

Per "Chris" Christian Krabsetsve

:-)




The already-published blogs that are still in progress will not be finished by me. Anyone who wants to take the texts, in part or in whole, and finalize or improve them is welcome to do so.

IDM SQL Basics #1: Queries against the Identity Store

IdM SQL Basics #2: Locating problem queries

IDM SQL Basics #3: Testing and improving queries


On queue processing, or the lack thereof. Part #1

On queue processing, or the lack thereof. Part #2

On queue processing, what (just) happened? Part #3


IDM and provision queue counters & statistics gathering


Misc queries; they might work on Oracle, SQL Server, or both.

 

Provisioning/queue

Actions ready to run, grouped by action type

select A.Name as actiontype,COUNT(mskey) as availableEntries from MXP_Provision P, mxp_actiontype A
where P.State=2 and A.ActType=P.ActionType group by A.Name


Provisioning actions (jobs) that could be running but are not

select T.TaskName,COUNT(mskey) as availableEntries from MXP_Tasks T,MXP_Provision P where P.ActionID=T.TaskID and
P.ActionType=0 and P.state=2 and T.JobGuid not in (select JobGuid from mc_jobs where State=1)
group by taskname


Actions ready to run grouped by taskname

select t.taskname,count(p.mskey) from MXP_Provision P with(nolock) , mxp_tasks T with(nolock)
where T.taskid = P.actionid and P.State = 2 group by T.taskname


Actions ready to run, grouped by action type and task name

select T.TaskName ,A.Name as actiontype,COUNT(mskey) as available
from MXP_Provision P, mxp_actiontype A, MXP_Tasks T
where P.State=2 and A.ActType=P.ActionType and T.taskid=P.actionid group by T.taskname,A.Name


Running provision jobs and regular jobs

select case when Provision=1 then 'provision' when provision=0 then 'job' else 'huh' end "type", count(jobid) running
from mc_jobs where state=2 group by provision


Provision queue entries ready to process/run grouped by actiontype and name

select T.Name,COUNT(mskey) as available from MXP_Provision P, mxp_actiontype T 
where P.State=2 and T.ActType=P.ActionType group by T.Name


Provision queue entries grouped by task name and state name

select count(P.mskey),t.taskname,s.name from mxp_provision P, mxp_Tasks T, mxp_state S
where S.StatID=P.State and T.taskid = P.ActionID  group by T.taskname, S.name


Task usage: list the number of times each task has been executed since a given date

select T.taskname,COUNT(A.aud_ref) from MXP_Ext_Audit A, MXP_Tasks T
where T.TaskID = A.aud_task and A.Aud_datetime > '2012-06-20'
group by T.TaskName
order by COUNT(A.aud_ref) desc


Display number of jobs and provision jobs available for a dispatcher

SET NOCOUNT ON
declare @sj int, @jj int, @sp int, @jp int, @dn varchar(20), @dv varchar(20)
SET @dn = 'dispatcher name' -- dispatcher name
SET @dv = '8.0' -- version, not important
exec mc_dispatcher_check @dn,@dv,1,1,1,1,@sj output,@jj output,@sp output,@jp output
print 'Jobs and provisioning jobs available for '+@dn
print 'WinRT jobs:'+cast(@sj as varchar)
print 'javaRT jobs:'+cast(@jj as varchar)
print 'WinRT prov jobs:'+cast(@sp as varchar)
print 'JavaRT prov jobs:'+cast(@jp as varchar)


Job processing


Use the logs to summarize the number of entries a job has processed, the total time used, the average processing rate, and more.

select J.name,sum(L.Num_Adds) adds,SUM(L.NUM_MODS) mods, SUM(timeused) tUsed, cast(sum(L.Num_Adds)+SUM(L.NUM_MODS) as float)/SUM(timeused) entryPerSec
from mc_logs L with (nolock), mc_jobs J with (nolock) where J.JobId=L.JobId and TIMEUSED > 0 and L.LOGDATE > '2012-06-20'
group by J.name
order by entryPerSec desc


select LA.jobname,LA.JobId,case when J.provision = 1 then 'action' else 'job' end as "jobtype",
sum(LA.TotalEntries) totalEntries,sum(LA.TimeUsed) totalTime
,round(cast(sum(LA.TotalEntries) as float)/cast(sum(LA.TimeUsed) as float),2) entriesPerSecond
from mcv_logall LA, mc_jobs J
where LA.JobId = J.JobId
group by LA.jobname,LA.jobid, J.Provision


select jobname,JobId,sum(TotalEntries) totalEntries,sum(TimeUsed) totalTime
,round(cast(sum(TotalEntries) as float)/cast(sum(TimeUsed) as float),2) entriesPerSecond
from mcv_logall
group by jobname,jobid
order by entriesPerSecond


select jobname,JobId,sum(TotalEntries) totalEntries,sum(TimeUsed) totalTime,
round(cast(sum(TotalEntries) as float)/cast(sum(TimeUsed) as float),2) entriesPerSecond
from mcv_logall group by jobname,jobid
order by round(cast(sum(TotalEntries) as float)/cast(sum(TimeUsed) as float),2)  asc


Provisioning audit

 

Some audit stuff (Oracle). Execution time for task X executed after date Y

select taskid,to_char(posteddate,'YYYY-MM-DD') "date",count(auditid) "numExecs",AVG(round(statusdate-posteddate,2)*24*60*60) "avgTimeToComplete"
from mxp_audit
where taskid = 20 and provstatus > 1000 and postedDate > to_date('2014-01-01','YYYY-MM-DD')
group by taskid,to_char(posteddate,'YYYY-MM-DD')
order by taskid,to_char(posteddate,'YYYY-MM-DD')


Some audit stuff (SQL Server). Execution time for task X executed after date Y

select avg(datediff(SS,posteddate,statusdate)) Seconds,cast(posteddate as date) Date
from MXP_AUDIT
where taskid = 1406 and posteddate > '2013-06-01'
group by cast(posteddate as date)
order by cast(posteddate as date)


Some audit stuff (Oracle). Executed tasks for user X

select A.auditid, A.AuditRoot, A.RefAudit auditParent, A.userid, A.StartedBy, A.taskid, T.taskname, A.mskey, A.PostedDate, A.StatusDate, A.provstatus, A.LastAction, A.msg
from MXP_AUDIT A, MXP_Tasks T where A.TaskId = T.TaskID
and A.msKey = (select mcmskey from idmv_entry_simple where mcMskeyValue = 'ADMINISTRATOR')
order by auditroot,RefAudit


Some audit stuff (SQL Server), mainly listing seconds from a task being posted to its completion

select taskid,convert(varchar(10),posteddate,20) Date,count(auditid) as numExecs,avg(datediff(ss,A.posteddate,A.statusdate)) AvgTimeToComplete
from mxp_audit A
where taskid = 1 and posteddate > '2014-02-01' and ProvStatus > 1000
group by taskid,convert(varchar(10),posteddate,20)
order by taskid,convert(varchar(10),posteddate,20)


Some audit stuff (SQL Server), mainly listing seconds from a task being posted to its completion, including a count of subtasks

select auditid,postedDate,taskid,datediff(ss,A.posteddate,A.statusdate) seconds,
   (select count(aud_ref) from MXP_Ext_Audit AE with(nolock) where a.AuditID=ae.Aud_ref) subtasks
from mxp_audit A with(nolock) order by posteddate


Average time between posting and completion of task between two dates

select avg(datediff(ss,A.posteddate,A.statusdate)) seconds
from mxp_audit A where postedDate >='2013-12-05 16:49:35' and postedDate <'2013-12-05 16:49:40'


Number of executions per task name after a specified date

select
       count(a.aud_ref),t.taskname,
       convert(varchar,A.aud_datetime,20)
from
       mxp_ext_audit A with(nolock), mxp_tasks T with(nolock)
where
       a.Aud_task = T.TaskID and
       (aud_datetime > '2013-11-18 12:00')
group by t.taskname,convert(varchar,A.aud_datetime,20)


List number of root tasks started in N-minute intervals for a period of time

-- Create helper table for intervals
declare @intervaltbl table(st datetime,et datetime)
declare @it datetime
declare @interval int
set @interval = 5 -- minutes
-- Interval Start time
set @it = '2014-05-27 10:00:00'
-- Interval End time
while @it <= '2014-05-27 12:30:00'
begin
    insert into @intervaltbl values (@it,dateadd(mi, @interval, @it))
    set @it = dateadd(mi, @interval, @it)  -- 5 minute interval, change as needed
end
-- count per interval
select it.st startinterval, it.et endinterval, count(*) troottaskstarted
  from @intervaltbl it, MXP_AUDIT A with(nolock)
  where A.PostedDate between it.st and it.et
group by it.st,it.et


List tasks and the number of execs per task in 10 minute intervals for a timeframe

select count(aud_ref) taskexecutions, t.taskname, left(CONVERT(varchar,aud_datetime,21),15) 
from mxp_ext_audit AE with(nolock), MXP_Tasks T with(nolock)
where T.taskid = AE.aud_task 
and   Aud_datetime between '2013-04-19 00:01' and '2013-04-19 03:30'
group by T.taskname, left(CONVERT(varchar,aud_datetime,21),15)


List how many times each individual task was run in a time-interval

-- Create helper table for intervals
declare @intervaltbl table(st datetime,et datetime)
declare @it datetime
declare @interval int
set @interval = 5 -- minutes
-- Interval Start time
set @it = '2014-05-27 10:00:00'
-- Interval End time
while @it <= '2014-05-27 12:30:00'
begin
    insert into @intervaltbl values (@it,dateadd(mi, @interval, @it))
    set @it = dateadd(mi, @interval, @it)  -- 5 minute interval, change as needed
end
-- Do a count per interval
select it.st startinterval, it.et endinterval, T.taskname, count(*) TimesStarted
  from @intervaltbl it, MXP_EXT_AUDIT A with(nolock), MXP_Tasks T with(nolock)
  where A.Aud_datetime between it.st and it.et and A.Aud_task = T.taskid
group by it.st,it.et, T.taskname
order by it.st, T.taskname


An example of a query extracting details of tasks executed for a user

SELECT EA.Aud_ref AS Auditref, EA.Aud_OnEntry AS EntryId, EA.Aud_datetime AS Time,
       T1.TaskName AS Task, PS.Name AS Status, T2.TaskName AS [End task],
       EA.Aud_Approver AS Approver, EA.Aud_Info AS Info, EA.Aud_StartedBy AS [Started by],
       A.MSG AS Message, R.rep_name AS Repository
FROM dbo.MXP_Ext_Audit EA WITH (nolock)
INNER JOIN dbo.MXP_AUDIT A ON EA.Aud_ref = A.AuditID
INNER JOIN dbo.MXP_ProvStatus PS ON A.ProvStatus = PS.ProvStatus
INNER JOIN dbo.MXP_Tasks T2 ON A.LastAction = T2.TaskID
INNER JOIN dbo.MXP_Tasks T1 ON EA.Aud_task = T1.TaskID
LEFT OUTER JOIN dbo.MC_REPOSITORY R ON A.repository = R.rep_id
WHERE EA.Aud_ref IN
      (SELECT AuditID FROM dbo.MXP_AUDIT WITH (nolock)
       WHERE MSKey = (SELECT mcMSKEY FROM dbo.idmv_entry_simple WITH (nolock)
                      WHERE mcMSKEYVALUE = 'TEDA'))
ORDER BY Auditref, Time


Initial Load data stuff

 

Using the passes from Initial Load (or running Initial Load with the To IdStore passes disabled), read the repository data into temporary tables (users, roles, assignments) and stop the job from doing anything else.

 

Look at the mapping of users from the repository to the IdStore ACCOUNT%REP.NAME% attribute. You can use this to:

Find users from the repository temporary table that are not in the IdStore

select REP.* from 
sap%$rep.$NAME%User REP 
left outer join idmv_value_basic IDSVAL on REP.accountname = IDSVAL.SearchValue and IDSVAL.ATTRNAME='ACCOUNT%$rep.$NAME%'
where IDSVAL.SearchValue IS NULL 


Find users in the IdStore that are not in the repository

select E.mcmskeyvalue
from idmv_value_basic IDS
inner join idmv_entry_simple E on E.mcmskey = ids.mskey and E.mcEntryType='MX_PERSON'
left outer join mxmc_rt_u.sap%$REP.$NAME%user REP on IDS.SearchValue=REP.accountname
where   ids.attrname = 'sap%$REP.$NAME%user' 
and REP.accountname IS NULL


Identity Store Flattening


select mskey,
  max (case when AttrName = 'MSKEYVALUE' then aValue end ) mskeyvalue,
  max (case when AttrName = 'DISPLAYNAME' then aValue end ) DisplayName,
  max (case when AttrName = 'MX_ENTRYTYPE' then aValue end ) EntryType
from idmv_value_basic
where attrname in ('MSKEYVALUE','DISPLAYNAME','MX_ENTRYTYPE')
group by mskey


Using pivot:


select * from
(
  select mskey,avalue,attrname from idmv_value_basic where attrname in ('MSKEYVALUE','DISPLAYNAME','MX_ENTRYTYPE')
) s
pivot
(
  max(avalue) for attrname in (MSKEYVALUE,DISPLAYNAME,MX_ENTRYTYPE)
) p


Also, using STUFF + FOR XML PATH to flatten things is cool: TSQL – Concatenate Rows using FOR XML PATH() | Sql And Me


IdM 7.1 example, easy to adapt:

select 'My roles:'+stuff((select ','+searchvalue
      from mxiv_sentries where attrname = 'MX_AUTOROLE' and mskey =128108
for xml path('')),1,1,'') 
as MyRoles
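The same pattern also works against the idmv_value_basic view used in the flattening examples above. This is just a sketch; the attribute name and mskey are sample values, so adjust them to your data:

```sql
-- Concatenate all MX_AUTOROLE values for one entry into a single row
select 'My roles:' + stuff((select ',' + avalue
      from idmv_value_basic
      where attrname = 'MX_AUTOROLE' and mskey = 128108
for xml path('')),1,1,'') as MyRoles
```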


SQL Server

 

Show running queries

SELECT database_id,st.text, r.session_id, r.status, r.command, r.cpu_time,r.total_elapsed_time
FROM sys.dm_exec_requests r CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS st order by database_id


Finding execution plan handle (prepared or adhoc, or filter for just one of them as needed):

SELECT objtype, p.size_in_bytes, t.[text], usecounts, p.plan_handle
FROM sys.dm_exec_cached_plans p
OUTER APPLY sys.dm_exec_sql_text (p.plan_handle) t
WHERE objtype IN ('Prepared', 'Adhoc')
and T.[text] like '%mxi_values%'


SELECT plan_handle,UseCounts, Cacheobjtype, Objtype, TEXT, query_plan
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
CROSS APPLY sys.dm_exec_query_plan(plan_handle)
WHERE TEXT like '%mcuniqueid%mcothermskey%mcotherentrytype%mcdisplayname%mcreason%mcvalidfrom%mcvalidto%mcExecState%mcExecStateHierarchy%'


SELECT qplan.query_plan AS [Query Plan],qtext.text
FROM sys.dm_exec_query_stats AS qstats
CROSS APPLY sys.dm_exec_query_plan(qstats.plan_handle) AS qplan
cross apply sys.dm_exec_sql_text(qstats.plan_handle) as qtext
where text like '%mcuniqueid%mcothermskey%mcotherentrytype%mcdisplayname%mcreason%mcvalidfrom%mcvalidto%mcExecState%mcExecStateHierarchy%';


Retrieve an execution plan using plan handle (see above)

If forced to use RT account you might need someone to run this for you first

GRANT SHOWPLAN TO mxmc_user_u


SELECT * FROM sys.dm_exec_query_plan (0x06000600ACC0D612F05E7C5F0200000001000000000000000000000000000000000000000000000000000000);


Clear plan cache and more

DBCC FREEPROCCACHE

 

Check memory states

DBCC MEMORYSTATUS

 

Show statistics for table/index

DBCC SHOW_STATISTICS (mxi_values,ix_mxi_values_perf)

 

 

List indexes and index usage stats

SELECT DB_NAME(DATABASE_ID) AS DATABASENAME, OBJECT_NAME(B.OBJECT_ID) AS TABLENAME,
       INDEX_NAME = (SELECT NAME FROM   SYS.INDEXES A WHERE  A.OBJECT_ID = B.OBJECT_ID AND A.INDEX_ID = B.INDEX_ID),
       USER_SEEKS, USER_SCANS, USER_LOOKUPS, USER_UPDATES
FROM   SYS.DM_DB_INDEX_USAGE_STATS B INNER JOIN SYS.OBJECTS C ON B.OBJECT_ID = C.OBJECT_ID
WHERE  DATABASE_ID = DB_ID(DB_NAME()) AND C.TYPE <> 'S'


SELECT   PVT.TABLENAME, PVT.INDEXNAME, [1] AS COL1, [2] AS COL2, [3] AS COL3,
         [4] AS COL4, [5] AS COL5, [6] AS COL6, [7] AS COL7, B.USER_SEEKS,
         B.USER_SCANS, B.USER_LOOKUPS
FROM     (SELECT A.NAME AS TABLENAME,A.OBJECT_ID,B.NAME AS INDEXNAME,B.INDEX_ID,D.NAME AS COLUMNNAME,C.KEY_ORDINAL
          FROM   SYS.OBJECTS A INNER JOIN SYS.INDEXES B ON A.OBJECT_ID = B.OBJECT_ID
                 INNER JOIN SYS.INDEX_COLUMNS C ON B.OBJECT_ID = C.OBJECT_ID AND B.INDEX_ID = C.INDEX_ID
                            INNER JOIN SYS.COLUMNS D ON C.OBJECT_ID = D.OBJECT_ID AND C.COLUMN_ID = D.COLUMN_ID
          WHERE  A.TYPE <> 'S') P
         PIVOT
         (MIN(COLUMNNAME)
          FOR KEY_ORDINAL IN ( [1],[2],[3],[4],[5],[6],[7] ) ) AS PVT
         INNER JOIN SYS.DM_DB_INDEX_USAGE_STATS B
           ON PVT.OBJECT_ID = B.OBJECT_ID AND PVT.INDEX_ID = B.INDEX_ID AND B.DATABASE_ID = DB_ID()
ORDER BY TABLENAME, INDEXNAME;


Update statistics for a table

update statistics mxi_values
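If the default sampled update is not enough, a full scan can be forced; slower, but it is the same mode the refresh-all-statistics script further down uses:

```sql
-- Rebuild statistics for the table by scanning all rows instead of a sample
update statistics mxi_values with fullscan
```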


List system processes/sessions with state/login/db and more

SELECT
spid,status,sid,hostname,program_name,cmd,cpu,physical_io,blocked,dbid,
convert(sysname, rtrim(loginame)) as loginname,spid as 'spid_sort',
substring( convert(varchar,last_batch,111) ,6 ,5 ) + ' ' + substring( convert(varchar,last_batch,113) ,13 ,8 ) as 'last_batch_char',last_batch, DB_NAME(dbid)
from master.dbo.sysprocesses (nolock)
order by loginname


Missing Indexes hints

SELECT * FROM sys.dm_db_missing_index_details
SELECT * FROM sys.dm_db_missing_index_group_stats
SELECT * FROM sys.dm_db_missing_index_groups
SELECT * FROM sys.dm_db_missing_index_columns(1)


SELECT mig.*, statement AS table_name,
column_id, column_name, column_usage
FROM sys.dm_db_missing_index_details AS mid
CROSS APPLY sys.dm_db_missing_index_columns (mid.index_handle)
INNER JOIN sys.dm_db_missing_index_groups AS mig ON mig.index_handle = mid.index_handle
ORDER BY mig.index_group_handle, mig.index_handle, column_id;


Statistics, Timing, IO, CPU, XML, PLAN…

 

set statistics io on

set statistics time on

set statistics profile on

SET SHOWPLAN_TEXT ON

SET SHOWPLAN_ALL ON

SET STATISTICS XML ON
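A minimal sketch of how these settings are typically used: switch the statistics on, run the statement you want to measure, then switch them off again. The target query here is only an example:

```sql
set statistics io on
set statistics time on

-- the statement to measure
select count(*) from mxi_values

set statistics io off
set statistics time off
```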

Backup, transaction log shrinking and other stuff

BACKUP LOG [mxmc_db]
TO  DISK = N'C:\temp\mxmcdbfullcopy'
WITH NOFORMAT, INIT,  NAME = N'mxmc_db-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO


-- Check current recovery model
select name,recovery_model_desc from sys.databases
-- Switch to simple recovery so the log can be shrunk
Alter database mxmc_db SET Recovery simple
-- Find the log file name and size
select name,size from sys.database_files
-- Shrink the log file (use the log file name found above)
DBCC SHRINKFILE (N'mastlog' , 1)
-- Switch back to full recovery
Alter database mxmc_db SET Recovery full


Clearing all transaction logs on dev and test systems

 

SET NOCOUNT ON
CREATE TABLE #TransactionLogFiles (DBName VARCHAR(150), LogFileName VARCHAR(150) )
DECLARE DBList CURSOR FOR SELECT name FROM master..sysdatabases WHERE NAME NOT IN ('master','tempdb','model','msdb','distribution')
DECLARE @DB VARCHAR(100)
DECLARE @SQL VARCHAR(8000)
OPEN DBList
FETCH NEXT FROM DBList INTO @DB
WHILE @@FETCH_STATUS <> -1
BEGIN
      SET @SQL = 'USE ' + @DB + ' INSERT INTO #TransactionLogFiles(DBName, LogFileName) SELECT ''' + @DB + ''', Name FROM sysfiles WHERE FileID=2'
      EXEC(@SQL) FETCH NEXT FROM DBList INTO @DB
END
DEALLOCATE DBList
DECLARE TranLogList CURSOR FOR
SELECT DBName, LogFileName FROM #TransactionLogFiles
DECLARE @LogFile VARCHAR(100)
OPEN TranLogList
FETCH NEXT FROM TranLogList INTO @DB, @LogFile
WHILE @@FETCH_STATUS <> -1
BEGIN
      PRINT @DB
      --SELECT @SQL = 'EXEC sp_dboption ' + @DB + ', ''trunc. log on chkpt.'', ''True'''
         SELECT @SQL = ' ALTER DATABASE ' + @DB + ' SET AUTO_SHRINK ON '
-- ALTER DATABASE [dbname] SET RECOVERY SIMPLE
      EXEC (@SQL)
      SELECT @SQL = 'USE ' + @DB + ' DBCC SHRINKFILE(''' + @LogFile + ''',''truncateonly'') WITH NO_INFOMSGS'
      EXEC (@SQL)
      --SELECT @SQL = 'EXEC sp_dboption ' + @DB + ', ''trunc. log on chkpt.'', ''False'''
           SELECT @SQL = ' ALTER DATABASE ' + @DB + ' SET AUTO_SHRINK OFF '
         EXEC(@SQL)
      FETCH NEXT FROM TranLogList INTO @DB, @LogFile
END
DEALLOCATE TranLogList
DROP TABLE #TransactionLogFiles


List a table's constraints, optionally only the default constraints

SELECT SCHEMA_NAME(schema_id) AS SchemaName,
       OBJECT_NAME(parent_object_id) AS TableName,
       OBJECT_NAME(OBJECT_ID) AS NameofConstraint
FROM sys.objects WHERE OBJECT_NAME(parent_object_id) = 'MXP_TASKS'
-- and type_desc = 'DEFAULT_CONSTRAINT'


Drop all constraints for a table

BEGIN TRANSACTION
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += N'
ALTER TABLE ' + SCHEMA_NAME(schema_id)
    + '.' +OBJECT_NAME(parent_object_id) +
    ' DROP CONSTRAINT ' + OBJECT_NAME(OBJECT_ID) + ';'
FROM sys.objects where OBJECT_NAME(OBJECT_ID) = 'MXP_TASKS'
PRINT 'Dropping constraints: ' + @sql;
EXEC sp_executesql @sql;
                -- COMMIT


Find tables containing specific column type

select TABLE_NAME,COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
from INFORMATION_SCHEMA.COLUMNS IC
where COLUMN_NAME = 'aLong'


Refresh statistics for ALL tables in the system, with basic exception handling

DECLARE @statName varchar(255)
DECLARE @lastUpd datetime
DECLARE @tableName varchar(255)
DECLARE @updName varchar(512)
DECLARE @OPTION int
DECLARE cStatsList CURSOR LOCAL FAST_FORWARD FOR
       SELECT t.name tablename,s.name statname,STATS_DATE(s.[object_id], s.stats_id) AS lastupd
       FROM sys.stats s JOIN sys.tables t ON s.[object_id] = t.[object_id]
       WHERE t.is_ms_shipped = 0 ORDER BY t.name,s.name
       OPEN cStatsList
       FETCH NEXT FROM cStatsList INTO @tableName,@statName,@lastUpd
       WHILE (@@FETCH_STATUS = 0)
       BEGIN
              PRINT 'Refreshing statistics '+@statName+' for table '+@tableName+' which was last updated '+ISNULL(convert(varchar,@lastUpd),'never (probably an empty table)')
              SET @updName = @tableName+'('+@statName+')'
              BEGIN TRY
                     exec ('UPDATE STATISTICS '+@updName+' WITH FULLSCAN')
              END TRY
              BEGIN CATCH
                     PRINT 'Unable to refresh statistics '+@statName+' for table '+@tableName
              END CATCH
              FETCH NEXT FROM cStatsList INTO @tableName,@statName,@lastUpd
       END
CLOSE cStatsList
DEALLOCATE cStatsList


List tables and their indexes

SELECT o.name AS tablename ,i.name AS indexname ,i.type_desc AS indextype ,STATS_DATE(i.[object_id], i.index_id) AS statisticsdate
FROM sys.indexes i INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
WHERE o.TYPE = 'U' -- Only get indexes for User Created Tables
AND i.name IS NOT NULL
ORDER BY o.name ,i.TYPE


List table statistics with last update time

SELECT t.name tablename,s.name statname,STATS_DATE(s.[object_id], s.stats_id) AS lastupd
FROM sys.stats s JOIN sys.tables t ON s.[object_id] = t.[object_id]
WHERE t.is_ms_shipped = 0 ORDER BY t.name,s.name


List table statistics with more detail

SELECT
t.name AS TableName,
c.name AS ColumnName,
s.name AS StatName,
STATS_DATE(s.[object_id], s.stats_id) AS LastUpdated,
s.has_filter,
s.filter_definition,
s.auto_created,
s.user_created,
s.no_recompute
FROM sys.stats s
JOIN sys.stats_columns sc ON sc.[object_id] = s.[object_id] AND sc.stats_id = s.stats_id
JOIN sys.columns c ON c.[object_id] = sc.[object_id] AND c.column_id = sc.column_id
JOIN sys.tables t ON s.[object_id] = t.[object_id]
WHERE t.is_ms_shipped = 0
ORDER BY t.name,s.name,c.name;


Testing a prepared statement issued by the Web UI

declare @p1 int
set @p1=0
exec sp_prepexec @p1 output,N'@P0 int,@P1 int,@P2 int,@P3 int,@P4 int',N'
select distinct  top 200 mskey from idmv_jmx_entries VALS WITH (NOLOCK) where Attr_id IN (@P0,@P1,@P2,@P3) AND disabled=0
AND EXISTS (SELECT mcMskey FROM mxiv_entry E WHERE mcEntryTypeId=@P4 and mcMskey=VALS.mskey  and ((mcACEntry=0) or (mcACentry=1 AND
((23 IN (SELECT MemberMskey FROM idmv_members WHERE EntryMskey=E.mcMSKEY)) OR (23 IN (SELECT OwnerExpandedMskey FROM idmv_owners
WHERE EntryMSKEY=E.mcMSKEY)))) or ( mcACEntry=2 AND 23 IN (SELECT OwnerExpandedMskey FROM idmv_owners WHERE EntryMSKEY=E.mcMSKEY))))
',2,4,59,60,4
select @p1


Used and available space per database file

SELECT name ,size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
FROM sys.database_files;


Display role/privilege hierarchies on versions of IdM 7.2 that have the full structure table

select U.mcDisplayName, A.mcDisplayName assignment, L.mcAssignedDirect isDirect,
       SR.mcParentMskey assignmentParentMskey, AP.mcDisplayName assignmentParentName,
       SR.mcRootMskey assignmentRootMskey, AR.mcDisplayName assignmentRootName
from idmv_entry_simple U
inner join mxi_link L on L.mcThisMSkey = U.mcMSkey and L.mcOtherEntryType in (select ocId from mxi_objectclasses where ocName='MX_PRIVILEGE')
left outer join idmv_entry_simple A ON A.mcMskey = L.mcOtherMSKEY
left outer join mxi_structure_root SR ON SR.mcChildMskey = A.mcMSKEY
left outer join idmv_entry_simple AP ON AP.mcMSKEY = SR.mcParentMskey
left outer join idmv_entry_simple AR ON AR.mcMSKEY = SR.mcRootMskey
where U.mcEntryType='MX_PERSON' -- and U.mcMskeyValue = 'some.user'
order by U.mcMskey,A.mcMskey,ap.mcMSKEY,ar.mcMSKEY desc


Display role/privilege hierarchies on versions of IdM 7.2 that don't have the full structure table, by creating a temporary table

DECLARE @MSKEY int, @MSKEYV VARCHAR(255),@ROOTMSKEY int,@level int,@MAXLEVEL int,@Mylevel int, @Lnum int
SET @MAXLEVEL = 20
-- Not setting below values returns all users.
SET @MSKEY = NULL -- or add user mskey, such as 58
SET @MSKEYV = NULL -- or add user mskeyvalue, such as 'User.Test.A'
-- Create temporary table
CREATE TABLE #STRUCT (mcChildMskey INT, mcRootMskey INT, mcParentMskey INT, mcEntryType INT, mcLevel INT)
-- Insert first level links
insert into #STRUCT (mcChildMskey, mcRootMskey, mcParentMskey, mcEntryType, mcLevel)
    select A.mcThisMskey, A.mcOtherMskey, A.mcOtherMskey, A.mcthisentrytype, 1 from mxi_link A with (nolock)
    where not exists (select 1 from mxi_link B with (nolock) where  A.mcOtherMskey = B.mcThisMskey and B.mcLinkType = 1 AND B.mcLinkState = 0)
    and   A.mcLinkType = 1 AND A.mcLinkState = 0
    and   A.mcAttrId in (select attr_id from mxi_attributes with (nolock) where AttrName = 'MXREF_MX_ROLE')
    and   A.mcotherentrytype in (select ocid from mxi_objectclasses with (nolock) where ocName = 'MX_ROLE')
    and   A.mcthisentrytype in (select ocid from mxi_objectclasses with (nolock) where ocName in ('MX_ROLE', 'MX_PRIVILEGE'))
  -- Other level links
  set @Mylevel = 2
  while @Mylevel < @MAXLEVEL
  begin
  insert into #STRUCT (mcChildMskey, mcRootMskey, mcParentMskey, mcEntryType, mcLevel)
  select A.mcThisMskey, A.mcOtherMskey, A.mcOtherMskey, A.mcthisentrytype, @Mylevel from mxi_link A  with (nolock)
  where A.mcLinkType = 1 and A.mcLinkState = 0
  and   A.mcattrid in (select attr_id from mxi_attributes with (nolock) where AttrName = 'MXREF_MX_ROLE')
  and   A.mcotherentrytype in (select ocid from mxi_objectclasses with (nolock) where ocName = 'MX_ROLE')
  and   A.mcthisentrytype in (select ocid from mxi_objectclasses with (nolock) where ocName in ('MX_ROLE', 'MX_PRIVILEGE'))
  and   A.mcOtherMskey in (select B.mcChildMskey from #STRUCT B with (nolock) where B.mcLevel = @Mylevel - 1)
  and   not exists (select 1 from #STRUCT C with (nolock) where A.mcThisMskey = C.mcChildMskey and A.mcOtherMskey = C.mcRootMskey and A.mcOtherMskey = C.mcParentMskey)
  set @Lnum = @@Rowcount
  -- Build inherit link structure
  insert into #STRUCT (mcChildMskey, mcRootMskey, mcParentMskey, mcEntryType, mcLevel)
  select distinct A.mcChildMskey, B.mcRootMskey, A.mcParentMskey, A.mcEntryType, 0 from #STRUCT A with (nolock)
  inner join #STRUCT B with (nolock) on A.mcParentMskey = B.mcChildMskey
  where A.mcLevel > 1
  and   not exists (select 1 from #STRUCT C with (nolock) where A.mcChildMskey = C.mcChildMskey and B.mcRootMskey = C.mcRootMskey and A.mcParentMskey = C.mcParentMskey)
  if @Lnum = 0
  begin
  break
  end
  set @Mylevel = @Mylevel + 1
  end
  -- Update sub tree nodes to level 0.
  update #STRUCT set mcLevel = 0 where mcLevel > 1
  select U.mcDisplayName, A.mcDisplayName assignment, L.mcAssignedDirect isDirect,SR.mcParentMskey assignmentParentMskey,AP.mcDisplayName assignmentParentName, SR.mcRootMskey assignmentRootMskey, AR.mcDisplayName assignmentRootName
  from idmv_entry_simple U
  inner join mxi_link L on L.mcThisMSkey = U.mcMSkey and L.mcOtherEntryType in (select ocId from mxi_objectclasses where ocName='MX_PRIVILEGE')
  left outer join idmv_entry_simple A ON A.mcMskey = L.mcOtherMSKEY
  left outer join #STRUCT SR ON SR.mcChildMskey = A.mcMSKEY
  left outer join idmv_entry_simple AP ON AP.mcMSKEY = SR.mcParentMskey
  left outer join idmv_entry_simple AR ON AR.mcMSKEY = SR.mcRootMskey
  where U.mcEntryType='MX_PERSON'
  AND ((@MSKEYV IS NOT NULL AND U.mcMskeyValue = @MSKEYV) OR (@MSKEY IS NOT NULL AND U.mcMskey = @MSKEY) OR (@MSKEY IS NULL AND @MSKEYV IS NULL))
  order by U.mcMskey,A.mcMskey,ap.mcMSKEY,ar.mcMSKEY desc
-- Clean up
DROP TABLE #STRUCT


ORACLE

 

Measuring query execution time with millisecond resolution:

set time on timing on;
select 1 from DUAL;


Use F5 (Run script) to get timing, result example:

      1

Elapsed: 00:00:00.002

 

List number of sessions per username/host

select s.username as username,
(case when grouping(s.machine) = 1 then '**** All Machines ****' else s.machine end) AS machine,
count(*) as session_count
from v$session s, v$process p
where s.paddr   = p.addr and s.username is not null
group by rollup (s.username, s.machine) order by s.username, s.machine ;


List username, state, host etc for logged in sessions

 

SELECT username, seconds_in_wait, machine, port, terminal, program, module, service_name FROM v$session WHERE type = 'USER';

Add states and some more stuff:

 

SELECT sess.machine,sess.process, sess.status,sess.state, sess.username, sess.schemaname,sess.wait_time,to_char(sess.sql_exec_start,'YYYY-MM-DD HH24:MI:SS'), sess.blocking_session,sql.sql_text
FROM v$session sess, v$sql sql
WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER' and schemaname LIKE 'MXMC%' order by status

 
Listing sessions and blocking sessions

List running queries with basic information:
SELECT sess.process, sess.status, sess.username, sess.schemaname, sql.sql_text FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER'


SELECT sess.username,sess.sid,sess.machine,sess.process,sess.blocking_session,sess.status, sess.username, sess.schemaname,sess.wait_time,sess.sql_exec_start,sql.sql_text FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER' and schemaname LIKE 'MXMC%' order by blocking_session,sid

List statements that are blocking others

SELECT sess.username,sess.sid,sess.machine,sess.process,sess.blocking_session,sess.status, sess.username, sess.schemaname,sess.wait_time,sess.sql_exec_start,sql.sql_text FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id and sess.sid in (SELECT sess.blocking_session
FROM v$session sess
WHERE sess.blocking_session is not null and sess.type = 'USER' and schemaname LIKE 'MXMC%')

SELECT t.status, t.start_time,sess.username,sess.sid,sess.machine,sess.process,sess.blocking_session,sess.status, sess.username, sess.schemaname,sess.wait_time,sess.sql_exec_start,sql.sql_text
FROM v$session sess, v$sql sql , V$TRANSACTION t WHERE sql.sql_id(+) = sess.sql_id and t.ses_addr = sess.saddr and sess.sid in
(SELECT sess.blocking_session FROM v$session sess WHERE sess.blocking_session is not null and sess.type = 'USER' and schemaname LIKE 'MXMC%')

List running queries with additional information

select S.USERNAME, s.sid, s.osuser, t.sql_id, sql_text from v$sqltext_with_newlines t,V$SESSION s
where t.address =s.sql_address and t.hash_value = s.sql_hash_value and s.status = 'ACTIVE' and s.username <> 'SYSTEM' order by s.sid,t.piece

SELECT sess.process, sess.status, sess.username, sess.schemaname,sess.wait_time,sess.sql_exec_start,sess.blocking_session,sql.sql_text
FROM v$session sess, v$sql sql
WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER' and schemaname LIKE 'MXMC%' order by status

select sess.USERNAME, sess.sid, sqlt.sql_id, sqlt.sql_text from v$sqltext_with_newlines sqlt, V$SESSION sess
where sqlt.address = sess.sql_address and sqlt.hash_value = sess.sql_hash_value and sess.status = 'ACTIVE'
and sess.username like 'MVIEWS%' order by sess.sid,sqlt.piece

set lines 400
column "Time left" format A15
column "Time spent" format A15
column message format a100
column username format a20
select inst_id, sid, serial#, username, context,
trunc(time_remaining / 60,2)||' Minutes' "Time left",
trunc(elapsed_seconds / 60,2)||' Minutes' "Time spent",
round(sofar/totalwork*100,2) "% Completed",
message
from gv$session_longops
where sofar != totalwork
and totalwork != 0
order by 1

The percentage calculation in this one was broken (it divided by (totalwork+1)*100 instead of multiplying by 100); corrected version:

SELECT * FROM (select username,opname,sid,serial#,context,sofar,totalwork, round(sofar/(totalwork+1)*100,2) "% Complete" from v$session_longops) WHERE "% Complete" != 100;

List queries averaging more than X seconds elapsed time per execution (hardcoded to 10 seconds here)

set lines 400
SELECT cast(ROUND(A.ELAPSED_TIME / A.EXECUTIONS / 1000000) as number(10)) SQL_AVG,
       cast(A.SQL_ID as varchar(20)) SQL_ID,
       cast(DBMS_LOB.SUBSTR(A.SQL_FULLTEXT,300) as varchar(300)) SQL_TEXT,
       cast(A.EXECUTIONS as number(10)) EXECCOUNT,
        cast(NVL(S.PROGRAM,A.MODULE) as varchar(20)) PNAME,
       cast(NVL(S.USERNAME,A.PARSING_SCHEMA_NAME) as varchar(20)) USERID
  FROM  V$SQLAREA A, V$SESSION S
  WHERE A.SQL_ID = S.SQL_ID(+)
    AND A.EXECUTIONS > 0
    AND ROUND(A.ELAPSED_TIME / A.EXECUTIONS / 1000000) > 10

Profiling procedures

 

SQL*Plus part:

sqlplus system/abcd1234 AS SYSDBA
@D:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\ADMIN\tracetab.sql
@D:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\ADMIN\dbmshptab.sql
grant execute on DBMS_HPROF to mxmc_oper

The rest can be done in SQL Developer:

CREATE OR REPLACE directory PLSHPROF_DIR as 'C:\temp';
grant READ, WRITE on directory PLSHPROF_DIR to MXMC_OPER;
grant create table to MXMC_OPER;
grant create sequence to MXMC_OPER;
grant unlimited tablespace to MXMC_OPER;

Then profile procedures from SQL Developer, or analyze a profiler output file with DBMS_HPROF.ANALYZE:

DECLARE
  v_runid NUMBER;
BEGIN
  -- LOCATION must match the directory object created above (PLSHPROF_DIR)
  v_runid := DBMS_HPROF.ANALYZE (LOCATION => 'PLSHPROF_DIR',
                                 FILENAME => 'myteststuff.txt');
  DBMS_OUTPUT.PUT_LINE('Run ID: ' || v_runid);
END;

More info: http://dbaora.com/plsql-hierarchical-profiler-oracle-database-11g-release-2-11-2/

Undo tablespace and transaction stuff

SELECT SUM(a.bytes) "UNDO_SIZE"
  FROM v$datafile a, v$tablespace b, dba_tablespaces c
WHERE c.contents = 'UNDO' AND c.status = 'ONLINE' AND b.name = c.tablespace_name AND a.ts# = b.ts#;

select sum(used_ublk) from v$transaction;

select * from dba_data_files where tablespace_name='UNDOTBS1';

select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time,
to_char(end_time, 'DD-MON-RR HH24:MI') end_time, tuned_undoretention
from v$undostat order by end_time;

SELECT TO_CHAR(BEGIN_TIME, 'MM/DD/YYYY HH24:MI:SS') BEGIN_TIME,
         TO_CHAR(END_TIME, 'MM/DD/YYYY HH24:MI:SS') END_TIME,
         UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY AS "MAXCON"
         FROM v$UNDOSTAT WHERE rownum <= 144;

select cast(file_name as varchar2(60)) filename,cast(tablespace_name as varchar2(10)) tblspc,cast(bytes as varchar2(10)) tblsize,
cast(autoextensible as varchar2(5)) isAutoExt, cast(maxbytes as varchar2(12)) maxSize,cast(increment_by as varchar2(6)) autoIncr
FROM DBA_DATA_FILES;

List index states

select index_name,status,num_rows,sample_size,last_analyzed,visibility from all_indexes where owner = 'MXMC_OPER' ORDER BY last_analyzed

 

Gather statistics for a table

exec dbms_stats.gather_table_stats('MXMC_OPER','mxi_values',no_invalidate=>false);

begin
  dbms_stats.gather_schema_stats (ownname    => 'mViews_oper', estimate_percent=>dbms_stats.auto_sample_size, cascade=>TRUE);
end;

Index state, size, samplesize …

select index_name,status,num_rows,sample_size,last_analyzed,visibility
from all_indexes where owner = 'MXMC_OPER' --and status='UNUSABLE';

Execution Plan Stuff

 

List plans for given schema

SELECT * FROM
(SELECT  sql_fulltext,sql_id,child_number,disk_reads,  executions,first_load_time,last_load_time
FROM    v$sql where parsing_schema_name = 'MXMC_OPER'
ORDER BY elapsed_time DESC)
WHERE ROWNUM < 40

Plan for adhoc statement:

EXPLAIN PLAN for
select distinct mskey from (
SELECT mskey from mxi_values VALS where  SEARCHVALUE='MX_PRIVILEGE' AND
ATTR_ID=1  AND disabled=0 AND mskey in
(select mskey from mxi_values VALS where searchvalue  like
'%uncritical%EEP%' AND Attr_id IN (2, 4, 18))
) where rownum < 200;
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

Plan for adhoc statement with output formatting

EXPLAIN PLAN FOR
select distinct(mskey) from mxmc_oper.mxiv_values where searchvalue = 'POSITION ID' and attr_id in (select attr_id from mxi_attributes where attrname = 'Z_MX_ROLE_TYPE');
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'BASIC'));

Explain plan setting statement id and getting plan:

EXPLAIN PLAN
SET STATEMENT_ID = 'SAPTEST_1' FOR
select * from (
  SELECT mskey from idmv_jmx_entries VALS where mskey in
    (select distinct mskey from mxi_values where SearchValue ='LAMBERT' AND Attr_ID=60)
  AND mskey in
    (select distinct mskey from mxi_values where SearchValue ='BOSKAMP' AND Attr_ID=59)
  AND disabled=0
);
SET PAGESIZE= 1000;
SET LINESIZE= 500;
SET LONG=100000;
COL xplan format a100;
SELECT dbms_xplan.build_plan_xml(statement_id => 'SAPTEST_1') AS XPLAN FROM dual;

Find the plan for a statement:

SELECT
  --sql_fulltext,sql_id,cpu_time,elapsed_time,user_io_wait_time,disk_reads,executions
  *
FROM    v$sql where sql_fulltext like '%select distinct mskey from (SELECT mskey from mxiv_values VALS where%'

Get the SQL Monitor report (HTML) for a given sql_id:

select DBMS_SQLTUNE.REPORT_SQL_MONITOR(type=>'HTML',report_level=>'ALL',sql_id=>'1zf0h9sw688bx') as report FROM dual;

Execution history for a sql_id from AWR

select * from DBA_HIST_SQLSTAT where sql_id = 'a2zqghuq57f2y'

Executions for a timeframe

select *
from   dba_hist_sqlstat t, dba_hist_snapshot s
where  t.snap_id = s.snap_id
and    t.dbid = s.dbid
and    t.instance_number = s.instance_number
and    s.begin_interval_time between trunc(sysdate)-1 and trunc(sysdate)

Average elapsed time per statement from AWR history:

select cast(st.sql_text as varchar(400)), sum(t.elapsed_time_delta/1000000)/sum(t.executions_delta)
from   dba_hist_sqlstat t, dba_hist_snapshot s, dba_hist_sqltext st
where  t.snap_id = s.snap_id and st.sql_id = t.sql_id
and    t.dbid = s.dbid
and    t.instance_number = s.instance_number
and    s.begin_interval_time between trunc(sysdate)-1 and trunc(sysdate) -- yesterday's stats
group  by cast(st.sql_text as varchar(400))

Empty/clear buffer, pools, execution plans

ALTER SYSTEM FLUSH BUFFER_CACHE;
alter system flush shared_pool;

Search using locally defined var

DEFINE vSearchString = "'%QUALITY%'";
select distinct mskey from (SELECT mskey from mxi_values VALS where SEARCHVALUE='MX_PERSON' AND ATTR_ID=1  AND disabled=0
AND mskey in (select mskey from mxi_values VALS where searchvalue like &&vSearchString AND Attr_id IN (2,4,5))) where rownum
< 200;

var vSearchString varchar2(30);
exec :vSearchString := '%QUALITY%STA%';
--print vSearchString;
select distinct mskey from (SELECT mskey from mxi_values VALS where SEARCHVALUE='MX_PERSON' AND ATTR_ID=1  AND disabled=0
AND mskey in (select mskey from mxi_values VALS where searchvalue like :vSearchString AND Attr_id IN (2,4,5))) where rownum
< 200;

IDM performance metrics and counters

 

I've made an experimental procedure that collects a number of what I think are useful counters about the provisioning queue and the state of the IdM system. It can be used to detect bottlenecks, generate performance statistics and load predictions, and it is a good utility for troubleshooting whether a system is in a halt situation or the queue is moving along. It uses STUFF, PIVOT and some other query constructs I've been testing for my own sake, and as such it may not be optimized.

 

It is also currently SQL Server only, and as I'm leaving SAP the chances that I will be porting this to Oracle, DB2 or HANA are minimal. You are most welcome to do so yourself, as well as modifying and improving it. The queries this procedure uses are mostly already documented in my other blogs, and I've also added some that read CPU usage, which I found online at SQLPerformance.wordpress.com.

 

The idea is that this procedure is run every X seconds, minutes or hours and the result can be used to build graphs showing what (if anything) is going on during a performance test, deployment or when troubleshooting system performance. I'll show how the results look first, then how the procedure is used. Interpreting the results is up to you and the situation you're in, but for metrics like "ready to run" and "running" it is generally better to have many "running" and few "ready to run".

 

This will be one of my very last blog posts here on SDN and I hope you liked it :-)

 

A test scenario and example data

 

I have a test job that runs 1000 entries through an obstacle course of conditional, switch and ordered tasks, with some child tasks and wait-for functions thrown in for good measure. It takes about 4 minutes to complete, so during the test I gather statistics every 5 seconds using this little statement that I ran in SQL Server Management Studio:

 

DECLARE @STOPTIME DATETIME
DECLARE @INTERVAL VARCHAR(8)
SET @STOPTIME = '2014-08-06 14:50:00'
SET @INTERVAL = '00:00:05'
WHILE GETDATE() < @STOPTIME
BEGIN
  BEGIN TRANSACTION
  exec mc_ProvQStats 0,0,0 -- Parameters for gather stats,no listing
  COMMIT
  WAITFOR DELAY @INTERVAL
END
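Once a few collections exist, a single metric can be pulled as a time series for graphing in Excel. This is just a sketch against the mc_provqueuestats table defined further down; the mItem value must match one of the names in mc_provqueuestats_desc:

```sql
-- Time series for one metric, oldest first; chart mDT against mCount
SELECT mDT, mCount
FROM mc_provqueuestats WITH(NOLOCK)
WHERE mItem = 'Provision queue size'
ORDER BY mDT ASC
```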

You could also execute the procedure from a job scheduled to run every X seconds, but then you depend on the system actually running and having the capacity to run the job at the specified intervals while under load, which might not work during a performance test or a halt situation.

 

The result produces "interesting" stuff like this when put into Excel:

statsTestQueueProcessing.png

 

Here is a sample run where I killed the dispatcher in the middle, showing the stats flat-lining as nothing was processed for a minute or two, and the active dispatcher count changing from 1 to 0 in the graph for the active dispatcher count metric...

statsTestQueueProcessingDeadDispatcher.png

 

Using the procedure

 

mc_ProvQStats <OP>, <LISTLEVEL>,<RETURN ROWS>

 

OP = 0 - Gather statistics only, return no data

OP = 1 - Gather statistics, return the values

OP = 2 - Don't gather statistics, return data as specified by parameters LISTLEVEL and RETURN ROWS

 

LISTLEVEL = 0 - return nothing (mc_provqueuestats_desc.mShow = 0)

LISTLEVEL = 1 - return entries where mc_provqueuestats_desc.mShow <= 1

LISTLEVEL = 2 - return entries where mc_provqueuestats_desc.mShow <= 2

 

RETURN ROWS = Number of rows to return, newest collection first.

 

Examples:

Just collect stats: exec mc_ProvQStats 0,0,0

Collect stats and return them: exec mc_ProvQStats 1,0,0

List the last 10 collections: exec mc_ProvQStats 2,1,10

 

statsUsageExample.png
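If you want two counters side by side per collection run (for instance to compare ready-to-run against running), a conditional-aggregation sketch over mc_provqueuestats could look like this; the counter names are assumptions and must match mc_provqueuestats_desc.mItem:

```sql
-- One row per collection run with two counters pivoted into columns
SELECT mId,
       MAX(CASE WHEN mItem = 'Provision jobs ready to run' THEN mCount END) AS ready_to_run,
       MAX(CASE WHEN mItem = 'Provision jobs running' THEN mCount END) AS running
FROM mc_provqueuestats WITH(NOLOCK)
GROUP BY mId
ORDER BY mId
```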

 

 

 

The counters and Metrics collected

 

This is the list of metrics and counters it collects and can display. It is stored in the table mc_provqueuestats_desc

 

Starting analysis: The datetime when this analysis-run was started and the parameters used. Not useful at the moment but I had plans :-)
Host CPU usage: Total CPU usage on the host computer. -1 indicates missing grant, GRANT VIEW SERVER STATE TO mxmc_rt
SQL Server CPU usage: CPU usage by the SQL process. -1 indicates missing grant, GRANT VIEW SERVER STATE TO mxmc_rt
Provision queue size: The total number of all entries in the queue
Provision queue unique audits: The number of unique audits in the queue
Task execution delta: Number of tasks executed/added to ext audit since previous measurement
Ordered tasks in queue: The total number of ordered tasks in the queue
Ordered tasks ready to run in queue: The total number of ordered tasks ready to run in the queue
Unordered tasks in queue: The total number of unordered tasks in the queue
Unordered tasks ready to run in queue: The total number of unordered tasks ready to run in the queue
Conditional tasks in queue: The total number of conditional tasks in the queue
Conditional tasks ready to run in queue: The total number of conditional tasks ready to run in the queue
Switch tasks in queue: The total number of switch tasks in the queue
Switch tasks ready to run in queue: The total number of switch tasks ready to run in the queue
Approval tasks in queue: The total number of approval tasks in the queue
Approval tasks ready to run in queue: The total number of approval tasks ready to run in the queue
Attestation tasks in queue: The total number of attestation tasks in the queue
Attestation tasks ready to run in queue: The total number of attestation tasks ready to run in the queue
Action tasks in queue: Total number of action tasks in the queue
Action tasks ready to run in queue: Total number of action tasks ready to run in the queue
Jobs ready to run: The number of jobs that are ready to run
Jobs running: The number of currently running jobs
Provision jobs running: The number of currently running provision jobs
Provision jobs ready to run: The number of provision jobs ready to run in the queue
Runtimes allowed by system: The maximum number of runtimes allowed. 0 means no total system limit but it can still be limited per dispatcher
Runtimes active: The number of jobs marked as running
Threshold violation delta: The number of new entries in the execution threshold log since previous statistic collection
Jobs in error state: The number of jobs that are in error state and cannot be run
Dispatchers active: The number of dispatchers reported active the last <reload frequency> seconds
Dispatchers inactive: The number of dispatchers not reported active the last <reload frequency> seconds

 

The mc_provqueuestats_desc table

 

Column: Description
mItem: The item name
mType: The item data type, C=Counter(INT), DT=DateTime, V=Varchar
mDesc: Description of the item
mShow: NULL=don't show when listing results, numbers indicate the report level to include it on
mOrder: (identity), column order in listing

 

Sample data:

mItem | mType | mDesc | mShow | mOrder
Starting analysis | DT | The datetime when this analysis-run was started | 1 | 1
Host CPU usage | C | Total CPU usage on the host computer. -1 indicates missing grant | 1 | 2
SQL Server CPU usage | C | CPU usage by the SQL process. -1 indicates missing grant | 1 | 3
Max auditid value | C | The maximum auditid value at the time of measurement | NULL | 4
Provision queue size | C | The total number of all entries in the queue | 1 | 6

 

The mc_provqueuestats table

 

The metric data is collected in this table and it is also quite simple:

 

Column: Description
mId: ID for the metric collection run
mDT: DateTime for collection of the value
mItem: Item name for the collected metric - MUST MATCH mc_provqueuestats_desc.mItem
mValue: Optional VARCHAR value collected
mCount: Optional INT value collected
mDateTime: Optional DATETIME value collected

 

Sample data:

mId | mDT | mItem | mValue | mCount | mDateTime
1 | 2014-08-06 14:39:55.010 | Starting analysis | 0:0 | NULL | 2014-08-06 14:39:55.010
1 | 2014-08-06 14:39:55.063 | Host CPU usage | NULL | 10 | NULL
1 | 2014-08-06 14:39:55.063 | Provision queue size | NULL | 0 | NULL
2 | 2014-08-06 14:40:00.070 | Starting analysis | 0:0 | NULL | 2014-08-06 14:40:00.070
2 | 2014-08-06 14:40:00.110 | Host CPU usage | NULL | 10 | NULL
2 | 2014-08-06 14:40:00.110 | Provision queue size | NULL | 0 | NULL
3 | 2014-08-06 14:40:05.110 | Starting analysis | 0:0 | NULL | 2014-08-06 14:40:05.110
3 | 2014-08-06 14:40:05.150 | Host CPU usage | NULL | 10 | NULL
3 | 2014-08-06 14:40:05.150 | Provision queue size | NULL | 0 | NULL

(Note that each collection (mId) will really have about 33 rows of data)
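To read the latest collection in report order with descriptions, a join sketch between the two tables (as defined below) could look like this:

```sql
-- Latest collection run, joined to the description table for labels and ordering
SELECT s.mItem, s.mCount, s.mValue, s.mDateTime, d.mDesc
FROM mc_provqueuestats s WITH(NOLOCK)
JOIN mc_provqueuestats_desc d WITH(NOLOCK) ON d.mItem = s.mItem
WHERE s.mId = (SELECT MAX(mId) FROM mc_provqueuestats WITH(NOLOCK))
ORDER BY d.mOrder
```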

 

 

The procedures, tables and optional indexes and grants

 

Optional indexes

 

Some of the metrics require additional indexes to work. If you enable these, make sure that you change line 12 in the procedure mc_ProvQStats from

SET @P_GOTINDEXES = 0

To

SET @P_GOTINDEXES = 1

 

Don't do this change without the following indexes as that will cause some of the metrics collection queries to do tablescans, which are bad.

 

------------------------------------
-- ADDITIONAL INDEXES             --
-- Update procedure mc_ProvQStats --
-- SET @P_GOTINDEXES = 1 to use   --
------------------------------------
CREATE NONCLUSTERED INDEX [IX_MXI_EXT_AUDIT_DATE] ON [dbo].[MXP_Ext_Audit]
(
  [Aud_datetime] ASC
)
GO
CREATE NONCLUSTERED INDEX [IX_MC_EXEC_STAT_DT] ON [dbo].[mc_exec_stat]
(
  [mcDateTime] ASC
)
GO
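Before setting @P_GOTINDEXES = 1 you can verify the indexes actually exist; a quick check against sys.indexes (assuming the index names from the script above were kept):

```sql
-- Should return both index names if they were created
SELECT name FROM sys.indexes
WHERE name IN ('IX_MXI_EXT_AUDIT_DATE', 'IX_MC_EXEC_STAT_DT')
```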



 

CPU Usage statistics, additional grant

 

To get the host and SQL Server CPU usage, the procedure must run as a user that has been granted VIEW SERVER STATE, so run the following for oper, RT or whichever logon you're using:

 

GRANT VIEW SERVER STATE TO mxmc_rt
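To verify the grant took effect, run a quick query against a server-level DMV as that user; it fails with a permission error when VIEW SERVER STATE is missing (which is also why the procedure reports -1 for the CPU counters instead of failing):

```sql
-- Succeeds only when the logon has VIEW SERVER STATE
SELECT sqlserver_start_time FROM sys.dm_os_sys_info
```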



 

Creating the tables

 

if exists (select * from sysobjects where id = object_id(N'[dbo].[mc_provqueuestats]') and
  OBJECTPROPERTY(id, N'IsTable') = 1)
    drop table [dbo].[mc_provqueuestats]
GO
CREATE TABLE [dbo].[mc_provqueuestats](
  [mId] [int] NOT NULL,
  [mDT] [datetime] NULL,
  [mItem] [varchar](255) NULL,
  [mValue] [varchar](512) NULL,
  [mCount] [int] NULL,
  [mDateTime] [datetime] NULL
) ON [PRIMARY]
GO
GRANT SELECT, INSERT, DELETE ON dbo.mc_provqueuestats TO mxmc_rt_role
GO
CREATE NONCLUSTERED INDEX [IX_MCPROVQSTAT_IDITDT] ON [dbo].[mc_provqueuestats]
(
  [mId] ASC,
  [mItem] ASC,
  [mDT] ASC
)
GO
if exists (select * from sysobjects where id = object_id(N'[dbo].[mc_provqueuestats_desc]') and
  OBJECTPROPERTY(id, N'IsTable') = 1)
    drop table [dbo].[mc_provqueuestats_desc]
GO
CREATE TABLE [dbo].[mc_provqueuestats_desc](
  [mItem] [varchar](255) NULL,
  [mType] [varchar](2) NULL,
  [mDesc] [varchar](512) NULL,
  [mShow] [int] NULL,
  [mOrder] [int] IDENTITY(1,1) NOT NULL
) ON [PRIMARY]
GO
GRANT SELECT ON dbo.mc_provqueuestats_desc TO mxmc_rt_role
GO
CREATE NONCLUSTERED INDEX [IX_MCPROVQSTATDESC_ITDE] ON [dbo].[mc_provqueuestats_desc]
(
  [mItem] ASC,
  [mDesc] ASC,
  [mShow] ASC
)
GO
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Starting analysis','DT','The datetime when this analysis-run was started','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Host CPU usage','C','Total CPU usage on the host computer. -1 indicates missing grant, GRANT VIEW SERVER STATE TO mxmc_rt','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('SQL Server CPU usage','C','CPU usage by the SQL process. -1 indicates missing grant, GRANT VIEW SERVER STATE TO mxmc_rt','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Max auditid value','C','The maximum auditid value at the time of measurement',NULL)
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc)   VALUES ('Max extaudit datetime value','DT','The maximum datetime in the extended audit at the time of measurement')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Provision queue size','C','The total number of all entries in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Provision queue unique audits','C','The number of unique audits in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Task execution delta','C','Number of tasks executed/added to ext audit previous measurement','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Ordered tasks in queue','C','The total number of ordered tasks in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Ordered tasks ready to run in queue','C','The total number of ordered tasks ready to run in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Unordered tasks in queue','C','The total number of unordered tasks in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Unordered tasks ready to run in queue','C','The total number of unordered tasks ready to run in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Conditional tasks in queue','C','The total number of conditional tasks in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Conditional tasks ready to run in queue','C','The total number of conditional tasks ready to run in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Switch tasks in queue','C','The total number of switch tasks in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Switch tasks ready to run in queue','C','The total number of switch tasks ready to run in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Approval tasks in queue','C','The total number of approval tasks in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Approval tasks ready to run in queue','C','The total number of approval tasks ready to run in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Attestation tasks in queue','C','The total number of attestation tasks in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Attestation tasks ready to run in queue','C','The total number of attestation tasks ready to run in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Action tasks in queue','C','Total number of tasks in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Action tasks ready to run in queue','C','Total number of tasks ready to run in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Jobs ready to run','C','The number of jobs that are ready to run','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Jobs running','C','The number of currently running jobs','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Provision jobs ready to run','C','The number of provision jobs ready to run in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Provision jobs running','C','The number of currently running provision jobs','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Runtimes allowed by system','C','The maximum number of runtimes allowed. 0 means no total system limit but it can still be limited per dispatcher','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Runtimes active','C','The number of jobs marked as running','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc)   VALUES ('Threshold violation max timestamp','DT','The highest datatime of execution threshold logged')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Threshold violation delta','C','The number of new entries in the execution threshold log since previous statistic collection','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Jobs in error state','C','The number of jobs that are in error state and can not be run','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Dispatchers active','C','The number of dispatchers reported active the last <reload frequency> seconds','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Dispatchers inactive','C','The number of dispatchers not reported active the last <reload frequency> seconds','2')
GO



Creating the procedure

 

CREATE PROCEDURE [dbo].[mc_ProvQStats]
    @P_OP int,       -- 0=Just add stats to table, 1=Add stats and list collected data, 2=List previous collected data
  @P_LISTLEVEL INT,-- 0=Dont list, 1=basic, 2=all
  @P_LISTROWS  INT -- Number of rows to list
AS
  DECLARE 
  @L_C INT,@L_I INT, @L_V VARCHAR(255), @L_DT DATETIME, @L_MID INT, @L_QUEUESIZE INT, @L_QS_ATTST INT, @L_QS_APPR INT, @L_QS_SW INT, @L_QS_CDTL INT, @L_QS_UNORD INT, @L_QS_ORDRD INT, @L_QS_ACT INT,
  @L_AUDITID INT, @L_CHILDAUDIT INT, @L_CHILDAUDITINQUEUE INT, @L_CHILDAUDITMSG VARCHAR(255),@L_LEVEL VARCHAR(255),
  @L_QAUDITREF INT,@L_QACTIONID INT,@L_QACTIONTYPE INT,@L_QMSKEY INT,@L_QWAITCOUNT INT,@L_QEXECTIME DATETIME,@L_QUPDTIME DATETIME,@L_QREPOSITORYID INT,@L_QMSG VARCHAR(255),
  @L_COLUMNS NVARCHAR(MAX), @L_QUERY NVARCHAR(MAX), @L_LIMITER VARCHAR(255), @ts_now bigint, @L_HOSTCPU INT, @L_SQLCPU INT, @P_GOTINDEXES INT
  SET @P_GOTINDEXES = 0
    SET NOCOUNT ON
  SET @L_MID = (SELECT ISNULL(MAX(MID),0)+1 FROM mc_provqueuestats WITH(NOLOCK))
  IF @P_OP NOT IN (0,1,2) AND @P_LISTLEVEL NOT IN (1,2)
  BEGIN
  print 'Illegal operation ' + convert(varchar,@P_OP) + ' - 0=List only)' --, 1 (retry without provisioning), or 2 (retry with provisioning))'
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mValue) VALUES (@L_MID,getdate(),'Invalid parameter(s)',cast(@P_OP as varchar)+':'+cast(@P_LISTROWS as varchar))
  RETURN
  END
  IF @P_OP IN (0,1)
  BEGIN
  -----------------------
  -- GATHER STATISTICS --
  -----------------------
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mValue,mDateTime) VALUES (@L_MID,getdate(),'Starting analysis',cast(@P_OP as varchar)+':'+cast(@P_LISTROWS as varchar),getdate())
  --------------------------------
  -- SETTINGS AND MISC COUNTERS --
  --------------------------------
  -- Get max allowed runtimes
  SET @L_C = (SELECT CAST(VARVALUE as INT) FROM MC_GLOBAL_VARIABLES WITH(NOLOCK) WHERE VARNAME = 'MX_MAX_CONCURRENT_RUNTIME_ENGINES')
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Runtimes allowed by system',@L_C)
  -- Get total running runtimes
  SET @L_C = (SELECT COUNT(*) FROM MC_JOBS WITH(NOLOCK) WHERE STATE=2)
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Runtimes active',@L_C)
  -- THESE REQUIRE ADDITIONAL INDEXES...
  IF @P_GOTINDEXES=1
  BEGIN
  --Execution threshold violation delta
  SET @L_DT = (SELECT mDateTime FROM mc_provqueuestats WITH(NOLOCK) WHERE mid = @L_MID -1 AND mItem = 'Threshold violation max timestamp')
  IF @L_DT IS NOT NULL SET @L_C = (SELECT COUNT(mcDateTime) FROM mc_exec_stat WITH(NOLOCK) WHERE mcDateTime > @L_DT)
  ELSE SET @L_C = 0
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Threshold violation delta',@L_C)
  --Execution threshold violation max timestamp
  SET @L_DT = (SELECT MAX(mcDateTime) FROM mc_exec_stat)
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mDateTime) VALUES (@L_MID,getdate(),'Threshold violation max timestamp',@L_DT)
  -- Get current max ext-audit time
  SET @L_DT = (SELECT MAX(AUD_DATETIME) FROM MXP_EXT_AUDIT WITH(NOLOCK))
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mDateTime) VALUES (@L_MID,getdate(),'Max extaudit datetime value',@L_DT)
  SET @L_DT = (SELECT mDateTime FROM mc_provqueuestats WITH(NOLOCK) WHERE mid = @L_MID -1 AND mItem = 'Max extaudit datetime value')
  IF @L_DT IS NOT NULL SET @L_C = (SELECT COUNT(AUD_DATETIME) FROM MXP_EXT_AUDIT WITH(NOLOCK) WHERE AUD_DATETIME > @L_DT)
  ELSE SET @L_C = 0
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Task execution delta',@L_C)
  END
  -- JOBS IN ERROR STATE
  SET @L_C = (SELECT COUNT(*) FROM MC_JOBS WITH(NOLOCK) WHERE STATE = -1)
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Jobs in error state',@L_C)
  -- JOBS READY TO RUN
  SET @L_C = (SELECT COUNT(*) FROM MC_JOBS WITH(NOLOCK) WHERE STATE = 1 AND PROVISION = 0 AND ScheduledTime < getdate())
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Jobs ready to run',@L_C)
  -- RUNNING JOBS
  SET @L_C = (SELECT COUNT(*) FROM MC_JOBS WITH(NOLOCK) WHERE STATE = 2 AND PROVISION = 0)
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Jobs running',@L_C)
  -- PROVISION JOBS READY TO RUN
  SET @L_C = (SELECT COUNT(*) FROM MC_JOBS J WITH(NOLOCK)WHERE STATE=1 AND PROVISION = 1 AND JOBGUID IN 
  (SELECT JOBGUID FROM MXP_TASKS T WITH(NOLOCK)WHERE T.TASKID IN (SELECT DISTINCT(P.ActionID) FROM MXP_PROVISION P WITH(NOLOCK) WHERE P.ActionType=0 AND P.State=2) )  )
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Provision jobs ready to run',@L_C)
  -- RUNNING PROVISION JOBS
  SET @L_C = (SELECT COUNT(*) FROM MC_JOBS WHERE STATE = 2 AND PROVISION = 1)
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Provision jobs running',@L_C)
  -- DISPATCHERS APPEARING TO BE INACTIVE. Does not refresh exactly at reload time, so double it
  SET @L_C = (SELECT count(*) from MC_Dispatcher WITH(NOLOCK) WHERE last_visited < dateadd(ss,(RELOAD_FREQUENCY*-2),getdate()) )
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Dispatchers inactive',@L_C)
  -- DISPATCHERS APPEARING TO BE ACTIVE. 
  SET @L_C = (SELECT count(*)  from MC_Dispatcher WITH(NOLOCK) WHERE last_visited > dateadd(ss,(RELOAD_FREQUENCY*-2),getdate()) )
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Dispatchers active',@L_C)
  -- CPU Usage
  BEGIN TRY
  SELECT @ts_now = cpu_ticks / (cpu_ticks/ms_ticks)  FROM sys.dm_os_sys_info
  SELECT top 1 
  @L_HOSTCPU = CASE WHEN system_cpu_utilization_post_sp2 IS NOT NULL THEN system_cpu_utilization_post_sp2 ELSE system_cpu_utilization_pre_sp2 END, -- AS system_cpu_utilization,
  @L_SQLCPU = CASE WHEN sql_cpu_utilization_post_sp2 IS NOT NULL THEN sql_cpu_utilization_post_sp2 ELSE sql_cpu_utilization_pre_sp2 END-- AS sql_cpu_utilization
  FROM 
  (
  SELECT record.value('(Record/@id)[1]', 'int') AS record_id, DATEADD (ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS EventTime,
  100-record.value('(Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_cpu_utilization_post_sp2,
  record.value('(Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_utilization_post_sp2 , 
  100-record.value('(Record/SchedluerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_cpu_utilization_pre_sp2,
  record.value('(Record/SchedluerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_utilization_pre_sp2
  FROM (SELECT timestamp, CONVERT (xml, record) AS record FROM sys.dm_os_ring_buffers 
  WHERE ring_buffer_type = 'RING_BUFFER_SCHEDULER_MONITOR' AND record LIKE '%<SystemHealth>%') AS t
  ) AS t ORDER BY record_id desc
  END TRY
  BEGIN CATCH
  SET @L_HOSTCPU = -1
  SET @L_SQLCPU = -1
  END CATCH
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Host CPU usage',@L_HOSTCPU)
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'SQL Server CPU usage',@L_SQLCPU)
  -----------------
  -- PROVQ STUFF --
  -----------------
  -- Get current max auditid
  SET @L_C = ( SELECT MAX(AUDITID) FROM MXP_AUDIT WITH(NOLOCK) )
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Max auditid value',@L_C)
  SET @L_QUEUESIZE = ( SELECT COUNT(*) FROM MXP_PROVISION WITH(NOLOCK) )
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Provision queue size',@L_QUEUESIZE)
  SET @L_C = ( SELECT COUNT(DISTINCT(AUDITREF)) FROM MXP_PROVISION WITH(NOLOCK) )
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Provision queue unique audits',@L_C)
  DECLARE C_PQ_ALL CURSOR LOCAL FAST_FORWARD READ_ONLY FOR 
  SELECT COUNT(P.MSKEY) C,T.ActType ActionType FROM MXP_ActionType T WITH (NOLOCK) LEFT OUTER JOIN MXP_PROVISION P  WITH (NOLOCK)ON T.ActType=P.ACTIONTYPE AND P.MSKEY IS NOT NULL GROUP BY T.ActType
  OPEN C_PQ_ALL
  FETCH NEXT FROM C_PQ_ALL INTO @L_QUEUESIZE, @L_I
  WHILE (@@Fetch_status = 0)
  BEGIN
  IF @L_I =  0 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Action tasks in queue',@L_QUEUESIZE)
  IF @L_I = -1 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Ordered tasks in queue',@L_QUEUESIZE)
  IF @L_I = -2 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Unordered tasks in queue',@L_QUEUESIZE)
  IF @L_I = -3 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Conditional tasks in queue',@L_QUEUESIZE)
  IF @L_I = -4 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Switch tasks in queue',@L_QUEUESIZE)
  IF @L_I = -5 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Approval tasks in queue',@L_QUEUESIZE)
  IF @L_I = -6 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Attestation tasks in queue',@L_QUEUESIZE)
  FETCH NEXT FROM C_PQ_ALL INTO @L_QUEUESIZE, @L_I
  END
  CLOSE C_PQ_ALL
  DEALLOCATE C_PQ_ALL
  DECLARE C_PQ_ALL CURSOR LOCAL FAST_FORWARD READ_ONLY FOR 
  SELECT COUNT(P.MSKEY) C,T.ActType ActionType FROM MXP_ActionType T WITH (NOLOCK) LEFT OUTER JOIN MXP_PROVISION P  WITH (NOLOCK) ON T.ActType=P.ACTIONTYPE AND P.STATE=2 AND P.MSKEY IS NOT NULL GROUP BY T.ActType
  --SELECT COUNT(*) c ,ActionType FROM MXP_PROVISION WITH(NOLOCK) WHERE STATE=2 GROUP BY ActionType
  OPEN C_PQ_ALL
  FETCH NEXT FROM C_PQ_ALL INTO @L_QUEUESIZE, @L_I
  WHILE (@@Fetch_status = 0)
  BEGIN
  IF @L_I = 0 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Action tasks ready to run in queue',@L_QUEUESIZE)
  IF @L_I = -1 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Ordered tasks ready to run in queue',@L_QUEUESIZE)
  IF @L_I = -2 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Unordered tasks ready to run in queue',@L_QUEUESIZE)
  IF @L_I = -3 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Conditional tasks ready to run in queue',@L_QUEUESIZE)
  IF @L_I = -4 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Switch tasks ready to run in queue',@L_QUEUESIZE)
  IF @L_I = -5 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Approval tasks ready to run in queue',@L_QUEUESIZE)
  IF @L_I = -6 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Attestation tasks ready to run in queue',@L_QUEUESIZE)
  FETCH NEXT FROM C_PQ_ALL INTO @L_QUEUESIZE, @L_I
  END
  CLOSE C_PQ_ALL
  DEALLOCATE C_PQ_ALL
  END
  ---------------------
  -- LIST STATISTICS --
  ---------------------
  IF @P_OP IN (1,2) AND @P_LISTLEVEL > 0 -- List current or all
  BEGIN
  SET @L_QUERY = 'SELECT @L_COLUMNS = STUFF((SELECT '','' + QUOTENAME(mItem) FROM [mc_provqueuestats_desc] where mShow <= '+CAST(@P_LISTLEVEL as varchar)+' order by mOrder FOR XML PATH(''''), TYPE).value(''.'', ''NVARCHAR(MAX)''),1,1,'''') '
  --print @L_QUERY
  exec sp_executeSQL @L_QUERY,N'@L_COLUMNS NVARCHAR(MAX) OUT',@L_COLUMNS OUT
  --PRINT @L_COLUMNS
  IF @P_OP = 1 SET @L_LIMITER = 'WHERE mId = '+cast(@L_MID as varchar)+' AND'
  ELSE SET @L_LIMITER = ' WHERE '
  set @L_QUERY = 'SELECT TOP '+CAST(ISNULL(@P_LISTROWS,1) AS VARCHAR)+' mId, ' + @L_COLUMNS + ' FROM 
            (select S.mId,S.mItem,
  case
  when D.mType = ''V'' then cast(mValue as varchar)
  when D.mType = ''C'' then cast(mCount as varchar)
  when D.mType = ''DT'' then convert(varchar,mDateTime,120)
  else ''0''
  end as mValue
                from mc_provqueuestats S left outer join mc_provqueuestats_desc D on S.mItem = D.mItem '+ISNULL(@L_LIMITER,' ')+' D.mShow <= '+CAST(ISNULL(@P_LISTLEVEL,0) as varchar)+'
           ) main
            pivot 
            ( max(mValue)
              for mItem in (' + @L_COLUMNS + ')
            ) piv order by mid desc';
  print @L_QUERY
  execute(@L_QUERY);
  END
GO
Matt Pollicove

The Future of SAP IDM

Posted by Matt Pollicove Jul 21, 2014

I was recently made aware that SAP is planning some changes in how they are going forward with SAP IDM. SAP is moving IDM development from Norway to Bulgaria during the third quarter of this year. While it is not unusual for companies to make these changes, there are some questions and concerns that have been passed my way as someone closely identified with SAP IDM.

 

So what does this mean for IDM? Overall, SAP is planning on further embedding their IDM initiatives into the cloud and analytics, which we will see in future releases of SAP IDM. One would assume that this would mean a tighter integration with HANA, which should be well received in SAP-centric organizations.

What is most concerning to me is that it has proven difficult to obtain any official statements regarding the road-map or the changes in the SAP IDM team. However, Gerlinde Zibulski, Director Product Management Security at SAP HQ in Walldorf, has been most helpful in providing information about how this will affect the product. I had a chance to have a discussion with Gerlinde, who was able to answer some of the questions that were on my mind and that I’ve heard from the SAP IDM community.

 

  • What are the plans for development of the Attestation module? – It’s out and available in IDM 7.2 via Service Pack. This module will remain accessible via REST only as an option for those customers who do not plan to integrate with GRC.
  • What are the plans for the future of the GRC integration? – SAP remains committed to the integration of IDM and GRC. The roadmap indicates that there will be tighter integration with HANA and analytics for GRC.
  • Will SAP continue to support the non-SAP connectors and the generic connector architecture that allows for connectivity to non-landscape systems? - Yes, this is a core part of SAP IDM and there are no plans to change this.
  • How do these changes affect the overall road-map? – Not at all. If you’re interested in learning more about the road-map, you can view it here. (SMP Login required)

 

Gerlinde pointed me towards the official SAP IDM road-map, which confirmed that HANA is indeed a large part of IDM’s future, as is the inclusion of analytics to understand all incoming security information coming from GRC. One of the other interesting things about SAP IDM’s future is its integration with the cloud. As the IT and ERP world is all about the cloud right now, it should be interesting to see what comes about.

She also confirmed to me that SAP IDM 8 is on track to be released just before d-code 2014. There are also plans in place to hire more developers in the expanded Sofia office to work on SAP IDM.

 

To be fair, SAP has done an excellent job with its recent CEI program for the purpose of obtaining feedback from customers and consultants so that they may better understand how SAP IDM is used in the field. I look forward to attending and participating in more of these sessions in the future. (I’d also be very interested in seeing a CEI for GRC, but more on that another time)

 

Finally, I would like to send my best wishes to the Trondheim Labs (formerly MaXware) team. For almost 20 years, they have been working on IDM and VDS, while assisting with the definition and execution of what Identity Management means to the SAP Landscape and the overall Business Enterprise. I have been proud to be associated with them as a co-worker and as a partner. I know whatever you work on will be vastly enhanced by your participation. I wish you all the best in your future endeavors and hope that we get to work again in the future.

 

If anyone has questions regarding SAP IDM, they may contact Gerlinde via email. Additionally, ideas for the future of SAP IDM can always be logged at the SAP IDM Idea Place.

Matt Pollicove

Did you know?

Posted by Matt Pollicove Jul 8, 2014

So I've found out a few things recently...

 

As I mentioned in my SCN Status, I'm happy to say that I'll be speaking this year at d-code in Las Vegas! If you're planning to go, please do your best to attend ITM118 - SAP Identity Management – Experience from a Customer Implementation Project. I don't know the exact schedule yet, but I'll be sure to let you all know. You can find more information here.

 

While the title speaks about implementation, I'll have some good information for folks in all phases of an IDM project, with best practices based on my 10 years of experience with IDM going back to the MaXware Identity Center.  As they say, the more things change, the more they stay the same, and some things about implementing IDM haven't changed much, if at all.

 

Also, did you know the name of our favorite SAP module has changed? Based on a post from Harald Nehring, we are now referred to as SAP Identity Management (SAP IDM).  I like it.  Much more compact and efficient.

 

So let's see over the years it's been:

 

MaXware MetaCenter (~2003)

MaXware Identity Center (~2004)

SAP NetWeaver Identity Management (2007)

SAP Identity Management (2014)

 

Nice to see the product is still changing and maturing. There's some other things changing with regards to SAP IDM and I'll be speaking more about that soon.

Taking the blog Assignment Notification task, customization and custom messages one step further, here is an explanation of how to send emails based on the message template editor in Web Dynpro from all different kinds of tasks.

 

The basis for this is the chapter Sending custom messages.

 

There is no real need to create your own message class for your custom emails, so first, create an email message template in Web Dynpro:

 

2014-07-07_16-05-31.png

 

Create a custom task as described in Sending custom messages. As a recommendation, put the NOTIFICATIONTASKID into the NOTIFICATION repository as a task reference, and reference it in the job.

 

2014-07-07_16-25-43.png

 

2014-07-07_16-20-34.png

 

That's all.

So what's this blog all about?

A (really) important part of IDM is the provisioning to other systems. And it's a part that loves to keep us on our toes, as the number of threads and blogs about this topic shows. During my time working with IDM, the provisioning stopped working quite a few times for different reasons. And every step of the way I learned a new reason and a new solution to get it going again. To keep track of all of that, and to help me solve it faster when it decides to get stuck yet again, I started to write down a checklist.

 

Some time ago I posted the better part of my little checklist in a thread, and with some encouragement from Matt I decided to create a blog post out of it to share the whole thing with a wider audience and explain the steps a bit more. This is my first technical blog here on SCN, so I'm a little nervous and excited at the same time. ^^


Just to be clear: not all points of the checklist may work for you, since we're on a Windows server with the IDM Management Console and use an Oracle database.

 

 

 

 

My tools

  • Access to the Management Console (MMC) of the IDM
  • Access to the Monitoring-tab via http://<portalurl:port>/idm/admin
  • SQL developer with database connection to IDM database
  • Permission to access the Windows services of the IDM-server and to start/stop the services
  • Permission to reboot the IDM-server
  • Really good connections to the database administrators

 

 

 

How do I know it's that time again?

There are three signs that I check to see if the provisioning is really stuck again:

  1. I look at the "Job log" in the MMC to see if the last entry for the provisioning dispatchers is from more than 15-20 minutes ago (even though it was triggered in the last few minutes).
  2. The provisioning queue on http://<portalurl:port>/idm/admin is only growing.
  3. The dispatchers that are assigned to do the provisioning are shown as running in the MMC under "Dispatchers > Status", and the timestamp for "Last check" is updated when I click on refresh.
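The dispatcher heartbeat can also be checked straight in the database. A minimal sketch for SQL Server, borrowing the last_visited / reload_frequency heuristic from the statistics procedure earlier on this page (on Oracle, replace getdate()/dateadd with sysdate arithmetic; verify the column names against your mc_dispatcher table first):

```sql
-- Dispatchers that have not checked in within twice their reload frequency
-- are likely stuck, even if the MMC still shows them as running.
SELECT *
FROM mc_dispatcher
WHERE last_visited < dateadd(ss, (reload_frequency * -2), getdate())
```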

 

If all those steps come back with a "yes", I'll get...

 


The Checklist

  1. Check the "Status"-overview in the MMC to see if a job is showing the state "Error".
  2. Restart the provisioning dispatchers in the "Services"-menu of the server.
  3. Check for waiting tasks via the SQL developer.
  4. Check the "Windows RT conn. string" on the Database-tab of the Identity Center configuration in the MMC.
  5. Reboot the server the MMC and dispatchers are installed on.
  6. Restart the IDM database.


That's the checklist in short, if you just need a little reminder or an idea for what to look at next. I'll explain the points a bit more in detail now.

 

1. Check the "Status"-overview in the MMC to see if a job is showing the state "Error"

 

In the MMC you'll find the "Status"-overview as the first entry under "Management".

ScreenShot089.jpg


It shows all the jobs for that Identity Center connection (in this case it's named IMP). To check for jobs with the current state "Error", just click on the column header "State" and the list will be sorted by it. If you have checked the box "Show only enabled jobs" at the bottom of the page, the jobs in error state should be shown in red font at the top or bottom of that list.

If you find a job that is associated with provisioning and it's on "Error", right-click on it and start it with "Run now".

 

 

2. Restart the provisioning dispatchers in the "Services"-menu of the server.

 

Go to "Start > Administrative Tools > Services" on the IDM server and look for the dispatchers that are assigned to provisioning in that list. They should be shown as started. Right-click on them and choose "Restart".

This is what gets our provisioning going most of the time.

 

 

3. Check for waiting tasks via the SQL developer.

 

Open the SQL developer and work your way through the following SQL statements:

select * from mxp_provision where msg like 'Wait for%'

This checks for tasks that wait for the successful state of other tasks. The MSG-column gives you an auditid for the next SQL statement. It's the ID of the task that is blocking the execution of the first task.

 

select * from mxp_audit where auditid=<auditid>


The MSG-column now shows information about the state (e.g. failed) of the blocking task, the reason, and which user and which assignment it concerns. With this information you can decide to leave it alone to handle itself (because it's got nothing to do with the blockage), or you can use the next SQL statement.

 

update mxp_audit set provstatus=1100 where auditid = <auditid>


This last statement sets the blocking task to the state "OK" (= 1100), and therefore the waiting task (the one you found with the first statement) can finally be executed.
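For a quicker overview, the first two lookups can be combined into one statement that lists every waiting entry together with the audit it belongs to. A sketch only — the column selection is an assumption, and the blocking auditid itself still has to be read out of the MSG text:

```sql
-- Waiting provisioning entries together with the audit they belong to.
-- The blocking auditid is inside P.msg ('Wait for ...'); read it from
-- there and feed it to the mxp_audit lookup above.
SELECT P.mskey, P.auditref, P.msg, A.provstatus, A.msg AS audit_msg
FROM mxp_provision P, mxp_audit A
WHERE P.msg LIKE 'Wait for%'
  AND A.auditid = P.auditref
```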

 

 

4. Check the "Windows RT conn. string" on the Database-tab of the Identity Center configuration in the MMC.


When you click on the IC configuration (the "IMP" in the first screenshot), the "Database"-tab will be displayed. At the end of it you'll find the string under "Runtime engine".

ScreenShot090.jpg

Open it and test the connection on the "Connection"-tab. If it comes back as failed, correct the name of the data source and/or the logon information. Then test it again to see if it's successful now.

 

 

5. Reboot the server the MMC and dispatchers are installed on.

Well, that's pretty self-explanatory. ^^

If you don't have permission to do this yourself, have the contact data of the administrator(s) at hand, who can reboot the server for you (just like with the restart of the services in #2).

 

 

6. Restart the IDM database.

This was only necessary once (until now), but to complete my checklist I'll include it here, too. Since I don't have direct access to our Oracle database, I let our database administrators handle this one for me.

 

 

 

What's more to say?

Well, that's it! I hope the list can help you when your provisioning decides to take a break again. This is - of course - nowhere near a complete list, but a result of my experience with the issue.

If you have some tips of your own to add, I absolutely welcome them! As you know, "Sharing is caring". So leave a comment if you have your own little checklist for this issue, or if you want to give some feedback on my blog (which I'm really looking forward to, because I am a big fan of constructive criticism and it is my first technical blog).

 

Thank you for your time and attention. I hope it was not wasted! *g*

 

Regards,

Steffi.

In order to encrypt the communication between IDM and AS Java during the Initial Load or any other jobs, you may want to use HTTPS instead of HTTP for the Java server. However, if you choose the HTTPS protocol, you may get an error in the Initial Load job. The error message looks like this:

 

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target


screen1.png

This is because the Java server's certificate is not trusted by your IDM Java program.  All you need to do is add the server's (or root CA's) certificate to the JRE's default trust store.

 

The JRE's trust store is located under jre/lib/security. The file name is cacerts, without extension. (The default password for this keystore is "changeit".)

 

Try command

keytool -importcert -file RootCA.crt -keystore cacerts

 

screen2.png

Re-run the job and you will find the error is gone.

 

There are other ways to solve the problem, but I guess this is the easiest. The solution is also suitable for communicating with any other HTTPS or LDAPS server.

HTML Reporting

This is an article focused on the reporting functionality of Identity Management with the main focus on HTML Reporting.  The article is applicable to both Identity Management 7.2 and 7.1 versions.

Identity Management needs to satisfy certain reporting requirements, such as:
  • What are the attributes of a given user?
  • What are the business roles assigned to a given user?
  • What systems does a given user have access to?
  • Which business roles are available in the system?
  • How many users/business roles, etc. are available in the system?
In this article I will show you how to use Identity Management’s functionality in order to create pretty, complete and useful HTML reports.  With Identity Management you can create reports using the information which is available in the Identity Center.  The core of HTML reporting is actually very simple: SQL queries, which get values for a given entry type from the database.  Wrapping those queries with some HTML and CSS code makes for a fully customizable appearance for the generated reports, which gives flexibility and a better user experience to the end users.  Let us start this magical tour into the wonderful world of HTML reporting with the entry type itself:

MX_REPORT

The MX_REPORT entry type is the entry type for report requests and exists in Identity Management as of 7.1 SP2. The report is executed as an action task on the MX_REPORT entry type. As the report is a task, the task status indicates the progress of the report, i.e. pending, ok or error. By default, the MX_REPORT is not listed as a searchable entry type.  Note that the report task will always create an entry of type MX_REPORT, regardless of the provided entry type in the task definition.

Let’s take a look at the most important attributes of the MX_REPORT entry type:
  • MSKEYVALUE
  • DISPLAYNAME – display name of the report.
  • MX_OWNER
  • MX_REPORT_ENTRY
  • MX_REPORT_DATE – date on which the report was requested.
  • MX_REPORT_FORMAT – format of the report (this could be PDF, HTML, DOC, etc.)
  • MX_REPORT_RESULT – this attribute holds the full report result. It is saved as a binary in the database.
  • MX_REPORT_RESULT_REF – this attribute holds a reference to the report result, in case it is stored on a separate file server.
Reports are shown in the View Reports tab of the Identity Management User Interface.  In this tab, we have a table with 5 columns:
  • Entry – This corresponds to the value of the MX_REPORT_ENTRY attribute.
  • Status - Status of the report task (pending, OK or Error). This status is calculated based on the status of the task execution, taken from the MCMV_AUDIT view.
  • Report Date - This corresponds to the value of the MX_REPORT_DATE attribute.
  • Report Name - The name of the report. If DISPLAYNAME has a value, this value is stored, otherwise the value of MSKEYVALUE is stored here.
  • Report Result - This corresponds either to the MX_REPORT_RESULT or to the MX_REPORT_RESULT_REF attribute.
   
In order for a report to be shown in the View Reports tab, it must have a value for either MX_REPORT_RESULT  or MX_REPORT_RESULT_REF.  Otherwise, the report will be stored in the database, but it won’t be visible in the User Interface, as it has no result file attached to it.
We’ve seen the entry type for reporting, now let’s get into some real action.  Let’s create a report task, which returns all the assigned privileges and roles for a given user. To make things more beautiful, we will wrap it with some CSS and HTML, to create a good-looking report.  Let’s begin…

HTML Reporting Task
We start by creating the task – let’s name it “Create Report”.  We make it a UI task, and on the Attributes tab we mark “Report task”.  This will automatically display all the attributes for the MX_REPORT entry type and create a new MX_REPORT entry, regardless of what we have selected as the entry type of the task itself.  We select the task entry type to be MX_PERSON (which means that it will be executed on a person, but will create a report) and select some attributes to be displayed (check the screenshot below).  The last step is to add access control to the task – let’s say we want only the administrator user to be able to create reports for entries. So we give access type logged-in-user, with value “Administrator” and an "on behalf of everybody" relation.  Here is how the task definition should look:

1.jpg2.jpg3.jpg
So far we have only a UI task that can be executed by an Administrator and creates a new entry of type MX_REPORT.  As there is no report result yet, we don’t have an actual report ready. In order to create the report, we attach a new empty job with one To Identity Store pass.  In the pass destination tab, we add MX_REPORT as the entry type, because the pass will make modifications to the newly created MX_REPORT entry.  The idea here is that after we create the report in the UI, we receive the value of the MX_REPORT_ENTRY attribute, which is actually the mskey of the user we are running the Create Report task on, and use this value as MSKEY in our queries, which will return the assigned privileges and roles. We will need MSKEYVALUE to point to the created report; MX_REPORT_FORMAT, which will have the value “HTML” (because if it’s in another format, it will not open in the browser as needed when opened in the UI); and MX_REPORT_RESULT, which will get its value from a custom script. In the script, we will use the magic of HTML5 + SQL queries to create the report itself. Let’s create the script.

HTML Reporting Script

We create a new local script for the action task. Let’s name it HTMLReport. The script will take as input parameter the value of MX_REPORT_ENTRY (the mskey of the user we are reporting on) and will return a binary representation of the report, which will be stored in the Identity Center.  I will explain the process of creating the script:
    1. We create a header and footer, which are just static CSS and HTML code at the beginning and the end of the HTML file. We do this to delimit the static parts of the HTML code, so that they won’t interfere with our main logic.  The header contains the opening HTML tags and the CSS used, and the footer just contains the closing HTML tags.  For the sake of simplicity in this article, I won’t add the real CSS code I used in this example. If you want to use the same CSS, you can find a reference to the file at the end of the article.  Nevertheless, your header should look like this:

                <html>
                <head>
                <style type="text/css">
                <….........CSS here……………..>
                </style>
                </head>
                <body>

       And your footer should look like this:

                </table>
                </div>
                </body>
                </html>

    2. Now, let’s create the script itself. For now, we have the HTML opening tags and the CSS, and the HTML closing tags, along with the table closing tag. Now, let’s populate the table itself.  We will create 2 rows and 3 columns. The first row will contain the table headings: User Name, Assigned Privileges and Assigned Roles. The second row will contain the username, privileges and roles, all extracted from the Identity Center.

       Privileges are extracted with the following query:

       select mcothermskeyvalue from idmv_link_ext2 where mcAttrName='MXREF_MX_PRIVILEGE' and mcthismskey='"+Par+"'

       For roles, the query is:

       select mcothermskeyvalue from idmv_link_ext2 where mcAttrName='MXREF_MX_ROLE' and mcthismskey="+Par+"

       Since Par (which is the value of MX_REPORT_ENTRY) contains the mskey of the entry, we have to extract the mskeyvalue of that user to be shown under User Name. This is done via the following query:

       select mcmskeyvalue from idmv_entry_simple where mcmskey="+Par+"

       Getting the results from those queries as values is done via the uSelect() function. We store the results of those queries in variables, add them to the table elements and store this in a variable (oHTML), which represents the table body. The last thing left to do is to return the binary representation (hex code). This is done via the uToHex() function. We also need to add "{HEX}" as a prefix.

       This is how our example script looks in the end:
      // Main function: HTMLReport
      function HTMLReport(Par){
          // Read the static header and footer templates from disk
          var oHeader = uFromFile("C:\\Reporting\\Template\\header.html","-1","false");
          var oFooter = uFromFile("C:\\Reporting\\Template\\footer.html","-1","false");
          // Assigned privileges, returned by uSelect() as a "!!"-delimited string
          var AssignedPrivileges = uSelect("select mcothermskeyvalue from idmv_link_ext2 where mcAttrName='MXREF_MX_PRIVILEGE' and mcthismskey='"+Par+"'");
          var oList = "";
          var oArray = AssignedPrivileges.split("!!");
          for (var i = 0; i < oArray.length; i++) {
              oList = oList + oArray[i] + '<br>';
          }
          // Assigned roles, same delimiter
          var AssignedRoles = uSelect("select mcothermskeyvalue from idmv_link_ext2 where mcAttrName='MXREF_MX_ROLE' and mcthismskey='"+Par+"'");
          var oList2 = "";
          var oArray2 = AssignedRoles.split("!!");
          for (var i = 0; i < oArray2.length; i++) {
              oList2 = oList2 + oArray2[i] + '<br>';
          }
          var userName = uSelect("select mcmskeyvalue from idmv_entry_simple where mcmskey="+Par+" ");
          // Build the table body; the closing tags come from the footer template
          var oHTML = '<div class="HTMLReport"><table><tr><td>UserName:</td><td>Assigned Privileges</td><td>Assigned Roles</td></tr>';
          oHTML += '<tr><td>'+userName+'</td><td>'+oList+'</td><td>'+oList2+'</td></tr>';
          // Return the hex-encoded result with the {HEX} prefix expected by IdM
          var oHex = "{HEX}" + uToHex(oHeader + oHTML + oFooter);
          return oHex;
      }

      We save and go back to the To Identity Store pass definition.

    3. As a final step, we add the value of MX_REPORT_RESULT to be calculated via the HTMLReport script, with MX_REPORT_ENTRY as input parameter. The pass definition should look like this:


         

4.jpg


      4. Save the task and log on to the User Interface with the “Administrator” user.


 

Starting the task via the User Interface

 

Let’s execute the task on an entry. We go to manage tab and search for a person.  We select the person, go to “Choose task” and select the “Create Report” task.

5.jpg

The “Create Report” task is opened and we fill the attributes needed, then click “Save”.
6.jpg

This will save a new entry of type MX_REPORT, with MSKEYVALUE and Display Name – Superman Report, MX_REPORT_DATE – 25.04.2014, and values for DESCRIPTION and MX_REPORT_ERROR.  Saving this will execute the action task and the To Identity Store pass, which will set HTML as value to the MX_REPORT_FORMAT attribute, and will calculate the value of MX_REPORT_RESULT, using the Script we created.  To see our result, we will go to the View Reports tab:
Untitled.jpg

 

We can see our newly generated report: it is for the entry Clark Kent, the report task was successfully executed (Status – “OK”), and we can see the report name and the report result. If we click on the result, we will see our report in the browser:

8.jpg

 

Pretty, isn’t it?  It contains the UserName of the user and all the assigned privileges and business roles.

 

 

You can find the header and footer files, which I used with the abovementioned script to create this example report, attached to the article.  If you like them, they are available for free usage

 

 

 

Yours truly,

Emanuil

Background

A consultant asked me about the inner workings of the group assignment functionality in the SAP Provisioning Framework 7.2 and why we use the MX_GROUP + MX_PRIVILEGE coupling, and so on.

 

So I decided it might be worth sharing with a bigger audience and putting it in a blog; after all, it is non-trivial stuff and perhaps one of the more complex parts of the SAP Provisioning Framework in 7.2.

 

Anyway, it has been quite some time since I took on the task of starting the SAP Provisioning Framework 7.2, and it has progressed quite a bit since then, but the main concepts still remain pretty much the same.

 

Many of the concept changes between the 7.1 framework and 7.2 actually stem from the early Notes framework.

 

Both the approach of driving events by privilege assignments and the handling of modify events and groups were first tested out in the Notes framework. Other concepts, such as validation tasks, have their origin in the GRC framework, where the need for checking assignments ahead of provisioning was introduced.

 

In the 7.1 SAP Provisioning Framework, as you might know, the add member task was used for this and for approvals, while provisioning was done by provision tasks. This meant dealing with rollback in case of an erroneous assignment, and there would also be a time slot where you had the privilege before it was assigned in the backend.

 

There were already new features in the product for dealing with these issues, so basically the 7.1 framework was growing old and not closely enough integrated with development to adopt new upcoming features early. So a new 7.2 SAP Provisioning Framework was called for.

 

We considered the following benefits the motivating factors for the 7.2 SAP Provisioning Framework:

 

 

Main benefits

  • Proper privilege handling (you don't get privileges before you are entitled to them):

                      7.1: rollback on privilege events

                 vs 7.2: pending privileges

  • Universal framework that handles complicated wait-for logic for you.
  • Hook-ins give framework implementers a foundation to build on.
  • Better visualization of what goes on:

                7.1: used entry type events to trigger firing, and static scripts that hard-set attributes

                7.2: uses a layered task structure, avoiding the hidden events in scripts

  • Maintainability: fewer dependencies between frameworks, with underlying 3rd-party frameworks controlled by plugin points.
  • Less hardwired: provisioning triggering relates to privileges, not entry types and attributes.

        This makes it friendly to co-exist with other independent provisioning solutions that do not require entry type or attribute triggering.

 

Of course there have been many significant features down the road since then, such as group-of-privileges, which was particularly important for avoiding the assignment-provisioning bottleneck of 7.1, plus today's context concept, attestation, and advanced approvals, just to mention a few.


Group assignments

But today I wanted to focus on the group assignments.

In AS Java and also the Notes connector (now part of SAP Provisioning Framework), we use MX_GROUP object which is coupled with a group privilege.

 

Basically, the nature of MX_GROUP is that it does not have modify events; it is considered a container object.

 

If you are somewhat familiar with the SAP Provisioning Framework 7.2, you know that there are account privileges (PRIV:<REP>:ONLY) for each repository, and perhaps you are also aware that there is an internal privilege PRIV:SYSTEM:<REP> that is assigned as a result of the account assignment.

The account privilege controls provisioning and deprovisioning to the repository, while the system privilege controls modify events.

The event tasks are by default inherited from the repository, so all privileges except the system privilege disable the modify event by explicitly setting MX_MODIFY_TASK to -1.

Likewise, PRIV:SYSTEM:<REP> disables all event tasks except the modify task (which is inherited, hence not specified on the privilege).
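The inheritance rule for event tasks can be sketched as follows (the attribute shapes here are hypothetical; in IdM this is configured on the privilege and the repository, not passed as plain objects):

```javascript
// Resolve the effective modify event task for a privilege.
// A privilege that explicitly sets MX_MODIFY_TASK to -1 disables the event;
// if the privilege does not specify it, the repository default is inherited.
function resolveModifyTask(privilege, repository) {
  var task = (privilege.MX_MODIFY_TASK !== undefined)
    ? privilege.MX_MODIFY_TASK   // explicit value on the privilege wins
    : repository.MX_MODIFY_TASK; // otherwise inherit from the repository
  return task === -1 ? null : task; // null = no modify event fires
}
```

So an account privilege carrying MX_MODIFY_TASK = -1 resolves to no modify event, while the system privilege, which leaves the attribute unset, inherits the repository's modify task.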

 

The same applies to MX_GROUPs: you assign an account privilege to a group, it gets created in the repository, and it obtains the system privilege.

In addition, however, it is assigned a group privilege (PRIV:<REP>:GROUP:<IDENTIFIER>) that controls the assignment of membership.

 

So, given that you have an MX_GROUP with the account and system privileges assigned, there are two ways of assigning members to the group:

1) You assign the user as a member of the group, or
2) You assign the group privilege to the user.

 

 

So let’s take it step by step.

 

1. You assign a member to a group

This causes the MXREF_MX_GROUP attribute to be updated on the user (with the mskey of the group).

Since the user has the PRIV:SYSTEM:<REP> privilege, and this privilege has the modify attribute MXREF_MX_GROUP enabled, this triggers a modify event in the provisioning framework.

 

modify_attribute.png

 

2. The modify task fires.

On newer versions of SAP Provisioning Framework 7.2, the "User context variables" option is checked.

 

  context_variable_on_tasks.png

This enables the generation of MX_EVENT* context variables for the session, a prerequisite in the newer framework for detecting modification events.

(Basically, you can halt execution by deselecting a job on a task there, pick up the audit id from the provision queue, and inspect these audit variables.)

Whereas we previously used SAP_CHANGENUMBER to track these changes, this is a much safer and faster approach.

 

3. Update of the group privilege.

 

The task “Process modify type” sets context variables for the different operations, and the conditionals evaluate them.

 

In this case, since MXREF_MX_GROUP was changed, the operation flag is set and the ‘Update group privilege’ task runs.

 

The ‘Update group privilege’ task will for each member of the group assign the group privilege.

 

If you have a bunch of users in the group, they will already have this privilege, so nothing happens; for the newly added group members, however, the group privilege assignment fires off a provision event.

 

modfy_update_group_privilege.png
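Outside IdM, the idempotent behaviour of the ‘Update group privilege’ task can be sketched in plain JavaScript (the data structures and function name are hypothetical; the real task operates on privilege assignments in the identity store):

```javascript
// Every member of the group gets the group privilege, but members that
// already hold it are skipped, so no provision event fires for them.
function updateGroupPrivilege(groupMembers, privilegeHolders) {
  var triggered = []; // members for which a new assignment actually fires
  for (var i = 0; i < groupMembers.length; i++) {
    var member = groupMembers[i];
    if (privilegeHolders.indexOf(member) === -1) {
      privilegeHolders.push(member); // assign the group privilege
      triggered.push(member);        // this assignment fires provisioning
    }
  }
  return triggered;
}
```

Running it with members ["u1", "u2", "u3"] and existing holder ["u1"] triggers provisioning only for "u2" and "u3", which mirrors the behaviour described above.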

 

4. Provisioning by group privilege.

If this had been a nested group assignment (AS Java), you would now go into the MX_GROUP switch, but since this is a person and not an account privilege assignment, you follow on to the plugin execution of ‘4. Exec plugin – Assignment User Membership’, firing the plugin task of the repository.

Basically, the task “inherits” the repository from the (group) privilege, the execute-plugin task obtains the plugin task through the repository hook variable as a runtime resolution, and then it executes a basic provision on the hook.

The execute-plugin task and the hook task use the wait-for/after concept to halt further execution in the task tree until the plugin task completes.

 

execute_assignment.png

After the respective backend operation has been executed, the assignment type is checked.

Since we are dealing with a group, it has to ensure MX_GROUP and MX_PRIVILEGE are in sync on the user, so it updates the MX_GROUP with the members, meaning MXREF_MX_GROUP is updated on the user.

Since this is a multi-value reference attribute, it re-triggers the modify task from step 3.

However, the user already has the group privilege by now, so it will not re-trigger provisioning.

 

Alternatively, if you assign a group privilege to a user, you basically start the process at step 4 directly.

 

In SAP Provisioning Framework 7.2 v2, the process is the same, but since v2 is embedded in Java (with the advantage of improved speed), the detection of MXREF_MX_GROUP in modify happens inside the Java code.

 

version2_assignment.png

 

The assignment evaluation happens in the Split conditional, and you are redirected to MX_PERSON_ASSIGNMENT. The conditional ‘Person after Assignment provisioning’ deals with the assignment-type evaluation, etc.

 

version2_assignment2.png

SAP NetWeaver IdM REST API UI - calling POST method/example

 

 

I had a problem executing a POST method after a new security requirement was added within IdM REST Interface Version 2 to prevent XSRF attacks (the Virus Scan Interface had been enabled). I had to execute a non-modifying request (GET, HEAD, OPTIONS) first, with the X-CSRF-Token header field set to the value Fetch. Once I had the token value from that first call's X-CSRF-Token response header, I was able to execute a modifying request (POST, ...). Here is an example of how I do that:

 

var xsrfTokenValue = "";
var myData = new Object();

$.ajax({
    type: "GET",
    url: "http://host:port/idmrest/v72alpha/entries/{MSKEY}/tasks/{TASKID}",
    dataType: "json",
    async: false,
    contentType: 'application/json',
    headers: {
        "X-CSRF-Token": "Fetch",
        // an object literal can hold only one "X-Requested-With" key;
        // use the value your IdM REST version expects
        "X-Requested-With": "JSONHttpRequest",
        "Content-type": "application/x-www-form-urlencoded"
    },
    success: function(res, status, xhr) {
        xsrfTokenValue = xhr.getResponseHeader("X-CSRF-Token");
    }
});

$.ajax({
    type: "POST",
    url: "http://host:port/idmrest/v72alpha/entries/{MSKEY}/tasks/{TASKID}",
    dataType: "json",
    headers: {
        "X-CSRF-Token": xsrfTokenValue,
        "X-Requested-With": "JSONHttpRequest",
        "Content-type": "application/x-www-form-urlencoded"
    },
    data: myData,
    async: false,
    contentType: 'application/json',
    success: function(data) {
    }
});

 

 

Note:

  • The xsrfTokenValue variable holds the value of the X-CSRF-Token header (taken from the GET response).
  • The headers objects contain all headers required by IdM.
  • In myData (in the POST request) you can dynamically generate the object (the data to send back to IdM) that is sent with the POST method.
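The fetch-then-reuse token pattern itself is independent of jQuery. Here is a minimal sketch with a stubbed transport (the transport object and function names are hypothetical; in the real UI the two $.ajax calls above play this role):

```javascript
// 1) send a non-modifying request announcing we want a token,
// 2) send the modifying request carrying the token we were handed.
function postWithCsrf(transport, url, payload) {
  var tokenResponse = transport.get(url, { "X-CSRF-Token": "Fetch" });
  var token = tokenResponse.headers["X-CSRF-Token"];
  return transport.post(url, payload, { "X-CSRF-Token": token });
}
```

With any transport that returns the token header from the GET, the POST automatically carries the server-issued token instead of the literal "Fetch".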

As always, we start with a quote:

 

Captain Spock: All things being equal, Mr. Scott, I would agree with you. However, all things are not equal.

 

My latest project has me working with DB2 as the IDM backend. We’ve faced several challenges along the way, many in the area of performance and some in just general development. I’ll be addressing some of the performance issues in a future post (I need more information from DBAs and other research).

One of the first things I learned about DB2, is how IDM recognizes it. I’m not talking about the Oracle emulation layer, but rather what code IDM uses to determine the database type. Consider the following code from the SAP IDM RDS Solution, which I have since modified:


// Main function: sapc_prepareSQLStatement
// 12MAR2014 - MGP - Added DB2 Support, some cleanup
function sapc_prepareSQLStatement(Par){

    var dbType = "%$ddm.databasetype%";
    var script = "sapc_prepareSQLStatement";
    var returnValue = "";
    //uWarning("dbType: " + dbType);

    // Processing
    if (dbType == 1) { // MS-SQL
        // uWarning("Database Type is MS-SQL.");
        // uWarning("Par: " + Par);
        return Par;
    }
    else if (dbType == 2) { // ORACLE
        returnValue = uReplaceString(Par, "AS", "");
        // uWarning("Database Type is Oracle.");
        // uWarning("returnValue: " + returnValue);
        return returnValue;
    }
    else if (dbType == 5) { // DB2
        returnValue = uReplaceString(Par, "AS", "");
        // uWarning("Database Type is DB2.");
        // uWarning("returnValue: " + returnValue);
        return returnValue;
    }
    else {
        uErrMsg(2, script + " SQL Task: invalid database type: " + dbType);
        // return error message and empty result
        return "";
    }
}
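As a rough, runnable sketch of the same dispatch outside IdM (dbType passed as a parameter instead of %$ddm.databasetype%, and a whitespace-delimited replace instead of uReplaceString's raw substring replace; both are assumptions on my part, not what the shipped RDS script does):

```javascript
// Normalize a SQL statement per database type:
// MS SQL Server keeps the AS alias keyword, Oracle and DB2 strip it.
function prepareSQLStatement(dbType, statement) {
  if (dbType === 1) {                        // MS-SQL
    return statement;
  } else if (dbType === 2 || dbType === 5) { // Oracle / DB2
    // remove the standalone AS keyword, collapsing the surrounding spaces
    return statement.replace(/\s+AS\s+/g, " ");
  }
  return "";                                 // unknown database type
}
```

For example, `SELECT mskey AS id FROM tbl` passes through unchanged for type 1 and comes back as `SELECT mskey id FROM tbl` for types 2 and 5.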

 

Note that while SQL Server has a value of 1, Oracle has a value of 2, while DB2 has a value of 5. Makes you wonder what happened to 3 and 4… (Sybase and MySQL, maybe?)

 

The other thing that I discovered is that when writing values back to a table in DB2 certain values are not welcome. I needed to write a multi-value entry back to the database and I kept receiving error messages. I was finally able to get a useful error message by changing the properties of the To Database pass I was using so that it would do SQL Updating.

To Database.png

When I did that, I received the following message (data has been changed to protect the innocent):

 

SQL Update failed. SQL:INSERT into recon_roleassign_EPD values (AAA__00000,AAA__00000,BBB__00000) com.ibm.db2.jcc.am.SqlSyntaxErrorException: An unexpected token "!" was found following "PD values (NL__HR005". Expected tokens may include: "!!".. SQLCODE=-104, SQLSTATE=42601, DRIVER=3.66.46

 

I was somewhat confused since this is the “standard” delimiter, at least as far as IDM is concerned, however after checking with a knowledgeable DB2 DBA, he confirmed that not only is ‘!!’ illegal to write, but so is ‘||’. He also mentioned that there is a list out there on the web somewhere, but I was not able to find it. If anyone can find the list of illegal characters, please comment on this entry and I will update the entry. I wound up using ‘;;’ as a delimiter.

 

As an aside, I have been asked over the years, why use ‘!!’, ‘||’, or even ‘;;’ as a delimiter? The answer as I understand it, is this:

 

There’s a chance when you use any character as a delimiter that it could be part of the string. Somewhat obvious when you think about common delimiters such as comma, colon, or dash. Even possible for characters such as pipe, slash, or pound. However when you use two like characters as a delimiter the chance that it’s supposed to be part of the actual data string is greatly reduced.
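A minimal sketch of round-tripping a multi-value string with the ';;' delimiter (the helper names are mine, not part of IdM):

```javascript
// ';;' survived DB2 in my case where '!!' and '||' were rejected, and a
// doubled character is far less likely to occur inside real data.
var DELIMITER = ";;";

function joinValues(values) {
  return values.join(DELIMITER);
}

function splitValues(encoded) {
  return encoded.split(DELIMITER);
}
```

The round trip is lossless as long as no single value contains ';;' itself, which is exactly the property that makes doubled characters attractive as delimiters.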

 

So there you have it, DB2 = 5, and you can’t write ‘!!’ or ‘||’ to a table. What other DB2 tips do you have to share?

Matt Pollicove

SAP IDM CEI Program

Posted by Matt Pollicove May 6, 2014

I had the opportunity to attend SAP’s first CEI program session for IDM. These sessions are designed to give SAP the chance to get some information on not only what customers are looking for from SAP IDM but also what they need from it. I think this is a fantastic initiative on SAP’s part for a few reasons.

 

  1. For the first time since SAP acquired MaXware almost seven years ago, they are going out to seek information from a variety of customers about how they would use the product. Previous CEI initiatives were aimed at a very small group of SAP IDM customers. From what I understand this new group, mostly volunteers, is about 10 times larger! During this first session, I heard a few ideas come out that I don’t think SAP had thought of, but they saw the value almost immediately.
  2. SAP is actively engaging customers, partners, and consultants for feedback. The first session centered on the integration of Success Factors HR with IDM and what customers would be able to expect via an enhanced “ramp up” program.
  3. There are other sessions planned to cover a variety of topics in the coming months showing a considerable amount of initiative from SAP. I’m looking forward to exploring it in more detail.

 

Based on what I saw in this presentation and from things I’ve seen at past TechEd sessions, the forums, and general research, I’m seeing the following trends at SAP regarding Identity Management. These opinions are entirely my own, although comments from SAP are, as always, welcome.

 

  1. IDM is being positioned as the key workflow engine for managing identities throughout the SAP Landscape. If information is coming from any resource involving people, IDM is where it should be processed. HCM, Success Factors, PI, XI, AD, LDAP, no matter what the source, IDM should be handling the flow of information to other systems in the landscape and applications in the Enterprise. Unlike some other “people centric” systems like GRC and CUA, IDM is uniquely designed to work in a heterogeneous environment.
  2. It appears that SAP is poised to start a new phase of growth for IDM. This isn’t just the basic, release the latest iteration of the connector list and publish a new service pack. It’s time to add some new functionality and breathe some new life into SAP IDM!

 

One of the other benefits of this call-in session was getting to hear the voices of several people I have met in my travels and others that I have only met through SCN. Hope we all get to meet up soon! (TechEd, I mean d-code, anyone?)

 

I’m looking forward to seeing what SAP has planned for IDM. Here’s to the new IDM frontier in the Landscape and the Enterprise beyond!

Dear community,

 

as I often get asked how to build a simple example using the SAP Identity Management REST API, I am writing a small blog post about it.

 

  • Attached, you will find a text file. Download it to a directory of your choice and rename it to index.html.
  • Download the jQuery library from the following location and save it to the same directory as jquery-1.9.1.min.js:

http://code.jquery.com/jquery-1.9.1.min.js

  • Edit the index.html file with a plain text editor (e.g. Sublime Text) and change the host definition so that it fits your environment:

var baseUrl = 'http://localhost:50000/idmrest/v1';

  • After storing the file, open it with a browser (e.g. Google Chrome) and execute the "Search for Users" functionality. You will be prompted for a username/password. As a result, you should see a list of MX_PERSON entries.
  • Afterwards, execute the "Get Data from Display Task" functionality for a valid MSKEY, and you will see the attributes of this entry.

 

With the Google Chrome Developer Tools, you can easily look behind the scenes and see which data is moved over the wire.

 

I recommend the following pages and tutorials for further information:

http://en.wikipedia.org/wiki/Ajax_(programming)

http://en.wikipedia.org/wiki/Representational_state_transfer

http://en.wikipedia.org/wiki/JSON

https://developer.mozilla.org/en-US/docs/AJAX/Getting_Started

https://developers.google.com/chrome-developer-tools/

 

In addition, see the following blog posts:

Write your own UIs using the ID Mgmt REST API

SAPUI5 and ID Mgmt - A Perfect Combination

 

Jannis

Maybe some of you have experienced this problem and maybe not. Maybe you just knew the answer but I couldn't find it on here anywhere so when I figured it out, I figured I'd share.

 

In the current environment I'm working in, when a new account is entered into IDM, be it through IDM directly or via the HR system, the first 6 characters of the last name and a couple of characters from the first name or nickname are used to complete the MSKEYVALUE, which in turn becomes the user's Windows and SAP login IDs. We call this the 6+2 unique ID. The problem was that if the person had spaces in their last name, each space counted as a character. The space would get squeezed out when the actual MSKEYVALUE was created, but that would leave the ID in a 5+2 state.

 

For example, a name of "Jodi Van Camp", "Van Camp" being the MX_LASTNAME, would turn out an MSKEYVALUE of "VanCaJo" when it should be "VanCamJo".

 

The bottom line was, we needed to eliminate those spaces in the last name for the purpose of creating the MSKEYVALUE.

 

I thought it would be a simple replace using a script. Maybe something like this:

 

function z_eliminateWhitespace(Par){
  var result = Par.replace(/\s+/g, "");
  return result;
}

Or maybe this:

 

function z_eliminateWhitespace(Par){
  var result = Par.replace(/\s/g, "");
  return result;
}

Or this:

 

function z_eliminateWhitespace(Par){
  var result = Par.replace(/ /g, "");
  return result;
}

Or lastly, this:

 

function z_eliminateWhitespace(Par){
  var result = Par.replace(" ", "");
  return result;
}

None of these seemed to work. I've had it happen way too many times that a SQL query or JavaScript won't work in IDM exactly the way it does in other environments, so this wasn't a total surprise, but now what? Finally, I happened on the idea of splitting the string on the spaces and rejoining it without them. This is the script I eventually came up with, and it seems to work:

 

function z_eliminateWhitespace(Par){
  var result = Par.split(" ").join("");
  return result;
}

The final script had an IF line before the split / join checking Par to make sure it wasn't empty or a NULL value but you get the general idea. Hope this perhaps helps someone out there someday.
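Putting the pieces together, the 6+2 generation with the working split/join approach might look like this (the function name and exact truncation rules are a sketch of the idea, not our production script):

```javascript
// Build the 6+2 unique ID: first 6 characters of the last name plus
// 2 characters of the first name, with spaces squeezed out first so
// "Van Camp" contributes 6 real characters.
function z_build6Plus2(lastName, firstName) {
  var last = lastName.split(" ").join("");
  var first = firstName.split(" ").join("");
  return last.substring(0, 6) + first.substring(0, 2);
}
```

For the example above, "Van Camp" + "Jodi" yields "VanCamJo" instead of the broken 5+2 "VanCaJo".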

Hi All,

 

I want to share a simple example with you to demonstrate how you can utilize SAP IdM to invoke a local PowerShell script.

In my scenario I am using Quest ActiveRoles Server Management Shell for Active Directory but this should work with Windows AD cmdlets as well.

 

In my Plugins folder I have replaced the standard To LDAP directory pass with a new Shell execute pass.

Screen Shot 2014-04-03 at 22.53.01.png

In the Destination tab you should disable the option "Wait for execution" and insert the following command with your arguments.

 

cmd /c powershell.exe -Command "c://scripts//ProcessQADUser.ps1" %$rep.QARS_HOST% %$rep.QARS_PASSWORD% %MSKEYVALUE% $FUNCTION.cce_core_descryptPassword(%MX_ENCRYPTED_PASSWORD%)$$ "'%Z_ADS_PARENT_CONTAINER%'" %MX_FIRSTNAME% "'%MX_LASTNAME%'"

Screen Shot 2014-04-03 at 22.57.50.png

Please remember to separate the attributes using whitespace, as PowerShell will remove commas and convert the arguments into an array.
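To illustrate the quoting rule, here is a small hypothetical helper that builds such an argument list: values are space-separated, so anything that may itself contain spaces, like a last name, has to be wrapped in quotes to reach PowerShell as one argument:

```javascript
// Join arguments with spaces, single-quoting any value containing
// whitespace so the shell does not split it into multiple arguments.
function buildArgumentList(args) {
  return args.map(function (arg) {
    return /\s/.test(arg) ? "'" + arg + "'" : arg;
  }).join(" ");
}
```

This is why %MX_LASTNAME% is wrapped as "'%MX_LASTNAME%'" in the command above while single-word values are passed bare.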

 


Hope this helps.

 

Regards,

Ridouan
