
SAP Identity Management


 

This is a video tutorial showing you a basic example of how to use SAP Identity Management 8.0 and, more specifically, how to synchronize and manage user data provided by two different data sources, which can be exported from your SAP or non-SAP systems. For this example, the first data source is a TXT file containing the user IDs and email addresses of the users, and the second is a database table containing further information about the same users.

 

 

Target group

 

The video shows a simple, understandable and easy-to-execute example. It is meant for users who need an introduction to basic synchronization operations in SAP Identity Management.

 

Purpose of the video

 

Along with the introduction to the basic synchronization operations, you will become familiar with the Eclipse-based development environment in SAP Identity Management 8.0 and the new package concept.

 

Scenario

 

Using SAP Identity Management 8.0, we import the information from the file email.txt and the database table HR_Sample into the identity store of the SAP Identity Management 8.0 system. The information from both data sources is merged and uploaded to the identity store.

 

 

Result


As a result, the information from both data sources is synchronized and transferred to the identity store of an SAP Identity Management system.

Dear colleagues,

I've encountered multiple questions regarding the installation and upgrade of IDM 8.0.

Below is a list of the common issues that one can encounter while upgrading or installing the product.

 

Second Data Source for DevStudio

 

A second data source is now needed for the Development Studio. This is often forgotten, but it is covered in the documentation: the new Development Studio component needs a data source, and if you don't define one you will constantly get "Login Failed" or "Something Went Wrong" errors.

 

Another useful tip is to name your data source DevStudio, as mentioned in the documentation. We have observed some issues with custom naming and will check whether this value is hardcoded somewhere, but just to be on the safe side, use DevStudio as the name for your data source.

 

Here is an example of how the configuration looks:

DataSource.PNG

 

 

Admin user does not have rights assigned in user admin


The Administrator user needs specific roles in order to operate correctly. If you install IDM 8.0 from scratch, you need to add them in the user admin. Make sure that the Administrator user exists both in IdM and UME and has the following roles:


SPML_Role (its creation is described in the documentation)

IDM_Authenticated



HTTPS not configured


If you have decided to use an HTTPS connection, then you most probably need to use the HTTPS port, which can be checked in your engine monitoring (by default it should be 50001). You then need to do some configuration, but this is covered in the configuration guide under how to enable the HTTPS connection.

If you will not use HTTPS, then you need to go for port 50000 and set the property IS_HTTPS=false in this location (C:\Users\Administrator\workspace\.metadata\.plugins\org.eclipse.core.runtime\.settings\com.sap.idm-dev-studio-userinterface.prefs),

where workspace is your Eclipse workspace.



Keys.ini file is switched


You have to pay special attention to the Keys.ini file. If you are going for a scratch install, you need to generate your Keys.ini file and move it to the Identity Center installation folder manually. Please check that the Keys.ini file is correctly generated and has content.

If you go for the upgrade scenario, keep in mind that you have to use your old Keys.ini file. If you want to change it, you have to RE-SAVE all your resources that contain values encrypted with this key, such as the repository passwords. That means you have to open the settings and save them once more so they are encrypted with the new key.


Linux shell scripts have the wrong line endings


If you are using a Linux/Unix OS, keep the following in mind: when you edit files, try to do it under that OS and not under Windows. For example, if you connect to your instance with WinSCP, edit a file, and then push it back to the Linux system, it might end up with Windows line endings. This will make the shell scripts fail during execution. If you edit them under Windows, please save them in Unix format; for example, in Notepad++ there is the option Edit > EOL Conversion > Unix Format.

 

That would be all for now; those are the most commonly reported issues. I will add more info if needed. Keep in mind that all of the above is described in the documentation. My recommendation is to follow it strictly when doing an install or upgrade, no matter how experienced you are.

Introduction


When migrating to SAP HANA, businesses sooner or later face a new challenge: continue to use their current CUA with all its limitations, or redesign their identity and password management vision by transitioning to SAP Identity Management (also known as SAP IdM). Most companies have a mixed landscape where each system has its own unique identities: a traditional ECC, HANA, other SAP systems, but also non-SAP systems like Microsoft Active Directory. Managing all these identities can become a burdensome task with an enormous impact on profitability. Does this scenario sound familiar? You are not alone.



Benefits of using SAP IdM with HANA


In a HANA landscape, SAP IdM enables you to manage identities in a more centralized manner, whether SAP ABAP, Java or non-SAP (MS Active Directory, Tivoli, etc.) systems are targeted. Moreover, with the great potential of high-speed data analytics in HANA, SAP IdM can meet new reporting needs with the help of tools like SAP Lumira, HANA Live and others. IdM allows businesses to take advantage of the benefits of cloud computing, which was not possible at all with traditional tools or the Central User Administration (CUA). This is one of the many reasons why the good old CUA is being pushed aside in SAP HANA landscapes.


Here is a breakdown of the main differences between the Central User Administration tool and SAP Identity Management:

1.png


CUA to IdM evolution


For a long period of time, the Central User Administration component has provided SAP clients with solid authorization and role management functions for SAP software landscapes based on ABAP.

 

Today, there is an evolution in SAP’s user management strategy thanks to SAP NetWeaver Identity Management. With this tool, businesses can benefit from centralized administration of their employees’ user accounts and authorizations across several SAP software environments. SAP IdM also offers a functional scope that surpasses that of CUA, by enabling new users to get started faster all over a system landscape, no matter whether it is a simple or complex landscape.


The image below showcases the migration process from a CUA to SAP IdM:

2.png


Business Roles

A clear enterprise business role concept and understanding is the foundation of a proper SAP IdM implementation. First, the roles need to be defined in SAP IdM. This can be accomplished by reading system access information (roles, groups, authorizations, etc.) from the target systems that will be provisioned by IdM. Once this is completed, the provisioning of business roles can be done manually (by an administrator), automatically (HR-driven, for example) or through a request/approval workflow.


 

SAP IdM also supports context-based role management, which simplifies the role structure through dynamic role assignment based on user context information. In other words, if you have 30 roles in 100 factories, you would have:

  • 3,000 entries (30 roles × 100 factories) using a conventional method
  • Only 130 entries (30 roles + 100 contexts) using a context-based method
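The arithmetic behind that comparison can be sketched directly (a toy illustration, not IdM code):

```python
# Context-based role management scales additively, not multiplicatively:
# roles and contexts are maintained separately and combined at runtime.
roles = 30        # e.g. 30 business roles
contexts = 100    # e.g. 100 factories
conventional = roles * contexts   # one concrete role per role/factory pair
context_based = roles + contexts  # roles defined once, plus the contexts
print(conventional, context_based)  # prints: 3000 130
```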

 


The main benefits of context-based roles are a massive reduction in the number of roles, reduced complexity, better granularity, and improved data consistency and governance.



Password Management

SAP IdM offers different options for password management. One of them is Single Sign-On (SSO), which lets users log in once and gain access to all systems without being prompted to log in again at each of them. This is a separate SAP product that works in conjunction with IdM. As always, this is performed with the highest security and encryption standards available. One of the standards used by SSO is SAML 2.0: an XML-based protocol that uses security tokens containing assertions to pass authentication information about an end user between SAP IdM and other target systems (HANA, MS AD, etc.).


 

To reduce costs and improve the performance of a Center of Excellence (CoE), a self-service password reset task can be implemented to avoid unnecessary assistance, e-mails and support calls to the CoE. When a password is changed, it can be automatically synced using the Password Synchronization functionality of SAP IdM. For a target system where synchronization is not an option, a Password Hook can be implemented, which acts as an automatic trigger that “catches” when a password is modified and performs a predetermined action.



Conclusion


With SAP Identity Management, you can boost the potential of your SAP HANA landscape by incorporating a centralized identity federation solution to manage all your SAP and non-SAP systems. Furthermore, SAP IdM can help you save time and money by reducing the effort needed to manage identities and access within your business. All of this while taking advantage of the great potential that SAP HANA can deliver. So, what benefits is your business missing by not adopting SAP IdM and keeping the CUA?



"There is a theory which states that if ever anybody discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened."

-Douglas Adams

 

So you would like to discover more about the current state of your IDM installation, but don’t want to bring up your SQL client and run through all of your queries. What if you could just run a quick job and get the results in an HTML file? Good, let me tell you about a solution I have come up with.

 

A couple of years ago, I was working with some of the IDM developers on just such a thing, but then we all got sidetracked and the project was put on the shelf. Then there was the thread from Fedya Toslev which got me thinking about it again, so I brought it off the shelf and cleaned it up a bit. Following what I did in my last configuration contribution, I have posted it on SourceForge as SAPIDMstat.

Image 1.jpg

There are a lot of ways this information could be arranged and presented. Frankly, I think doing it as an HTML report might be the least of them. Certainly, this calls out for being created as a report, or maybe for leveraging the extension interface. I think it's fairly easy to understand what we've done and how it can be extended, but leave me a post here (or there) and we can work through it.

 

Enjoy!

There are some important things to consider when starting with IdM 8.

 

Qualified name

When installing the database schema, you will be asked to specify the Base Qualified Name.

  • This is the system name space and should be globally unique. It would typically be the customer domain, i.e. “com.<company>”.
  • The namespace will be used for the installation and will be the root of the package qualified name.
  • The package qualified name is used when referencing other packages.
  • This means that the system name space can never be changed, as this would break any package references.
  • The package names within an installation also have to be unique for that installation.
  • The only allowed characters in a qualified name are 'A'..'Z', 'a'..'z', '0'..'9', '_', '.'

So:

  • The qualified name is used to identify a package
  • The qualified name is assumed to be globally unique
  • Make sure each created package has a unique name
  • Have one person responsible for defining the qualified names within the name space
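Assuming the installer enforces the character rule above, a quick sanity check for candidate names might look like this (`is_valid_qualified_name` is a hypothetical helper, not part of IdM):

```python
import re

# Only 'A'..'Z', 'a'..'z', '0'..'9', '_' and '.' are allowed
# in a qualified name, per the rules listed above.
QUALIFIED_NAME = re.compile(r"^[A-Za-z0-9_.]+$")

def is_valid_qualified_name(name):
    return bool(QUALIFIED_NAME.match(name))

print(is_valid_qualified_name("com.example.hr_connector"))   # True
print(is_valid_qualified_name("com.example/hr connector"))   # False
```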

Public objects

  • Processes and Package scripts may be public and will be referenced through their qualified name.
  • When made public, the names have to conform to the naming convention, being unique and only using the allowed characters mentioned above.
  • Forms are automatically made public when the type is set to Search form or Display form and also have to conform to the naming convention.
  • Repository types are always public with the same restrictions on naming.

Developer Studio users

When installing the database schema, you will be asked to specify the name of the development admin user. This user is the only one that has access to the Developer Studio when the installation is done. This user also has to be defined in UME, as authentication is done there.

In Developer Studio, this user may create other Developer Studio users once the same users are defined in UME.

The new users can be given different types of access to packages.

Developer Studio users do not need any IdM-specific authorizations in UME, and do not need to be defined in the Identity Store as users of the IdM Admin UI or IdM UI.

“Failure is not an Option” – Gene Krantz, NASA

 

Late last year I had a chance to work on a project that centered on the use of the SAP Virtual Directory. One of the things that we needed to do was make sure that the Virtual Directory configuration would be available in a Fault Tolerant manner to comply with general best practices and specific High Availability requirements from the customer. The good news is that Virtual Directory has built in functionality to provide this via the Failover Group Configuration.

To demonstrate this, I created a simple scenario where I had three systems: two representing basic virtualization of a data source, and a third holding the failover configuration. In this example I virtualized the HR sample data that comes with SAP IDM. I’ve prepared a quick “architecture view” below:

 

VDS Failover Group Architecture.jpg

  

To demonstrate the difference between the configurations I named one Alpha, and the other Beta. We will be able to use this later on when we demonstrate that failover is actually occurring.

 

alpha-beta config.jpg

  

The next step was configuring the Failover Group. This isn’t too hard to build: set up “Single” data sources to Alpha and Beta (this time as LDAP data sources), open the Groups node, and then the Performance & Availability node. From there, right-click, select New, and set up the Failover Group as I have done below. Again, I did this on a third VM just to make sure everything in our test scenario was properly segregated. I have posted the configurations on SourceForge so that you have a starting point; there’s a quick and dirty README there as well.

failover group config.jpg

 

 

You’ll probably need to fiddle around a bit to get the groups enabled, but it can be done. I found that clicking on the source entry and then clicking on the related “Enabled” box worked best, but you’ll probably need to play with it a touch to get it working. (Note to the development team: this is an area of the interface that could use some tweaking, at least in VDS 7.2.)

Once this is done, make sure that all of the configurations are running and try it out. I did some testing using an LDAP browser (Apache Directory Studio; can’t recommend it highly enough). During the first run, I pointed my LDAP browser at the failover configuration and saw Alpha.

 

alpha test.jpg

 

Then I went in and stopped the Alpha configuration. I didn’t do anything fancy, just stopped the VDS config. Then wait 15 seconds, or however long you configured the “Servers marked as unavailable are not used for” setting. (VDS team, this could be written more clearly as well.) Now go back and try your LDAP browser again. You’ll notice the read probably takes a little longer now, and you might also get a Root DSE warning. You can disregard it; we’re getting this message because we are combining two LDAP schemas via the read of Alpha and Beta.

 

beta test.jpg

 

That’s pretty much it. I know more needs to be done with it. What would you like to see? Heck, join the project and make your own changes! Look for some more open source fun coming soon!

In this post, I'll discuss how SAP IDM developers can find occurrences of any given character string, such as a table or attribute name, in the source code of their scripts, jobs, tasks or attribute definitions. Using a bit of SQL and XML, you can turn your graphical database client into a minimalist where-used list for SAP IDM.

 

Prerequisites

 

This post is intended to serve as a tutorial-style documentation for an SQL query which I originally developed in one of my past projects and later decided to publish as open source on GitHub. I'll refer to this SQL query as where-used query throughout the remainder of this document. You can download the latest version for Microsoft SQL Server here. Separate versions for Oracle and IBM DB2 exist as well, but I'll focus on SQL Server in this post exclusively.

 

You will need the following things to try out the steps of this tutorial on your own:

  • Source code of the where-used query (see download link above)
  • SAP IDM 7.2 on Microsoft SQL Server
  • SAP Identity Center Designtime based on Microsoft Management Console (MMC)
  • Microsoft SQL Server Management Studio (SSMS)
  • Database connection as admin user from SSMS to the SAP IDM database

 

Please note that although the where-used query requires admin access, all database operations it performs are read-only (i.e. SELECT). Its execution will not alter any data in the database. It's still good practice to use it in development systems only. Please don't try it in production.

 

Apart from the tools listed above, I assume some familiarity with Microsoft SQL Server Management Studio on the reader's behalf. Readers should know how to open and edit SQL files and how to connect to the database with a specific user.

 

Overview

The basic workflow for finding locations containing the search term you're interested in is:

  • In SSMS: edit the SQL code to fill in your search term
  • In SSMS: execute the where-used query as admin user
  • In SSMS: display ID, name and content of matching locations
  • In MMC: navigate to matching locations

 

A Practical Example

Let's use the where-used query to find occurrences of the database table name "mxi_attrvaluehelp" in SAP IDM. Since this is the standard value help table, it's likely to be used in a variety of locations within any SAP IDM system, provided the SAP Provisioning Framework and maybe a couple of standard jobs have been imported. Locations where this table is typically used include:

  • Legal values of Identity Store attributes
  • Source code of global or (job) local scripts
  • Anywhere within job definitions, e.g. in the source or destination of a certain pass

 

Preparing Execution

To start, open the where-used query in Microsoft SQL Server Management Studio. Scroll down in the SQL query editor window to the very bottom, and find the following XQuery statements near the end:

 

for $t in (//attribute::*, /descendant-or-self::text())
where contains(upper-case($t),upper-case("YOUR_SEARCH_TERM_HERE"))
return $t

 

Now comes the most important part: in the above code, replace YOUR_SEARCH_TERM_HERE with mxi_attrvaluehelp, so it becomes:

 

for $t in (//attribute::*, /descendant-or-self::text())
where contains(upper-case($t),upper-case("mxi_attrvaluehelp"))
return $t

 

The next step is to verify that the query will be executed as admin user.

 

Have a look at the status line in the lower right corner of Microsoft SQL Server Management Studio to verify the name of the database user used by the current query editor window. In a default installation, the user name displayed should be MXMC_ADMIN. Note that the name mxmc728_admin displayed in the following screenshot is specific to my lab environment. Yours will be different.

 

If you're currently connected as any user other than admin, switch the DB connection before proceeding.

 

where_used_basic_usage_ssms_current_user.png

 

Finally, verify that the button "Results to Grid" in the Microsoft SQL Server Management Studio toolbar is checked, so the result set returned by the where-used query will be displayed in a graphical table control, and not as plain-text. This will become important later on for navigating to the full content of matching text locations.


where_used_basic_usage_ssms_results_to_grid.png


Finally, make sure you haven't accidentally selected any of the SQL code in the query editor window with your mouse or cursor. This would result in errors during execution, because only the highlighted parts of the code would be executed. If unsure, perform a single left mouse click in the query editor window to discard any current text selection.

 

Now execute the query, e.g. by pressing F5 on your keyboard. Expect only a couple of seconds of execution time. In my lab environment, query execution returns thirty-something rows. Again, your number of results will be different.

 

The column headers and first few result set rows look like this:

 

where_used_basic_usage_result_set_header.png

 

Now what can we do with that information?

 

Result Set Structure

 

Each row of the where-used query's result set is one match location, i.e. a piece of text from an SAP IDM designtime object. This piece of text contains your search term at least once. I use the term "designtime object" as a generalization here, as I don't like to spell out "attributes or tasks or jobs or scripts" all the time.


The result set has the following row structure:


  Position  Column name           Description
  1         NODE_TYPE             Type of designtime object
  2         NODE_ID               ID of designtime object
  3         NODE_NAME             Name of designtime object
  4         MATCH_LOCATION_XML    XML view on MATCH_LOCATION_TEXT (see below)
  5         MATCH_LOCATION_TEXT   Piece of text containing the search term; part of the designtime object's content
  6         MATCH_DOCUMENT        XML view on the designtime object as a whole, or at least significant parts of its textual content


The first three columns, NODE_TYPE, NODE_ID and/or NODE_NAME, let us identify the designtime object a match has been found in. The remaining columns, MATCH_LOCATION_XML, MATCH_LOCATION_TEXT and MATCH_DOCUMENT, serve to display the matching content and its surrounding context.


NODE_TYPE indicates the type of designtime object a match is in. It can have one of the following values:

    1. A for Identity Center attributes
    2. T for tasks
    3. J for jobs
    4. S for global scripts

 

NODE_ID is the numeric ID which has internally been assigned by SAP IDM to this designtime object. For attributes, NODE_ID is the attribute ID, for tasks it is the task ID, and so on.

 

NODE_NAME is the user-provided name of the designtime object. For attributes, NODE_NAME is the attribute name, for tasks it is the task name, and so on.

 

MATCH_LOCATION_XML and MATCH_LOCATION_TEXT are very similar. Let's focus on MATCH_LOCATION_TEXT for now. That column contains a piece of text from the designtime object's definition. In general, this is only a smaller part, rather than the designtime object's whole content. Scripts are a notable exception to this rule, in that MATCH_LOCATION_TEXT contains their whole source code, not just a part of it.


MATCH_DOCUMENT, on the other hand, is a more complete, XML-based view on the designtime object's content. You can think of this XML document as the surrounding context of MATCH_LOCATION_*. In general, it contains user-provided text related to the designtime object's definition. In the case of attributes, for example, this XML document contains the attribute's description, allowed attribute values, SQL query used for value help and regular expression to validate attribute values. In the case of jobs, MATCH_DOCUMENT even contains the complete job definition in an internal and undocumented XML format defined by SAP.


Finally, it's important to note that the where-used query works case-insensitive and uses substring matching. Hence, searching for "mxi_attrvaluehelp" would find "mxi_AttrValueHelp", "$FUNCTION.my_function(MXI_ATTRVALUEHELP)$$" and others.
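To make those matching semantics concrete, here is the equivalent check in Python (a toy re-implementation of the matching behavior, not code from the query itself):

```python
def where_used_match(text, term):
    # Case-insensitive substring matching, like
    # contains(upper-case($t), upper-case("...")) in the XQuery shown earlier.
    return term.upper() in text.upper()

print(where_used_match("mxi_AttrValueHelp", "mxi_attrvaluehelp"))  # True
print(where_used_match("$FUNCTION.my_function(MXI_ATTRVALUEHELP)$$",
                       "mxi_attrvaluehelp"))                       # True
```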

 

With that knowledge under our belt, let's inspect the result set from our example query in some more detail.

 

Matches in Attributes

The first result set row in the previous screenshot corresponds to an Identity Center attribute (NODE_TYPE=A) whose ID is 3 (NODE_ID=3) and whose name is MX_LANGUAGE (NODE_NAME=MX_LANGUAGE). As you might guess, the most likely place where any attribute definition would refer to the table mxi_AttrValueHelp is in its "Attribute values" section, so let's switch over to MMC and inspect the attribute definition of MX_LANGUAGE.


If you have multiple Identity Stores and thus multiple attributes with the same name "MX_LANGUAGE", you could have a look at MATCH_DOCUMENT (use the hyperlink) to see exactly which Identity Store this match relates to. I'll skip this detail here for brevity and simply trust that we're talking about MX_LANGUAGE from the "Enterprise People" Identity Store.

where_used_basic_usage_mx_language.png

where_used_basic_usage_mx_language_attr_values.png

As you can see, the exact content from the MATCH_LOCATION_TEXT column, "mxi_AttrValueHelp", can be found in the input field "Table name" on the "Attribute values" tab in MMC.


Matches in Jobs

Scroll further down your result set, and check if it contains any rows with NODE_TYPE=J, which indicates a match within a job. In my environment, I have multiple result set rows which all relate to the SAP standard job "ABAP Read Help Values". You may not have this job in your system at all, in which case these or similar matches will be missing from your result set. Here's an excerpt from my matches:


where_used_basic_usage_rs_job_different_match_text.png


All match locations shown in the previous screenshot are contained in one job, as we can see from the identical values for NODE_TYPE, NODE_ID and NODE_NAME in these rows. The first two rows each have different MATCH_LOCATION_TEXT values, while the values in the last two rows are identical. The latter indicates that the job contains this MATCH_LOCATION_TEXT in two different places, e.g. in two separate passes.


Again, you could open the complete job definition as XML by following the MATCH_DOCUMENT hyperlink, and inspect the XML document to figure more precisely where each matching text is hidden. To keep it short, I'll not do that here and instead switch over to MMC directly.

 

How do we locate the relevant job in MMC, first of all? Fortunately, MMC lets us navigate to any job quickly, provided we know its ID. In MMC, select "Action ==> Find" from the menu bar. When prompted, enter the job ID displayed in column NODE_ID into the "Find" input field.


Make sure that you check "Find tasks or jobs only" to speed up the search, then press "Find next":


where_used_basic_usage_action_find.png

 

where_used_basic_usage_find_by_job_id.png


MMC will take you directly to the job definition. Let's inspect some of the job's passes, e.g. the very first one "Convert Locale of ValueHelp Table to UpperCase". On the destination tab of this toDatabase pass, we can see our search term "mxi_attrvaluehelp" as part of an SQL update statement:


where_used_basic_usage_abap_read_helpvalues.png


where_used_basic_usage_mmc_convert_locale.png

Referring back to our result set, this SQL statement is exactly what we see as MATCH_LOCATION_TEXT in the first line.


Let's browse further down to the pass "Address Salutation - to HelpValues". Switch to the destination tab of this pass. It's not exactly easy to spot, but the field "Database table name" used in this pass contains the MATCH_LOCATION_TEXT from row three (or four - there's no way to tell) from the query result set.


As mentioned earlier, the existence of multiple lines containing this value indicates that the job has a corresponding number of additional locations, maybe separate passes, which also contain this text. I'll not verify this now, but end my exploration of the "ABAP Read Help Values" job at this point.


where_used_basic_usage_mmc_to_helpvalues.png

 

I'll also deliberately skip discussion of result set line number two, whose MATCH_LOCATION_TEXT starts with "//Main function: ...". This is a match from a local script defined inside the job. Pretty much all of the information provided in the next section, matches in global scripts, is true for local scripts as well.


Matches in Global Scripts

Switch back to your query result set in Microsoft SQL Server Management Studio and scroll down until you find any line with NODE_TYPE=S, which indicates a match in a global script (JavaScript or VBScript). In my environment, there are matches in three global scripts, the first of which is sap_abap_getHelpValKey.


where_used_basic_usage_rs_global_script.png


When you have a look at the content of MATCH_LOCATION_TEXT of this result set row, you'll notice that it contains the complete JavaScript source code of this script. Unfortunately, though, all of the script's source code is displayed on one line, which makes MATCH_LOCATION_TEXT more or less unusable in this case. The content is simply too wide to be displayed on any monitor.

 

Here is where MATCH_LOCATION_XML is very handy. Click on the hyperlink of that column's value, and Microsoft SQL Server Management Studio will open a new, separate window with the complete script source code.

where_used_basic_usage_match_location_xml_global_script.png

There's a small but unavoidable oddity here, in that the content of MATCH_LOCATION_XML always has artificial markup characters "<?X " at the beginning of the first line and "?>" at the end of the last line. These markup characters are not part of the real content from the database, but are generated dynamically by the where-used query. For people who can live with this glitch, MATCH_LOCATION_XML's hyperlink navigation provides a convenient way to display script source code, or any other longer text matching your search term, directly in Microsoft SQL Server Management Studio.


Please note that matches in one single script will always result in one single match location of the where-used query. This is true even when there are multiple occurrences of your search term in this script's source code. For example, sap_abap_getHelpValKey contains our search term "mxi_attrvaluehelp" two times, as illustrated in the following screenshot. However, there's only a single line for this script in the query result set. If in doubt, I recommend you use the "Quick Find" (Ctrl+F) function in the editor window displaying the MATCH_LOCATION_XML value. This will help you step through all occurrences easily.


where_used_basic_usage_global_script_search_term_highlight.png

Conclusion

With this, I would like to conclude our tour through the where-used query. This post has grown lengthier than I intended, but I hope you still found it useful. Topics I have not covered at all, or touched upon only briefly, include:

 

  • How can you use it on Oracle or IBM DB2?
  • How can you refine your search?
  • What's in the MATCH_DOCUMENT column and how can you use it?

 

I might follow up on these in future posts.

 

Admittedly, the approach presented here is a stop-gap solution lacking any integration into the official SAP IDM tools. In particular, the need for switching back and forth between Microsoft SQL Server Management Studio and MMC is tedious, of course.

 

On the other hand, since I use it almost every day for refactoring or reverse engineering, I think other IDM developers might find it useful as well. Personally, I would love to see SAP build this or comparable capabilities into their Eclipse-based IDM designtime in a future support package of SAP IDM 8.0.

 

Let's see if it ever happens.

 

Cheers!

“Logic is the beginning of wisdom; not the end.” – Spock, Star Trek VI


I've been getting a lot of questions about managing external sources with IDM from a best practices perspective.  While I am not employed by SAP, I thought I would take the opportunity to share a few thoughts on this from both a logical viewpoint, along with my observations of watching people work with SAP IDM all these years.

 

Strictly from a compliance and governance standpoint, I've always been a fan of having IDM manage as much as possible. Reducing the use of disparate tools to manage digital identities in the SAP Landscape and the greater Enterprise will make your Internal Audit and Information Security groups happy. When we can centralize on a common tool that is capable of recording identity life cycle actions, approvals, notifications, escalations, delegations, etc., we make another step towards greater security and safety in our organizations.

 

Furthermore, since we can now put this all into a single tool, the Identity Management / Information Security team can create customized tasks and workflows to make these services more available to technical and non-technical managers, and in limited use cases, the users themselves (Self Service Password Reset and changes to personal data come to mind).

 

Ok, so this makes IDM the great equalizer.  Woo-hoo! Does this mean that we can only make changes to managed systems through IDM? I don't think so.

 

Managing the data in a system/repository from IDM is all well and good, but management of the system by its administrators should not be restricted. While I would say that it is a best practice to manipulate the data as much as possible via IDM, maintenance need not require IDM.

 

Administrators usually need to get deeper into the system and do more than IDM can support, so using IDM for everything in systems such as Active Directory or HCM might just not be possible. My advice to those that ask me would be as follows:

 

  • Whenever possible, work should be done through an audit-able system that requires logging in
  • All work sessions should be recorded in a log, along with a summary of work done, particularly when there is no login or access prompt
  • Maintenance to Production systems should always follow an established change control process
  • When doing maintenance, follow a "two-person" rule, where changes are verified by at least one other person whenever data files, deletes, or modifications are handled on a large scale
  • Finally -- DOCUMENT EVERYTHING!!!!! I can't emphasize this one enough. Make sure there are thorough notes detailing what is being changed, anticipated results, scope of change, details of change, documented code, and so on.  You just can't have enough documentation.

 

Again, these are my thoughts on the issue.  I'd be interested in hearing yours.

Hi guys,

 

I have decided to present our custom extension for the standard IdM Password Change UI:

PWR Add-on.png

Here are some of the standard IdM Password Change limitations:

  • doesn't support dynamic account generation based on the user's access
  • not possible to select just one or two of the available systems
  • no password provisioning status update
  • no mobile version
  • the UI is not very user-friendly

So we decided to create a custom solution using SAPUI5. Our UI dynamically generates the list of available systems based on the logged-in user's information and lets you select only the systems whose password you want to change. We have also added a progress status and a system status, so if the password provisioning fails in IdM or the back end, the UI shows an error state for the failed systems. Users can also log in from a mobile phone or tablet and change their password from there.

For a similar solution you will need IdM and SAPUI5 knowledge, plus additional IdM development for the UI REST calls and the password provisioning logic.

In IdM, you have to create two or three UI tasks responsible for displaying the user data and for the GET/POST calls to IdM; an additional entry type is also needed. Depending on the customer's needs, you have to create additional attributes responsible for the status update and the system provisioning. For the back-end systems, you will have to create a custom password change task instead of the standard one.

 

 

Kind Regards,

Simona Lincheva

We have just released SAP Identity Management 8.0, and it is now generally available.

The first several customers have successfully completed upgrade projects from Identity Management 7.2 to IdM 8.0, as well as new installations of IdM 8.0, during the customer validation and ramp-up phases.

 

SAP Identity Management 8.0 comes with the following main improvements:

• Innovative design-time IdM Developer Studio as an Eclipse plug-in. It replaces the Identity Center Management Console and introduces the new configuration package concept, better usability, and a multi-user environment. It also supports Visual Workflow Design, which allows you to visualize processes conveniently and arrange them via drag and drop in a workflow diagram

• Extended integration capabilities with SAP products through the new SAP SuccessFactors connector

• Improved security concept

 

You can get more insights in the following blogs:

 

 

More detailed technical information can be found on the SAP Identity Management 8.0 help portal and Release notes.

This issue has been reported by many customers.

A proper note will be created, but I will also explain it here.

 

So the situation is as follows: a user's password is changed in the database. The customer then expects that restarting the dispatcher will make everything work again. This is not the case. You have to change the password in exactly two places and regenerate the scripts for the change to take effect.

step1.PNG

1) If we are speaking about the RT user you need to change the JAVA RT properties first

step2.PNG

2) Then you need to change the Windows RT connection string

step3.PNG

3) Do not forget to press APPLY otherwise your changes won't be saved

step4.PNG

4) Regenerate the scripts and then try to start the dispatcher; it will all work fine now.

Every day it is fantastic to see the collaborative efforts in the IdM community, with some really talented IT experts sharing their knowledge (topic leaderboard, take a bow :-) ) with the greater IdM community. At times IdM can be complex when trying to realise the goals and processes set out in a project.

 

A possibly overlooked tool besides the community forum is the SAP wiki. The main aims of the wiki are:

 

 

  • Create findable, usable content (solutions)
  • Capture within the workflow: solutions are in the context of a target
    audience
  • Just in Time: solutions are easily improved over time based on demand
    and usage (edit as you go)
  • Opportunity to reward contributors based on knowledge they share

 

There are already some really useful wikis like the Troubleshooting Guide that can help when setting out on the IdM journey.

 

 

 

 

idm troubleshooting guide .PNG

 

The main IdM  wiki space can be found at

 

http://wiki.scn.sap.com/wiki/display/Security/SAP+Identity+Management+-+Overview

 

 

and the main NetWeaver troubleshooting wiki can be accessed here.

 

 

From the support side it would be great to get feedback on whether the wiki is a resource you have ever used, or whether you ever feel a need to contribute content to it.
All opinions, both good and bad/constructive (please be gentle!), would be appreciated.

 

Thanks,

 

Chris (from SAP)

This blog post is about changing data with the help of the SAP Identity Management REST API v1, including multi-value attributes and role references with validity dates.

I suggest using Google Chrome with the Postman REST client as a test environment.

 

Prerequisites:

To change data via a POST request, two header variables have to be sent to the Identity Management backend for security reasons. The first one is X-Requested-With = JSONHttpRequest. The second one is more involved: first, you have to send a GET request with X-CSRF-Token = Fetch to the server. You will receive a token, which has to be used in the POST, e.g. koMUjIyXs5z3qxUkAcKgJrJ0jOezwEQv2ZQ. All together:

  1. X-Requested-With = JSONHttpRequest
  2. X-CSRF-Token = <token_received_from_GET_request>

For details, see SAP Note 1806098: Unauthorized Use of Application Functions in REST Interface
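Put together, the token fetch and the two mandatory headers can be sketched in Python like this. This is a minimal sketch: the base URL is a placeholder for your own AS Java host, and only the header construction is shown — no request is actually sent.

```python
# Sketch of the CSRF handshake for the IdM REST API v1.
# The URL below is a placeholder -- adjust host/port to your AS Java.
IDM_BASE_URL = "http://localhost:50000/idmrest/v1"

# Headers for the initial GET that asks the server for a CSRF token.
# The token comes back in the X-CSRF-Token *response* header.
FETCH_HEADERS = {"X-CSRF-Token": "Fetch"}

def build_post_headers(csrf_token: str) -> dict:
    """Both headers are mandatory for modifying (POST) calls;
    see SAP Note 1806098."""
    return {
        "X-Requested-With": "JSONHttpRequest",
        "X-CSRF-Token": csrf_token,
    }
```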

 

To be able to change validity dates, you have to set the application property v72alpha.validity_enabled = true in AS Java.

For details, see SAP Note 1774751: Reading/Writing VALIDFROM and VALIDTO values with REST API

 

Change Single Value Attributes:

This example changes the first and last name.

POST http://localhost:50000/idmrest/v1/entries/<mskey_of_entry>/tasks/<task_id>?MX_FIRSTNAME=Benjamin&MX_LASTNAME=Franklin

 

Change Multi Value Attributes:

This example changes the additional phone numbers.

POST http://localhost:50000/idmrest/v1/entries/<mskey_of_entry>/tasks/<task_id>?MX_PHONE_ADDITIONAL=[{"VALUE":"555-41863218"},{"VALUE":"555-43518792"}]

 

By default, these values will be added. In order to delete a value, add CHANGETYPE = DELETE to it.

POST http://localhost:50000/idmrest/v1/entries/<mskey_of_entry>/tasks/<task_id>?MX_PHONE_ADDITIONAL=[{"VALUE":"555-41863218"},{"VALUE":"555-43518792","CHANGETYPE":"DELETE"}]
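Building that parameter value by hand is error-prone. A small hypothetical helper (the function name and the delete-set convention are my own) can assemble it with Python's json module:

```python
import json

def multi_value_param(values, delete=()):
    """Build the JSON value for a multi-value attribute parameter.
    Values listed in `delete` get CHANGETYPE=DELETE; everything else
    is added, which is the API default."""
    items = []
    for v in values:
        item = {"VALUE": v}
        if v in delete:
            item["CHANGETYPE"] = "DELETE"
        items.append(item)
    # Compact separators match the examples above (no extra spaces).
    return json.dumps(items, separators=(",", ":"))

param = multi_value_param(["555-41863218", "555-43518792"],
                          delete={"555-43518792"})
# param == '[{"VALUE":"555-41863218"},{"VALUE":"555-43518792","CHANGETYPE":"DELETE"}]'
```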

 

Change Role References with Validity Dates:

Validity dates are optional. As the value, use the mskeys of the roles you want to assign.

POST http://localhost:50000/idmrest/v1/entries/<mskey_of_entry>/tasks/<task_id>?MXREF_MX_ROLE=[{"VALUE":"29","REASON":"Test","VALIDFROM":"1999-01-01","VALIDTO":"2049-12-31"},{"VALUE":"28","REASON":"Test","VALIDFROM":"2000-02-15","VALIDTO":"2015-06-30"}]

 

REASON, VALIDFROM, and VALIDTO are link attributes. You are also able to set the context ID by setting CONTEXTID=<mskey_of_context> as additional link attribute.

 

Privileges have to be changed via MXREF_MX_PRIVILEGE. MX_ASSIGNMENTS is only a virtual attribute and cannot be changed.

 

(For GET requests, make sure to set "List Entries on Load" in the attribute definition in order to get role or privilege assignments via REST).

 

URL Encoding:

Always make sure you URL-encode these URL parameters (or use a library that can do this for you), which results in URLs like this:

POST http://localhost:50000/idmrest/v1/entries/<mskey_of_entry>/tasks/<task_id>?MXREF_MX_ROLE=%5B%7B%22VALUE%22%3A%2229%22%2C%22REASON%22%3A%22Test%22%2C%22VALIDFROM%22%3A%221999-01-01%22%2C%22VALIDTO%22%3A%222049-12-31%22%7D%2C%7B%22VALUE%22%3A%2228%22%2C%22REASON%22%3A%22Test%22%2C%22VALIDFROM%22%3A%222000-02-15%22%2C%22VALIDTO%22%3A%222015-06-30%22%7D%5D
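In Python, for instance, this encoding can be reproduced with urllib.parse.quote. The compact json separators match the un-encoded examples earlier, and safe="" forces every reserved character to be percent-encoded; the host and the <mskey_of_entry>/<task_id> placeholders are kept as-is.

```python
import json
from urllib.parse import quote

# One role assignment as in the example above; values are sample data.
role_assignments = [
    {"VALUE": "29", "REASON": "Test",
     "VALIDFROM": "1999-01-01", "VALIDTO": "2049-12-31"},
]

raw = json.dumps(role_assignments, separators=(",", ":"))
encoded = quote(raw, safe="")  # percent-encode [, {, ", : etc.

url = ("http://localhost:50000/idmrest/v1/entries/<mskey_of_entry>"
       "/tasks/<task_id>?MXREF_MX_ROLE=" + encoded)
```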

 

Related blog posts:

Write your own UIs using the ID Mgmt REST API

A simple example consuming SAP Identity Management REST API

When I am working on IDM projects, one of the things that I commonly find is that there is insufficient emphasis placed on the deprovisioning process. I have found this to be the case for a number of reasons, all of which usually come out during the requirements and discovery process, prior to determining a final design.

 

It's not that organizations don't know that users need to be deprovisioned; rather, it's the policies that need to be acknowledged during the process. Let's talk about three common types of systems found in a SAP IDM project: Active Directory, Email, and the SAP System of your choice.

 

For various reasons, we may need to hold on to some of a departing user's information. Information held on individual drive shares, in emails, etc. could be required for legal or compliance reasons. There's also the chance that the termination is not actually a termination; access might be suspended for various types of leaves or sabbaticals. So how do we handle this? First, we need to understand what happens during the process. For example, typical things I see are:

 

  • SAP IDM - Password is to be set to a random value, and the MX_DISABLED attribute is set to a non-null value. (Note that MX_INACTIVE has some more far-reaching effects. Please refer to page 92 in the document SAP NetWeaver Identity Management Identity Center - Identity Store Schema - Technical Reference.)
  • Active Directory / Exchange - User is to be disabled, moved to a separate, specific Organizational Unit, password to be reset to a random value.
  • SAP Systems - Login disabled, Password is to be set to a random value.

 

Typically this state is kept for a period of 14 to 180 days, after which the accounts are permanently deleted unless they have been reactivated.
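As an aside, the "random value" password resets mentioned above should come from a cryptographically secure source. A minimal Python sketch — the alphabet and length are my assumptions, so align them with the password policies of AD, Exchange, and your SAP systems:

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Generate a throwaway password for accounts being disabled.
    Uses the cryptographically secure secrets module; alphabet and
    length are assumptions, not a policy recommendation."""
    alphabet = string.ascii_letters + string.digits + "!$%&*+-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))
```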

 

One of the effects of these business processes is that the SAP IDM Provisioning Framework needs to be updated. Before making any changes to the framework, it is important to remember:

 

  • Make sure the original Framework install is backed up (I recommend copying it to another part of the configuration and also doing an export)
  • When editing specific tasks and jobs, make a backup copy.

 

Now you can go ahead and extend the framework. Note the following example where I have updated the Delete Plugin of the AD Connector:

SAP IDM PF Changes.png

 

Note that I made a copy of the "3 Delete AD User" task and then made a few changes, linking in the disable user task and moving the user to "ou=disabled" (for more information on this move, look here).

 

You can use this same approach for any other SAP IDM Connector. I suppose a more full-featured change for some connectors would have a conditional task to evaluate what kind of termination event this was and then either disable or outright delete the user, but I have to leave some topics for future writing.

 

Please let me know if this helps you or if you would take this a different way.

The RDS package was originally developed for Identity Management 7.2. If you are an IDM 8.0 early customer and need certain functionality available in this RDS package, here is a quick beginner's guide. It does not claim completeness, but rather shows the minimum set of steps to enable a certain functionality - in this case Copy Identity.

 

UPDATE NOTE

This blog was originally intended simply to test how this works and was not tested on any productive environment. I have received feedback that, since RDS 1.0 was created for the older IDM 7.2 SP5, simply running the steps below can create issues with the schema, because the schema is overwritten by the RDS package. Thanks to Rene Feister, who discovered this and recommended the following workaround:

- create new IDM 8.0 ID store

- export the schema

- import the schema into the ID store where you previously imported RDS 1.0; this should give you the actual 8.0 schema.

Please also be aware that Rene Feister and his team are currently developing RDS 2.0, which will probably come with IDM 8.0 SP2.

To fully benefit from all of the RDS features, follow the Initial System Setup in the RDS Configuration Guide in the file RDS_NW_IDM_IDM720_SERV\SAP_BP\BBLibrary\Documentation\D04_IDM720_BB_ConfigGuide_EN_XX.doc, which you can find in:

SAP Service Marketplace>Products>SAP Rapid Deployment Solutions (RDS)>Lines of Business>Finance>Enterprise Risk and Compliance>SAP Identity Management>Solution Deployment

 

1. Download the IDM 8.0 Patch 1 SCA and deploy it (e.g. via telnet) on the AS JAVA.

2. Install the IDM 8.0 Eclipse Developer environment Patch 1 from the SCA.

     In Eclipse, go to Help\Install New Software\Add, choose Archive, and point it to the location of the Eclipse archive from the SCA content

     IDM8.0_xxxxxxx\DATA_UNITS\IDM_CONFIG_LM_FOR_ECLIPSE_80\com.sap.idm.dev-ui-repository-8.0.2

3. Download the RDS package for IDM 7.2 - https://service.sap.com/rds-idm

     Go to SAP Service Marketplace>Products>SAP Rapid Deployment Solutions (RDS)>Lines of Business>Finance>Enterprise Risk and Compliance>SAP Identity Management>Solution Deployment and then choose SAP Identity Management RDS Content V1

4. Import the mcc schema as shown in the picture.

  • Right-click on Identity Store Schema\Import and find the import files folder as shown below.
  • In the lower right corner of the Open Dialog choose the IDM 7.2 “MCC files” and then select the 0256_IDM72_Identity_Store_Schema.mcc file.
  • From the Packages node’s context menu import the other two files:
    • 0256_IDM72_Job_Folder.mcc
    • 0256_IDM72_Provisioning_Folder.mcc

RDS1.png

5. In order to see the web forms in the Manage tab, you need to assign the respective role to the manager user: SAPC_IDM_ADMINISTRATORS

Assign this role to the user from the Web UI

     RDS2.png

6. Then the user should be able to see the RDS forms

RDS3.png

  You can add the Copy Identity function to your favorites, and it will appear in front of the Choose task button


RDS5.png 

 

     If you still do not see the new forms in the UI, restart the IDM JMX Java application in the SAP NetWeaver application server.


RDS6.png

7. Finally, check the following in the RDS package constants

     Double click the identity store

RDS7.png

     Check the ID of the identity store - in this case it is "1"

RDS8.png

     Open the idm\admin UI, go to System Configuration, open the package constants of the provisioning package, make sure the SAP_MASTER_IDS_ID value equals the identity store ID above (in this case "1"), and save.

RDS9.png

 

For further details, refer to the Identity Management 7.20 - Setup and Provisioning (D04) document mentioned above (D04_IDM720_BB_ConfigGuide_EN_XX.doc).
