
SAP Identity Management


There are some important things to consider when starting with IdM 8.

 

Qualified name

When installing the database schema, you will be asked to specify the Base Qualified Name.

  • This is the System name space and should be globally unique. It would typically be the customer domain, e.g. “com.<company>”.
  • The namespace will be used for the installation and will be the root of the package qualified name.
  • The package qualified name is used when referencing other packages.
  • This means that the System Name Space can never be changed, as doing so would break any package references.
  • The package names for an installation also have to be unique within the installation.
  • The only allowed characters in a qualified name are 'A'..'Z', 'a'..'z', '0'..'9', '_', '.'

So:

  • The qualified name is used to identify a package
  • The qualified name is assumed to be globally unique
  • Make sure each created package has a unique name
  • Have one person responsible for defining the qualified names within the name space
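The naming rules above are simple enough to check mechanically. As a small illustration (a hypothetical helper, not part of IdM itself), the allowed-character rule translates to this sketch:

```python
import re

# Only 'A'..'Z', 'a'..'z', '0'..'9', '_' and '.' are allowed in a qualified name.
QUALIFIED_NAME = re.compile(r"^[A-Za-z0-9_.]+$")

def is_valid_qualified_name(name: str) -> bool:
    """Return True if name uses only the characters allowed in a qualified name."""
    return bool(QUALIFIED_NAME.match(name))

print(is_valid_qualified_name("com.example.idm.core"))  # True
print(is_valid_qualified_name("com.example/idm"))       # False: '/' is not allowed
```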

Public objects

  • Processes and Package scripts may be public and will be referenced through their qualified name.
  • When made public, the names have to conform to the naming convention, being unique and only using the allowed characters mentioned above.
  • Forms are automatically made public when the type is set to Search form or Display form and also have to conform to the naming convention.
  • Repository types are always public with the same restrictions on naming.

Developer Studio users

When installing the database schema, you will be asked to specify the name of the development admin user. This is the only user that has access to the Developer Studio when the installation is done. This user must also be defined in UME, as authentication is done there.

In Developer Studio, this user may create other Developer Studio users once those users have been defined in UME.

The new users can be given different types of access to packages.

The Developer Studio users do not need any IdM-specific authorizations in UME, and do not need to be defined in the Identity Store as users of the IdM Admin UI or IdM UI.

“Failure is not an Option” – Gene Kranz, NASA

 

Late last year I had a chance to work on a project that centered on the use of the SAP Virtual Directory. One of the things we needed to do was make sure that the Virtual Directory configuration would be available in a fault-tolerant manner, to comply with general best practices and the customer's specific High Availability requirements. The good news is that Virtual Directory has built-in functionality to provide this via the Failover Group configuration.

To demonstrate this I created a simple scenario with three systems: two representing basic virtualization of a data source, and a third holding the failover configuration. In this example I virtualized the HR sample data that comes with SAP IDM. I've prepared a quick “architecture view” below:

 

VDS Failover Group Architecture.jpg

  

To demonstrate the difference between the configurations I named one Alpha, and the other Beta. We will be able to use this later on when we demonstrate that failover is actually occurring.

 

alpha-beta config.jpg

  

The next step was configuring the Failover Group. This isn't too hard to build: set up “Single” data sources to Alpha and Beta (this time as LDAP data sources), open the Groups node, and then the Performance & Availability node. From there, right-click, select New, and set up the Failover Group as I have done below. Again, I did this on a third VM just to make sure everything in our test scenario was properly segregated. I have posted the configurations on SourceForge so that you can have a starting point; there's a quick and dirty README there as well.

failover group config.jpg

 

 

You'll probably need to fiddle around a bit to get the groups enabled, but it can be done. I found that clicking on the source entry and then clicking the related “Enabled” box worked best. (Note to the development team: this is an area of the interface that could use some tweaking, at least in VDS 7.2.)

Once this is done, make sure that all of the configurations are running and try it out.  I did some testing using an LDAP browser (Apache Directory Studio, can’t recommend it highly enough).  During the first run, I pointed my LDAP browser at the failover configuration and saw Alpha.

 

alpha test.jpg

 

Then I went in and stopped the Alpha configuration. I didn't do anything fancy, just stopped the VDS config. Then just wait 15 seconds, or however long you have configured the “Servers marked as unavailable are not used for” setting. (VDS team, this setting name could be written more clearly as well.) Now go back and try your LDAP browser again. You'll notice the read probably takes a little longer now, and you might also get a Root DSE warning. You can disregard it; we're getting this message because we are combining two LDAP schemas via the read of Alpha and Beta.
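Conceptually, the failover behavior we just observed can be modeled like this (an illustrative sketch of the mechanism, not actual VDS code; the 15-second hold-down mirrors the "Servers marked as unavailable are not used for" setting):

```python
import time

class FailoverGroup:
    """Illustrative model of a failover group; not actual VDS code."""

    def __init__(self, backends, holddown_seconds=15.0):
        self.backends = backends            # list of (name, callable) pairs
        self.holddown = holddown_seconds    # seconds a failed backend is skipped
        self.unavailable_until = {}         # backend name -> monotonic timestamp

    def query(self):
        now = time.monotonic()
        for name, call in self.backends:
            if self.unavailable_until.get(name, 0.0) > now:
                continue                    # still marked unavailable; skip it
            try:
                return name, call()
            except Exception:
                # Mark the backend unavailable for the hold-down period.
                self.unavailable_until[name] = now + self.holddown
        raise RuntimeError("all backends unavailable")

def alpha():
    raise ConnectionError("Alpha is stopped")   # simulates the stopped config

def beta():
    return "result from Beta"

group = FailoverGroup([("Alpha", alpha), ("Beta", beta)])
print(group.query())  # ('Beta', 'result from Beta')
```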

 

beta test.jpg

 

That’s pretty much it. I know more needs to be done with it. What would you like to see? Heck, join the project and make your own changes! Look for some more open source fun coming soon!

In this post, I'll discuss how SAP IDM developers can find occurrences of any given character string, such as a table or attribute name, in the source code of their scripts, jobs, tasks or attribute definitions. Using a bit of SQL and XML, you can turn your graphical database client into a minimalist where-used list for SAP IDM.

 

Prerequisites

 

This post is intended to serve as a tutorial-style documentation for an SQL query which I originally developed in one of my past projects and later decided to publish as open source on GitHub. I'll refer to this SQL query as where-used query throughout the remainder of this document. You can download the latest version for Microsoft SQL Server here. Separate versions for Oracle and IBM DB2 exist as well, but I'll focus on SQL Server in this post exclusively.

 

You will need the following things to try out the steps of this tutorial on your own:

  • Source code of the where-used query (see download link above)
  • SAP IDM 7.2 on Microsoft SQL Server
  • SAP Identity Center Designtime based on Microsoft Management Console (MMC)
  • Microsoft SQL Server Management Studio (SSMS)
  • Database connection as admin user from SSMS to the SAP IDM database

 

Please note that although the where-used query requires admin access, all database operations it performs are read-only (i.e. SELECT). Its execution will not alter any data in the database. It's still good practice to use it in development systems only. Please don't try it in production.

 

Apart from the tools listed above, I assume some familiarity with Microsoft SQL Server Management Studio on the reader's behalf. Readers should know how to open and edit SQL files and how to connect to the database with a specific user.

 

Overview

The basic workflow for finding locations containing the search term you're interested in is:

  • In SSMS: edit the SQL code to fill in your search term
  • In SSMS: execute the where-used query as admin user
  • In SSMS: display ID, name and content of matching locations
  • In MMC: navigate to matching locations

 

A Practical Example

Let's use the where-used query to find occurrences of the database table name "mxi_attrvaluehelp" in SAP IDM. Since this is the standard value help table, it's likely to be used in a variety of locations within any SAP IDM system, provided the SAP Provisioning Framework and maybe a couple of standard jobs have been imported. Locations where this table is typically used include:

  • Legal values of Identity Store attributes
  • Source code of global or (job) local scripts
  • Anywhere within job definitions, e.g. in the source or destination of a certain pass

 

Preparing Execution

To start, open the where-used query in Microsoft SQL Server Management Studio. Scroll down in the SQL query editor window to the very bottom, and find the following XQuery statements near the end:

 

for $t in (//attribute::*, /descendant-or-self::text())
where contains(upper-case($t),upper-case("YOUR_SEARCH_TERM_HERE"))
return $t

 

Now comes the most important part: in the above code, replace YOUR_SEARCH_TERM_HERE with mxi_attrvaluehelp, so it becomes:

 

for $t in (//attribute::*, /descendant-or-self::text())
where contains(upper-case($t),upper-case("mxi_attrvaluehelp"))
return $t
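For intuition: the XQuery walks every attribute node and every text node in the XML document and keeps those containing the search term, case-insensitively. A rough Python equivalent of that traversal (purely illustrative, not part of the actual where-used query):

```python
import xml.etree.ElementTree as ET

def find_matches(xml_text, term):
    """Collect attribute values and text nodes containing term, case-insensitively."""
    term = term.upper()
    matches = []
    for elem in ET.fromstring(xml_text).iter():
        # //attribute::* : every attribute value in the document
        for value in elem.attrib.values():
            if term in value.upper():
                matches.append(value)
        # /descendant-or-self::text() : every text node (element text and tails)
        for text in (elem.text, elem.tail):
            if text and term in text.upper():
                matches.append(text)
    return matches

doc = '<job name="Read mxi_AttrValueHelp"><sql>SELECT * FROM MXI_ATTRVALUEHELP</sql></job>'
print(find_matches(doc, "mxi_attrvaluehelp"))
```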

 

The next step is to verify that the query will be executed as admin user.

 

Have a look at the status line in the lower right corner of Microsoft SQL Server Management Studio to verify the name of the database user used by the current query editor window. In a default installation, the user name displayed should be MXMC_ADMIN. Note that the name mxmc728_admin displayed in the following screenshot is specific to my lab environment. Yours will be different.

 

If you're currently connected as any user other than admin, switch the DB connection before proceeding.

 

where_used_basic_usage_ssms_current_user.png

 

Finally, verify that the button "Results to Grid" in the Microsoft SQL Server Management Studio toolbar is checked, so the result set returned by the where-used query will be displayed in a graphical table control, and not as plain-text. This will become important later on for navigating to the full content of matching text locations.


where_used_basic_usage_ssms_results_to_grid.png


Also, make sure you haven't accidentally selected any of the SQL code in the query editor window with your mouse or cursor. This would cause errors during execution, because only the highlighted part of the code would be executed. If unsure, perform a single left click in the query editor window to discard any current text selection.

 

Now execute the query, e.g. by pressing F5 on your keyboard. You should expect only a couple of seconds of execution time. In my lab environment, query execution returns thirty-something rows. Again, your number of results will be different.

 

The column headers and first few result set rows look like this:

 

where_used_basic_usage_result_set_header.png

 

Now what can we do with that information?

 

Result Set Structure

 

Each row of the where-used query's result set is one match location, i.e. a piece of text from an SAP IDM designtime object. This piece of text contains your search term at least once. I use the term "designtime object" as a generalization here, as I don't like to spell out "attributes or tasks or jobs or scripts" all the time.


The result set has the following row structure:


  1. NODE_TYPE: Type of designtime object
  2. NODE_ID: ID of designtime object
  3. NODE_NAME: Name of designtime object
  4. MATCH_LOCATION_XML: XML view on MATCH_LOCATION_TEXT (see below)
  5. MATCH_LOCATION_TEXT: Piece of text containing the search term; part of the designtime object's content
  6. MATCH_DOCUMENT: XML view on the designtime object as a whole, or at least significant parts of its textual content


The first three columns, NODE_TYPE, NODE_ID and/or NODE_NAME, let us identify the designtime object a match has been found in. The remaining columns, MATCH_LOCATION_XML, MATCH_LOCATION_TEXT and MATCH_DOCUMENT, serve to display the matching content and its surrounding context.


NODE_TYPE indicates the type of designtime object a match is in. It can have one of the following values:

    1. A for Identity Center attributes
    2. T for tasks
    3. J for jobs
    4. S for global scripts

 

NODE_ID is the numeric ID which has internally been assigned by SAP IDM to this designtime object. For attributes, NODE_ID is the attribute ID, for tasks it is the task ID, and so on.

 

NODE_NAME is the user-provided name of the designtime object. For attributes, NODE_NAME is the attribute name, for tasks it is the task name, and so on.

 

MATCH_LOCATION_XML and MATCH_LOCATION_TEXT are very similar. Let's focus on MATCH_LOCATION_TEXT for now. That column contains a piece of text from the designtime object's definition. In general, this is only a smaller part, rather than the designtime object's whole content. Scripts are a notable exception to this rule, in that MATCH_LOCATION_TEXT contains their whole source code, not just a part of it.


MATCH_DOCUMENT, on the other hand, is a more complete, XML-based view on the designtime object's content. You can think of this XML document as the surrounding context of MATCH_LOCATION_*. In general, it contains user-provided text related to the designtime object's definition. In the case of attributes, for example, this XML document contains the attribute's description, allowed attribute values, SQL query used for value help and regular expression to validate attribute values. In the case of jobs, MATCH_DOCUMENT even contains the complete job definition in an internal and undocumented XML format defined by SAP.


Finally, it's important to note that the where-used query matches case-insensitively and uses substring matching. Hence, searching for "mxi_attrvaluehelp" would find "mxi_AttrValueHelp", "$FUNCTION.my_function(MXI_ATTRVALUEHELP)$$" and others.
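In other words, the match semantics boil down to this one-liner (an illustration of the semantics, not the query's actual implementation):

```python
def is_match(content, term):
    """Case-insensitive substring matching, as the where-used query does it."""
    return term.upper() in content.upper()

print(is_match("mxi_AttrValueHelp", "mxi_attrvaluehelp"))                           # True
print(is_match("$FUNCTION.my_function(MXI_ATTRVALUEHELP)$$", "mxi_attrvaluehelp"))  # True
print(is_match("mxi_attributes", "mxi_attrvaluehelp"))                              # False
```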

 

With that knowledge under our belt, let's inspect the result set from our example query in some more detail.

 

Matches in Attributes

The first result set row in the previous screenshot corresponds to an Identity Center attribute (NODE_TYPE=A) whose ID is 3 (NODE_ID=3) and whose name is MX_LANGUAGE (NODE_NAME=MX_LANGUAGE). As you might guess, the most likely place where any attribute definition would refer to the table mxi_AttrValueHelp is in its "Attribute values" section, so let's switch over to MMC and inspect the attribute definition of MX_LANGUAGE.


If you have multiple Identity Stores and thus multiple attributes with the same name "MX_LANGUAGE", you could have a look at MATCH_DOCUMENT (use the hyperlink) to see exactly which Identity Store this match relates to. I'll skip this detail here for brevity and simply trust that we're talking about MX_LANGUAGE from the "Enterprise People" Identity Store.

where_used_basic_usage_mx_language.png

where_used_basic_usage_mx_language_attr_values.png

As you can see, the exact content from the MATCH_LOCATION_TEXT column, "mxi_AttrValueHelp", can be found in the input field "Table name" on the "Attribute values" tab in MMC.


Matches in Jobs

Scroll further down your result set, and check if it contains any rows with NODE_TYPE=J, which indicates a match within a job. In my environment, I have multiple result set rows which all relate to the SAP standard job "ABAP Read Help Values". You may not have this job in your system at all, in which case these or similar matches will be missing from your result set. Here's an excerpt from my matches:


where_used_basic_usage_rs_job_different_match_text.png


All match locations shown in the previous screenshot are contained in one job, as we can see from the identical values for NODE_TYPE, NODE_ID and NODE_NAME in these rows. The first two rows each have different MATCH_LOCATION_TEXT values, while the values in the last two rows are identical. The latter indicates that the job contains this MATCH_LOCATION_TEXT in two different places, e.g. in two separate passes.


Again, you could open the complete job definition as XML by following the MATCH_DOCUMENT hyperlink, and inspect the XML document to figure out more precisely where each matching text is hidden. To keep it short, I'll not do that here and instead switch over to MMC directly.

 

How do we locate the relevant job in MMC, first of all? Fortunately, MMC lets us navigate to any job quickly, provided we know its ID. In MMC, select "Action ==> Find" from the menu bar. When prompted, enter the job ID displayed in column NODE_ID into the "Find" input field.


Make sure that you check "Find tasks or jobs only" to speed up the search, then press "Find next":


where_used_basic_usage_action_find.png

 

where_used_basic_usage_find_by_job_id.png


MMC will take you directly to the job definition. Let's inspect some of the job's passes, e.g. the very first one "Convert Locale of ValueHelp Table to UpperCase". On the destination tab of this toDatabase pass, we can see our search term "mxi_attrvaluehelp" as part of an SQL update statement:


where_used_basic_usage_abap_read_helpvalues.png


where_used_basic_usage_mmc_convert_locale.png

Referring back to our result set, this SQL statement is exactly what we see as MATCH_LOCATION_TEXT in the first line.


Let's browse further down to the pass "Address Salutation - to HelpValues". Switch to the destination tab of this pass. It's not exactly easy to spot, but the field "Database table name" used in this pass contains the MATCH_LOCATION_TEXT from row three (or four - there's no way to tell) from the query result set.


As mentioned earlier, the existence of multiple lines containing this value indicates that the job has a corresponding number of additional locations, maybe separate passes, which also contain this text. I'll not verify this now, but end my exploration of the "ABAP Read Help Values" job at this point.


where_used_basic_usage_mmc_to_helpvalues.png

 

I'll also deliberately skip discussion of result set line number two, whose MATCH_LOCATION_TEXT starts with "//Main function: ...". This is a match from a local script defined inside the job. Pretty much all of the information provided in the next section, matches in global scripts, is true for local scripts as well.


Matches in Global Scripts

Switch back to your query result set in Microsoft SQL Server Management Studio and scroll down until you find any line with NODE_TYPE=S, which indicates a match in a global script (JavaScript or VBScript). In my environment, there are matches in three global scripts, the first of which is sap_abap_getHelpValKey.


where_used_basic_usage_rs_global_script.png


When you have a look at the content of MATCH_LOCATION_TEXT of this result set row, you'll notice that it contains the complete JavaScript source code of this script. Unfortunately, though, all of the script's source code is displayed on one line, which makes MATCH_LOCATION_TEXT more or less unusable in this case. The content is simply too wide to be displayed on any monitor.

 

Here is where MATCH_LOCATION_XML is very handy. Click on the hyperlink of that column's value, and Microsoft SQL Server Management Studio will open a new, separate window with the complete script source code.

where_used_basic_usage_match_location_xml_global_script.png

There's a small but unavoidable oddity here, in that the content of MATCH_LOCATION_XML always has artificial markup characters "<?X " at the beginning of the first line and "?>" at the end of the last line. These markup characters are not part of the real content from the database, but are generated dynamically by the where-used query. For people who can live with this glitch, MATCH_LOCATION_XML's hyperlink navigation provides a convenient way to display script source code, or any other longer text matching your search term, directly in Microsoft SQL Server Management Studio.


Please note that matches in one single script will always result in one single match location of the where-used query. This is true even when there are multiple occurrences of your search term in this script's source code. For example, sap_abap_getHelpValKey contains our search term "mxi_attrvaluehelp" two times, as illustrated in the following screenshot. However, there's only a single line for this script in the query result set. If in doubt, I recommend you use the "Quick Find" (Ctrl+F) function in the editor window displaying the MATCH_LOCATION_XML value. This will help you step through all occurrences easily.


where_used_basic_usage_global_script_search_term_highlight.png

Conclusion

With this, I would like to conclude our tour through the where-used query. This post has grown lengthier than I intended, but I hope you still found it useful. Topics I have not covered at all, or touched upon only briefly, include:

 

  • How can you use it on Oracle or IBM DB2?
  • How can you refine your search?
  • What's in the MATCH_DOCUMENT column and how can you use it?

 

I might follow up on these in future posts.

 

Admittedly, the approach presented here is a stop-gap solution lacking any integration into the official SAP IDM tools. In particular, the need for switching back and forth between Microsoft SQL Server Management Studio and MMC is tedious, of course.

 

On the other hand, the fact that I use it almost every day for refactoring or reverse engineering makes me think that other IDM developers might find it useful as well. Personally, I would love to see SAP build this or comparable capabilities into their Eclipse-based IDM designtime in a future support package of SAP IDM 8.0.

 

Let's see if it ever happens.

 

Cheers!

“Logic is the beginning of wisdom; not the end.” – Spock, Star Trek VI


I've been getting a lot of questions about managing external sources with IDM from a best practices perspective.  While I am not employed by SAP, I thought I would take the opportunity to share a few thoughts on this from both a logical viewpoint, along with my observations of watching people work with SAP IDM all these years.

 

Strictly from a compliance and governance standpoint, I've always been a fan of having IDM manage as much as possible. Reducing the use of disparate tools to manage digital identities in the SAP Landscape and the greater Enterprise will make your Internal Audit and Information Security groups happy. When we can centralize on a common tool that is capable of recording identity life cycle actions, approvals, notifications, escalations, delegations, etc., we make another step towards greater security and safety in our organizations.

 

Furthermore, since we can now put this all into a single tool, the Identity Management / Information Security team can create customized tasks and workflows to make these services more available to technical and non-technical managers, and in limited use cases, the users themselves (Self Service Password Reset and changes to personal data come to mind).

 

Ok, so this makes IDM the great equalizer.  Woo-hoo! Does this mean that we can only make changes to managed systems through IDM? I don't think so.

 

Managing the data in a system/repository from IDM is all well and good, but management of the system by its administrators should not be restricted. While I would say that it is a best practice to manipulate the data as much as possible via IDM, maintenance need not require IDM.

 

Administrators usually need to get deeper into the system and do more than IDM can support, so using IDM for everything in systems such as Active Directory or HCM might just not be possible. My advice to those that ask me would be as follows:

 

  • Whenever possible, work should be done through an audit-able system that requires logging in
  • All work sessions should be recorded in a log, along with a summary of work done, particularly when there is no login or access prompt
  • Maintenance to Production systems should always follow an established change control process
  • When doing maintenance, follow a "two-person" rule, where changes are verified by at least one other person, particularly when data files, deletes, or modifications are being handled on a large scale
  • Finally -- DOCUMENT EVERYTHING!!!!! I can't emphasize this one enough. Make sure there are thorough notes detailing what is being changed, anticipated results, scope of change, details of change, documented code, and so on.  You just can't have enough documentation.

 

Again, these are my thoughts on the issue.  I'd be interested in hearing yours.

Hi guys,

 

I have decided to present our custom extension for the standard IdM Password Change UI:

PWR Add-on.png

Here are some of the standard IdM Password Change limitations:

  • doesn't support dynamic account generation based on the user's access
  • not possible to select just one or two of the available systems
  • not possible to have a password provisioning status update
  • no mobile version
  • UI is not very user-friendly

So we decided to create a custom solution using SAPUI5. Our UI dynamically generates the available systems based on the logged-in user's information and lets you select only the systems whose password you want to change. We have also added a progress status and a system status, so if password provisioning fails in IdM or the back end, the UI shows an error state for the failed systems. Users can also log in from a mobile phone or tablet and change their password from there.

For a similar solution you will need IdM and SAPUI5 knowledge, plus additional IdM development for the UI REST calls and the password provisioning logic.

In IdM you have to create two or three UI tasks responsible for displaying the user data and for the GET/POST calls to IdM; an additional entry type is needed. Depending on the customer's needs, you may have to create additional attributes responsible for the status update and the system provisioning. For the back-end systems you will have to create a custom password change task instead of the standard one.

 

 

Kind Regards,

Simona Lincheva

We have just released SAP Identity Management 8.0, which is now generally available.

The first customers have successfully completed upgrade projects from Identity Management 7.2 to IdM 8.0, as well as new installations of IdM 8.0, during the customer validation and ramp-up phases.

 

SAP Identity Management 8.0 comes with the following main improvements:

  • Innovative design-time IdM Developer Studio as an Eclipse plug-in. It replaces the Identity Center Management Console and comes with a new configuration packages concept, better usability, and a multi-user environment. It supports Visual Workflow Design, which lets you conveniently visualize processes and arrange them via drag and drop in a workflow diagram.
  • Extended integration capabilities with SAP products, including an SAP SuccessFactors connector
  • Improved security concept

 

You can get more insights in the following blogs:

 

 

More detailed technical information can be found on the SAP Identity Management 8.0 help portal and Release notes.

This issue is reported by many customers.

A proper note will be created, but I will also explain it here.

 

So the situation is as follows: a user in the database has his password changed. The customer then expects that when the dispatcher is restarted, it should work again. This is wrong. You need to change the password in exactly two places and regenerate the scripts for this to take effect.

step1.PNG

1) If we are speaking about the RT user you need to change the JAVA RT properties first

step2.PNG

2) Then you need to change the Windows RT connection string

step3.PNG

3) Do not forget to press APPLY otherwise your changes won't be saved

step4.PNG

4) Regenerate the scripts, then try to start the dispatcher; it should all be fine now.

Every day it is fantastic to see the collaborative efforts in the IdM community, with some really talented IT experts (topic leaderboard, take a bow :-) ) sharing their knowledge with the greater IdM community. At times IdM can be complex when trying to realise the goals and processes set out in a project.

 

A possibly overlooked tool besides the community forum is the SAP wiki. The main aims of the wiki are:

 

 

  • Create findable, usable content (solutions)
  • Capture within the workflow: solutions are in the context of a target
    audience
  • Just in Time: solutions are easily improved over time based on demand
    and usage (edit as you go)
  • Opportunity to reward contributors based on knowledge they share

 

There are already some really useful wikis like the Troubleshooting Guide that can help when setting out on the IdM journey.

 

 

 

 

idm troubleshooting guide .PNG

 

The main IdM wiki space can be found at

 

http://wiki.scn.sap.com/wiki/display/Security/SAP+Identity+Management+-+Overview

 

 

and the main Netweaver troubleshooting wiki can be accessed here .

 

 

From the support side, it would be great to get feedback: is the wiki a resource you have ever used, and do you ever feel a need to contribute content to it?
All opinions, both good and bad/constructive (please be gentle!), would be appreciated.

 

Thanks,

 

Chris (from SAP)

This blog post is about changing data with the help of the SAP Identity Management REST API v1, including multi-value attributes and role references with validity dates.

I suggest using Google Chrome and the REST plugin Postman as a test environment.

 

Prerequisites:

To be able to change data via a POST request, two header variables have to be sent to the Identity Management back end for security reasons. The first one is X-Requested-With = JSONHttpRequest. The second one is more involved: first, you have to send a GET request with X-CSRF-Token = Fetch to the server. You will receive a token, which has to be used in the POST, e.g. koMUjIyXs5z3qxUkAcKgJrJ0jOezwEQv2ZQ. All together:

  1. X-Requested-With = JSONHttpRequest
  2. X-CSRF-Token = <token_received_from_GET_request>

For details, see SAP Note 1806098: Unauthorized Use of Application Functions in REST Interface
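The two-step handshake can be sketched in Python with the standard library (host, port, and the GET endpoint used for fetching the token are placeholders; the header names are those described above):

```python
import urllib.request

BASE = "http://localhost:50000/idmrest/v1"  # placeholder host and port

def fetch_csrf_token():
    """Step 1: send a GET with X-CSRF-Token: Fetch; the server answers with a token.
    The endpoint used here is a placeholder assumption."""
    req = urllib.request.Request(BASE + "/entries", headers={"X-CSRF-Token": "Fetch"})
    with urllib.request.urlopen(req) as resp:
        return resp.headers["X-CSRF-Token"]

def change_request(url, token):
    """Step 2: build a POST carrying both required security headers."""
    return urllib.request.Request(
        url,
        method="POST",
        headers={
            "X-Requested-With": "JSONHttpRequest",
            "X-CSRF-Token": token,
        },
    )
```

Running `fetch_csrf_token()` of course requires a reachable IdM REST endpoint; `change_request()` only builds the request object.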

 

To be able to change validity dates, you have to change the application property v72alpha.validity_enabled = true in AS Java.

For details, see SAP Note 1774751: Reading/Writing VALIDFROM and VALIDTO values with REST API

 

Change Single Value Attributes:

This example changes first and last name.

POST http://localhost:50000/idmrest/v1/entries/<mskey_of_entry>/tasks/<task_id>?MX_FIRSTNAME=Benjamin&MX_LASTNAME=Franklin

 

Change Multi Value Attributes:

This example changes the additional phone numbers.

POST http://localhost:50000/idmrest/v1/entries/<mskey_of_entry>/tasks/<task_id>?MX_PHONE_ADDITIONAL=[{"VALUE":"555-41863218"},{"VALUE":"555-43518792"}]

 

By default, these values will be added. In order to delete values, add the changetype = delete.

POST http://localhost:50000/idmrest/v1/entries/<mskey_of_entry>/tasks/<task_id>?MX_PHONE_ADDITIONAL=[{"VALUE":"555-41863218"},{"VALUE":"555-43518792","CHANGETYPE":"DELETE"}]

 

Change Role References with Validity Dates:

Validity dates are optional. As value, use the mskey of the roles you want to assign.

POST http://localhost:50000/idmrest/v1/entries/<mskey_of_entry>/tasks/<task_id>?MXREF_MX_ROLE=[{"VALUE":"29","REASON":"Test","VALIDFROM":"1999-01-01","VALIDTO":"2049-12-31"},{"VALUE":"28","REASON":"Test","VALIDFROM":"2000-02-15","VALIDTO":"2015-06-30"}]

 

REASON, VALIDFROM, and VALIDTO are link attributes. You are also able to set the context ID by setting CONTEXTID=<mskey_of_context> as additional link attribute.

 

Privileges have to be changed via MXREF_MX_PRIVILEGE. MX_ASSIGNMENTS is only a virtual attribute and cannot be changed.

 

(For GET requests, make sure to set "List Entries on Load" in the attribute definition in order to get role or privilege assignments via REST).

 

URL Encoding:

Always make sure you URL-encode these URL parameters (or use a library capable of doing this), which results in URLs like these:

POST http://localhost:50000/idmrest/v1/entries/<mskey_of_entry>/tasks/<task_id>?MXREF_MX_ROLE=%5B%7B%22VALUE%22%3A%2229%22%2C%22REASON%22%3A%22Test%22%2C%22VALIDFROM%22%3A%221999-01-01%22%2C%22VALIDTO%22%3A%222049-12-31%22%7D%2C%7B%22VALUE%22%3A%2228%22%2C%22REASON%22%3A%22Test%22%2C%22VALIDFROM%22%3A%222000-02-15%22%2C%22VALIDTO%22%3A%222015-06-30%22%7D%5D
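Building such encoded URLs by hand is error-prone, so let a library do the encoding. A Python sketch using the standard library (mskey and task ID remain placeholders, and only the first role from the example is encoded here):

```python
import json
from urllib.parse import urlencode

# Placeholders for mskey and task ID, as in the examples above.
base = "http://localhost:50000/idmrest/v1/entries/<mskey_of_entry>/tasks/<task_id>"

roles = [{"VALUE": "29", "REASON": "Test",
          "VALIDFROM": "1999-01-01", "VALIDTO": "2049-12-31"}]

# Compact JSON (no spaces) matches the plain-text form shown above.
query = urlencode({"MXREF_MX_ROLE": json.dumps(roles, separators=(",", ":"))})
url = base + "?" + query
print(url)
```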

 

Related blog posts:

Write your own UIs using the ID Mgmt REST API

A simple example consuming SAP Identity Management REST API

When I am working on IDM projects, one of the things that I commonly find is that there is insufficient emphasis placed on the deprovisioning process. I have found this to be the case for a number of reasons, all of which usually come out during the requirements and discovery process, prior to determining a final design.

 

It's not that organizations don't know that users need to be deprovisioned, rather it's the policies that need to be acknowledged during the process. Let's talk about three common types of systems found in a SAP IDM project: Active Directory, Email, and the SAP System of your choice.

 

For various reasons, the person leaving the company might have information that we need to hold on to. Information held on individual drive shares, in email, etc. could be required for legal or compliance reasons. There's also the chance that the termination is not actually a termination; access might be suspended for various types of leave or sabbaticals. So how do we handle this? First, we need to understand what happens during the process. For example, typical things I see are:

 

  • SAP IDM - The password is to be set to a random value, and the MX_DISABLED attribute is set to a non-null value. (Note that MX_INACTIVE has some more far-reaching effects. Please refer to page 92 in the document SAP NetWeaver Identity Management Identity Center - Identity Store Schema - Technical Reference.)
  • Active Directory / Exchange - User is to be disabled, moved to a separate, specific Organizational Unit, and the password reset to a random value.
  • SAP Systems - Login disabled, password set to a random value.
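For the Active Directory step, "disabled" is just the ACCOUNTDISABLE bit (0x2) of the userAccountControl attribute. A minimal sketch of the flag arithmetic, independent of any directory library (the helper function names here are illustrative, not part of the Provisioning Framework):

```python
# userAccountControl bit flags (per Microsoft's AD documentation)
ACCOUNTDISABLE = 0x0002   # account is disabled
NORMAL_ACCOUNT = 0x0200   # typical enabled user account = 512

def disable(uac: int) -> int:
    """Set the ACCOUNTDISABLE bit, leaving all other flags intact."""
    return uac | ACCOUNTDISABLE

def enable(uac: int) -> int:
    """Clear the ACCOUNTDISABLE bit."""
    return uac & ~ACCOUNTDISABLE

def is_disabled(uac: int) -> bool:
    return bool(uac & ACCOUNTDISABLE)

# A normal enabled account (512) becomes 514 when disabled.
print(disable(NORMAL_ACCOUNT))
```

In a real task the resulting value would be written back to the user's userAccountControl attribute via the AD connector.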

 

Typically this state is kept for a period of 14 to 180 days after which the accounts are permanently deleted, unless they have been reactivated.

 

One of the effects of these business processes is that the SAP IDM Provisioning Framework needs to be updated. Before taking any actions on the framework, it is important to remember:

 

  • Make sure the original Framework install is backed up (I recommend copying it to another part of the configuration and doing an export)
  • When editing specific tasks and jobs, make a backup copy.

 

Now you can go ahead and extend the framework. Note the following example where I have updated the Delete Plugin of the AD Connector:

SAP IDM PF Changes.png

 

Note that I made a copy of the "3 Delete AD User" task and then made a few changes, linking in the disable-user task and moving the user to "ou=disabled" (for more information on this move, look here.)

 

You can use this same approach for any other SAP IDM Connector. I suppose a more full-featured change for some connectors would have a conditional task to evaluate what kind of termination event this was and then either disable or outright delete, but I have to leave some topics for future writing.

 

Please let me know if this helps you or if you would take this a different way.

The RDS package was originally developed for Identity Management 7.2. If you are an IDM 8.0 early customer and need certain functionality available in this RDS package, here is a quick beginner's guide. It does not pretend to be complete, but rather shows the minimum set of steps to enable a certain piece of functionality - in this case, Copy Identity.

 

UPDATE NOTE

This blog was originally intended simply to test how it works and has not been tested on any productive environment. I have received feedback that, since RDS 1.0 was created for the older IDM 7.2 SP5, simply running the steps below can cause some schema issues, as the schema is overwritten by the RDS package. Thanks to Rene Feister, who discovered this and recommended the following workaround:

- create new IDM 8.0 ID store

- export the schema

- import that schema into the ID store where you previously imported RDS 1.0; this should give you the actual 8.0 schema.

Also be aware that Rene Feister and his team are currently developing RDS 2.0, which will probably come with IDM 8.0 SP2.

To fully benefit from all of the RDS features, follow the Initial System Setup in the RDS Configuration Guide in the file RDS_NW_IDM_IDM720_SERV\SAP_BP\BBLibrary\Documentation\D04_IDM720_BB_ConfigGuide_EN_XX.doc, which you can find in:

SAP Service Marketplace>Products>SAP Rapid Deployment Solutions (RDS)>Lines of Business>Finance>Enterprise Risk and Compliance>SAP Identity Management>Solution Deployment

 

1. Download the IDM 8.0 Patch 1 SCA and deploy it (e.g. via telnet) on the AS JAVA.

2. Install the IDM 8.0 Eclipse Developer environment Patch 1 from the SCA.

     In Eclipse, go to Help\Install New Software\Add, choose Archive, and find the location of the Eclipse archive in the SCA content:

     IDM8.0_xxxxxxx\DATA_UNITS\IDM_CONFIG_LM_FOR_ECLIPSE_80\com.sap.idm.dev-ui-repository-8.0.2

3. Download the RDS package for IDM 7.2 - https://service.sap.com/rds-idm

     Go to SAP Service Marketplace>Products>SAP Rapid Deployment Solutions (RDS)>Lines of Business>Finance>Enterprise Risk and Compliance>SAP Identity Management>Solution Deployment and then choose SAP Identity Management RDS Content V1

4. Import the schema .mcc files as shown in the picture.

  • Right-click Identity Store Schema\Import and find the import files folder as shown below.
  • In the lower right corner of the Open dialog, choose the IDM 7.2 “MCC files” file type and then select the 0256_IDM72_Identity_Store_Schema.mcc file.
  • From the Packages node’s context menu import the other two files:
    • 0256_IDM72_Job_Folder.mcc
    • 0256_IDM72_Provisioning_Folder.mcc

RDS1.png

5. In order to see the web forms in the Manage tab, you need to give the respective role to the manager user: SAPC_IDM_ADMINISTRATORS.

Assign this role to the user from the web UI.

     RDS2.png

6. Then the user should be able to see the RDS forms

RDS3.png

  You can add the Copy Identity function to favorites, and it will appear in front of the Choose Task button.


RDS5.png 

 

     If you still do not see the new forms in the UI, restart the IDM JMX Java application in the SAP NetWeaver application server.


RDS6.png

7. Finally, check the following in the RDS package constants:

     Double click the identity store

RDS7.png

     Check the ID of the identity store - in this case “1”.

RDS8.png

     Open the idm\admin UI, go to System Configuration, open the package constants of the provisioning package, make sure the SAP_MASTER_IDS_ID value equals the identity store ID from above (in this case “1”), and save.

RDS9.png

 

For further details, refer to the Identity Management 7.20 - Setup and Provisioning (D04) document mentioned above (D04_IDM720_BB_ConfigGuide_EN_XX.doc).

I thought I would share some of my experiences with IdM solution design, starting with connecting IdM to the Leading Identity System(s). I started on a new IdM project some time ago, and my first task was to come up with an IdM concept. To facilitate the workshops and discussions, I put together a set of questions that I must get answered. For this blog I included some descriptive text to give background on why I think each topic/question is relevant.

 

Everything in this blog is from my personal experience and thoughts, not the full nor the only truth. Although I tried to write it as generically as possible, it might be missing an angle or two.

 

But I hope it helps someone new to the SAP IdM and especially when SAP HCM is the Leading Identity System.

 

Integration to Leading Identity System

The identity lifecycle and the various status changes of the user record in IdM should happen in a controlled fashion that works as designed/according to requirements. Identities normally map to employees; the identity/employment data comes from outside IdM, and the lifecycle is usually controlled by events outside of IdM, like hiring or firing.

 

There are exceptions like technical user accounts or external workforce. Those accounts may not have an existing dedicated system where they're managed, so the full lifecycle of them can be done in IdM. Developing a set of UIs and background batch jobs that enable for example the basic administration of contractors can be achieved quite easily.

 

Questions:

  • What are the Leading Identity System(s)?
  • Single system vs. multiple systems?
  • What types of identities are supplied by Leading Identity Systems?
  • Attributes / data model
    • What will be the unique identifier (MSKEYVALUE) of the user entry in IdM?
    • What are the mandatory attributes for the user entry in IdM (plus Target Systems) and can the Leading Identity System supply all of them?
    • Does the user entry have to be enhanced with data from other systems? Any transformations IdM must apply to the attributes?
    • Does the Leading Identity System supply an attribute/field that can be used in detecting the employment status or does it supply start/end dates?
  • Technical points in integration
    • What systems are involved? Technologies? Protocols? Frequency? Configurable out-of-the box interface like SAP HCM or LDAP or custom-interface?

 

Data Model

I typically start working on the data model immediately; that's why the attributes are relevant. The data model sounds fancier than it actually is: basically a spreadsheet with an attribute map between IdM, the Leading Identity Systems, and the Target Systems. Any data transformation rules can also be documented in the attribute map, for example if IdM is supposed to generate the MSKEYVALUE or email address. It should also be highlighted which system "owns" which attribute, what can be maintained in IdM, which attributes are mandatory/relevant for each system, and which attribute value change is relevant for each system (i.e. which attributes should trigger Modify for each target system).

 

Types of identities/User Entries

Questions:

  • What types of identities does the future IdM-solution handle:
    • Just employees / internal users?
    • Also external users?
      • Subcontractors
      • Suppliers
      • Partners
      • Customers
      • who else?
    • What is the Leading Identity System for each?

 

MSKEYVALUE

The unique identifier of the user entry in IdM is an attribute called MSKEYVALUE.


In most of my projects the Leading Identity System has been SAP HCM. The easiest solution integration-wise is when HCM already stores the username in the SY-UNAME field of InfoType 0105, which maps to MSKEYVALUE (via the "standard" attribute mapping in the HCM-to-IdM interface). This way the uniqueness problem is pushed to HCM, which simplifies the IdM solution. On the other hand, I've seen an awkward process in the HR department for generating the username, so having IdM autogenerate it and write it back to IT0105 can also improve the process.

 

I've also had requirements to generate the MSKEYVALUE based on HCM data or use Personnel Number as MSKEYVALUE.

 


Questions:

  • Where to get the unique id/MSKEYVALUE for the user entry?
    • Is it supplied by Leading Identity System? If the Leading Identity System is SAP HCM what field is going to be mapped to the MSKEYVALUE:
      • IT0709/PERSONID_EXT
      • IT0105/SY-UNAME
      • IT0000/PERNR
        • If it is tied to IT0000/PERNR will the personnel number ever change? What should happen in IdM if the Personnel Number changes?
    • If the Leading Identity System is not SAP HCM, what (hopefully unique) field is mapped to MSKEYVALUE?
      • for example; samAccountName from AD? uid-attribute of LDAP?
      • is the value unique or must IdM be adapted somehow?
    • If the field is not supplied by Leading Identity System:
      • does IdM generate the MSKEYVALUE based on Leading Identity System data?
      • does IdM generate the MSKEYVALUE based on other means, like just an auto number?
      • what data integrity checks are needed? LDAP/AD-lookup? SAP HCM IT0105/SY-UNAME lookup?
  • Where will the MSKEYVALUE go?
    • Which Target Systems will have the MSKEYVALUE as username? Does the length or format of the Target System username affect to MSKEYVALUE?
    • Does the generated MSKEYVALUE need to be updated to SAP HCM IT0105/SY-UNAME-field?
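When IdM generates the MSKEYVALUE itself, the logic usually boils down to: derive a candidate from Leading Identity System data, then check it against existing usernames (the LDAP/AD or IT0105/SY-UNAME lookups above). A hedged sketch of that pattern; the naming rule and the in-memory lookup set are illustrative assumptions, not the product's algorithm:

```python
def generate_mskeyvalue(first: str, last: str, taken: set) -> str:
    """Illustrative rule: first initial + last name, lowercase, with a
    numeric suffix on collision. In a real solution, the uniqueness
    check would be an LDAP/AD or IT0105/SY-UNAME lookup instead of a set."""
    base = (first[:1] + last).lower()
    candidate = base
    suffix = 1
    while candidate in taken:
        suffix += 1
        candidate = f"{base}{suffix}"
    taken.add(candidate)
    return candidate

existing = {"jsmith"}          # usernames already in use
print(generate_mskeyvalue("John", "Smith", existing))  # collides -> jsmith2
```

Whatever rule is chosen, it has to respect the length/format limits of every target system that will receive the value as a username.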

 

User Entry Status

The user entry in SAP IdM does not have a status as such. There are two attributes that can be used for handling the status or replicating it to target systems.

  • mx_disabled (true/false) => can be used in disabling the user in target system, ie. prevent login
  • mx_inactive (true/false) => can be used in deprovisioning, it makes the user entry inactive and hides it from active db-views and UI.

 

It is also possible to have a custom-attribute to represent the status of the user record in IdM or display values from leading Identity System, for example displaying the employment status of the user from SAP HCM.

 

The functionality tied to mx_inactive differs a lot between the different versions of SAP IdM, so special consideration is needed when planning to use the attribute. With the SP9 patch level that I am currently working with, inactivating a user automatically de-provisions all access, and re-activating (setting mx_inactive=false) re-provisions all access the user had before inactivation (which may not be desirable).

 

(Note that the booleans in IdM work so that if the attribute exists it means logical true and no attribute for the record or removing the attribute means logical false.)
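Because of this presence-means-true convention, code that evaluates these flags should test for the attribute's existence rather than compare values. A small sketch, assuming the entry is represented as a plain attribute dictionary (an illustration of the convention, not an IdM API):

```python
def is_flag_set(entry: dict, attr: str) -> bool:
    """IdM boolean convention: the attribute existing at all means
    logical true; absence (or removal) means logical false."""
    return attr in entry and entry[attr] not in (None, "")

entry = {"MSKEYVALUE": "JSMITH", "MX_DISABLED": "1"}
print(is_flag_set(entry, "MX_DISABLED"))   # present -> true
print(is_flag_set(entry, "MX_INACTIVE"))   # absent  -> false
```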

 

Questions:

  • How is the user’s status in Leading Identity System mapped to SAP IdM?
    • If SAP HCM, how to map employment statuses to mx_disabled and/or mx_inactive?
    • If Active Directory, how to map userAccountControl to mx_disabled and/or mx_inactive?
    • If any other system, how to map the statuses to mx_disabled and/or mx_inactive?
    • If no status but validity dates are sent, what should happen in IdM when the validity end is met?
    • Any other means of identifying the identity status change from Leading Identity System?


Joiners/Leavers/Transfers processes


Hire

A big benefit of IdM is the automatic on-boarding and provisioning process. A lot of automated provisioning actions can be built into on-boarding.


Questions:

  • Any special processes in on-boarding, like approval steps (for example when the Leading Identity System is not 100% trusted), or will the identity be written into IdM as-is?
  • Will the employees starting in future be imported once they exist in Leading Identity System? (so the access could have been requested and put into place for the first day)
    • If SAP HCM is the Leading Identity System, then the integration step that moves the employee data from Staging to Productive Id Store must be configured so that at least the mandatory attributes are written immediately to Productive Id Store (by default the future values are stored as pending values that become visible once the validity start date is met).

 

Technically, the on/off-boarding processing can take place in the event task that writes the identity to the Productive Id Store, or in event tasks in the Productive Id Store.

 

Withdrawal

Along with hire and automatic user creation, the other big benefit of automated identity management is the automatic off-boarding and de-provisioning.


Questions:

  • What happens on withdrawal?
    • Is the identity kept in IdM to reserve the MSKEYVALUE or other unique attributes? If so, what status does the withdrawn entry have: disabled vs. inactive?
    • Any grace period before losing any access? Same rules for all target systems?
    • If the user entry is to be deleted, is there a grace period before the entry really is deleted? Can a new user have the same MSKEYVALUE?
    • Any requirements on recovering access on:
      • re-hire
      • employee changes mind and does not exit
      • expired contractor or temp gets an extension
      • key person or C-title holder leaves, any emergency disabling of access?

 

Long Absence

What happens to the user entry during long absence (like maternity/paternity leave, study leave, long illness..) can vary a lot depending on the corporate or legal practices.


Questions:

  • What happens on long absence
    • Does the user log-in to any systems during absence? (for example to see electronic pay-slip in SAP HCM’s ESS?)
    • Is the user disabled without losing any access during the absence?
    • Is any access removed?

 

Rehire

Questions:

  • Will there be a new MSKEYVALUE on rehire?
  • Will the old access rights be restored back?

 

Position / Organization Change or change in any other attribute that drives access

 

Questions:

  • Any access driven by position (or other organization-related HCM-attribute)?
    • Context based assignment?
    • Dynamic groups?
    • Does the email-address change (to have a country specific domain name)?
    • Does the AD-container of the user change (if the "LDAP-path" has organisational data)?
    • Does any of the usernames change if any country specific prefixes exist?
    • What rights must expire that are driven by current location or organization specific attributes?
  • Any grace periods if access is driven by attribute value and old access must remain for a while? (adding attribute validity to existing attribute value)

 

Employee Name Change


Questions:

  • How is the change replicated to IdM? Just 1:1 immediate change or are there any legal matters or policies involved or for example does the employee have to request to have the new name(s) in the IT-systems?
  • What attributes are driven by employee’s first/middle/last names and how should the change be reflected in IdM?
    • MSKEYVALUE?
    • Email-address and proxy email-addresses?
    • Usernames to Target Systems?
    • Display name?

 

Concurrent Employment and Personnel Numbers in SAP HCM

If the Personnel Number is used as MSKEYVALUE, it should be noted that the Personnel Number can change (depending on the HCM solution). Also, in the HCM data model there are two personnel number fields: PERSONID_EXT and PERNR. The former means roughly the "unique identifier of the employee in HCM" and the latter the "unique identifier of the job contract". For the first "contract" the value of the Personnel Number field is the same as the Person Id Ext.


The employee can have multiple job contracts:

  • the employee may be a temp that gets new contract after the previous one has expired
  • the employee may be a temp that works for different organisational units in same HCM-implementation (concurrent employment scenario)
  • the employee may be going to expat assignment to another country / org unit (or may be returning from one)


Generally it is safer to map the Person Id Ext to MSKEYVALUE as it won't change.

 

It should be noted that HCM will export the old personnel number with employment status “withdrawn” and the new personnel number with an “active” employment status; the IdM solution must be configured accordingly.
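In practice this means correlating the two personnel numbers on the IdM side. A toy sketch of the idea, assuming the records for one PERSONID_EXT arrive together with a simple status field (the data structure and field names are invented for illustration; they mirror the export behavior described above, not an actual HCM interface format):

```python
def resolve_active_pernr(records: list) -> str:
    """Given all HCM records exported for one PERSONID_EXT, return the
    PERNR whose employment status is active, or None if every contract
    has been withdrawn (i.e. the identity should be off-boarded)."""
    for rec in records:
        if rec["STATUS"] == "active":
            return rec["PERNR"]
    return None

# Old contract exported as withdrawn, new one as active:
records = [
    {"PERNR": "00001234", "STATUS": "withdrawn"},
    {"PERNR": "00005678", "STATUS": "active"},
]
print(resolve_active_pernr(records))  # 00005678
```

The point is that a "withdrawn" record alone must not trigger de-provisioning if an active contract exists for the same person.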

 

External contractor becomes employee or employee becomes external contractor

With today's requirements for a flexible job market, I guess there are more contractors than ever, so most of my IdM projects have had something special for contractors vs. internals.

 

Questions:

  • Will there be new MSKEYVALUE in the employment change?
  • If the same MSKEYVALUE is kept what happens to any attributes that highlight that user is “external” or "internal"?
    • email address (for example if email has prefix, like “ext-“)
    • displayname (for example if displayname has word “external” for externals)
  • What happens to existing access rights?
    • Does the user have to re-apply for them?
    • Are rights automatically moved to new identity if the job role stays the same?

     Managing identities and their permissions is important for companies. Along with performing specific operations, companies would also like to have details/reports on top of the identity management data. These reports could cover, for example, the current state of assigned privileges, or what the situation was last year. Some requirements for such reports are common across companies, but there are also company-specific ones. By creating reports on identity management data, companies can run analyses and, if needed, execute the proper actions.

 

     In the following article we'll create, modify, and extend an analytical report. For this purpose we'll take the SAP IdM data and, via SAP Lumira, create nice-looking reports, share them with colleagues, and add options to filter/drill down into the report so that we can discover the information that is important to us. The described approach is applicable to any IdM version and all supported DBs.

 

     Here are the technical details of our configuration:

  • An installed, configured, and operational instance of SAP Identity Management 7.2 SP9, which is used to manage identities in a multinational company. As the DB for this IdM instance we’ll use MS SQL Server, and we have created a user with the proper SELECT permissions for this DB
  • An installed SAP Lumira Desktop, an MS Windows application that will be used for building the analytical report
  • An account in SAP Lumira Cloud, an environment that will be used to publish and then access the analytical report

 

Landscape.png

 

 

     We'll go through:

 

     As a result we'll create a sample analytical report that combines several information sets and provides filtering capabilities. The report is easily accessible and can be used for daily business. In our case the report is almost real-time: the data can be refreshed on a time schedule or on demand. Last but not least, a similar report can be created on top of any SAP IdM version if you use the public APIs (DB views) that are exposed, the same way we did it here.

Filt6.png

 

 

     Additional information and options for IdM reporting could be found in:

     As described in the master document of this blog post series, we'll start with creating a simple IdM report and it will be used for later enhancements.

 

Part 1: Create initial version of IdM report via SAP Lumira

Create DataSet

 

     The first step in creating an IdM report is to define the requirements for a given report. Our initial requirements are the following:

The report should contain all company employees and assigned business roles, presented in a tabular format.

The report should be accessible in an easy way (e.g. via web browser).

 

     With the requirements defined, we proceed with creating the report. The next step is to connect from SAP Lumira Desktop to the SAP IdM instance. This is needed to extract the identity data and form the report. The connection is made via JDBC, directly to the IdM DB. Once the connection is established, the information is extracted into so-called DataSets. To achieve that, we'll do the following:

Start SAP Lumira Desktop

Open File->New or use the shortcut (Ctrl+N)

Select the option “Query with SQL” from the Source options list

NewDS1.png

Select the DB type. In this example we use MS SQL, but you should select the proper type based on your installation.

NewDS2.png

Provide connection details and connect to the DB. It is also helpful to save the connection details; that way we won't be asked to enter them each time we connect to the database.

NewDS3.png

The next step is to specify which data will be extracted in the DataSet. We should define the name of the DataSet and provide the query definition.

NewDS4.png

In this case the name of the DataSet will be RolesSCNDS, and the query will extract the display name of the business role, the identifier of the person, the identifier of the business role, and information about when the business role was created. To extract those details, we'll use two of the DB views that IdM provides:

          SELECT
               entries.mcDisplayName as RoleDisplayName,
               links.mcThisMSKEY as RolePersonIdentifier,
               links.mcOtherMSKEY as RoleIdentifier,
               links.mcOtherMSKEYVALUE as RoleName,
               entries.mcCreated as RoleCreated
          FROM
               idmv_link_ext as links
               INNER JOIN idmv_entry_simple_all as entries
                    ON links.mcOtherMSKEY = entries.mcMSKEY
          WHERE
               links.mcThisOcName = 'MX_PERSON' AND
               links.mcOtherOcName = 'MX_ROLE'
Press the “Preview” button to see whether the data will be loaded into the table at the bottom right part of the window. If everything is correct, we should see several records in the table.

Pressing the “Create” button will create the DataSet.

 

 

 

Create Table visualization

 

     After we completed the creation of the DataSet, we will navigate to the SAP Lumira Desktop main screen. The next step of the process is to create visualization objects that will display the content of the report.

Make sure that “Visualize” tab is selected in SAP Lumira Desktop.

Vis1a.png

We'll select Table from the templates.

Vis1.png

Add “RolePersonIdentifier” and “RoleDisplayName” dimensions as “Rows Axis”. This could be done via drag-and-drop or by using the "+" button.

Vis2.png

 

 

 

Create Story Board

 

     Now that we have constructed the table, we should create a “Story Board” where we'll add the table and combine it with other graphical objects later.

We should navigate to “Compose” tab in SAP Lumira Desktop.

SB1.png

As we do not have other Story Boards created, the application will ask us to create one. We’ll name this Story Board RolesSCNSB and press the “Create” button.

Change the default title of the Story Board to “Roles”. Double click the title to open it for editing.

SB2.png

Drag and drop the visualization object that is available in the pane on the left side.

SB3.png

As a result, the following visualization object is displayed:

SB4.png

 

 

 

Save SAP Lumira Desktop document

 

     As we have the initial version of the report ready, we could also save our work in SAP Lumira Desktop if we want to modify the report later.

To do that we'll open File->Save As (or Ctrl+Shift+S) and specify the name of the document.

Sav1.png

We could save the changes locally or on the cloud.

 

 

 

Publish the report

 

     The next step of this part is to publish the created Story Board to SAP Lumira Cloud and share it.

Prior to publishing the report, we should guarantee that the network settings are correctly maintained. We'll do that from File->Preferences menu (or Ctrl+P) and then go to Network. Check the proxy settings and make sure that the SAP Lumira Cloud URL is entered.

Pub1.png

To publish the report, we need to go to the “Share” tab.

Pub2.png

Select the RolesSCNSB and press the “Publish to SAP Lumira Cloud” button.

Pub3.png

We’ll be asked to provide our credentials for the cloud environment.

Pub4.png

After we enter those details and connect to the cloud environment, we need to confirm the publication. Shortly afterwards, the DataSet and the Story Board will be published to SAP Lumira Cloud.

Pub5.png

 

 

 

Open and share the report

 

     After we publish the report to SAP Lumira Cloud, let’s try to open it.

We need to open SAP Lumira Cloud and provide the credential details. We’ll be navigated to the cloud environment and will see all objects that we have published. In our case these are the Story Board and the DataSet.

Sh1.png

Click over the Story Board RolesSCNSB and the report will be opened.

Sh2.png

We could also share the report with other people. To do that, we'll open “Share” from the Story Board context menu.

Sh3.png

Change the access to public and save the sharing URL.

Sh4.png

Sh5.png

* In this case the report will be visible to everybody, but there are also options we could use to share a report with specific people or groups.

 

     Now the initial version of our report is created, published, and accessible to other colleagues. Any later changes that we make will be automatically available via the sharing URL that we copied in the previous step.

 

 

 

As we are ready with the initial report, we can proceed to the next step and modify the report.

     As described in the master document of this blog post series, after we created the initial version of the report, we'll proceed further with modification of that report.

 

Part 2: Modify initial version of IdM report created via SAP Lumira

 

     The report that was created contains just a few attributes, and later on we'll see how those details can be extended. Now we'll take a look at how the report can be modified.

Modify visualization label

 

     We can start modifying the table visualization by changing its title.

     To do that, we need to open the Visualize tab in the SAP Lumira Desktop.

Double click the title of the table to open it for editing and specify the new name, e.g. “Person Details”.

Mod1.png

 

 

 

Modify column labels

 

     We could also rename the table columns as the current names are inherited from the Dimension names. To modify the column names, we need to change the dimension names.

From the context menu of “RolePersonIdentifier” dimension, select “Rename…” and specify the new dimension name. In our case let’s name it “Person Identifier”.

Mod2.png

Repeat that procedure for “RoleDisplayName” and set the new name to “Role Display Name”.

Now we could save the changes of the document and republish the Story Board to SAP Lumira Cloud. We'll be asked to confirm the overwrite, as the content is already published to the cloud. As a result of this, the report is updated and if we use the sharing URL that we defined earlier, the new visualization will be displayed as follows:

Mod3.png

 

 

As we are ready with the modifications, we can proceed to the next step and combine information from several DataSets.
