All sorts of problems

Bad things can happen to your authorization concept for many reasons:

  1. The security department lacks the education and skill
  2. The security department is understaffed
  3. External service provider built something ridiculous
  4. Legacy authorization concept is not suitable for the new business situation
  5. Political situation in the organization does not play in the security team’s favour
  6. New functionality (like module) is implemented without clear requirements (which in security means: broad or inconsistent access) and is kept that way after the go-live
  7. SAP does not give the customers the technical means to implement the required security standards
  8. The quality of work done by the members of the security team is inconsistent and varies from very good to very bad (so everyone is demoralized and people break other people’s work every day)
  9. Different departments run their own security or have very inconsistent security requirements and means of implementing them
  10. The technical and philosophical means used in the project (and in the operations) are sub-optimal or simply wrong (value roles, composite roles with task based mini-roles etc.)
  11. The scope of the project is broad and the internal or external delivery is of a high quality standard; unfortunately, the after-the-project mode (and resources) does not allow the security team to keep the level of security so high (so the project was actually a waste)
  12. …and many other situations


These are just some examples that lead to bad things rather sooner than later. Why am I starting a blog post with such a list? Well, you have certain technical tools, certain political and organizational power, as well as a certain size of security team. Your security level needs to be aligned with these things (and many others as well… we can argue which of the factors are the most important, which are less important and which are rather minor – please leave a comment below).

For this very reason I ask my customers many questions before we even start a project. Questions like: is your management difficult or supportive? Do you have an internal auditing department? How do you check the quality of the work in your team? What system releases do you have available? When did you perform the last big clean-up or redesign of your roles or security in general? What are the roles that you’ve defined for yourself, and to what extent do you stick to them? Do you work centrally, or are the responsibility (and authorizations and needs etc.) distributed?

Answers to these questions should help you build roles (generally security) tailored to the organization’s needs (and to the needs of the security admins there) as well as guarantee that the roles you build (generally security) will survive for long enough so that the externals and internals on the team can sleep well.


Here comes the challenge

I know, I am talking about very general things without getting to the point of this blog. Let me briefly describe my motivation first. Under my recent blog “On the way to granularity” I received a comment about the difficulty with S_TABU_DIS and NAM access in the real world, specifically in the area of Financials. That blog was meant to inform readers about the technical means for granular security access (that these things are available and you can use them optimally for your benefit and security).

But the question is less about the technical means and more about the organizational obstacles, as well as about the challenge to implement granularity in finite time (with everyone still aboard and happy when it is finished).

Kesayamol Siriporn asks (in my own words): There is a table LFA1 which contains sensitive information. This table is assigned to authorization group FA. In the system I’ve just checked, this group contains 955 tables. Users that use LFA1 also use other tables from that group, but probably not all 955. The question and the challenge is how to authorize access to table LFA1 and the other tables from its group so that the workload produced by the approach does not overwhelm the security team, while on the other hand we can still call the approach secure. We also need to consider users accessing these tables via queries and generic table access transactions (SE11, SE16(N), SM30, SM31, SM34).

Choose the right tool for the job

The question is specific, so I will cover the specific details to the best of my knowledge. On the other hand, the question generally asks to what extent we can or should implement the highest possible granularity and how to do it efficiently.

The following points and questions come to mind:

  1. I am pretty sure there are standard means (transactions, reports, queries) for working with the data in the tables mentioned in our question. Have all standard options been considered for the end-users? Are we forced to go down to the lowest level – database tables?
    1. These standard tools will be maintained and developed further by SAP so the upgrade (also mentioned as a point in the challenging question) does not pose any threat or force periodical redesign.
    2. For these standard tools we have transactions available (as the primary building block for the role menus as well as the main door for the users which we can use to talk about the problem with normal mortals).
    3. For these standard tools SU24 proposals are maintained, which provide you the “templates” for the roles; you need to define only a couple of fields and values and primarily consider the organizational aspect (org. fields etc.) based on the data.
    4. The organizational aspect (like using the org. fields) is simpler than maintaining long lists of tables for generic table access transactions. It is easier to tell whether a user can have access to a specific cost center, profit center, vendor or purchase order than whether that user can see one of those 955 tables in LFA1’s authorization group (use any other example from your daily work here).
  2. There are many possible answers to the point above:
    1. “We don’t know. We have no clue what standard tools we can use” – is this because you are new to SAP? Or you pay money to people who don’t know their job? Or have you just started considering what you can do about this challenge and the direct table access sounded like the easiest option? Please inform yourself first before jumping to conclusions.
    2. “Yes, the tools are available, but our users cannot use them” – and why is that? Too much training would be needed? Release of the system is too low? Or what is the problem exactly?
    3. A variation of the previous point: “Yes, the tools are available, but the users don’t want to use them as we traditionally use table access and we cannot take that away from the users” – ok, but this is not a technical problem. You must either find enough workforce to perform a clean-up and remove the generic tools (like queries or transactions like SE11/16) without bothering the users (the access stays the same, but the technical way is more sustainable), or get yourself enough political support so that you can officially take the generic tools away and start giving granular access from scratch and carefully (please see “from scratch” as a shortcut for the purpose of the storytelling here, not literally).
    4. “No, the tools we need are not available in the standard” – well, then you can consider building them maybe? Why not custom development?
    5. We must also consider that in some cases “table access can be fast and efficient for the super-users”. But in this case the challenge is not to secure the whole system and all the tables one by one; we are talking about a handful of users and, I would say, a handful of DB tables as well. That leads to the next point.
  3. How many users (and their roles) need to touch the DB tables? How granular does the access need to be?
    1. Example: Some customers use various desktop tools that are easy to operate and put these tools in the hands of managers. These users are privileged by definition, and their number is limited by definition. In that case it can be enough to create a little delta role that gives this type of access to several handpicked individuals. Surely this approach will not work in a large organization, but it will work well in smaller ones.
    2. Note that queries mentioned in the question and the transactions for queries are just a variant of the “desktop tools” mentioned above.
    3. Please consider that if you try to build a concept around access to table group FA (or even based on table names), which is just one of hundreds of table groups, doesn’t that mean a commitment to be careful and super-secure in other areas as well? Balance is an important thing (also for Karma…). If you build such a concept because you think it is a good idea, will your colleagues think so as well? In case you get hit by a bus, will there be people able to maintain your concept further and especially – will those people see the need for your super granular access, or will they change it to the same * (star / asterisk) type of approach as soon as you’re gone?
  4. The question specifically mentions queries. Well, queries are very special beasts. Not technically, but the way they are used or misused (organizationally). There are several things about queries we must keep in mind:
    1. Queries are not super friendly from the transporting perspective
    2. People are generally lazy creatures
    3. Users claim (and it may or may not be the truth) that the data are only available in the production system and so they can only create their queries there.
    4. Users claim (same truth problem) that the flexibility of their “questions” (translated into queries) is very high and that requires them to be flexible as well – they don’t want to (or can’t) spend time on organizational problems – like testing in the test system, developing queries in the development system, transporting queries etc.
    5. Sometimes the organization is difficult and transporting based on the forms and workflows and procedures would take weeks to happen as well as weeks would be needed to have the paper-work done.
    6. If you want to do the thing correctly, do the following: you create the queries, you test them, you create transactions for the queries, you put those transactions into the role menu and you maintain SU24 for those transactions – then you’re doing it the way it was designed and meant to be done. Of course the question is – can you do it (enough time? not too many organizational obstacles?) and do you want to do it? Too many variables come into play here; there is no universal approach to this challenge.
  5. Let’s say you want to build a concept around the 955 tables in the group (just an example, ok?). The easiest option would be if SAP provided a concept there for you. If they made the problem ten times smaller by splitting one huge group into ten smaller ones, the size of the challenge would immediately shrink. As I said, I am not an expert on all the FI tables sitting in group FA. I would not know how to split the tables into 10 groups (a random number which would make the maintenance 10x easier if we went this way). I can’t even say if it is possible or wise in this case. Why I am saying all this is that you can theoretically do it yourself. If I am not mistaken, you can reassign the tables to different groups. The problem here is that it is a modification, and your hard work will be overwritten with the next upgrade. That is why getting SAP to do it is the only way I see if you concentrate on the “I don’t like this table being in this group” part of the challenge.
    1. Here please consider a different organizational and also a technical aspect. If SAP did this – if they implemented such a “better concept” (whatever that would mean), it would change the behaviour for everyone. Every customer. That could mean customers that don’t want that are also forced to adjust themselves to the new way. It can mean that changes would be required to keep the system running although no change is needed or wanted by the customer. That means I can’t see SAP doing much here (even if it was a wise idea which I cannot judge competently).
  6. If you want to go all the way down to the tables and authorize for them one by one (because your users are used to it, for example – well, then you caused yourself the problem, do you see it?), the correct approach is very similar to what I said above about queries:
    1. Define tables to which granular access needs to be granted
    2. Create parameter transactions for these tables (via SM30 or so)
    3. Maintain S_TABU_NAM proposal for the transaction
    4. Use the transactions in the menus of the roles and pull SU24 proposals for them
    5. Enjoy….!
  7. Consider also a different example. Let’s forget about financials for a moment and talk about table group &NC& (or the empty value – no assignment – which is implicitly translated to &NC&). Do you understand what the problem is here? Do you understand that the problem with this “group” is much, much bigger than with group FA or any other group?
    1. Go to table TDDAT and search for CCLASS = &NC&. That gives the word “huge” like in “huge problem” a “huge” new meaning.
    2. Check your custom tables. How many of them are assigned no group or the “group for all garbage”?
    3. Why was someone so lazy that he (she) put the table into this crazy group? Why not a better group? Why not a concept behind the thing?
    4. Go to SU24 and look for objects there (not only transactions!) that have &NC& as the proposed value.
    5. Do you understand that if that value gets into a role and that role is given to users, they have access to tens of thousands of tables in the system?
    6. Out of curiosity – has any auditor ever told you about this being a problem? Is that on your to-do list to make sure you are all well on this front? You can answer the question either for &NC& or generally the generic table access transactions.
    7. Point for SAP: Take care of the SU24 proposals with this value and make sure it is not needed. You know what the risk is, we know what the risk is. Give us the solution. All of us and now. Not that every customer must perform an SU24 clean-up of these magic (cursed) values.
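To get a feeling for the size of the &NC& problem in your own system, you can export TDDAT (TABNAME, CCLASS) and scan it. The following is a hedged sketch in plain Python (the table names are invented sample data; in a real system you would read the actual export instead):

```python
# Illustrative sketch: flag tables sitting in the "garbage" group &NC&
# or having no authorization group at all (which is treated like &NC&).
# The sample data below is invented; replace it with your TDDAT export.
tddat = [
    ("LFA1", "FA"),
    ("ZCUST_ORDERS", "&NC&"),   # custom table dumped into the catch-all group
    ("ZCUST_PRICES", ""),       # no group assigned -> implicitly &NC&
]

unprotected = [name for name, group in tddat if group in ("", "&NC&")]
print(unprotected)  # -> ['ZCUST_ORDERS', 'ZCUST_PRICES']
```

A list like this is a good starting point for assigning your custom tables to meaningful groups.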


If you made it all the way here, dear reader, also consider that you have the same problem not only with S_TABU_DIS and NAM, but also with S_PROGRAM for example. With S_RFC. With job administration and in many other areas as well. Find and implement the approach that is both secure and pragmatic, so that you can live that approach. So that your team and your organization can live that approach.

What can I say to summarize this in a nice way… Well… hold on. You’re doing it for the brighter future, for your children and their children. They will remember you for being secure and also pragmatic and for choosing the right tools for the right job. Good luck!

SAP delivers attack detection patterns with SAP Enterprise Threat Detection, and in the course of time there will be more. However, you need to have the possibility to get patterns from elsewhere – and sometimes in a hurry. So it is essential that patterns can be easily created using the tools provided by SAP Enterprise Threat Detection.


There are two main steps for creating attack detection patterns. First, you make a series of filters to reduce the stored events to a subset that is of interest. Second, you specify what aspects of this subset you are going to measure and define the attack detection pattern. I am going to focus on the second step so my starting point is shown in the screenshot below.


[Screenshot: 06-11-2014 12-54-53.jpg]

The first filter restricts the set to the events of the last hour, the second filter further restricts the set to events relevant to the security audit log, and the final filter restricts the set to those security audit log events that have an Event ID equal to AU2 (representing failed logon attempts). I could now explore many aspects of this subset but I simply want to create my pattern. So I choose Create Pattern.


[Screenshot: 06-11-2014 12-56-25.jpg]


Let’s say my pattern should be automatically executed every 30 minutes and should generate a low-severity alert when the number of failed logons from any one terminal exceeds 5 in the preceding hour. I enter the relevant details.


[Screenshot: 06-11-2014 12-57-46.jpg]


Now a couple of OK clicks and I am done.
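The evaluation that the pattern performs every 30 minutes boils down to simple counting. As a hedged illustration (plain Python, not SAP Enterprise Threat Detection code; the event IDs and terminal names are sample data):

```python
from collections import Counter

# Illustrative sketch of the pattern's logic: count failed logon events
# (security audit log Event ID AU2) per terminal within the last hour and
# alert for any terminal exceeding the threshold of 5.
THRESHOLD = 5

# (event_id, terminal) pairs; invented sample data standing in for the
# filtered event subset of the last hour
events = [("AU2", "PC-042")] * 7 + [("AU2", "PC-017")] * 3 + [("AU1", "PC-042")]

failed_per_terminal = Counter(t for eid, t in events if eid == "AU2")
alerts = {t: n for t, n in failed_per_terminal.items() if n > THRESHOLD}
print(alerts)  # -> {'PC-042': 7}
```

The tool lets you define exactly this without writing any code, which is the point of the next paragraph.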


There is no programming knowledge necessary to create an attack detection pattern. For normal people, this is generally a good thing. Creating your patterns is easy enough, so you can focus on the more challenging aspect – finding patterns to create.

With SAP NetWeaver Application Server ABAP 7.40 it is possible to synchronize ABAP users to a DBMS, especially to SAP HANA. This blog describes the configuration steps that are necessary to set up the functionality, as well as the different features.


Use Cases

  • SAP NetWeaver Business Warehouse (SAP NetWeaver BW) needs a 1:1 user mapping to map analytic privileges of the database to the virtual analysis authorizations of SAP NetWeaver BW
  • Your users run applications that access the database directly. You must assign privileges to the user in the database.
  • As an ABAP developer, to create SAP HANA objects, you must have a SAP HANA user.
  • Use the DBMS user management function of SAP NetWeaver AS when you have the users of a single, standalone SAP NetWeaver AS ABAP to synchronize with the users of the DBMS.


1. In more complex use cases, use SAP Identity Management (SAP ID Management). Such use cases include the following:

  • You need to distribute user data across a variety of systems in a landscape.
  • You want to synchronize the users of multiple clients of SAP NetWeaver AS ABAP with the underlying DBMS.

2. Currently the possibility to synchronize users to a DBMS is implemented only for SAP HANA as the database system. It is, however, possible to connect any other database system supported by SAP NetWeaver AS ABAP via a customer implementation of the interface IF_DBMS_USER. The implementation for SAP HANA is done in class CL_DBMS_USER_HDB.
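Conceptually this plug-in mechanism is a classic interface/implementation split: user management talks only to the interface, and each DBMS ships its own implementation. The following is a Python analogy only (not the actual ABAP interface; the method names are invented):

```python
from abc import ABC, abstractmethod

# Python analogy of the IF_DBMS_USER plug-in idea (method names invented):
# ABAP user management calls the interface; each DBMS provides its own
# implementation, as CL_DBMS_USER_HDB does for SAP HANA.
class DbmsUser(ABC):
    @abstractmethod
    def create_user(self, name: str) -> str: ...

class HanaDbmsUser(DbmsUser):
    def create_user(self, name: str) -> str:
        # a real implementation would issue the DBMS call here
        return f"CREATE USER {name} ..."

def sync_user(backend: DbmsUser, abap_user: str) -> str:
    # user management never depends on a concrete DBMS, only on the interface
    return backend.create_user(abap_user)

print(sync_user(HanaDbmsUser(), "JDOE"))  # -> CREATE USER JDOE ...
```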


Configuration Steps


1. Create the Database User for the Database Connection

SAP NetWeaver Application Server (SAP NetWeaver AS) uses a database user to perform user management operations on database users. The database user requires the following attributes.

  • The database user must log on with user name and password.

  • The database user has a productive password.

  • You have assigned the database user the following privileges:

Necessary authorizations for SAP HANA user administrators:


Privilege | Privilege Type | Description
USER ADMIN | SYSTEM | Enables you to maintain users in the DBMS and to grant and revoke roles. Note: This privilege also grants a user in SAP HANA the authorizations to create and delete roles.
CATALOG READ | SYSTEM | Enables you to display role assignments granted by users other than the user created for the database connection, for example the system user _SYS_REPO.
EXECUTE on the procedure GRANT_ACTIVATED_ROLE | SQL | Enables you to grant roles created in the SAP HANA repository to DBMS users.
EXECUTE on the procedure REVOKE_ACTIVATED_ROLE | SQL | Enables you to revoke roles created in the SAP HANA repository from DBMS users.

You can also use several personalized DBMS user administrators instead of one fixed technical user configured in the database connection. In this case you need to create DBMS user administrators having the same user names as the ABAP user administrators. In the following step (Add a Database Connection) you can select between these two options.

2. Add a Database Connection

In transaction DBCO, add a database connection to table DBCON (change view “Description of Database Connections”: Overview) for the database user, with database type HDB:


Steps in Detail:

  1. Start transaction DBCO.
  2. Choose New Entries.
  3. Enter a name for the database connection.
  4. Enter "HDB" for the database type.
  5. Enter the name of the DBMS user for the connection.
  6. Enter the password for this user. Note: The password must be productive.
  7. Enter the connection information: <hostname>:<port>.
  8. Save your entries.



Optional: Using a Personalized User Administrator in SAP HANA

If you do not want to use one technical user administrator in SAP HANA, you can also define in the database connection that the current ABAP user administrator is authenticated in SAP HANA. The precondition is that the user administrator exists in SAP HANA with exactly the same user name as in the ABAP system and with the authorizations mentioned above. You can then set up the database connection as described in SAP Note 2005856.

The current ABAP user is then forwarded to SAP HANA in an assertion ticket.

Alternative Steps in Detail (When Using the Personalized User):

  1. Start transaction DBCO.
  2. Choose New Entries.
  3. Enter a name for the database connection.
  4. Enter "HDB" for the database type.
  5. Enter <space> as the name of the DBMS user for the connection.
  6. Enter any password (it will not be used).
  7. Enter the connection information: @SSO;HOST=<hostname:port>;DBNAME=<name of DB>
  8. Save your entries.


In both cases we recommend you protect the connection with Secure Sockets Layer (SSL).

For more information, see the SAP HANA Security Guide and SAP Note 1718944.

3. Enter Database Connection in Table USR_DBMS_SYSTEM

Enter the name of the database connection and the client in the USR_DBMS_SYSTEM view with Maintain Table View (transaction SM30).


Steps in Detail:

  1. Start transaction SM30.
  2. Enter the USR_DBMS_SYSTEM table and choose Maintain.
  3. Choose New Entries.
  4. Enter the name of the connection and the ABAP client.
  5. Save your entries.


Only customize one ABAP client. The same user ID on different ABAP clients can represent different users with different authorizations. It is not good practice to map users from different clients to the same DBMS user. If you need to support multiple ABAP clients, use SAP Identity Management (SAP ID Management). SAP ID Management has the tools to ensure that users in multiple clients represent a single person or identity.

Administration of Users

You can use transaction SU01 for single user maintenance or the ABAP report RSUSR_DBMS_USERS for mass synchronization between ABAP and SAP HANA users.


Maintaining Users in ABAP Transaction SU01

In transaction SU01 a new tab named "DBMS" will appear if all configuration steps have been done correctly:


Creation  of Users


Steps in Detail:

  1. Start transaction SU01.
  2. Enter the user name and create the new user.
    SAP NetWeaver Application Server (SAP NetWeaver AS) ABAP enters the given ABAP user ID as the DBMS user ID by default. Not all DBMS systems support the same user IDs as SAP NetWeaver AS ABAP; other DBMS systems may have other restrictions. You can change the SAP HANA user name if needed. If the user name is left empty, no SAP HANA user will be created. If you want other default values or blank user names for certain users, you can implement the BAdI BADI_DBMS_USERNAME_MAPPING. See also SAP Note 1927767.
  3. Enter data as required, such as Last Name or Initial Password.
  4. You must also enter an initial password for the DBMS user. 
    Note: SAP NetWeaver AS ABAP and the DBMS have independent security policies. We recommend that you make these security policies as similar as possible. For example: You can create all possible security policies in SAP NetWeaver AS ABAP to match any security policy in SAP HANA. You cannot create all possible security policies in SAP HANA to match any security policy in SAP NetWeaver AS.
    For more information, see chapter 7.1 Password Policy in the SAP HANA Security Guide (http://help.sap.com/hana/hana_sec_en.pdf).
  5. Save your entries.


Note: There is NO synchronization of productive passwords. As soon as a user changes their password on one side, the passwords are out of sync.


Editing Users

Changes to the ABAP user do not affect the DBMS user, with the following exceptions:

  • Administrative lock: Locking or unlocking the ABAP user locks or unlocks the DBMS user.
  • Initial password: As the administrator, you set the initial passwords independently. Users change their own passwords in the separate password change facilities of the different systems.
  • You cannot change the DBMS user mapped to the ABAP user directly. You must delete the DBMS user assignment and save before you can assign an existing DBMS user.
  • Assignment of DBMS authorizations
    For SAP HANA, you can only add or remove system privileges that were assigned by the user configured for the database connection. If you try to remove system privileges assigned by a different user, there is no error message; although the privilege appears to be removed, the next time you view the user in User Management (transaction SU01) the privilege is still assigned. The exception is repository roles, which are always assigned by the user _SYS_REPO. If you have the required privileges, you can remove repository roles.


Deleting Users

When deleting an ABAP user, you are prompted to confirm the deletion of a corresponding SAP HANA user if it exists. Choosing Yes deletes the users in both systems.


Using the Report RSUSR_DBMS_USERS

The report RSUSR_DBMS_USERS allows mass synchronization between ABAP and DBMS users. There are several user selection possibilities to select exactly the ABAP users that shall be synchronized to the DBMS system. The report documentation in the system is quite exhaustive; it is recommended to have a look at it.


Please also see SAP Note 1927767 and SAP Note 2068639.


Selection criteria for the report:

  • User
  • User type
  • User group
  • Users having a certain ABAP role assigned
  • Users without corresponding SAP HANA users


It is recommended to first start the report in selection mode to check whether the right ABAP users are selected. Then several updates can be run on the DBMS users.


Available functions:

  • Remove mappings to DBMS users
  • Create and map DBMS users. As in SU01 the BAdI BADI_DBMS_USERNAME_MAPPING can be used to configure the name of the DBMS user that is created.
  • Assign DBMS roles
  • Remove DBMS roles
  • Update user attributes (such as e-mail and SNC mapping)


Using the Check Report RSUSR_DBMS_USERS_CHECK

When you synchronize database management system (DBMS) user management with SAP NetWeaver Application Server (SAP NetWeaver AS) user management, you must periodically check that the users SAP NetWeaver AS expects are still available.
Users can go missing, for example, when a database administrator deletes a DBMS user without the SAP NetWeaver AS administrator knowing about it.



Steps in Detail:

  1. Start report RSUSR_DBMS_USERS_CHECK with ABAP: Program Execution (transaction SA38).
  2. Choose Select inconsistent users.
  3. Enter a range of users.
    Note: To reduce the runtime of the report for systems with large numbers of users, you can specify individual user names or ranges to search for inconsistent data.
  4. Choose Execute.
  5. SAP NetWeaver AS ABAP returns the list of users that are inconsistent, if any. These users are SAP NetWeaver AS ABAP users for which a mapping is saved, but the user saved in the mapping does not exist in the DBMS.
  6. Decide how to handle any inconsistent users.
  7. Choose Back (F3).

  8. Enter users or ranges of users and select the appropriate action.
    Create the DBMS user: SAP NetWeaver AS ABAP creates a matching DBMS user. The user has an initial password. You must inform the owner of the users about the new DBMS user and the initial password.
    Remove the mapping: SAP NetWeaver AS ABAP deletes the mapping to the missing DBMS user. Any scenarios dependent on that user in both systems no longer work.

  9. Choose Execute.
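What the check report looks for can be sketched in a few lines. This is an illustrative Python sketch only (not the actual ABAP report; all user names are invented sample data):

```python
# Illustrative sketch of what RSUSR_DBMS_USERS_CHECK does: find ABAP users
# whose saved DBMS mapping points at a user that no longer exists in the
# DBMS. All names below are invented.
abap_mappings = {"JDOE": "JDOE", "ASMITH": "ASMITH", "NOMAP": None}
dbms_users = {"JDOE"}  # ASMITH was deleted directly in the database

inconsistent = [
    abap for abap, dbms in abap_mappings.items()
    if dbms is not None and dbms not in dbms_users
]
print(inconsistent)  # -> ['ASMITH']
```

For each user found this way you then decide, as in step 8 above, whether to recreate the DBMS user or remove the mapping.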

Otto Gold

On the way to granularity

Posted by Otto Gold Oct 16, 2014

Let’s start with S_TABU_DIS and S_TABU_NAM

We still remember the times when it was not so easy to authorize generic tools for access to database tables (transactions such as SE16, SE17, SM30, SM31 or SM34). The only option was the authorization object S_TABU_DIS, which lets one authorize on the level of authorization groups (groups of tables). Just to summarize -> it means that you permit access to a certain group of tables, so the user can either access all of these tables or none of them. Some people tried tricks with reassigning the tables to different groups.

Then the S_TABU_NAM object was introduced, which made it possible to authorize for a single table – something many, MANY (!!) authorization administrators wanted and prayed for. Now you can maintain parameter transactions for the tables you need to authorize for, maintain the S_TABU_NAM proposal for that parameter transaction in SU24, and via the role menu get the S_TABU_NAM instances all “Standard” in the role.

And how does S_TABU_NAM work exactly? In the function module VIEW_AUTHORITY_CHECK, the system checks S_TABU_NAM only if the authorization check on S_TABU_DIS was unsuccessful. This procedure enables both the retention of the previous table access concept and the superposed use of both authorization objects. Notes 1500054 and 1434284 provide information regarding the optimum use of this enhancement.
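The check order can be sketched as follows (an illustrative Python sketch of the logic only, not SAP code; the parameter names are invented):

```python
# Illustrative sketch of the check order in VIEW_AUTHORITY_CHECK:
# S_TABU_DIS on the table's authorization group is checked first; only
# if it fails is S_TABU_NAM checked for the concrete table name.
def table_access_allowed(table, group, dis_groups, nam_tables):
    if group in dis_groups:       # S_TABU_DIS: whole authorization group
        return True
    return table in nam_tables    # S_TABU_NAM: single-table fallback

# user has no group-level access to FA, but LFA1 is granted by name
assert table_access_allowed("LFA1", "FA", dis_groups=set(), nam_tables={"LFA1"})
# KNA1 is neither covered by a group nor granted by name
assert not table_access_allowed("KNA1", "FA", set(), {"LFA1"})
```

This fallback order is why introducing S_TABU_NAM does not break an existing S_TABU_DIS-based concept.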

If you build roles via menus and understand the benefit of SU24, you will never give any table access which is not necessary or which you cannot link back to why it had been given when your auditor asks (assuming you understand the “Standard” instance type and know “sun over the mountains” icon and its magic).

Technical details for the interested:

  1. You can see what group a table is assigned to in table TDDAT. The combination of TABNAME and CCLASS is what you are looking for.
  2. It is probably more convenient to find this information somewhere in the SAP standard screens. In that case I can recommend transaction SE11 > provide the name of the table and click “Display”. Then in the main menu choose Utilities > Assign Authorization Group.
  3. Note that not every table is assigned to a group. Or to a meaningful group. Note that table group &NC& is equivalent to the “empty value”. Beware of SAP standard SU24 proposals that pull the &NC& value for the S_TABU_DIS-DICBERCLS field. But that would be another story.
  4. If you want to learn more about the authorization concept options for generic table access or simply want to have everything described in one place, please find your way to OSS Note 1434284 - FAQ | Authorization concept for generic table access.
  5. Avoid coding your own authorization checks on the S_TABU_* objects (all objects in the family) at all costs. Use function module VIEW_AUTHORITY_CHECK for this purpose every time. See OSS Note 1481950 - New authorization check for generic table access for some details (in combination with 1434284 above!!).
  6. Note: changing authorization group of a standard table is a modification!
  7. Warning: Be careful with banning the S_TABU_DIS object completely. It should not be used as a hardcoded authority check in the SAP standard code any more (if you find it outside of VIEW_AUTHORITY_CHECK, please inform us about it here!), but you can still find it in TSTCA (check in SE93 – authorizations needed to start a transaction). Because the S_TABU_DIS / NAM logic is implemented in the VIEW_AUTHORITY_CHECK function module, the TSTCA mechanism does not know about it (does not use this logic!), so S_TABU_DIS in TSTCA must still be authorized using the object itself, not some “friend” object like with DIS and NAM. In case you find TSTCA entries in SAP standard transactions, you can also consider reporting them here and we can see if we can get rid of them once and for all somehow.
  8. S_TABU_DIS and NAM get a little mention in Frank Buchholz’ blog ABAP Development Standards concerning Security. Unfortunately it does not mention the information about not using S_TABU_* checks hardcoded in the code and the need for VIEW_AUTHORITY_CHECK but maybe you can just believe me on that one.
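To make point 5 concrete, here is a minimal ABAP sketch (my own illustration, not SAP reference code) of delegating a generic table access check to VIEW_AUTHORITY_CHECK instead of hand-coding S_TABU_DIS/S_TABU_NAM checks. Double-check the action values and the exception list against the function module in your release:

```abap
* Sketch: let the SAP standard function module perform the
* S_TABU_DIS / S_TABU_NAM (and line-oriented) checks for you.
DATA lv_tabname TYPE dd02l-tabname VALUE 'T000'.

CALL FUNCTION 'VIEW_AUTHORITY_CHECK'
  EXPORTING
    view_action                = 'S'        " 'S' = display; see note 1434284
    view_name                  = lv_tabname
  EXCEPTIONS
    invalid_action             = 1
    no_authority               = 2
    no_clause_authority        = 3
    no_linedependent_authority = 4
    OTHERS                     = 5.
IF sy-subrc <> 0.
  " User may not display this table: stop processing.
  MESSAGE 'No authorization for generic table access' TYPE 'E'.
ENDIF.
```

The point is that the fallback from S_TABU_DIS to S_TABU_NAM lives inside this function module, so your custom code automatically benefits from the more granular concept.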

I must also remind you about the blog by Greg Capps: Reduce the Risk of SAP Direct Table Access.


Then we got S_RFC, RFCTYPE = FUNC

We used to have the same problem when authorizing for S_RFC. You may have noticed that S_RFC gets generated automatically by the PFCG framework when you put a function module into the menu of a role (yes, that works!). Unfortunately, what gets generated is an S_RFC instance with RFCTYPE = FUGR. This means that by putting a single function module into a role menu, the generated S_RFC instance authorizes all function modules in the whole function group.

The good news is that better granularity is possible since RFCTYPE = FUNC was introduced. It means you can (MANUALLY!) authorize for a single function module.

It works very much like S_TABU_DIS and NAM: at run time, the check for the function group is executed first. If this check fails, a second check for the function module is executed. This behaviour means no changes are to be expected during an upgrade, but a more granular authority check can be activated on demand. It also shares something with S_TCODE – generated entries cannot be edited (because they correspond to the menu entries): “the S_RFC standard authorizations discussed in this note are not authorization default values but automatically created start authorizations analogous to S_TCODE. Therefore, they cannot be edited.”

If anyone from SAP reads this, I would be interested to know whether anyone plans to have PFCG generate S_RFC of type FUNC, either as a default option (after installation or upgrade of the system) or as a default once a customizing switch is changed (PRGN/SSM_CUST?). That would be wonderful.

Let me share a workaround for type FUNC if you have the time (or the strict requirement, or the urge) to make your roles super secure. You can manually add new SU24 proposals for the function modules you want to use (or already use) in your roles: S_RFC with RFCTYPE = FUNC. Then, when you build your role menus, SU24 pulls these proposals into the authorizations while PFCG also generates S_RFC with RFCTYPE = FUGR, so the authorization needed for your function modules is covered twice: once by FUGR, once by FUNC. If you now deactivate the instance with RFCTYPE = FUGR, the role is authorized exactly for the S_RFC values it really needs, and not for all the function modules that happen to sit in the same function groups.
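For orientation, the two granularities look like this in AUTHORITY-CHECK notation. This is an illustration only – at runtime the RFC framework performs the S_RFC check itself when the call arrives, you do not code it yourself – and the function group and module names are hypothetical:

```abap
* Coarse instance, as generated by PFCG from the role menu:
* the whole function group is authorized.
AUTHORITY-CHECK OBJECT 'S_RFC'
  ID 'RFC_TYPE' FIELD 'FUGR'
  ID 'RFC_NAME' FIELD 'ZFG_BILLING'    " hypothetical function group
  ID 'ACTVT'    FIELD '16'.            " 16 = execute

* Granular instance, maintained manually via SU24 proposals:
* only a single function module is authorized.
AUTHORITY-CHECK OBJECT 'S_RFC'
  ID 'RFC_TYPE' FIELD 'FUNC'
  ID 'RFC_NAME' FIELD 'Z_GET_INVOICE'  " hypothetical function module
  ID 'ACTVT'    FIELD '16'.
```

With the workaround above, a role ends up carrying only the second, FUNC-type instance once the generated FUGR instance is deactivated.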


Technical details for the interested:

  1. S_RFC type FUNC has been introduced with OSS Note 931251 - Security Note: Authority Check for Function Modules.
  2. OSS Note 1640733 - PFCG: Additional S_RFC authorization describes the mechanism how PFCG generates standard instances for S_RFC object for (remote enabled) function modules in the menu of a role.
  3. OSS Note 1749485 - PFCG: Problems when updating start authorizations mentions the generated instances for S_START and S_SERVICE objects based on the role’s menu entries just like we get for S_RFC.

Anyway I hope you see my point. Just like S_TABU_DIS got more granular with S_TABU_NAM, so did S_RFC (although within one object).


…and now we’ve got S_PROGNAM

And finally… here we are getting to the point why I reminded you about old and known facts above – as an introduction to the “get-more-granular” movement which now has a brand new member. Let me introduce you to S_PROGRAM’s younger brother S_PROGNAM. Please check the spelling to see the difference once again;-).

So what is this new S_PROGNAM? It is the possibility to authorize for individual programs rather than via program groups. Note that you must activate the feature to be able to use it; for existing customers with existing authorization concepts nothing changes (it is backwards compatible).

The programmatic submit of reports is secured by the authorization group (old S_PROGRAM) that the report is assigned to. If the authorization group is empty, the report may be executed without an initial authorization check. As I understand the new check (if active), it checks your authorizations every time you start a program via the API that also takes care of S_PROGNAM. That means it does not “just happen” when you call SUBMIT <program> in your custom code. If any of my assumptions is wrong, I will update the text once I learn the facts (and can cite them via an OSS note).

As a consequence of this new granularity and flexibility, you can authorize only those programs that are really needed. If you work carefully and patiently (and manually), you may get to a world where S_PROGRAM does not carry * in the value and S_PROGNAM is used in combination with SU24 proposals and role menus. Happy hardening (of your security)!
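For contrast, this is what the old group-based protection looks like as an explicit check. The authorization group name is hypothetical; note that the new S_PROGNAM check is performed inside the standard submit API (see Note 1946079 below) once the SACF scenario is switched on – you do not code it yourself:

```abap
* Old-style check: the report is protected only via the authorization
* group it is assigned to (object S_PROGRAM). If the group is empty,
* no initial check happens at all.
AUTHORITY-CHECK OBJECT 'S_PROGRAM'
  ID 'P_GROUP'  FIELD 'ZREPORTING'   " hypothetical authorization group
  ID 'P_ACTION' FIELD 'SUBMIT'.
IF sy-subrc <> 0.
  MESSAGE 'No authorization to submit this report' TYPE 'E'.
ENDIF.
```

S_PROGNAM closes exactly the gap in the comment above: with the switchable check active, the program name itself is checked, group or no group.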


Technical details for the interested:

  1. To learn more about the new S_PROGNAM object, start with OSS Note 1946079 - Initial Authorization Check in Function SUBMIT_REPORT. Note that this authority check IS OPTIONAL and you must turn it on (see point 3 below).
  2. Note that although the S_PROGNAM object is quite new, it has been back-ported all the way to NW 700 SP4 (which is a LOOONG time ago!). In case you run an older system, you can consider importing the correction instructions if you cannot upgrade for whatever reason. If I am not mistaken, the mechanism and the object exist by default in NetWeaver systems from 740 onwards. Try transaction SACF and you will see.
  3. To be able to use the new S_PROGNAM you need to have the SACF transaction (the switchable authorization framework) installed first. For more information about what that is, read OSS Note 1922808 - SACF: FAQ - Supplementary application information and Note 1908870 - SACF: Workbench for switchable authorization scenarios.
  4. To read an interesting discussion about the old S_PROGRAM, navigate here: http://scn.sap.com/message/6903382.


P. S.: Rumour has it that we can expect more granularity for other objects as well. A candidate that some people are waiting for (like DSAG – the German-speaking SAP User Group – in its materials) is S_GUI, which would give admins the granularity to decide about the export/import feature for each program separately. In case anyone has any updates on this one, I would love to hear about it.


Questions for SAP:

1) Will you change the S_RFC behaviour in PFCG? So that PFCG generates S_RFC type FUNC instead of FUGR now when such option is available? Even if you don't make it a mainstream thing for everyone, would you at least consider a switch (PRGN/SSM_CUST) that would let customers switch that on/ change the current default behaviour? Note: we are well aware of the limit on the number of values in a PFCG instance, especially when names of functions are so long.

2) Would you consider an option to check S_TABU_NAM first (before S_TABU_DIS), or provide a switch to do this, so that the more granular access comes first in the authorization trace? Then the information about which table failed the check would come first, making it easier for the normal (and also the lazy) to spot the value which must go into SU24 or the role in PFCG.

3) Would you consider cleaning the TSTCA table records to remove S_TABU_DIS from there (as it is not considered by the NAM/DIS fallback mechanism, which only works via VIEW_AUTHORITY_CHECK)?

4) Would you tell us why you decided to perform the check on S_TABU_DIS before S_TABU_NAM? Ideally put that into some OSS note (or KBA?) and let us read it there - from the official source.

5) Although it is unlikely, has it ever been considered to retire the S_TABU_DIS object one day? Would you consider a switch that deactivates S_TABU_DIS in the system so that customers can enforce more granular access only?

6) Can you provide any updates on S_GUI getting more granularity as well? Like when, new object or new field, SACF or standard delivery etc.?


Interesting points from the discussion:

Martin Voros recommends note 2041892 - Logging of call of generic table accesses to your attention.

A bundle of information about the solution can be found at http://scn.sap.com/docs/DOC-58501.


Formalities over, why bother with yet another security product?


I have had the same model of Swiss Army Knife for over thirty years. At the time I got it, it was probably the top-of-the-range model. I worked in research and development for quite a few years and I would have felt naked without it. Probably all the tools have been used in one way or another, often not for their intended purpose. Usually only a subset of the tools got used on a regular basis, and now that I am in software the main tool is the bottle opener. The great thing about such a device is its general-purpose nature. You can do almost anything with it and a little imagination. Sometimes you need to do something; you whip it out and it's “job done”. Other times, though, it's only better than nothing in an emergency – I would not like to carve roast beef with it, for example. I have sometimes really fumbled and sweated trying to achieve something that, with the right tool, would have been accomplished in seconds without risk to whatever I was working on.


The same applies to software but people tend to believe otherwise. They are looking for a magic solution to every problem when, in reality, the best you can hope for is to have the right combination of general purpose and specialized tools. SAP Enterprise Threat Detection is like the carving knife in the kitchen – the best tool for its purpose.

I have been following the news on the Shellshock vulnerability the last few days (more information here, here, here, and here) - the vulnerability affects millions of systems and devices. A lot of SAP customers run UNIX/Linux systems and consequently have Bash vulnerabilities that should be patched. But what is the criticality for SAP customers? Would an SAP customer be vulnerable to application-level attacks taking advantage of this vulnerability? Would an SAP customer with services exposed externally be vulnerable to this type of exploit?


Over the weekend Rob Kelly, a colleague of mine, and I spent some time thinking through security ramifications for our clients; Rob spent some time attempting to exploit this vulnerability at the application level on a NetWeaver Gateway and an ABAP AS system front ended by Web Dispatcher. The good news is, SAP has standardized on the C Shell for a lot of their *NIX scripts, and external services are not script-based. PI/PO developers might use Bash scripts but these normally can't be invoked directly.


The primary consideration for SAP customers is a separation of duties issue. One of the critical technical separation of duties conflicts is that between development and system administration. With this vulnerability, a developer could release code that allowed them to execute arbitrary commands and thus gain system administrator access.


However, SAP customers following the common-sense security practices outlined below will find they already have processes in place to address this specific risk:


  1. Removing access for developers to execute OS-level commands in production.
    1. The env command used to exploit this vulnerability is defined in SM69 by default; developers should not have access to make modifications in SM69.
    2. Developers shouldn't have the ability to set up background jobs that would allow them to pass additional parameters.
    3. Don't forget to remove the ability to run report RSBDCOS0 in SA38/SE38.
  2. Addressing infrastructure security
    1. OS-level logins should be restricted to administrators.
    2. Restrict the ability to obtain console-level access via firewall – do your users need the ability to SSH to your application or database servers?
  3. Following secure software development practices
    1. Run Code Inspector or a similar product to ensure input validation for user-defined variables being passed to OS commands (via function module SXPG_COMMAND_EXECUTE, PI/PO scripts, or otherwise).
    2. Have another developer peer-review code for workbench requests as part of your change control process.
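Point 3.1 is worth a sketch. The pattern below is my own hedged illustration of calling an external command safely: the command itself is an SM69 entry, and the user-supplied parameter is validated against a whitelist before it is passed on. The report name, SM69 command name and whitelist are all hypothetical; check the full exception list of SXPG_COMMAND_EXECUTE in your system:

```abap
REPORT z_sxpg_whitelist_demo.          " hypothetical demo report

PARAMETERS p_fname(60) TYPE c LOWER CASE.

DATA: lv_param TYPE sxpgcolist-parameters,
      lt_log   TYPE STANDARD TABLE OF btcxpm.

lv_param = p_fname.

* Whitelist check: allow only plain file names. The leading space in
* the character set matters, because the char field is space-padded.
IF lv_param CN ' abcdefghijklmnopqrstuvwxyz0123456789_.'.
  MESSAGE 'Invalid characters in parameter' TYPE 'E'.
ENDIF.

* Never build the OS command string yourself; execute a logical
* command maintained (and restricted) in SM69.
CALL FUNCTION 'SXPG_COMMAND_EXECUTE'
  EXPORTING
    commandname           = 'ZLIST_FILE'   " hypothetical SM69 command
    additional_parameters = lv_param
  TABLES
    exec_protocol         = lt_log
  EXCEPTIONS
    no_permission         = 1
    command_not_found     = 2
    security_risk         = 3
    OTHERS                = 4.
IF sy-subrc <> 0.
  MESSAGE 'External command failed or was refused' TYPE 'E'.
ENDIF.
```

Combined with removing SM69 maintenance and RSBDCOS0 access from developers, this closes off the obvious paths for smuggling Shellshock-style payloads into OS command parameters.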


While the Bash shell should definitely be patched, having these controls in place should significantly mitigate the risk of Shellshock being exploited on SAP systems. Practically speaking, most customers can afford to wait and apply this patch in their SAP landscapes during their normal patch maintenance cycle.


What about you? Has anyone else out there explored the implications of Shellshock in their SAP landscapes?


Note: As mentioned in the comments, SAP has released a note on Shellshock: 2072994 - "ShellShock" vulnerability (CVE-2014-6271).



Thanks for joining the webinar “Security in an age of Big Data and proliferating Systems”. The recording is available here:



I want to share first the most important links:


SAP Single Sign-On



SAP Identity Management



SAP Enterprise Threat Detection

On October 15, SAP’s new product SAP Enterprise Threat Detection went into ramp-up. Find detailed information about SAP Enterprise Threat Detection here: http://scn.sap.com/docs/DOC-58501. If you are interested in becoming a ramp-up customer, go to the SAP Service Marketplace.


SAP Cloud Identity



SAP Cloud Identity Service



http://service.sap.com/roadmaps --> authentication required



Matthias Kaempfer

This is a close look at the advanced cyber defense portfolio of Telekom and T-Systems.

I once had a long term and intense 3-year project with T-Systems and there are still strong ties between me and the good folks at T-Systems on a personal level.


This made me write this blog, out of fascination with the topic and the people. By no means is this a marketing stunt, and I have no commercial ties to TSI. I also promised in one of my last blogs to report on the Cyber Center, and a lot of people expressed interest.


Given the old project ties, it is no wonder that in my new long-term security project at a different customer site we are still in contact and keep talking. I heard of the newly opened “Cyber Defense Center” – opened with much media coverage and even more German politicians – in the former German capital Bonn (now the capital of Deutsche Telekom). It really interested me, and I kept researching the technology behind the story.



I had the chance for a longer talk with Dr. Karl-Friedrich Thier, Senior Security Consultant of the Business Unit Cyber Security at T-Systems International. We talked for nearly two hours about the technology used and the strategies behind the cyber defense center. It is not only used by Deutsche Telekom itself to protect its huge network, but is also available as a service from T-Systems to (usually very large) customers. There are two aspects: an operational one (the Telekom Cyber Defense Center in Bonn) and a service-level, customer-facing one (Advanced Cyber Defense as a service by T-Systems).


But why am I so excited about this ACD? It is more than the usual “firewall bigger and higher” or the casual “we handle the largest DDoS”. It is a completely new philosophy of cyber defense and – as opposed to an abstract philosophy – extremely well executed and put into broad practice. That last sentence is my impression.


The idea and intention behind this center – the software and hardware, the project and the people – is (according to their web site):


“Companies that don't adapt their cyber detection and response capabilities to this threat constantly lag behind the complex and targeted attacks. To free themselves from this risky and frustrating cycle of playing "catch-up," companies need to construct an intelligent security management system that links information from a range of data sources and analyzes it in real time. The goal of this proactive approach is not only to protect the company from known attacks, but also to identify unknown attacks and quickly initiate countermeasures.”


The technology behind the Cyber Defense Center is very diverse and colorful. A lot of tools are used, like different instruments in an orchestra.

It all starts with building situational awareness. Deutsche Telekom operates around 180 “honeypots” around the world that mimic vulnerable systems and attract all kinds of attackers. By watching and measuring the hacking attempts, you get a pretty good overview of the tools actually in use, the attack vectors currently in favor, and the organizations using them. By the way, the deployed honeypots are actually mostly Raspberry Pis – quite cool gadgets.


Watch 180 honeypots live in action



The results are public and can be viewed at http://www.securitydashboard.eu/. In parallel, Twitter and news feeds are automatically watched for related activity based on keywords. Here you see, summarized in real time, where threats are actually brewing. Like a weather radar.


The actual network operations and analytics are performed with tools from RSA, which has a huge large-scale portfolio for network operations analytics and threat detection. But it is all threat-based and pattern-based.


The “art of security” is to look for the right patterns to react upon. And this is something you cannot buy – you have to collect and build it yourself over time. And it is constantly changing. This is one major task of every cyber security center, and it is the core IP (intellectual property) that makes you excel over all other approaches.


Security analytics is complemented by forensic and advanced malware detection tools like FireEye®. Rather than scanning for specific patterns, FireEye executes potential threats in an isolated virtual machine (a sandbox) and monitors their behavior – like a virus that is contained and studied in a laboratory. The attacker, however, does not see a VM but rather a physical workstation or server. One of the cool features is a “time warp” that can fool “sleeper Trojans” designed to sleep for some time before activating.


There are dozens more interesting tools from various companies in use, but they all underline the various aspects of network security.

All these tools are nothing without the proverbial orchestration. The core of the cyber defense center is its central organization: the different skillsets present, from analysts to operators and squad leaders, who can act on the spot on any actual threat.


The concept of a Security Operation Center (SOC)




People as success factors


is the tagline on that slide, and the phrase is taken very seriously in the ACD concept.


As the picture shows, it is the staffing and the organization, together with the selected software and hardware tools, that make this Cyber Defense Center so powerful. For much of the talk with Dr. Thier we did not discuss software and features but how an organization needs to cater to the needs of its customers. Every customer is different, every customer has different threat and risk areas, and there is no “one size fits all”, especially not in security.


The fascinating thing about the setup was that the people involved at T-Systems had deep security knowledge, both commercial and governmental, and that this, in conjunction with the well-selected software and the organizational strength of a large company, all together made a great picture.


The key to all this technology (like the patterns) is how it is applied – the intellectual property behind the tools. This is how it all works well together.


One of the questions that came to my mind is whether regular customers (and even my customers are not really small) could afford such an organization. Probably not: the reduced risk would be in sharp contrast to the big investments in the center, the people and the technology. But in the future I see a convergence of self-contained on-premise strategies and services like the Advanced Cyber Defense center, which will surround the overall strategy like a shell.


We will see how security strategies everywhere evolve in the future.


(Disclaimer: This is not a sales pitch, but if you want to look into a European case of applied security for large networks, this is someone you should talk to – or, even if you operate on a much smaller scale, at least someone you should learn from.)

Over the last few years there have been indications of rising interest in SAP systems by white hatters and black hatters, and I guess any color in between. In any case the world has got more dangerous for systems in general, not least because they are increasingly interconnected and exposed in ways that were unthinkable (for most) in the past. Although traditional security solutions remain vital for minimizing the attacks on your system landscape, you can and should assume that there will be unhealthy activity within your defensive perimeters. Determined attackers are likely to get through eventually and the best technical precautions might be nullified by internal personnel or by social engineering tricks.


These are well-known dangers, and there appears to be a serious gap in the coverage of SAP systems by existing security products. These products lack insight into SAP business software and also run up against what is essentially a big-data problem – that is, how to analyze the security-relevant data that exists in the landscape. Later this year, SAP plans to go into ramp-up with a new product designed to address exactly this issue.


For customers who may be interested in joining the ramp-up, further information can be found at www.service.sap.com/public/rampup on the tab Upcoming Ramp-Ups.


More information on SAP Enterprise Threat Detection will be available here on SCN when it goes into ramp-up.

When my little-but-big company – which I started 10 years ago and have fostered ever since – began the venture last year of changing its scope from SAP PI, Basis, data center consulting and helping to manage complex SAP landscapes on a European scale, to SAP security, it felt like the good old Internet times. It was a time warp back to the turn of the millennium, or to the appearance of the Apple II and the IBM PC in the 80s. Exciting times.


Approached by IBM to become a strategic partner in the IBM/SAP security world, we were very pleased that such a “big” company really trusted us to work with them in the major league.

We also looked at the surrounding economy: the world of pen testing, security administration and operations, and SIEM (Security Information and Event Management) in the so little, so big SAP universe.


(Just to explain what a pen test is: it is a penetration test, where dedicated security personnel try to break into the SAP system. This breach attempt is made on all levels: network, infrastructure, Basis hacks, RFC hacks, SAPGUI hacks, but also social hacks like email phishing and password sniffing.)

We also chose our preferred vendor for SAP penetration testing. But to make a long story short and to come to my actual point: it is easy to say “we do security now”, especially in the SAP world – choose a product, go ahead and try to hack the planet.


A good security breach is more than a tool. Like everything else, it requires deep knowledge of networks, infrastructure, attack vectors, and the tools needed and used. And if you don't want to use a commercial tool, you still have a good choice.


One of the tools you need when you start pen testing is the Kali distribution, maintained by the folks at Offensive Security.

The Kali Linux distribution is open source and has a long history, having started as a tool collection a long time ago. They also offer an online class with a commercial certification, but everybody in the industry will agree that it is a very demanding certificate with a tough exam. This means it proves work-like experience and hands-on expertise.


But beyond the certificate, these are the “tools of the trade”, and you should be able to perform any pen test even without commercial tools. There is a great companion book, and if you really want to start looking at the pen-test world, get the Kali distro on your laptop, get the book, start Nmap and practice.

But even if you learn the “Top 10 tools” that Kali emphasizes, you will need a lot of practice to become fluent in a penetration-test workflow.

(If you happen to be at a customer site, try running a full Nmap scan by plugging your private laptop into the corporate switch and count the time until security stands at your desk. If it takes more than 15 minutes, give them a security session.) (OK, this is maybe not the brightest idea, but you get the story.)


Kali has also coined the motto: “The quieter you become, the more you are able to hear.” And this is really true, not only for all security matters in the SAP world, but in the rest of corporate IT as well.




Security needs a very thorough understanding not only of large data center infrastructure and the surrounding networks, but also a lot of patience, listening and exploring. No tool will replace your knowledge and your ability to map a complex SAP network. And the SAP world adds a big twist to pen testing. I have seen the one or other pen tester (usually right out of college, but sold as the security consultant) from outside the SAP world use an open source tool and then ask around: “OK, looks like I am in with SAP_ALL, but what do I do now?”

Things like hacking via RFC and SMGW (the gateway) require knowledge of programming – ABAP, Java and the like.


It is one thing to be a loudspeaker, touting all your hacker experience to the world, going to Black Hat in Las Vegas with a tattoo on your forehead and pretending to be the coolest kid in the universe. As someone said: maybe your little teen sister is impressed, but not the CISO.


I had a longer conversation with some partners about a good way of approaching my customers, and I thought of things like a German blog, Twitter, and weekly reports on threats and new findings. But in the end, this would just be noise. After a short while, nobody would listen anymore. We decided that the quiet way – the conservative but most trustworthy approach – was simply to call, meet and talk. Talk about their needs, their local threats and findings, and how to handle all these large and small security issues.


Security – especially penetration testing and discovering true vulnerabilities that, in the hardest case, could make or break a company (see my blog) – makes a trustworthy relationship a base requirement in every customer situation. Showing first and foremost that you are a responsible person, guiding the customer through the risk assessment and the differentiation between hype and real risk, is a demanding task in the SAP world of large installations. Knowing the hack is one thing; weighing the risk, the cost of the process to fix the gaps, and making everything fit into an overall security strategy is a completely different world.


I like the challenge of this professional spread: between the fun of serious hacking and testing on one side, and the serious presentation on the other – putting on your black suit and placing the findings in a real perspective.


(edited for content, grammar and political correctness)

Frank Koehntopp

Designing for Security

Posted by Frank Koehntopp Aug 27, 2014

There are two distinct ways to build security into your software:


  • have your software tested and/or hacked, and start applying technology to plug the holes and keep the bad guys out
  • think about how your software could be mis-used and make sure your design prevents that


Or, as Gary McGraw just wrote, in much better words:


[Screenshot: quote by Gary McGraw]


Unfortunately, the concept of “anticipating attacks” seems to be quite alien to the average developer – recognizable by the response to a threat scenario: “but why would someone do that?”


It also seems to be hard to teach. There is a new effort that I think has lots of promise: the IEEE Center for Secure Design tries to tackle the problem from the design angle. This is their mission statement:


The IEEE Computer Society's CSD will gather software security expertise from industry, academia and government. The CSD provides guidance on:

  1. Recognizing software system designs that are likely vulnerable to compromise.
  2. Designing and building software systems with strong, identifiable security properties.

The CSD is part of the IEEE Computer Society's larger cybersecurity initiative, launched in 2014.


If you're interested in the topic, I encourage you to read their document. It explains the most common design flaws that lead to vulnerabilities. Ideally, every security architect in your team should have read (and understood) them:


[Screenshot: list of the most common design flaws]


These are the topics explained in more detail in the PDF (click on the image to read it):














In 2012, American agencies under the lead of SIFMA were running the first cyber-attack stress test on financial institutions on Wall Street.

One year later, it was repeated in London, with a broader approach and more detailed preparation. This stress test and its results are stunning. Everyone who deals with security should look at the scenario and ask whether their organization has an answer to the question it raises:

How would we behave, how would we address all the issues that surfaced during the organized cyber-attack?

This is nothing that only affects Wall Street or London City’s financial district. This scenario can hit every company in the world.

Since I recently won a prize in a storytelling contest run by Germany's largest IT magazine, c't, let me recount the tale of a cyber-attack war game in a novel way.

And since I am German (as SAP is), let's assume the story happens in SAP's homeland, Germany, and that Carl B. Max, the CEO of AUTOBAHN AG (“Fast is GOOD”), is still asleep in his home near his headquarters in Frankfurt am Main, Germany's financial district.


The sequence of events that led to the disappearance of the German Autobahn AG:

At 6:00 in the morning, Twitter, Facebook and the German Autobahn forum “The Fast and the Faster” show the first posts: how bad the German Autobahn is – full of potholes, governed by too many speed limits, too many traffic jams.

At 6:30, more serious posts and accusations are added: pictures of deadly accidents caused by potholes on the fastest parts of the Autobahn. The idea of a class-action lawsuit is mentioned.

At 8:00, the posts have piled up into a veritable shitstorm.

At 8:30, the Twitter and Facebook accounts maintained by the PR department of Autobahn AG are hacked and post strange and bogus replies to the accusations. The impression that the accusations are being ignored and played down is immediate.

At 8:45, Carl B. Max, CEO of Autobahn AG, is arriving at the office.

At 9:00, rogue High Frequency Trader are starting an attack on the stock of AUTOBAHN AG. They are short trading the stocks within seconds to a level, where regular trading algorithms, due to the high trading volume and dropping values, are suddenly releasing stop loss orders. This is generating an automatic trading avalanche, resulting in a landslide on the course of the AAG stock.

At 9:30, Social Medias are full of speculation on bad financial deals that are threatening the future results of Autobahn AG. The PR-Account of the company speaker is hacked and false PR statements are send to the world wide press. Since nobody knows, who was adressed and what was published, counteractions became difficult.

At 10:00, Carl Max is calling for a press conference at the headquarter in the office Tower at the “Frankfurter Kreuz” near the Airport. He demands actual financial statements from his CFO that he can present as a testimonial to the press, that everything is good.

In the middle of his calls, the telephones go dead. A massive DDoS attack hits the VoIP based telephone center. A special VoIP virus, tailored to this equipment, eats its way through the Ethernet based phone infrastructure. Only mobile calls get through. "Can't be reached for comment" is the phrase of the hour.


At 10:15, the SAP system crashes. A restore from backup is necessary. IT discovers that all tapes from the last 4 weeks are damaged due to an error in the backup procedure. The SAN has stopped working because of a hardware failure.

At 10:30, the CFO finds out that all numbers in the SAP Business Warehouse systems are corrupt. It is unclear whether the backup contains unmanipulated figures.

At 11:00, the rogue high frequency trading continues in London after the London Stock Exchange opens. The landslide in the share price goes on.

At 12:00, Carl Max can’t present any reliable numbers to the press. The attack is not mentioned.

The plea to the large stock exchanges to suspend trading in the stock is not granted, since AUTOBAHN AG cannot present any figures as proof and no one can be reached to comment on the incident.

At 15:00, the NYSE on Wall Street opens. The rogue trading leads to a suspension of trading when the company value hits one cent and the stock is rated as a penny stock.

At 17:00, when the German Stock Exchange in Frankfurt closes, AUTOBAHN AG is "pleite", bankrupt.

Do you think this is not for real?

Fiction? You wish, but it is real life. Every single element of this cyber-attack has already happened. Some of them are even common threats, like manipulation of social media or high frequency trades. Ever thought about how reliable VoIP is, or how vulnerable a Microsoft Lync Server is, especially in a corporate environment?

Some of them are recent developments, like the new "attack vector" of manipulating BI cubes with the intent of leading the hacked company to false decisions.

And the backup? Guess how often I have seen this happen in 20 years? More often than you would think, and it was always an internal problem of sloppy backups, not even a hacker's attack.

In the end, Quantum Dawn recommended first and foremost to establish fast, clear and direct communication on attacks. Don't keep such attacks secret. There must be internal and external (governmental, if it is a broad attack) communication channels that react within minutes. These attacks may be criminal, but given the worldwide state of politics, such an attack can even be initiated by governments as part of global warfare.

And you need an alert IT department that can counter this threat in unison.

Really, think of the company you are in: who would you call if you saw an attack on an SAP system? And who could respond immediately?


More Materials:

Deloitte, as audit company, was part of the cyber exercise. Here are their findings

And a great video on it, also from Deloitte: Cyber Security. Evolved.

And also check my first blog in this series of security papers: THINK Security: Towards a new horizon

It is interesting to watch the security world undergoing a dramatic change. The classic approach of protecting the good SAP system against evil with a good firewall and relying on the closed SAP ABAP technology (known only to the good guys) no longer lives up to the promise.


The old security assurance that SAP is so isolated and so exotic in the company network that nobody will enter the premises has been slowly deteriorating over the last decade. Suddenly, the Internet, the extranet and the VPNs are all over the place, connected straight to the ECC core system. SAP hacks are a standard topic at any blackhat convention.

While there are so many new security technologies in firewalls, appliances and software security frameworks, the security world at SAP is still old-fashioned. But this is also a tribute to the ever growing complexity of the SAP ecosystem. The impression of living behind a secure wall in a secret garden is just a glorified view of the past.

It is easy to say "fix and harden the SAPRouter and Web Dispatcher". But what if you have thousands of routes and dozens of routers and web dispatchers? Just keeping them up to date is a job in itself.

Customers need to learn to manage this complexity in a new way. I know a lot of SAP sites that are discussing continuous patching, upgrading, testing and enhancing. But this by itself is a daunting task. One of my larger customers has 60 SAP systems in one tier, all related and connected. Multiply this by three and you have Dev, QA and PROD tiers with 180 machines. Tell me how to make "permanent changes" to this landscape and ensure maximum security while testing all 60 app systems in unison every time after patch day. In theory, you can add unlimited resources, 24x7 uninterrupted strategies and an unlimited budget. Yes, you can solve it. But in economic terms, it is not feasible. It is the old economic story of limited resources and limited money to spend.

The first step in a new strategy for security is risk assessment. There was a great blog, Balancing Danger and Opportunity in the New World of Cyber Domain, a great summary by Derek Klobucher of the keynote speech of Gen. Michael Hayden (retired NSA chief), who spoke to the attendees of the SAP Retail Forum 2013.


Hayden drastically stated the new security paradigm: "If you have anything of value, you have been penetrated," Hayden said. "You've got to survive while penetrated -- operate while someone else is on your network, wrapping your precious data far more tightly than your other more ordinary data."

He basically stated that security is no longer about vulnerability alone. He introduced a formula expressing that risk is always a relative value for your assets:


Risk = vulnerability × consequence

This is the most important message for the near future for everyone involved in security: you need to manage risk. Security risk in time and over time.
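Hayden's formula can be illustrated with a small sketch. The asset names and numbers below are invented for the example; the point is only that ranking by risk rather than by vulnerability alone changes your priorities:

```python
# A toy illustration of Hayden's formula: Risk = vulnerability x consequence.
# All values here are made up for the sketch.

assets = {
    # asset: (vulnerability 0..1, consequence in arbitrary impact units)
    "SAP ECC core":      (0.3, 100),
    "public web server": (0.8, 20),
    "training sandbox":  (0.9, 1),
}

def risk(vulnerability, consequence):
    return vulnerability * consequence

ranked = sorted(assets, key=lambda a: risk(*assets[a]), reverse=True)
print(ranked)
```

Note how the highly vulnerable sandbox ends up last: its consequence is tiny, so its risk is tiny, while the comparatively well-hardened ECC core still tops the list because the consequence of a breach is huge.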

That must be the goal, even more so for a critical system like the central SAP system. The new security paradigm lives in real time, defending against frequent attacks, internal and external threats to capture or manipulate data. Organizations must face the new complexity, new organizational challenges and security risk management.

And with the risk, you need to change the security thinking from "defending walls" as in medieval castles to "pattern recognition", an approach where you anticipate the next attack while it is building up. Here, the technologies of Big Data, SIEM and artificial intelligence are emerging. In Germany, T-Systems and Telekom have a great "real life" showcase: the "Advanced Cyber Defense Center" in Bonn. (Maybe I will do a blog about it one day.)

Yes, this is a very complex and demanding world. And this is why even big companies need to talk, act and cooperate on security issues.

But this is the topic of my next blog: "Quantum Dawn – What SAP Data Centers can learn from SIFMA war games".

Just relying on your good old firewall is a thing of the past.

For most SSO issues, the Logon Trace is needed to find the root cause.


In an ABAP system, the logon trace is actually the developer trace of the work process. Normally we use the important note:

#495911 - Trace analysis for logon problems

After getting the trace, we can use the Security Audit Log to locate the work process which handled the logon and find the real reason why the logon failed.


But sometimes, if the security audit log is not active or there is no entry logged in the audit log, it becomes difficult to find the work process.


For HTTP logon issues, I found we can use the ICM trace to locate the work process.

First, raise the ICM trace level to 3.

This can be done in SMICM, via menu "Goto -> Trace Level -> Set":


(Also remember to go to SM50 and raise the trace level to 3 on the "Security" component for the DIA work processes.)


Then reproduce the issue, and afterwards change all trace levels back to their default values.


Now let's check the ICM trace. Use the timestamp of the reproduction to find the related trace entries:


(Here I recommend the free software Notepad++; it can search large text files very fast, shows the results in a list and jumps to the position in the file on double-click.)

Then we can search for the keyword "IcmHandleOOBData"; in the results, the following lines are what we need:


[Thr 140080821593856] IcmHandleOOBData: Received data on 1st MPI (seqno: 1, type=6, reason=Request processed in wp(6)): 42/23079/0

[Thr 140080821593856] IcmHandleOOBData: request will be processed in wp 6

Here the "wp 6" mean the work process number 6 handled this logon.


Then we can check dev_w6 to find the related trace entries; we can search using the timestamp or the keyword "note 320991":


In this logon trace, we can find the root cause of why the logon failed.

Segregating warehouse responsibilities using standard Inventory Management and Warehouse Management authorizations


In certain situations there can be a requirement to separate logistical processes in an SAP system on a detailed level. This is usually the case when different parties are responsible for performing different logistical processes and/or are responsible for different parts of the same warehouse.

Examples of the situations where the requirements could occur are:

  • A third party executes logistical activities and manages a part of the  plant and warehouse.  In these parts of the plant and warehouse this third party is responsible for the stock.
  • ‘Special’ materials are stored in certain parts of the warehouse and should only be handled by a certain set of users.

This separation of responsibilities can be depicted in SAP by setting up different plants and warehouses that can subsequently be authorized on. But that solution would mean a redesign of the logistical landscape, and additional administrative activities would be needed during day-to-day operations. Avoiding this redesign and administrative burden requires effective authorization restrictions on organizational elements lower than plant and warehouse. The requirement of controlling who executes IM and WM processes on a detailed level can be met using standard SAP authorizations in combination with IM/WM customizing, without setting up additional plants and warehouses. This blog discusses this solution for segregating warehouse responsibilities.

Content of this blog

This blog explains when this solution can be used, when it should not be used, how it works and what it can and cannot do. It also gives an overview of the activities that need to be performed to implement the solution. The solution is based on my own investigation and experience, but information from several notes, knowledge base articles and threads was also used and combined to create a complete solution.

The solution and when to use it

You can use the solution when you need to differentiate between groups of users who can perform IM/WM activities within parts of the same plant and warehouse.

The SAP WM customizing and the authorization elements 'storage location' and 'storage type' form the basis for the solution. By properly defining the WM customizing and authorizing on these elements you can:

  • Restrict IM movements based on storage location to certain groups of users (next to the normal restriction on movement type and plant)
  • Ensure that 'allowed processes' are defined in WM customizing (like storage type search settings) so that during WM processes the users who need to execute them are not hampered by authorization checks
  • Restrict 'manual' WM movements based on the 'source' and 'destination' storage type to certain groups of users (next to the normal restriction on warehouse and WM movement type)

By authorizing on these two elements (storage location and storage type), you can create an authorization setup that only allows users with certain roles to perform specific IM and resulting WM movements for specific storage locations, and restrict who can make 'manual' WM movements for specific storage types. In this case 'manual' WM movements refer to transfer orders that are not triggered by an IM movement or other specific logistical actions, for example transfer orders of movement type 999 that can be created manually via transaction LT01.

With such an authorization setup, only the party that is responsible for the storage locations and storage types can control the movements of stock located there, while normal 'allowed' warehouse processes are performed in a regulated manner and are not hampered by authorization restrictions.

When not to use it

Only use it when there is a hard requirement that these restrictions are enforced by the system. Implementing and maintaining the solution (for WM) can be complex. If there is no hard requirement to enforce these restrictions in the system on such a detailed level, don't do it. If checking that procedural agreements are adhered to is sufficient, do not use authorizations for it. It also makes no sense to put restrictions in effect in SAP if there are no physical restrictions as well. If SAP blocks a user from moving materials from one part of the warehouse to another but there is no physical restriction (like a locked door or a fence), the person can still just move the materials and not register it.


Before this solution can be implemented, a number of things need to be clear. If these aspects are not clear, the solution cannot be implemented correctly and will only work partly or not at all. The following must be determined:

  • Ownership of all Storage locations
  • Ownership of all Storage types
  • Clearly defined logistical processes
  • Which party executes which steps in these processes

Combined ownership of storage locations and storage types should be avoided as much as possible, as this complicates and can (partially) undermine the solution. Wherever possible, ownership of the storage types for interim bins should be determined as well.

The concept

Inventory Management

When an IM movement is made, an authorization check on plant and movement type is executed. If the user is not authorized, the movement cannot be made. Through settings made in customizing, a subsequent check can be activated whenever a movement is made for a certain storage location. This customizing switch is set per storage location and is off by default. When this customizing setting is activated for a storage location, it triggers an authorization check on the combination of movement type, plant, storage location (and of course activity) whenever an IM movement is made using this storage location. The authorization object checked is M_MSEG_LGO. See also SAP Knowledge Base Article 1668678.

So by granting the roles of a certain party only the storage locations/plants they are responsible for, in combination with the movement types they are allowed to perform, the required segregation of responsibilities can be achieved.
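As a rough illustration of the M_MSEG_LGO logic described above (this is a model for reasoning, not actual SAP code; all plants, storage locations, movement types and activities are made-up sample values):

```python
# Hypothetical model of the M_MSEG_LGO check: a user's authorizations are a set
# of allowed (plant, storage_location, movement_type, activity) combinations,
# and the storage-location check only fires for storage locations whose
# customizing switch is on.

checked_storage_locations = {("1000", "0001")}  # (plant, sloc) pairs with the switch on

def m_mseg_lgo_check(user_auths, plant, storage_location, movement_type, activity):
    # Without the customizing switch, the extra storage-location check is skipped.
    if (plant, storage_location) not in checked_storage_locations:
        return True
    return (plant, storage_location, movement_type, activity) in user_auths

# The third party's role only carries movement type 311 for 'its' storage location.
warehouse_party = {("1000", "0001", "311", "01")}

assert m_mseg_lgo_check(warehouse_party, "1000", "0001", "311", "01")      # allowed
assert not m_mseg_lgo_check(warehouse_party, "1000", "0001", "101", "01")  # wrong movement type
assert m_mseg_lgo_check(warehouse_party, "1000", "0002", "101", "01")      # switch off: no check
```

The last assertion shows why the customizing switch matters: for storage locations without it, M_MSEG_LGO is never evaluated and only the normal plant/movement-type check applies.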

When a storage location to storage location movement is made, both the 'source' and 'destination' storage locations are checked if the customizing check is set for both storage locations. This would mean that a movement between storage locations 'owned' by different parties is blocked by authorizations. In those cases a 'two-step' storage location to storage location movement can be made, wherein the sending party executes the first step and the receiving party executes the second step. See also SAP note 205448.

Warehouse management

The solution for warehouse management is more complicated and is based on SAP WM customizing concepts like storage type search (strategies).

Authorization check for all transfer orders:

During the creation of a TO, an authorization check on warehouse is performed in all cases (field LGNUM of object L_LGNUM). At that point no check on storage type is performed (LGTYP is checked with DUMMY). See also Knowledge Base Article 1803389. If the user is not authorized for the warehouse, the TO cannot be created.

Authorization checks in relation to WM customizing:

When a transfer order is created, SAP will try to determine which storage type to pick the material from (source) or which storage type to put the material in (destination).

To determine where to pick from, SAP checks whether it can find a suitable source storage type for removal by searching the 'storage type search' table defined in WM customizing. This search uses a number of variables, like reference movement type, warehouse, the pick strategy indicator in the material master and the special stock indicator, to find a suitable storage type. If a suitable source storage type is found and used in the transfer order, no extra check is performed.

The same method is used to determine the storage type to put away the material. In that case a suitable destination storage type is searched for in the 'storage type search' table in WM customizing. If one is found, no extra authorization check is performed.

In a lot of cases WM movements are triggered by logistical activities like IM movements. Under normal circumstances the 'storage type search' WM customizing is properly defined for the logistical process, the necessary material master data is set up, and the TO can be created without issues and without needing explicit authorization for the source or destination storage types. This is because it is an 'allowed' process and as such the extra authorization checks are not needed.

If no suitable source or destination storage type is found in the 'storage type search' table and the user creates the transfer order in the foreground, the user can enter a source or destination storage type manually. In that case an extra authorization check is executed. This check is on the combination of storage type and warehouse. The same object L_LGNUM is used for this check, but now the field LGTYP is not checked with DUMMY but with the storage type (see FORM BERECHTIGUNG_LGTYP of include FL000F00). This check is performed because the entered storage type is not found as a suitable storage type in the search strategy (see include LL03AF6I). This check on object L_LGNUM is executed separately for the destination and the source storage type. Also when the user creates the transfer order in the foreground and changes the source or destination storage type into a storage type that is not part of the applicable 'storage type search' table entry, this extra authorization check on the source and/or destination storage type is executed. See also Knowledge Base Article 1803389. A thread that also mentions this is http://scn.sap.com/thread/775605

Using what is explained above, this extra authorization check can be used to restrict the deviations a user can make compared to the 'allowed' processes that are defined in the WM customizing. By only granting authorization for the storage types the user is responsible for, the user can only make deviations to these storage types. This can be considered technically correct, as the stock located there is under this user's responsibility.
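To summarize the flow described above in one place, here is a simplified model (again not actual SAP code; warehouse numbers, movement types and storage types are invented for the sketch):

```python
# Simplified model of TO creation: if the storage type search finds a suitable
# storage type and it is used unchanged, no extra check fires; a manual entry or
# a deviation triggers an L_LGNUM check on the concrete (warehouse, storage type).

storage_type_search = {("WH1", "311"): "916"}  # (warehouse, ref. movement type) -> storage type

def to_storage_type_allowed(user_lgnum_auths, warehouse, ref_movement, entered_type):
    found = storage_type_search.get((warehouse, ref_movement))
    if found == entered_type:
        return True  # 'allowed' process: no extra authorization check
    # Deviation or no search hit: check L_LGNUM with the concrete storage type.
    return (warehouse, entered_type) in user_lgnum_auths

party_auths = {("WH1", "005")}  # this party 'owns' storage type 005

assert to_storage_type_allowed(party_auths, "WH1", "311", "916")      # matches search: fine
assert to_storage_type_allowed(party_auths, "WH1", "311", "005")      # deviation to owned type
assert not to_storage_type_allowed(party_auths, "WH1", "311", "001")  # deviation to foreign type
```

The third assertion is the whole point of the solution: a user can only deviate from the customized search strategies into storage types their own role carries.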

Authorization checks for ‘manual’ transfer orders

Some WM movements can be created manually and are not triggered by other activities like IM. For instance, transaction code LT01 can be used to create a TO manually. Normally these movements are WM supervision movement types like 999. Not all WM movements can be created manually; which WM movement types can be used to manually create TOs depends on customizing. For all movements that are created manually, an authorization check on WM movement type in combination with warehouse is executed. The object that is checked is L_BWLVS. The general check on warehouse is executed as well. During the creation of manual transfer orders the concept of 'storage type search' and authorizations also applies. By not setting up 'storage type search' customizing for those movements, the extra authorization check is always executed. By only providing authorization for specific storage types, users can only move stock between the storage types they control using these 'manual' movements.


What it can do

  1. By restricting access at IM level (movement type, plant and storage location), or via other actions that trigger a transfer order, the authorization for the subsequent WM movement is restricted as well. If the user has authorization for the action, the user also has authorization for the subsequent TO, but manipulation of the storage types the material is picked from or put away to can be restricted to those defined as applicable in the storage type search (WM customizing) and those that are controlled by the user's authorizations (via roles)
  2. The manual WM movements can be restricted based on movement types and to the storage types that are controlled by the user's authorizations (via roles)

What it cannot do

Warehouse management:

No authorization check on storage type is performed when a TO is confirmed. The warehouse is checked but the storage type is not (object L_LGNUM with DUMMY). This means that anybody with authorization for the warehouse can confirm any TO for that warehouse. There is no way to restrict on storage type during TO confirmation using standard SAP. Because a transfer order needs to have been created before it can be confirmed, and the creation of the TO is controlled, this gap is not crucial for the solution. Also, the storage type cannot be altered during confirmation.

Inventory Management:

In almost all situations a material document will contain a storage location. There are, however, a few situations where a material document does not contain a storage location, namely when a goods receipt is performed and the materials are consumed upon receipt. This happens, for instance, if a PO has a cost center as account assignment. You must determine whether these situations are relevant and whether this gap matters for your situation. If, for example, goods receipts are always performed by one party, then only that party should have the authorization to do goods receipts. Although this party could potentially do a goods receipt while the PO erroneously contains a storage location which is not 'owned' by them, this will not be an issue as they are responsible for all goods receipts. In case multiple parties need to be able to perform goods receipts for different storage locations, you can include an authorization check (on e.g. the storage location in the PO) using BAdI MB_CHECK_LINE_BADI. This is, however, not standard SAP.

How to set it up

Inventory Management:

The easier part is the authorization restriction for Inventory Management. This can be done in four steps:

1) Activate the check on storage location:

Activate the check on object M_MSEG_LGO in customizing (menu path "Materials Management --> Inventory Management and Physical Inventory --> Authorization Management --> Authorization Check for Storage Locations"). See also SAP Knowledge Base Article 1668678


2) Make storage location an organizational level:

Use program ‘PFCG_ORG_FIELD_CREATE’ to make the field LGORT an organizational level. See SAP note 727536

3) Update SU24 for relevant transaction codes:

All transactions that create, change or display IM movements need to be updated so that object M_MSEG_LGO is set to 'proposed = Y', so that the object is populated in PFCG during role maintenance.

4) All roles that contain these transactions need to be updated to contain the M_MSEG_LGO object with the right plants, storage locations, movement types and activities. Important to know is that the check on M_MSEG_LGO is also performed when a material document is displayed. This means that roles that provide display access to material documents (like MB51) also need to be updated to include the authorizations with activity '03'.

Warehouse management

Setting up the solution for warehouse management is trickier and consists of three steps:

1) Set up all necessary storage type search strategies to cover ALL ‘allowed’ processes:

Stock removal and stock placement storage type search entries have to be set up in WM customizing for all 'allowed' processes for which no additional authorization check on storage type is needed.

2) Make sure that the necessary master data (material master data etc.) is set up correctly so that the correct storage type search entry can be found and used during 'allowed' processes.


3) Update the roles:


All roles that contain the object L_LGNUM need to be updated so that they contain the authorization for the storage types belonging to the parties they are for. Please note that the object has no activity field and that some display transactions related to WM also check this object, with DUMMY for the field LGTYP.


What to consider during implementation

Please keep in mind the below aspects in order to successfully deploy this solution:

  1. WM storage type search (strategies/sequences): all 'allowed' scenarios must be covered by stock removal and stock placement strategies, otherwise authorization checks on storage type will be triggered which can fail because the user is not authorized while he/she should be able to perform the step in the process. Considering the many variables involved, there are many strategies to maintain. Having the processes clear and involving a specialist in SAP WM is essential in order to cover everything needed.
  2. Material master data: in order for SAP to find the correct storage type in the 'storage type search' table, the material master data fields like the stock placement and stock removal strategy indicators need to be set correctly. This is crucial for the solution to work. As there are a lot of material master records, this can be quite some work. Most issues after introducing this solution will most probably be caused by incorrect or missing material master WM data.
  3. Training (of key users): especially the WM part of the solution can be complex. Training of (key) users is important in order for them to understand the concept and to find the right solution when goods ‘get stuck’.
  4. (Temporary) super role: it can be very useful to (temporarily) have a sort of 'super user' role available that can make transfer orders between storage types handled by different parties (including those for dynamic bins). This can be done by granting this role authorization for all storage types, or by creating a WM movement type that has search strategies for all storage types and granting access to that movement type. By assigning this role to a limited number of key users during the first phase after go-live, a workaround is available when a material movement gets 'stuck' while a real solution (like changes to material master data, WM search strategies or authorization roles) is being investigated and followed up.

