Otto Gold

On the way to granularity

Posted by Otto Gold Oct 16, 2014


Let’s start with S_TABU_DIS and S_TABU_NAM

We still remember the times when it was not easy to authorize access to database tables via the generic tools (transactions such as SE16, SE17, SM30, SM31 or SM34). The only option was the authorization object S_TABU_DIS, which authorizes at the level of authorization groups (groups of tables). To summarize: you permit access to a certain group of tables, which means the user can access either all of these tables or none of them. Some people tried tricks such as reassigning tables to different groups.

Then the S_TABU_NAM object was introduced, which made it possible to authorize for a single table – something many, MANY (!!) authorization administrators wanted and prayed for. Now you can create parameter transactions for the tables you need to authorize, maintain the S_TABU_NAM proposal for each parameter transaction in SU24 and, via the role menu, get all the S_TABU_NAM instances into the role as “Standard”.

And how does S_TABU_NAM work exactly? In the function module VIEW_AUTHORITY_CHECK, the system checks S_TABU_NAM only if the authorization check on S_TABU_DIS was unsuccessful. This procedure enables both the retention of the previous table access concept and the superposed use of both authorization objects. Notes 1500054 and 1434284 provide information on the optimum use of this enhancement.
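To make this concrete, here is a minimal ABAP sketch (my own illustration, not SAP code) of how custom programs should delegate a table access check to VIEW_AUTHORITY_CHECK instead of hand-coding checks on S_TABU_DIS or S_TABU_NAM themselves. The table name is a placeholder, and the action values and exceptions should be verified against the function module documentation in SE37 on your release.

    * Sketch: delegate the table access decision to the standard module,
    * which checks S_TABU_DIS first and S_TABU_NAM as the fallback.
    DATA lv_tabname TYPE tabname VALUE 'ZMYTABLE'.   " placeholder table

    CALL FUNCTION 'VIEW_AUTHORITY_CHECK'
      EXPORTING
        view_action                    = 'S'          " 'S' = display, 'U' = maintain
        view_name                      = lv_tabname
      EXCEPTIONS
        no_authority                   = 1
        no_clientindependent_authority = 2
        table_not_found                = 3
        OTHERS                         = 4.

    IF sy-subrc <> 0.
      MESSAGE 'No authorization for this table access' TYPE 'E'.
    ENDIF.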

If you build roles via menus and understand the benefit of SU24, you will never grant table access that is not necessary, or that you cannot explain to your auditor when asked why it was given (assuming you understand the “Standard” instance type and know the “sun over the mountains” icon and its magic).


Technical details for the interested:

  1. You can see which authorization group a table is assigned to in table TDDAT. The combination of TABNAME and CCLASS is what you are looking for (a small sketch of this lookup follows after this list).
  2. It is probably more convenient to find this information in the SAP standard screens. In that case I recommend transaction SE11: enter the name of the table, click “Display” and then choose Utilities > Assign Authorization Group from the main menu.
  3. Note that not every table is assigned to a group. Or to a meaningful group. Note that table group &NC& is equivalent to an “empty value”. Beware of SAP standard SU24 proposals that pull the &NC& value into the S_TABU_DIS field DICBERCLS. But that would be another story.
  4. If you want to learn more about the authorization concept options for generic table access, or simply want to have everything described in one place, please find your way to OSS Note 1434284 - FAQ | Authorization concept for generic table access.
  5. Avoid coding your own authorization checks on the S_TABU_* objects (all objects in the family) at all costs. Use function module VIEW_AUTHORITY_CHECK for this purpose every time (as sketched above). See OSS Note 1481950 - New authorization check for generic table access for details (in combination with Note 1434284 above!!).
  6. Note: changing the authorization group of a standard table is a modification!
  7. Warning: be careful with banning the S_TABU_DIS object completely. It should no longer be used as a hardcoded authority check in SAP standard code (if you find it outside of VIEW_AUTHORITY_CHECK, please let us know here!), but you can still find it in TSTCA (check in SE93 – authorizations needed to start a transaction). Because the S_TABU_DIS/S_TABU_NAM logic is implemented in the VIEW_AUTHORITY_CHECK function module, the TSTCA mechanism does not know about it (it does not go through that module!), so an S_TABU_DIS entry in TSTCA must still be authorized with that exact object and not with a “friend” object as with DIS and NAM. If you find such TSTCA entries in SAP standard transactions, you can also consider reporting them here and we can see if we can get rid of them once and for all.
  8. S_TABU_DIS and S_TABU_NAM get a small mention in Frank Buchholz’ blog ABAP Development Standards concerning Security. Unfortunately it does not mention that S_TABU_* checks should not be hardcoded and that VIEW_AUTHORITY_CHECK should be used instead, but maybe you can just believe me on that one.
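As a companion to point 1 above, here is a small sketch of reading a table’s authorization group straight from TDDAT; the table name is a placeholder, and SE11 (Utilities > Assign Authorization Group) shows the same information interactively.

    * Sketch: read the authorization group (CCLASS) of a table from TDDAT.
    * An empty CCLASS or &NC& means "no meaningful group", as discussed above.
    DATA lv_cclass TYPE tddat-cclass.

    SELECT SINGLE cclass FROM tddat
      INTO lv_cclass
      WHERE tabname = 'ZMYTABLE'.       " placeholder table name

    IF sy-subrc <> 0 OR lv_cclass IS INITIAL OR lv_cclass = '&NC&'.
      WRITE / 'Table has no (meaningful) authorization group'.
    ELSE.
      WRITE: / 'Authorization group:', lv_cclass.
    ENDIF.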

I must also remind you about the blog by Greg Capps: Reduce the Risk of SAP Direct Table Access.

 

Then we got S_RFC, RFCTYPE = FUNC

We used to have the same problem with authorizing for S_RFC. You may have noticed that S_RFC gets generated automatically by the PFCG framework when you put a function module into the menu of a role (yes, that works!). Unfortunately, what gets generated is an S_RFC instance with RFCTYPE = FUGR. This means that by putting a single function module into the menu of a role, the role gets an S_RFC instance that authorizes all function modules in that function group.

The good news is that there is better granularity possible here since RFCTYPE = FUNC has been introduced. It means you can (MANUALLY!) authorize for a single function module.

It works very much like S_TABU_DIS and S_TABU_NAM: at run time, the check for the function group is executed first. If this check fails, a second check for the function module is executed. With this behaviour no changes are to be expected during an upgrade, but a more granular authority check can be activated on demand. It also shares something with S_TCODE – generated entries that you cannot edit (because they correspond to the menu entries): the S_RFC standard authorizations discussed in the note are not authorization default values but automatically created start authorizations analogous to S_TCODE, and therefore they cannot be edited.
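The two-step logic can be pictured with explicit AUTHORITY-CHECK statements. Note that in reality the RFC runtime performs this check itself, so the following is only an illustration of the order (with placeholder names), not code you would add to a function module.

    * Illustration: the runtime first checks the whole function group
    * (RFC_TYPE = 'FUGR') and only on failure the single function module
    * (RFC_TYPE = 'FUNC'). Activity 16 = execute.
    AUTHORITY-CHECK OBJECT 'S_RFC'
      ID 'RFC_TYPE' FIELD 'FUGR'
      ID 'RFC_NAME' FIELD 'ZMY_FUNCTION_GROUP'
      ID 'ACTVT'    FIELD '16'.

    IF sy-subrc <> 0.
      " Coarse check failed - try the granular, per-function-module check.
      AUTHORITY-CHECK OBJECT 'S_RFC'
        ID 'RFC_TYPE' FIELD 'FUNC'
        ID 'RFC_NAME' FIELD 'ZMY_REMOTE_FUNCTION'
        ID 'ACTVT'    FIELD '16'.
    ENDIF.

    IF sy-subrc <> 0.
      MESSAGE 'No S_RFC authorization' TYPE 'E'.
    ENDIF.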

If anyone from SAP reads this, I would be interested to know whether there are plans to generate S_RFC type FUNC in PFCG, either as the default option (after installation or upgrade of the system) or as a default once a customizing switch is changed (PRGN/SSM_CUST?). That would be wonderful.

Let me share a workaround for type FUNC if you have the time (or the strict requirement, or the urge) to make your roles super secure. You can manually add new SU24 proposals – S_RFC with RFCTYPE = FUNC – for the function modules that you want to use (or already use) in your roles. Then, when you create your role menus, the SU24 proposals are pulled into the authorizations and PFCG also generates S_RFC with RFCTYPE = FUGR for you, so the authorization needed to use your function modules is covered twice: once by FUGR, once by FUNC. If you now deactivate the instance with RFCTYPE = FUGR, you have a role authorized only for the S_RFC values it really needs, and not for all the function modules that happen to be in the same function groups.

 

Technical details for the interested:

  1. S_RFC type FUNC has been introduced with OSS Note 931251 - Security Note: Authority Check for Function Modules.
  2. OSS Note 1640733 - PFCG: Additional S_RFC authorization describes how PFCG generates standard instances of the S_RFC object for (remote-enabled) function modules in the menu of a role.
  3. OSS Note 1749485 - PFCG: Problems when updating start authorizations mentions the instances generated for the S_START and S_SERVICE objects based on the role’s menu entries, just like we get for S_RFC.

Anyway I hope you see my point. Just like S_TABU_DIS got more granular with S_TABU_NAM, so did S_RFC (although within one object).

 

…and now we’ve got S_PROGNAM

And finally… here we are getting to the point of why I reminded you of the old and well-known facts above – as an introduction to the “get-more-granular” movement, which now has a brand new member. Let me introduce you to S_PROGRAM’s younger brother S_PROGNAM. Please check the spelling once again to see the difference ;-).

So what is this new S_PROGNAM? It makes it possible to authorize for individual programs rather than via authorization groups of programs. Note that you must activate the feature to be able to use it; for existing customers with existing authorization concepts it changes nothing (it is backwards compatible).

The programmatic submit of reports is secured by the authorization group (old S_PROGRAM) that the report is assigned to. If the authorization group is empty, the report may be executed without an initial authorization check. As I understand it, the new check (if active) checks your authorizations every time you start a program via the API that also takes care of S_PROGNAM – which means it does not “just happen” when you call SUBMIT &lt;program&gt; in your custom code. If any of my assumptions are wrong, I will update the text once I learn the facts (and can cite them via an OSS note).
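To illustrate the classic mechanism described above, here is a minimal sketch of the group-based protection of a programmatic submit: the program’s authorization group is read from TRDIR (field SECU) and checked against S_PROGRAM. The program name is a placeholder, the field names reflect the object definition as I know it, and the point to notice is the gap – an empty group means no check – which the switchable, name-based S_PROGNAM check inside the SAP submit API is meant to close.

    * Sketch of the classic, authorization-group-based check before a submit.
    DATA: lv_repname TYPE trdir-name VALUE 'ZMY_REPORT',   " placeholder report
          lv_group   TYPE trdir-secu.

    SELECT SINGLE secu FROM trdir INTO lv_group WHERE name = lv_repname.

    IF lv_group IS NOT INITIAL.
      AUTHORITY-CHECK OBJECT 'S_PROGRAM'
        ID 'P_GROUP'  FIELD lv_group
        ID 'P_ACTION' FIELD 'SUBMIT'.
      IF sy-subrc <> 0.
        MESSAGE 'No authorization to submit this report' TYPE 'E'.
      ENDIF.
    ELSE.
      " Empty authorization group: the classic check simply does not fire.
      " This is exactly the gap the S_PROGNAM check (activated via SACF)
      " closes by checking the program name itself.
    ENDIF.

    SUBMIT (lv_repname) AND RETURN.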

As a consequence of this new granularity and flexibility you can authorize for only those programs that are really needed, and if you work carefully and patiently (and manually), you may get to a world where S_PROGRAM no longer has * in its values and S_PROGNAM is used in combination with SU24 proposals and role menus. Happy hardening (of your security).

 

Technical details for the interested:

  1. To learn more about the new S_PROGNAM object, start with note 1946079 - Initial Authorization Check in Function SUBMIT_REPORT. Note that this authority check IS OPTIONAL and you must turn it on (see point 3 below).
  2. Note that although the S_PROGNAM object is quite new, it has been back-ported all the way to NW 700 SP4 (which is a LOOONG time ago!). In case you run an older system and cannot upgrade for whatever reason, you can consider importing the correction instructions. If I am not mistaken, the mechanism and the object exist by default in NetWeaver systems from 740 onwards. Try transaction SACF and you will see.
  3. To be able to use the new S_PROGNAM you need to have the SACF transaction (the switchable authorization framework) installed first. For more information about what that is, read OSS Note 1922808 - SACF: FAQ - Supplementary application information and Note 1908870 - SACF: Workbench for switchable authorization scenarios.
  4. To read an interesting discussion about the old S_PROGRAM navigate here: http://scn.sap.com/message/6903382.

 

P. S.: Rumour has it that we can expect more granularity for other objects as well. A candidate that some people (like DSAG, the German-speaking SAP User Group, in its materials) are waiting for is S_GUI, which would give admins the granularity to decide about the export/import feature for each program separately. In case anyone has any updates on this one, I would love to hear about them.

 

Questions for SAP:

1) Will you change the S_RFC behaviour in PFCG, so that PFCG generates S_RFC type FUNC instead of FUGR now that such an option is available? Even if you don't make it the mainstream thing for everyone, would you at least consider a switch (PRGN/SSM_CUST) that would let customers turn it on and change the current default behaviour? Note: we are well aware of the limit on the number of values in a PFCG instance, especially when function module names are so long.

2) Would you consider an option to check S_TABU_NAM first (before S_TABU_DIS), or provide a switch to do this, so that the more granular access comes first in the authorization trace? Then, if the check is unsuccessful, the information about which table it was comes first, making it easier for the normal (and also the lazy) to spot the value which must go into SU24 or the role in PFCG.

3) Would you consider cleaning the TSTCA table records to remove S_TABU_DIS from there (as those entries are not covered by the DIS/NAM mechanism, because that only works via VIEW_AUTHORITY_CHECK)?

4) Would you tell us why you decided to perform the check on S_TABU_DIS before S_TABU_NAM? Ideally put that into some OSS note (or KBA?) and let us read it there - from the official source.

5) Although it is unlikely: has it ever been considered to retire the S_TABU_DIS object one day? Would you consider a switch that deactivates S_TABU_DIS in the system, so that customers can enforce the more granular access only?

6) Can you provide any updates on S_GUI getting more granularity as well? Like when, new object or new field, SACF or standard delivery etc.?

 

Interesting points from the discussion:

Martin Voros recommends note 2041892 - Logging of call of generic table accesses to your attention.

A bundle of information about the solution can be found at http://scn.sap.com/docs/DOC-58501.

 

Formalities over, why bother with yet another security product?

 

I have had the same model of Swiss Army Knife for over thirty years. At the time I got it, it was probably the top-of-the-range model. I worked in research and development for quite a few years and I would have felt naked without it. Probably all the tools have been used in one way or another, often not for their intended purpose. Usually only a subset of the tools got used on a regular basis, and now that I am in software the main tool is the bottle opener. The great thing about such a device is its general-purpose nature. You can do almost anything with it and a little imagination. Sometimes you need to do something; you whip it out and it's "job done". Other times, though, it is only better than nothing in an emergency – I would not like to carve roast beef with it, for example. I have sometimes really fumbled and sweated trying to achieve something that, with the right tool, would have been accomplished in seconds without risk to whatever I was working on.

 

The same applies to software but people tend to believe otherwise. They are looking for a magic solution to every problem when, in reality, the best you can hope for is to have the right combination of general purpose and specialized tools. SAP Enterprise Threat Detection is like the carving knife in the kitchen – the best tool for its purpose.

I have been following the news on the Shellshock vulnerability over the last few days (more information here, here, here, and here) – the vulnerability affects millions of systems and devices. And a lot of SAP customers run UNIX/Linux systems and consequently have Bash installations that should be patched. But what is the criticality for SAP customers? Would an SAP customer be vulnerable to application-level attacks taking advantage of this vulnerability? Would an SAP customer with services exposed externally be vulnerable to this type of exploit?

 

Over the weekend Rob Kelly, a colleague of mine, and I spent some time thinking through security ramifications for our clients; Rob spent some time attempting to exploit this vulnerability at the application level on a NetWeaver Gateway and an ABAP AS system front ended by Web Dispatcher. The good news is, SAP has standardized on the C Shell for a lot of their *NIX scripts, and external services are not script-based. PI/PO developers might use Bash scripts but these normally can't be invoked directly.

 

The primary consideration for SAP customers is a separation of duties issue. One of the critical technical separation of duties conflicts is that between development and system administration. With this vulnerability, a developer could release code that allowed them to execute arbitrary commands and thus gain system administrator access.

 

However, SAP customers following the common-sense security practices outlined below will find they already have processes in place to address this specific risk:

 

  1. Removing access for developers to execute OS-level commands in production.
    1. The env command used to exploit this vulnerability is defined in SM69 by default; developers should not have access to make modifications in SM69.
    2. Developers shouldn’t have the ability to set up background jobs that would allow them to pass additional parameters.
    3. Don’t forget to remove the ability to run report RSBDCOS0 in SA38/SE38.
  2. Addressing infrastructure security
    1. OS-level logins should be restricted to administrators.
    2. Restrict the ability to obtain console-level access via firewalls – do your users need the ability to SSH to your application or database servers?
  3. Following secure software development practices
    1. Run Code Inspector or a similar product to ensure input validation for user-defined variables being passed to OS commands (via function module SXPG_COMMAND_EXECUTE, PI/PO scripts, or otherwise) – see the sketch after this list for a rough illustration.
    2. Have another developer peer review code for workbench requests as part of your change control process.
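As a rough illustration of the input-validation point under item 3, the sketch below whitelists a user-supplied value before handing it to an external OS command. The logical command name ZBACKUP, the character whitelist and the exact interface details of SXPG_COMMAND_EXECUTE are assumptions for the example, so check the function module in SE37 and your SM69 definitions before reusing anything from it.

    REPORT zsxpg_input_check.

    * Sketch: only forward parameters that match a strict whitelist to the
    * external command; everything else is rejected before the OS is touched.
    PARAMETERS p_input TYPE c LENGTH 128 LOWER CASE.

    DATA lt_protocol TYPE STANDARD TABLE OF btcxpm.

    START-OF-SELECTION.
      IF p_input CO 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.-/ '.
        CALL FUNCTION 'SXPG_COMMAND_EXECUTE'
          EXPORTING
            commandname           = 'ZBACKUP'    " logical command defined in SM69
            additional_parameters = p_input
          TABLES
            exec_protocol         = lt_protocol
          EXCEPTIONS
            OTHERS                = 1.
        IF sy-subrc <> 0.
          MESSAGE 'External command failed or was refused' TYPE 'E'.
        ENDIF.
      ELSE.
        MESSAGE 'Rejected: parameter contains characters outside the whitelist' TYPE 'E'.
      ENDIF.

The whitelist approach (allow only known-good characters) is generally more robust than trying to blacklist individual shell metacharacters.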

 

While the BASH shell should definitely be patched, having these controls in place should mitigate this risk of shellshock being exploited on SAP systems significantly. Practically speaking, most customers can afford to wait to apply this patch in their SAP landscapes during their normal patch maintenance cycle.

 

What about you? Has anyone else out there explored the implications of the shellshock in their SAP landscapes?

 

Note: As mentioned in the comments, SAP has released a note on Shellshock: 2072994 - "ShellShock" vulnerability (CVE-2014-6271).

Hello,

 

thanks for joining the Webinar: “Security in an age of Big Data and proliferating Systems”. The recording is available here:

http://event.on24.com/r.htm?e=789207&s=1&k=133592B7DC2226AD939B1A9CD1972808

 

I want to share first the most important links:

 

SAP Single Sign-On

http://scn.sap.com/community/sso

 

SAP Identity Management

http://scn.sap.com/community/idm

 

SAP Enterprise Threat Detection

On October 15, SAP’s new product SAP Enterprise Threat Detection went into ramp-up. Find detailed information about SAP Enterprise Threat Detection here: http://scn.sap.com/docs/DOC-58501. If you are interested in becoming a ramp-up customer, go to the SAP Service Marketplace.

 

SAP Cloud Identity

http://scn.sap.com/docs/DOC-49579

 

SAP Cloud Identity Service

 

Roadmaps

http://service.sap.com/roadmaps --> authentication required

 

Regards

Matthias Kaempfer

This is a close look at the advanced cyber defense portfolio of Telekom and T-Systems.

I once had a long-term and intense three-year project with T-Systems, and there are still strong personal ties between me and the good folks at T-Systems.

 

This made me write this blog, out of fascination with the topic and the people. This is by no means a marketing stunt, and I have no commercial ties to TSI. I also promised in one of my last blogs to report about the Cyber Center, and a lot of people expressed interest.

 

Given the old project ties, it is no wonder that in my new long-term security project at a different customer site we are still in contact and keep talking. I heard of the newly opened “Cyber Defense Center” in the former German capital Bonn (now the capital of Deutsche Telekom), opened with much media coverage and even more German politicians. It really interested me, and I kept researching the technology behind the story.



 

I had the chance for a longer talk with Dr. Karl-Friedrich Thier, Senior Security Consultant in the Business Unit Cyber Security at T-Systems International. We talked for nearly two hours about the technology used and the strategies behind the cyber defense center. It is not only used by Deutsche Telekom itself to protect its huge network, but is also available from T-Systems as a service to (usually very large) customers. So there is an operational aspect (the Telekom Cyber Defense Center in Bonn) and a service-level, customer aspect (Advanced Cyber Defense as a service by T-Systems).

 

But why am I so excited about this ACD? It is more than the usual “firewall bigger and higher” or the casual “we handle the largest DDoS”. It is a completely new philosophy of cyber defense and – as opposed to an abstract philosophy – extremely well executed and put into broad practice. The last sentence is my impression.

 

The idea and intention behind this center and the software, hardware, projects and people used is (according to their web site):

 

“Companies that don't adapt their cyber detection and response capabilities to this threat constantly lag behind the complex and targeted attacks. To free themselves from this risky and frustrating cycle of playing "catch-up," companies need to construct an intelligent security management system that links information from a range of data sources and analyzes it in real time. The goal of this proactive approach is not only to protect the company from known attacks, but also to identify unknown attacks and quickly initiate countermeasures.”

 

The technology behind the Cyber Defense Center is very diverse and colorful. A lot of tools are used, like different instruments in an orchestra.

It all starts with building situational awareness. Deutsche Telekom operates around 180 “honeypots” around the world that mimic vulnerable systems and attract all kinds of attackers. By watching and measuring the hacking attempts, you get a pretty good overview of the tools actually used, the attack vectors currently in favor and the organizations using them. The deployed honeypots are actually mostly Raspberry Pis, by the way – quite cool gadgets.

 

Watch 180 honeypots live in action


 

The results are public and can be viewed on the web site http://www.securitydashboard.eu/. In parallel, Twitter and news feeds are automatically watched for related activity based on keywords. Here you see, in summary and in real time, where threats are actually brewing. Like a weather radar.

 

The actual network operations and analytics are performed by tools from RSA, which has a huge, large-scale portfolio for network operations analytics and threat detection. But it is all threat-based and pattern-based.

 

The “art of security” is to look for the right patterns to react upon. And this is something you can’t buy – you have to collect and build it yourself over time. And it is constantly changing. This is one major task of every cyber security center. This is the core IP (intellectual property) that makes you excel over all other approaches.

 

Security analytics is complemented by forensic and advanced malware detection tools like FireEye®. Rather than scanning for specific patterns, FireEye executes potential threats in an isolated virtual machine (sandbox) and monitors their behavior – like a virus that is contained and captured in a laboratory. The attacker, however, doesn’t see a VM but rather what looks like a physical workstation or server. One of the cool features is a “time warp” that can fool “sleeper Trojans” which sleep for some time before starting.

 

There are dozens more interesting tools from various companies in use, but they all address the various aspects of network security.

All these tools are nothing without the proverbial orchestration. The core of the cyber defense center is the central organization: the different skillsets present, from analysts to operators and squad leaders, who can act on the spot on any actual threat.

 

The concept of a Security Operation Center SOC

 

(Slide: the concept of a Security Operation Center)

 

“People as success factors” is the tagline of that slide, and this phrase is taken very seriously in the ACD concept.

 

As shown in the slide, it is the staffing and the organization, together with the selected software and hardware tools, that make this Cyber Defense Center so powerful. During much of the talk with Dr. Thier we did not discuss software and features, but how an organization needs to cater to the needs of its customers. Every customer is different, every customer has different threat and risk areas, and there is no “one size fits all”, especially not in security.

 

What fascinated me about the setup was the deep security knowledge, both commercial and governmental, of the people involved at T-Systems; in conjunction with the well-selected software and the organizational strength of a large organization, all of this together made a great picture.

 

The key to all this technology (like the patterns) is how it is applied – the intellectual property behind the tools. This is what makes it all work well together.

 

One of the questions that came to my mind is whether regular customers (and even my customers are not really small) could afford such an organization. Probably not; the reduced risk would be in sharp contrast to the big investments in the center, people and technology. But in the future I see a convergence of self-contained on-premise strategies and services like the Advanced Cyber Defense center, which will surround the overall strategy like a shell.

 

We will see how security strategies everywhere evolve in the future.

 

(Disclaimer: This is not a sales pitch, but if you are looking for a European case of applied security for large networks, this is someone you should talk to – or at least, even if you operate on a much smaller scale, someone you should learn from.)

Over the last few years there have been indications of rising interest in SAP systems by white hatters and black hatters, and I guess any color in between. In any case the world has got more dangerous for systems in general, not least because they are increasingly interconnected and exposed in ways that were unthinkable (for most) in the past. Although traditional security solutions remain vital for minimizing the attacks on your system landscape, you can and should assume that there will be unhealthy activity within your defensive perimeters. Determined attackers are likely to get through eventually and the best technical precautions might be nullified by internal personnel or by social engineering tricks.

 

These are well known dangers, and there appears to be a serious gap in the coverage of SAP systems by existing security products. These lack insight into SAP business software and also run up against what is essentially a big-data problem - that is, how to analyze the security-relevant data that exists in the landscape. Later this year, SAP plans to go into ramp-up with a new product designed to address exactly this issue.

 

For customers who may be interested in joining the ramp-up, further information can be found at www.service.sap.com/public/rampup on the tab Upcoming Ramp-Ups.

 

More information on SAP Enterprise Threat Detection will be available here on SCN when it goes into ramp-up.

When my little but big company, which I started 10 years ago and have fostered ever since, began the venture last year of changing its scope from SAP PI, Basis, data center consulting and helping to manage complex SAP landscapes on a European scale to SAP security, it felt like the good old Internet times. It was a time warp back to the turn of the millennium, or to the appearance of the Apple II and the IBM PC in the 80s. Exciting times.

 

Approached by IBM to become a strategic partner in the IBM/SAP Security world, we were very pleased that such a “big” company was really trusting us to work with them in the major league.

We also looked at the surrounding economy: the world of pen testing, security administration and operations, and SIEM (Security Information and Event Management) in the so little, so big SAP universe.

 

(Just to explain what a pen test is: it is a penetration test, where dedicated security personnel try to break into the SAP system. This breach attempt is made on all levels: network, infrastructure, Basis hacks, RFC hacks, SAPGUI hacks, but also social hacks like email phishing and password sniffing.)


We also chose our preferred vendor for SAP penetration testing. But to make a long story short and come to my actual point: it is easy to say “we do security now”, especially in the SAP world, to choose a product and go ahead and try to hack the planet.

 

A good security breach is more than a tool. Like everything else, it requires deep knowledge of networks, infrastructure, attack vectors and the tools needed and used. If you don't want to use a commercial tool, you still have a good choice.

 

One of the tools you need when you start pen testing is the Kali distribution, maintained by the folks at Offensive Security.

The Kali Linux distribution is open source and has a long history; it started as a tool collection a long time ago. They also have a commercial online class with a certification, but everybody in the industry will agree that it is a very demanding certificate with a tough exam. This means it proves work-like experience and hands-on expertise.

 

But besides the certificate, these are the “tools of the trade”, and you should be able to do any pen test even without commercial tools. There is a great companion book, and if you really want to start looking at the pen-test world, get the Kali distro on your laptop, get the book, start Nmap and practice.

But even if you try to learn the “Top 10 Tools” that Kali emphasizes, you will need a lot of practice to become fluent in a penetration-test workflow.

(If you happen to be at your customer's site, try running a full Nmap scan by plugging your private laptop into the corporate switch and count the time until security stands at your desk. If it takes more than 15 minutes, give them a security session.) (OK, this is maybe not the brightest idea, but you get the story.)

 

Kali also coined the motto: “The quieter you become, the more you are able to hear”. And this is really true, not only for security matters in the SAP world, but in the rest of the corporate IT world as well.

 


 

Security requires not only a very thorough understanding of large data center infrastructure and the surrounding networks, but also a lot of patience, listening and exploring. No tool will replace your knowledge and your ability to map a complex SAP network. And the SAP world adds a big twist to pen testing. I have seen the one or other pen tester (usually right out of college, but sold as the security consultant) from outside the SAP world use an open source tool and then ask around: “OK, looks like I am in with SAP_ALL, but what do I do now?”

Things like hacking via RFC and the SMGW/gateway require knowledge of programming, ABAP, Java and the like.

 

It is one thing to be a loudspeaker, touting all your hacker experience to the world, going to Black Hat Las Vegas with a tattoo on your forehead and pretending to be the coolest kid in the universe. Like someone said: maybe your little teen sister is impressed, but not the CISO.

 

I had a longer conversation with some partners about a good way of approaching my customers, and I thought of things like a German blog, Twitter, weekly reports on threats and new findings. But in the end, this would just be noise. After a short while, nobody would listen anymore. We decided that the quiet way, the conservative but most trustworthy approach, was simply to call, meet and talk. Talk about their needs, their local threats and findings, and how to handle all these large and small security issues.

 

Security – especially penetration testing and discovering true vulnerabilities that in the worst case could make or break a company (see my blog) – makes a trustworthy relationship a base requirement in every customer situation. Showing first and foremost that you are a responsible person and guiding the customer through the risk assessment, differentiating hype from real risk, is a demanding task in the SAP world of large installations. Knowing the hack is one thing, but weighing the risk, the cost of the process to fix the gaps, and making everything fit into an overall security strategy is a completely different world.

 

I like the challenge of this professional spread: between the fun of serious hacking and testing on one side and the serious presentation on the other – putting on your black suit and putting the findings into a real perspective.

 

(edited for content, grammar and political correctness)

Frank Koehntopp

Designing for Security

Posted by Frank Koehntopp Aug 27, 2014

There are two distinct ways on how you can build security into your software:

 

  • have your software tested and/or hacked, and start applying technology to plug the holes and keep the bad guys out
  • think about how your software could be mis-used and make sure your design prevents that

 

Or, as Gary McGraw just wrote, in much better words:

 

(Screenshot: quote from Gary McGraw)

 

Unfortunately, the concept of "anticipating attacks" seems to be quite alien to the average developer – recognizable by the response to a threat scenario: "but why would someone do that?".

 

It also seems to be hard to teach. There is a new effort that I think has lots of promise: the IEEE Center for Secure Design tries to tackle the problem from the design angle. This is their mission statement:

 

The IEEE Computer Society's CSD will gather software security expertise from industry, academia and government. The CSD provides guidance on:

  1. Recognizing software system designs that are likely vulnerable to compromise.
  2. Designing and building software systems with strong, identifiable security properties.

The CSD is part of the IEEE Computer Society's larger cybersecurity initiative, launched in 2014.

 

If you're interested in the topic, I would encourage you to read their document. It tries to explain the most common design flaws that lead to vulnerabilities. Every security architect in your team should have read (and understood) those, ideally:

 

(Image: cover of the IEEE Center for Secure Design document)

 

These are the topics explained in more detail in the PDF:

 

  • EARN OR GIVE, BUT NEVER ASSUME, TRUST

 

  • USE AN AUTHENTICATION MECHANISM THAT CANNOT BE BYPASSED OR TAMPERED WITH


  • AUTHORIZE AFTER YOU AUTHENTICATE


  • STRICTLY SEPARATE DATA AND CONTROL INSTRUCTIONS, AND NEVER PROCESS CONTROL INSTRUCTIONS RECEIVED FROM UNTRUSTED SOURCES

 

  • DEFINE AN APPROACH THAT ENSURES ALL DATA ARE EXPLICITLY VALIDATED


  • USE CRYPTOGRAPHY CORRECTLY


  • IDENTIFY SENSITIVE DATA AND HOW THEY SHOULD BE HANDLED


  • ALWAYS CONSIDER THE USERS


  • UNDERSTAND HOW INTEGRATING EXTERNAL COMPONENTS CHANGES YOUR ATTACK SURFACE


  • BE FLEXIBLE WHEN CONSIDERING FUTURE CHANGES TO OBJECTS AND ACTORS

In 2012, American agencies under the lead of SIFMA ran the first cyber-attack stress test on financial institutions on Wall Street.


One year later it was repeated in London, with a broader approach and more detailed preparation. This stress test and its results are stunning. Everyone who has anything to do with security should look at the scenario and ask whether their organization has an answer to the question it raises:


How would we behave, how would we address all the issues that were surfaced during the organized cyber-attack?


This does not only affect Wall Street or the City of London's financial district. This scenario can hit every company in the world.


Since I recently won a prize in a storytelling contest in Germany's largest IT magazine, c't, let's recount the tale of a cyber-attack war game in a novel way.

And since I am German (as SAP is), let's assume the story happens in the SAP homeland, Germany, and that Carl B. Max, the CEO of AUTOBAHN AG ("Fast is GOOD"), is still asleep in his home near his headquarters in Frankfurt am Main, Germany's financial district.




The sequence of events that led to the disappearance of the German Autobahn AG:


At 6:00 in the morning, the first posts show up on Twitter, Facebook and the German Autobahn forum "The Fast and the Faster": how bad the German Autobahn is – full of potholes, governed by too many speed limits, too many traffic jams.


At 6:30, more serious posts and accusations are added: Pictures of deadly accidents because of potholes on the fastest parts of the autobahn. The idea of a class action lawsuit is mentioned.


At 8:00, the posts have piled up to a veritable shitstorm.

At 8:30, the Twitter and Facebook accounts maintained by the PR department of Autobahn AG have been hacked and are posting strange and bogus replies to the accusations. The impression that the accusations are being ignored and downplayed is unavoidable.

At 8:45, Carl B. Max, CEO of Autobahn AG, is arriving at the office.


At 9:00, rogue high-frequency traders start an attack on the stock of AUTOBAHN AG. Within seconds they short the stock down to a level where regular trading algorithms, due to the high trading volume and dropping values, suddenly release stop-loss orders. This generates an automatic trading avalanche, resulting in a landslide in the price of the AAG stock.

At 9:30, social media is full of speculation about bad financial deals threatening the future results of Autobahn AG. The PR account of the company spokesperson is hacked and false PR statements are sent to the worldwide press. Since nobody knows who was addressed and what was published, countermeasures become difficult.


At 10:00, Carl Max calls a press conference at the headquarters in the office tower at the "Frankfurter Kreuz" near the airport. He demands current financial statements from his CFO that he can present to the press as proof that everything is fine.

In the middle of his calls, the telephone goes dead. A massive DDoS attack is launched against the VoIP-based telephone center. A special VoIP virus dedicated to this equipment eats its way through the Ethernet-based phone infrastructure. Only mobile calls are possible. "Can't be reached for comment" is the phrase of the hour.


At 10:15, the SAP system crashes. A restore from backup is necessary. IT discovers that all tapes from the last 4 weeks are damaged due to an error in the backup procedure. The SAN has stopped working because of damaged hardware.

At 10:30, the CFO finds out that all numbers in the SAP Business Warehouse systems are corrupt. It is unclear whether the backup contains non-manipulated figures.


At 11:00, the rogue high-frequency trading continues in London after the London exchange opens. The landslide in the share price goes on.


At 12:00, Carl Max can’t present any reliable numbers to the press. The attack is not mentioned.

The plea to the large stock exchanges to suspend trading in the stock is not granted, since AUTOBAHN AG can't present any figures as proof and no one can be reached to comment on the incident.


At 15:00, the NYSE on Wall Street opens. The rogue trading leads to a suspension of trading when the company's value hits one cent and the stock is rated as a penny stock.


At 17:00, when the German Stock Exchange in Frankfurt closed, Deutsche Autobahn AG is “pleite”, bankrupt.


Do you think this is not for real?


Fiction? You wish, but it is real-life truth. Every single element of this cyber-attack has already happened. Some of them are even common threats, like manipulation of social media or high-frequency trades. Ever thought about how reliable VoIP is, or how vulnerable a Microsoft Lync server is? Especially in a corporate environment?


Some of them are recent developments, like the new "attack vector" of manipulating BI cubes with the intent of leading the hacked company to false decisions.

And the backup? Guess how often I have seen this happen in 20 years? More often than you would think, and it was always an internal problem of sloppy backups, not even a hacker attack.


In the end, Quantum Dawn recommended first and foremost to establish fast, clear and direct communication about attacks. Don't keep such attacks secret. There must be internal and external (governmental, if it is a broad attack) communication channels that can react within minutes. These attacks may be criminal, but given the worldwide state of politics, such an attack could even be initiated by governments as part of global warfare.


And you need an alert IT organization that can counter this threat in unison.


Really, think of the company you are in: who would you call if you saw an attack on an SAP system? And who could respond immediately?


P.S.:

More Materials:

Deloitte, as audit company, was part of the cyber trial. Here are their findings.

And also a great video from Deloitte: Cyber Security. Evolved.


And also check my first blog in this series of security papers: THINK Security: Towards a new horizon


It is interesting to watch the security world undergoing a dramatic change. The classic world of protecting the good SAP system against the evil outside with a good firewall and relying on the closed SAP ABAP technology (known only to the good guys) no longer lives up to the promise.

 

The old security assumption – that SAP is so isolated and so exotic in the company network that nobody will enter the premises – has been slowly deteriorating over the last decade. Suddenly the Internet, the extranet and the VPNs are all over the place, connected straight to the ECC core system. SAP hacks are a standard program item at any black-hat convention.


While there are so many new security technologies in firewalls, appliances and software security frameworks, the security world at SAP is still old-fashioned. But this is also a tribute to the ever-growing complexity of the SAP ecosystem. The impression of living behind a secure wall in a secret garden is just a glorified view of the past.

It is easy to say "fix and harden the SAProuter and Web Dispatcher". But what if you have thousands of routes and dozens of routers and web dispatchers? Just keeping them up to date is a job in itself.


Customers need to learn to manage this complexity in a new way. I know a lot of SAP sites that are discussing continuous patching, upgrading, testing and enhancing. But this by itself is a daunting task. One of my larger customers has 60 SAP systems in one tier, all related and connected. Multiply this by three and you have Dev, QA and PROD tiers with 180 machines. Tell me how to make "permanent changes" to this landscape and ensure maximum security while testing all 60 application systems in unison every time after patch day. In theory, you can add unlimited resources, 24x7 uninterrupted strategies and an unlimited budget – yes, then you can solve it. But in economic terms it is not feasible. It is the old economic story of limited resources and limited money to spend.


The first step in a new security strategy is risk assessment. There was a great blog, Balancing Danger and Opportunity in the New World of Cyber Domain – a great summary by Derek Klobucher of the keynote speech of Gen. Michael Hayden (retired NSA chief), who spoke to the attendees of the SAP Retail Forum 2013.

 

Hayden drastically stated the new security paradigm: "If you have anything of value, you have been penetrated," Hayden said. "You've got to survive while penetrated -- operate while someone else is on your network, wrapping your precious data far more tightly than your other more ordinary data."

He basically stated that security is no longer about vulnerability alone. He introduced the formula that risk is always a relative value for your assets.




Risk = vulnerability x consequence.


This is the most important message for the near future for everyone involved in security: you need to manage risk. Security risk in time and over time.


That must be the goal, even more so for a critical system like the central SAP system. The new security paradigm lives in real time, defending against frequent attacks and against internal and external threats to capture or manipulate data. Organizations must face the new complexity, new organizational challenges and security risk management.


And with the risk, you need to change security thinking from "defending walls" as in medieval castles to "pattern recognition" – an approach where you anticipate the next attack while it is building up. Here, the technologies of big data, SIEM and artificial intelligence are emerging. In Germany, T-Systems and Telekom have a great real-life showcase: the "Advanced Cyber Defense Center" in Bonn. (Maybe I will do a blog about it one day.)


Yes, this is a very complex and demanding world. And this is why even big companies need to talk, act and cooperate on security issues.

But this is the topic of my next blog: "Quantum Dawn – What SAP Data Centers can learn from SIFMA war games".


Just relying on your good old firewall is a thing of the past.

For most SSO issues, the logon trace is needed to find the root cause.

 

In an ABAP system, the logon trace is actually the developer trace of the work process. Normally we use the important note:

#495911 - Trace analysis for logon problems

After getting the trace, we can use the Security Audit Log to locate the work process which handled the logon and find the real reason why the logon failed.

 

But sometimes, if the Security Audit Log is not active or there is no entry logged in the audit log, it becomes difficult to find the work process.

 

For HTTP logon issues, I found we can use the ICM trace to locate the work process.

First, raise the ICM trace level to 3.

This can be done in SMICM via the menu "Goto -> Trace Level -> Set".


(Also remember to go to SM50 and raise the trace level to 3 for the "Security" component of the DIA work processes.)

 

Then reproduce the issue, and after that change all the trace levels back to default value.

 

Now let's check the ICM trace. Use the timestamp of the reproduction to find the related trace entries.


(Here I recommend the free software Notepad++, it can search large text file very fast. Show the result in list and can locate to position of file by double-clicking.)

Then we can search for the keyword "IcmHandleOOBData"; in the results, the following lines are what we need:


[Thr 140080821593856] IcmHandleOOBData: Received data on 1st MPI (seqno: 1, type=6, reason=Request processed in wp(6)): 42/23079/0

[Thr 140080821593856] IcmHandleOOBData: request will be processed in wp 6

Here "wp 6" means that work process number 6 handled this logon.

 

Then we can check dev_w6 to find the related trace entries; we can search using the timestamp or the keyword "note 320991".


In this logon trace, we can find the root cause of why the logon failed.

Segregating Warehouse Responsibilities using standard Inventory Management and Warehouse management authorizations


Background/Situation


In certain situations there can be a requirement to separate logistical processes in an SAP system at a detailed level. This is usually the case when different parties are responsible for performing different logistical processes and/or are responsible for different parts of the same warehouse.


Examples of the situations where the requirements could occur are:

  • A third party executes logistical activities and manages a part of the  plant and warehouse.  In these parts of the plant and warehouse this third party is responsible for the stock.
  • ‘Special’ materials are stored in certain parts of the warehouse and should only be handled by a certain set of users.


This separation of responsibilities can be modelled in SAP by setting up different plants and warehouses that can subsequently be authorized on. But such a solution would mean a redesign of the logistical landscape, and additional administrative activities would be needed during day-to-day operations. Avoiding this redesign and administrative burden requires effective authorization restrictions on organizational elements lower than plant and warehouse. The requirement of controlling who executes IM and WM processes at a detailed level can be met using standard SAP authorizations in combination with IM/WM customizing, without setting up additional plants and warehouses. This blog discusses this solution for segregating warehouse responsibilities.


Content of this blog


This blog explains when this solution can be used, when it should not be used, how it works and what it can and cannot do.  It also gives an overview of the activities that need to be performed to implement the solution. The solution is based on my own investigation and experience, but also information from several notes, knowledge base articles and threads was used and combined to create a complete solution.


The solution and when to use it


You can use the solution when you need to differentiate between different groups of users who can perform IM /WM activities within parts of the same plant and warehouse.


The SAP WM customizing and the authorization elements 'storage location' and 'storage type' form the basis of the solution. By properly defining the WM customizing and authorizing on these elements you can:

  • Restrict IM movements to certain groups of users based on storage location (in addition to the normal restriction on movement type and plant)
  • Ensure that 'allowed processes' are defined in WM customizing (like storage type search settings), so that during WM processes the users who need to execute them are not hampered by authorization checks
  • Restrict 'manual' WM movements to certain groups of users based on the 'source' and 'destination' storage type (in addition to the normal restriction on warehouse and WM movement type)


By authorizing on these two elements (storage location and storage type), you can create an authorization setup that only allows users with certain roles to perform specific IM movements (and the resulting WM movements) for specific storage locations, and that restricts who can make 'manual' WM movements for specific storage types. In this context, 'manual' WM movements refer to transfer orders that are not triggered by an IM movement or another specific logistical action – for example, transfer orders with movement type 999 that can be created manually via transaction LT01.


With such an authorization setup only the party that is responsible for the storage locations and storage types can keep control over the movements of stock located there while normal ‘Allowed’ warehouse processes are performed in a regulated manner and are not hampered by authorization restrictions.


When not to use it


Only use it when there is a hard requirement that these restrictions are enforced by the system. Implementing and maintaining the solution (for WM) can be complex. If there is no hard requirement to enforce these restrictions in the system at such a detailed level, don't do it. If checking that procedural agreements are adhered to is sufficient, do not use authorizations for it. It also makes no sense to put restrictions into effect in SAP if there are no physical restrictions as well: if SAP blocks a user from moving materials from one part of the warehouse to another but there is no physical restriction (like a locked door or a fence), the person can still just move the materials and not register it.


Prerequisites


Before this solution can be implemented a number of things need to be clear. If these aspects are not clear the solution cannot be implemented correctly and will only work partly or not at all.  The following must be determined:

  • Ownership of all Storage locations
  • Ownership of all Storage types
  • Clearly defined logistical processes
  • Which party executes which steps in these process

Combined ownership of storage locations and storage types should be avoided as much as possible, as this will complicate and can (partially) undermine the solution. Wherever possible, ownership of the storage types for interim bins has to be determined as well.


The concept


Inventory Management


When an IM movement is made, an authorization check on plant and movement type is executed. If the user is not authorized, the movement cannot be made. Through a setting in customizing, a subsequent check can be activated whenever a movement is made for a certain storage location. This customizing switch is set per storage location and is off by default. When this customizing setting is activated for a storage location, it triggers an authorization check on the combination of movement type, plant, storage location (and of course activity) whenever an IM movement is made using this storage location. The authorization object checked is M_MSEG_LGO (a sketch of this check follows below). See also SAP Knowledge Base Article 1668678.
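For illustration, the check that this switch activates looks roughly like the AUTHORITY-CHECK below. The field names reflect the definition of object M_MSEG_LGO as I know it and the values are placeholders; the real check is executed inside the SAP goods movement coding, so this only shows what must end up in the users' roles.

    * Illustration of the storage-location level check (object M_MSEG_LGO).
    * Placeholders: activity 01 = create, movement type 311, plant 1000,
    * storage location 0001.
    AUTHORITY-CHECK OBJECT 'M_MSEG_LGO'
      ID 'ACTVT' FIELD '01'
      ID 'BWART' FIELD '311'
      ID 'WERKS' FIELD '1000'
      ID 'LGORT' FIELD '0001'.

    IF sy-subrc <> 0.
      MESSAGE 'Not authorized for this movement type / plant / storage location' TYPE 'E'.
    ENDIF.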


So, by granting a certain party's roles only the storage locations/plants they are responsible for, in combination with the movement types they are allowed to perform, the required segregation of responsibilities can be achieved.


When a storage-location-to-storage-location movement is made, both the 'source' and 'destination' storage locations are checked if the customizing check is set for both storage locations. This means that a movement between storage locations 'owned' by different parties is blocked by authorizations. In those cases a 'two-step' storage-location-to-storage-location movement can be made, wherein the sending party executes the first step and the receiving party executes the second step. See also SAP note 205448.


Warehouse management


The solution for warehouse management is more complicated and is based on SAP WM customizing concepts such as storage type search (strategies).


Authorization check for all transfer orders:


During the creation of a transfer order (TO), an authorization check on the warehouse is performed in all cases (field LGNUM of object L_LGNUM). At that point no check on storage type is performed (LGTYP is checked with DUMMY). See also Knowledge Base Article 1803389. If the user is not authorized for the warehouse, the TO cannot be created.


Authorization checks in relation to WM customizing:


When a transfer order is created, SAP will try to determine which storage type to pick the material from (source) or which storage type to put this material (destination).


To determine where to pick from SAP checks if it can find a suitable source storage type for removal by searching in the ‘storage type search’ table defined in WM customizing.  This search uses a number of variables like reference movement type, warehouse, pick strategy indicator in the material master and special stock indicator to find a suitable storage type. In case a suitable source storage type is found and used in the transfer order no extra check is performed.


The same method is used to determine the storage type to put away the material. In that case a suitable destination storage type is searched for in the 'storage type search' table in WM customizing. If one is found, no extra authorization check is performed.


In a lot of cases WM movements are triggered by logistical activities like IM movements. Under normal circumstances the 'storage type search' WM customizing is properly defined for the logistical process, the necessary material master data is set up, and the TO can be created without issues and without needing explicit authorization for the source or destination storage types. This is because it is an 'allowed' process, and as such the extra authorization checks are not needed.


In case no suitable source or destination storage type is found in the 'storage type search' table and the user creates the transfer order in the foreground, the user can enter a source or destination storage type manually. In that case an extra authorization check is executed. This check is on the combination of storage type and warehouse. The same object L_LGNUM is used for this check, but now the field LGTYP is not checked with DUMMY but with the storage type (see FORM BERECHTIGUNG_LGTYP of include FL000F00). This check is performed because the entered storage type is not found as a suitable storage type in the search strategy (see include LL03AF6I), and it is executed separately for the destination and source storage types. The same extra authorization check on the source and/or destination storage type is also executed when the user creates the transfer order in the foreground and changes the source or destination storage type to one that is not part of the applicable 'storage type search' table entry (a sketch follows below). See also Knowledge Base Article 1803389. A thread that also mentions this is http://scn.sap.com/thread/775605.
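A rough sketch of the two flavours of the L_LGNUM check described above: the general warehouse check executed for every transfer order (storage type checked with the literal DUMMY) and the extra check on the concrete storage type when the user deviates from the storage type search. The values are placeholders; the real checks live in the standard WM coding.

    * 1) General check at TO creation: warehouse only, LGTYP = 'DUMMY'.
    AUTHORITY-CHECK OBJECT 'L_LGNUM'
      ID 'LGNUM' FIELD '100'        " warehouse number (placeholder)
      ID 'LGTYP' FIELD 'DUMMY'.

    IF sy-subrc <> 0.
      MESSAGE 'Not authorized for this warehouse' TYPE 'E'.
    ENDIF.

    * 2) Extra check when a storage type outside the storage type search
    *    entry is entered or changed manually in the foreground.
    AUTHORITY-CHECK OBJECT 'L_LGNUM'
      ID 'LGNUM' FIELD '100'
      ID 'LGTYP' FIELD '902'.       " manually entered storage type (placeholder)

    IF sy-subrc <> 0.
      MESSAGE 'Not authorized for this storage type in this warehouse' TYPE 'E'.
    ENDIF.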


Using what is explained above, this extra authorization check can be used to restrict the deviations a user can make from the 'allowed' processes defined in WM customizing. By granting authorization only for the storage types the user is responsible for, the user can only make deviations to these storage types. This can be considered technically correct, as the stock located there is under this user's responsibility.


Authorization checks for ‘manual’ transfer orders


Some WM movements can be created manually and are not triggered by other activities like IM; for instance, transaction code LT01 can be used to create a TO manually. Normally these movements are WM supervision movement types like 999. Not all WM movements can be created manually; which WM movement types can be used to manually create TOs depends on customizing. For all movements that are created manually, an authorization check on WM movement type in combination with warehouse is executed. The object checked is L_BWLVS (a sketch follows below). The general check on warehouse is also executed. During the creation of manual transfer orders the concept of 'storage type search' and authorizations also applies: by not setting up 'storage type search' customizing for those movements, the extra authorization check is always executed. By providing authorization only for the storage types they control, users can move stock only between those storage types when using these 'manual' movements.
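For completeness, a sketch of the movement type check for manually created transfer orders (object L_BWLVS); again the values are placeholders and the check is performed by the standard WM coding, not by code you write yourself.

    * Illustration of the check for manual TOs, e.g. LT01 with movement type 999.
    AUTHORITY-CHECK OBJECT 'L_BWLVS'
      ID 'LGNUM' FIELD '100'        " warehouse number (placeholder)
      ID 'BWLVS' FIELD '999'.       " WM movement type (placeholder)

    IF sy-subrc <> 0.
      MESSAGE 'Not authorized for this WM movement type' TYPE 'E'.
    ENDIF.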


Conclusion:

  1. By restricting access at IM level (movement type, plant and storage location) or on other actions that trigger a transfer order, the authorization for the subsequent WM movement is restricted as well. If the user has authorization for the triggering action, the user also has authorization for the subsequent TO, but manipulation of the storage types from which material is picked or to which it is put away can be restricted to those defined as applicable in the storage type search (WM customizing) and those controlled by the user’s authorizations (via roles).
  2. Manual WM movements can be restricted by movement type and to those storage types that are controlled by the user’s authorizations (via roles).


What it cannot do


Warehouse management:


No authorization check on storage type is performed when a TO is confirmed. The warehouse is checked but the storage type is not (object L_LGNUM with DUMMY). This means that anybody with authorization for the warehouse can confirm any TO for that warehouse; there is no way to restrict TO confirmation by storage type using standard SAP. Because a transfer order must have been created before it can be confirmed, and the creation of the TO is controlled, this gap is not crucial for the solution. Also, the storage type cannot be altered during confirmation.


Inventory Management:


In almost all situations a material document will contain a storage location. There are, however, a few situations where a material document does not contain a storage location, for example when a goods receipt is performed and the materials are consumed upon receipt, as happens when a PO has a cost center as account assignment. You must determine whether these situations are relevant for you and whether this gap matters. If, for example, goods receipts are always performed by a single party, then only that party should have the authorization to post goods receipts. Although this party could post a goods receipt while the PO erroneously contains a storage location that is not ‘owned’ by them, this is not an issue because they are responsible for all goods receipts. If multiple parties need to be able to post goods receipts for different storage locations, you can add an authorization check (e.g. on the storage location in the PO) using BAdI MB_CHECK_LINE_BADI; this is, however, not standard SAP. A sketch of such an implementation follows below.
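
The sketch below assumes the usual MB_CHECK_LINE_BADI pattern (a method CHECK_LINE receiving the material document line as I_MSEG); verify the exact interface and parameter names in SE18 on your release. Reusing M_MSEG_LGO against the storage location of the PO item and the Z message class are illustrative assumptions, not standard SAP:

METHOD if_ex_mb_check_line_badi~check_line.

  DATA lv_lgort TYPE lgort_d.

* Only relevant when the material document line has no storage location
* but references a purchase order item.
  IF i_mseg-lgort IS INITIAL AND i_mseg-ebeln IS NOT INITIAL.
    SELECT SINGLE lgort FROM ekpo INTO lv_lgort
      WHERE ebeln = i_mseg-ebeln
        AND ebelp = i_mseg-ebelp.
    IF sy-subrc = 0 AND lv_lgort IS NOT INITIAL.
*     Reuse M_MSEG_LGO against the storage location from the PO item.
      AUTHORITY-CHECK OBJECT 'M_MSEG_LGO'
        ID 'ACTVT' FIELD '01'
        ID 'BWART' FIELD i_mseg-bwart
        ID 'WERKS' FIELD i_mseg-werks
        ID 'LGORT' FIELD lv_lgort.
      IF sy-subrc <> 0.
*       Message class and number are placeholders for your own error message.
        MESSAGE e001(zmm_sec) WITH lv_lgort.
      ENDIF.
    ENDIF.
  ENDIF.

ENDMETHOD.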


How to set it up


Inventory Management:

The easier part is the authorization restriction for Inventory Management. This can be done in four steps:


1) Activate the check on storage location:


Activate the check on object M_MSEG_LGO in customizing (menu path “Materials Management --> Inventory Management and Physical Inventory --> Authorization Management --> Authorization Check for Storage Locations”). See also SAP Knowledge Base Article 16686. The sketch below illustrates the shape of the check that becomes active.
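
Illustrative only; the actual call sits inside the SAP IM coding, and the variable names and values here are examples:

DATA: lv_bwart TYPE bwart   VALUE '101',   " example movement type
      lv_werks TYPE werks_d VALUE '1000',  " example plant
      lv_lgort TYPE lgort_d VALUE '0001'.  " example storage location

AUTHORITY-CHECK OBJECT 'M_MSEG_LGO'
  ID 'ACTVT' FIELD '01'        " 01 = create, 02 = change, 03 = display
  ID 'BWART' FIELD lv_bwart
  ID 'WERKS' FIELD lv_werks
  ID 'LGORT' FIELD lv_lgort.
IF sy-subrc <> 0.
  MESSAGE 'Not authorized for this storage location' TYPE 'E'.
ENDIF.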




2) Make storage location an organizational level:


Use report PFCG_ORG_FIELD_CREATE to make the field LGORT an organizational level. See SAP Note 727536.


3) Update SU24 for relevant transaction codes:


All transactions that create, change or display IM movements need to be updated in SU24 so that object M_MSEG_LGO is proposed (‘Proposal = Yes’) and the object is populated in PFCG during role maintenance.


4) Update the roles:


All roles that contain these transactions need to contain the M_MSEG_LGO object with the right plants, storage locations, movement types and activities. Important to know is that the check on M_MSEG_LGO is also performed when a material document is displayed, which means that roles providing display access to material documents (like MB51) also need to be updated to include the authorization with activity ‘03’.


Warehouse management


Setting up the solution for warehouse management is trickier and consists of three steps:


1) Set up all necessary storage type search strategies to cover ALL ‘allowed’ processes:


Stock removal and stock placement storage type search entries have to be set up in WM customizing for all ‘allowed’ processes for which no additional authorization check on storage type should be triggered.


2) Make sure that the necessary master data (material master data etc.) is set up correctly so that the correct storage type search entry can be found and used during ‘allowed’ processes.

 

3) Update the roles:

 

All roles that contain the object L_LGNUM need to be updated so that they contain the authorizations for the storage types belonging to the party they are intended for. Please note that the object has no activity field and that some WM-related display transactions also check this object, with DUMMY for the field LGTYP.

 

What to consider during implementation


Keep the aspects below in mind in order to deploy this solution successfully:

  1. WM storage type search (strategies/sequences): all ‘allowed’ scenarios must be covered by stock removal and stock placement strategies, otherwise authorization checks on storage type will be triggered that can fail because the user is not authorized even though he/she should be able to perform that step of the process. Considering how many variables are involved, there are many strategies to maintain. Having the processes clear and involving an SAP WM specialist is essential in order to cover everything needed.
  2. Material master data: in order for SAP to find the correct storage type in the ‘storage type search’ table, material master fields such as the stock placement and stock removal strategy indicators need to be set correctly. This is crucial for the solution to work. As there are a lot of material master records, this can be quite some work; most issues after introducing this solution will probably be caused by incorrect or missing WM material master data.
  3. Training (of key users): especially the WM part of the solution can be complex. Training (key) users is important so that they understand the concept and can find the right solution when goods ‘get stuck’.
  4. (Temporary) super role: it can be very useful to (temporarily) have a kind of ‘super user’ role available that can create transfer orders between storage types handled by different parties (including those for dynamic bins). This can be done by granting this role authorization for all storage types, or by creating a WM movement type that has search strategies for all storage types and granting access to that movement type. By assigning this role to a limited number of key users during the first phase after go-live, a workaround is available when a material movement gets ‘stuck’ while a real solution (such as changes to material master data, WM search strategies or authorization roles) is being investigated and followed up.

Best Practices for Roles Transport in AS ABAP system

These are guidelines for role transports; I am trying to compile as many different scenarios as possible. Please share comments and additions if you have any.

1. Single role

 

For a single role change, transport the role in the standard way.

2. Parent and Child roles.

 

For parent and child roles, the different scenarios are:


Scenario-1. Addition of T-code and Authorization Object

When adding a T-code to the parent role and distributing it to all child roles, create a transport containing the parent and all child roles. (If you include all child roles, the parent role is added automatically.)


Scenario-2. Addition of Org Level in Child Role.

By design, a child role is only used for org level maintenance, so when changing the org levels of a child role you can transport only that child role. Again, the child role will automatically pull the parent role into the transport.

Important note: to avoid confusion and inconsistencies when a large number of parent and child roles are changed, include all child roles in the transport.

 

3. Composite and Single roles

 

For single and composite roles, the different scenarios are:


Scenario-1. Addition of T-code and Authorization Object.

A T-code or authorization object added to a single role that is part of a composite role can be added to the transport individually.


Scenario-2. Creation of a new single role and adding it to an existing composite role.

When a new single role is created and added to an existing composite role, add both the single role and the composite role to the transport, without checking the option Also Transport Single Roles from Composite Roles.


Scenario-3. Creation of a new composite role and all its new single roles.

When a new composite role and all its single roles are created, add the composite role to the transport with the option Also Transport Single Roles from Composite Roles checked; this option pulls in all of its single roles.


Scenario-4. Adding or deleting an existing single role in a composite role.

In this case, add only the composite role, without checking the checkbox Also Transport Single Roles from Composite Roles.


Scenario-5. Composite roles in a BW system.

In a BW system, composite roles must always be transported without checking the checkbox Also Transport Single Roles from Composite Roles, because in BW there are roles that query designers and administrators are supposed to edit directly in production. They add new queries to the role menus on a daily basis without maintaining them in Dev and Test, so transporting the composite with Also Transport Single Roles from Composite Roles checked would overwrite (spoil) those roles.

 

 

The goal of this document is to make users aware of how to transport roles in AS ABAP systems. The recommendations are based on my personal experience in SAP implementations as a senior SAP consultant. You can follow the suggestions in this document, supplemented with additional information; they may vary depending on project requirements.

As a Basis consultant, it was a challenge to take up an SAP APO security role building exercise for an implementation project. I knew how to build roles and edit authorization objects for ECC, but that was not enough to find the authorization objects needed to control SAP APO functions. Functional consultants started explaining which controls they needed in their functionalities. Checking the SU22 screens was a difficult process because of my lack of domain knowledge; unfamiliar terms and codes were running through my head. Often the object I had found with much pain turned out not to be the right one when we tested it, and the functional consultants were not always available for our trial-and-error sessions.

 

I found that the authorization trace in ST01 is the best and fastest way to find the right authorization objects. I asked the functional consultants to run the functionality they wanted to control, and I could trace their user IDs in ST01. But ST01 was too cumbersome; I needed a better tool to move faster and get more clarity.

 

STAUTHTRACE provides neater formatting of the trace than ST01. I switched it on and asked the functional consultants to execute the functionality they needed. By tracing what they were doing, I found the authorization objects checked in every function.

 

Example of how to use this: using STAUTHTRACE to restrict SU01 functionality to unlock only.

 

  • Create a sample user ID for the functional consultant in the quality system and provide a role with the desired functionality. Here, for example, we use SU01.
  • Switch on the trace for this user in transaction STAUTHTRACE.
  • Enter the user ID in the section Trace options -> Trace for user only and click the button "Activate Trace" in the upper pane.
  • Then log in as the test user (TEST_TRACE) and execute all the functions in SU01 for another user (TEST_TRACE2). Here I executed assign profile, reset password, lock and unlock.
  • After that, display the trace in transaction STAUTHTRACE by clicking the corresponding button in the upper pane and review the results.
  • Here you can see that authorization object S_USER_GRP was checked and that the activities were 02 and 05. If you edit these activities in a role that has transaction code SU01 assigned to it, you can use that role to control what the users are allowed to do (a minimal sketch of such a check follows after this list).
  • Make sure to work on a copy of the standard node for S_USER_GRP and not to edit the standard node itself - this is the best practice.
  • Select activity 05 to provide access for lock/unlock only. Disable the standard node and retain only the manual node of S_USER_GRP.
  • Save, generate the profile and exit.
  • Execute the user comparison in PFCG for the user.
  • Log in as TEST_TRACE again and execute all the functions in SU01, then check the trace log again. Failed authorization checks are displayed in red. If it had been a Web Dynpro screen, you would have seen "Webdynpro" in the column "Type".
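
For completeness, a minimal sketch of what such an S_USER_GRP check looks like in code (purely illustrative, not SAP's actual SU01 coding; the user group value is a made-up example):

DATA lv_class TYPE xuclass VALUE 'SUPER'.   " example user group

* Activity 05 covers lock/unlock, 02 covers change.
AUTHORITY-CHECK OBJECT 'S_USER_GRP'
  ID 'CLASS' FIELD lv_class
  ID 'ACTVT' FIELD '05'.
IF sy-subrc <> 0.
  MESSAGE 'Not authorized to lock/unlock users in this group' TYPE 'E'.
ENDIF.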

 

With this method you can trace user activity for any transaction code. This gives you insight into which authorization objects are checked while the functional consultant executes certain functions, and it helps a team of security and functional consultants easily find the required authorization controls. It is a much easier, more accurate and faster method than breaking your head over the description of each authorization object in SU22. We completed an SAP APO role building project with this method. Kindly provide your suggestions and questions.


N.B.: Please note that tracing authorizations for SAP BI is different from STAUTHTRACE. For BI, SAP provides additional tools like RSECADMIN and RSSM.
The roles created using this method are described in this document: click here.

For the first time, let us talk only about defense. This article is about the different guidelines that can help you secure your SAP system. Nothing to worry about: the post will remain useful and interesting even though it contains no information about 0-days and has no words like “cyber” or “weapon” in the title. So, let’s go.

 

This blog post is about a new guideline, or standard, for securing, or testing the security of, SAP implementations, which is going to be the first standard of the EAS-SEC series. There were three things that pushed us to develop this guideline and gave our project a second birth. We had thought about making some kind of guideline from the very beginning, and we finally made it once we had a clear idea of how it should be done and what customers really needed.

 

And the reason we decided to make it is as simple as one, two, three.


One. Questions like “why?” and “what for?” are the alpha and omega of every piece of research. For us, as sometimes happens, the answer came from yet another question. After implementing our Security Monitoring Suite for SAP in huge enterprises, running dozens of POCs and completing numerous penetration tests against SAP systems (as well as other business-critical systems), the question we were asked more often than any other was: “Guys, you are awesome! And you are doing a great job, finding so many problems in our installations. It's absolutely fantastic, but we don’t know where we should start solving them. Could you provide us with the top 10/20/50/100/[put your favorite number here] most critical bugs in every area?”


Two. At the same time, we had to do something quite different from just a top 10 of the most critical bugs, such as a list of missing SAP Security Notes with the highest CVSS. Even if you patch all of those notes, lots of problems can remain: for example, you may have SAP_ALL assigned to every user, or your logs may be disabled, so that the next time you forget to apply a note it is easy to hack your system because the approach was not comprehensive. So the challenge is to understand all security areas of the SAP platform and, for every area, to select a number of the most critical issues. The first aim of our research was to cover all SAP security areas; the second was to be simple to implement.

 

Three. We started to analyze the existing guidelines and standards. Currently there are not many that cover SAP security, and all of them are supported by ERPScan. The guidelines we have are: Secure Configuration of SAP NetWeaver® Application Server Using ABAP by SAP, ISACA Assurance (ITAF) by ISACA, and the DSAG guideline by the German-speaking SAP User Group. All of these standards are great, but unfortunately each of them has at least one big disadvantage. Let’s take a closer look at them:

 

Secure Configuration of SAP NetWeaver® Application Server Using ABAP


This is the first official SAP guide for the technical security of NetWeaver ABAP in general; before it, there were only dozens of specific guidelines for individual applications. The first version was published in 2010 and was followed by version 1.2 in 2012. Since that was almost two years ago, we have to keep in mind that in our fast-changing world some critical things may be missing by now. The guideline was created for a rapid assessment of the most common technical misconfigurations in the platform and consists of 9 areas and 82 checks in total.


Advantages: very brief but quite informative (only 9 pages); covers application platform issues and is applicable to every ABAP-based platform, whether ERP, Solution Manager or HR.


Disadvantages: 82 checks is still a lot for a first brief look at secure configuration. More importantly, the standard does not cover access control or logging, and even in platform security it misses some things. Finally, it gives people a false sense of security once they have covered all the checks, which would not be completely justified.


ISACA Assurance (ITAF)

Probably the first guideline for SAP security, made by the ISACA consortium. Three versions were published, in 2002, 2006 and finally 2009, which means five years have passed since the last release and many areas are outdated now. In general, the checks cover configuration and access control; the application platform security part is smaller than the access control part and misses some critical areas. The guideline consists of 4 parts and about 160 checks in total.


Advantages: detailed coverage of access control checks.


Disadvantages: outdated, and much of the technical part is missing. The guideline consists of too many checks and cannot easily be applied by a non-SAP specialist, nor can it be applied to a system without prior understanding of the business processes. Finally, this guideline is officially available only as part of a book, or you have to be at least an ISACA member to get it.



DSAG (Deutschsprachige SAP-Anwendergruppe)

A set of recommendations from the German-speaking SAP User Group. The checks cover all security areas, from technical configuration and source code to access control and management procedures. It is currently the biggest guideline on SAP security. The last version was released in January 2013 and consists of 8 areas and 200+ checks.


Advantages: ideal as a final step for securing SAP. Great for SAP security administrators; covers almost all possible areas.


Disadvantages: unfortunately it has the same problem as the ISACA guideline. It is too big for a starter and no help at all for security people who are not familiar with SAP. It also cannot be applied directly to every system without prior understanding of the business processes. Many checks are recommendations, and the user has to decide for himself whether they are applicable in each case.

Fig. 1. What SAP security looks like with the three existing guidelines


What goes around comes around

So, we did not want to make just another security guideline, but we also saw that all of the current approaches miss something.

Finally we understood that there is a real need for a new guideline. Fortunately, by now we knew what we had to do to make it not merely good, but perfect. The existing guidelines all miss one general thing: they are big, yet they still do not cover everything while pretending to, which ultimately gives people a false sense of security once they have covered all the checks.

Our effort was to make the list as brief as possible while still covering the most critical threats for each area. That is the main objective of this guide: unlike the best practices by SAP, ISACA and DSAG, our intention was not to create just another list of issues with no explanation of why a particular issue was (or was not) included, but to prepare a document that can easily be used not only by SAP security experts but by every security specialist who wants to check whether his SAP system is secure, while still providing comprehensive coverage of all critical areas of SAP security.

At the same time, developing the most complete guide would be a never-ending story, as at the time of writing we had more than 7000 checks of security configuration settings for the SAP platform.

We needed a guideline consisting of only a few, carefully selected checks and, more importantly, with further steps, so that everybody knows that by implementing the standard they have done only part of the job - a really critical part, but not everything. In other words, we are talking about the 80/20 rule, and we wanted to apply it to SAP security.


Result

As a result of more than 7 years of experience in security assessments of enterprise business applications of different types and from different vendors - including, of course, SAP, Oracle, Microsoft and IBM, but also industry-specific systems like Retailix for retail, MES/SCADA systems for oil and gas, and ABS systems in banking - our pentest and research team, known for having reported 450+ advisories in different products and for participating in 50+ events on every continent, collected information about the most critical vulnerabilities and misconfigurations to understand the most critical areas. Our auditors, who in previous work were responsible for certifications such as ISO, PCI DSS, PA-DSS, SOX and NIST, analyzed these business applications from a compliance and risk point of view, and in the end we arrived at 9 critical areas that are essential for the security of every enterprise business application, sorted by priority (based on a mix of criticality, probability, popularity and the data needed to conduct an attack).

After that, we picked the most critical vulnerabilities and configurations of SAP NetWeaver ABAP-based applications from each of those 9 areas and ended up with the 33 most critical checks.

 

These are the major checks that must be implemented first and can be applied to any system regardless of its type, settings and custom parameters. It is also important that these checks are equally applicable to production systems and to test and development systems.

In addition to the major all-purpose checks, each of the 9 critical areas contains a subsection called “Further steps”. This subsection gives the main guidelines and instructions on what should be done second and third, and then how to further securely configure each particular item. The recommended guidelines are not always mandatory and sometimes depend on the specific SAP solution.

 

 

Fig. 2. What SAP security looks like with EAS-SEC

Wrap-up

With this approach the authors were able, on the one hand, to highlight the key security parameters for a quick assessment of any SAP solution based on the NetWeaver ABAP platform (from ERP to Solution Manager or an industry solution) and, on the other hand, to cover all potential problem areas and give complete recommendations for them.

This makes the present guide different both from the SAP best practices, which also contain few items but do not cover the overall picture, and from the best practices by ISACA and DSAG, which have a lot of items but whose priorities are unclear and too complicated for a first step. These papers are nonetheless highly valuable and absolutely necessary as next steps, and they are referenced in the “Further steps” sections.

 

And finally, you are ready to use the guideline itself (click here), made with the help of the extensive experience of the ERPScan research team.


Read, learn, stay secure!
