
Security


When my little but big company, which I started 10 years ago and have fostered ever since, set out last year to change its scope from SAP PI, Basis, and data center consulting and the management of complex SAP landscapes on a European scale to SAP security, it felt like the good old Internet times. It was a time warp back to the turn of the millennium, or to the appearance of the Apple II and the IBM PC in the 80s. Exciting times.

Approached by IBM to become a strategic partner in the IBM/SAP security world, we were very pleased that such a "big" company really trusted us to work with them in the major league.

We also looked at the surrounding ecosystem: the world of pen testing, of security administration and operations, and of SIEM (Security Information and Event Management) in the so little, so big SAP universe.

(Just to explain what a pen test is: it is a penetration test, in which dedicated security personnel try to break the SAP system and find their way into it. This breach attempt is made on all levels: network, infrastructure, Basis hacks, RFC hacks, SAP GUI hacks, but also social hacks like email phishing and password sniffing.)

We also chose our preferred vendor for SAP penetration testing. But to make a long story short and to come to my actual point: it is easy to say "We do security now", especially in the SAP world, to choose a product, and to go ahead and try to hack the planet.

A good security breach is more than a tool. Like everything else, it requires deep knowledge of networks, infrastructure, attack vectors, and the tools needed and used. If you don't want to use a commercial tool, you still have a good choice.

 

One of the tools you need when you start pen testing is the Kali distribution, maintained by the folks at Offensive Security.

The Kali Linux distribution is open source and has a long history; it started as a tool collection a long time ago. Offensive Security also offers a commercial online class with a certification, and everybody in the industry will agree that it is a very demanding certificate with a tough exam. This means it proves work-like experience and hands-on expertise.

 

But beyond the certificate, these are the tools of the trade, and you should be able to perform any pen test even without commercial tools. There is a great companion book, and if you really want to start looking at the pen test world: get the Kali distro onto your laptop, get the book, start Nmap, and practice.

But even if you learn the "Top 10 Tools" that Kali emphasizes, you will need a lot of practice to become fluent in a penetration test workflow.

(If you happen to be at a customer site, try running a full Nmap scan by plugging your private laptop into the corporate switch, and count the minutes until security stands at your desk. If it takes more than 15 minutes, give them a security session. OK, this is maybe not the brightest idea, but you get the story.)
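If you want a feel for what such a practice session looks like, here is a minimal sketch that drives Nmap from Python and probes the well-known SAP port ranges; the target address is an invented lab value, so point it only at a machine you own:

```python
# A first practice exercise: scan a lab host for typical SAP ports.
# Assumes nmap is installed; 192.168.56.10 is a made-up address in YOUR OWN lab.
import subprocess

LAB_HOST = "192.168.56.10"                    # hypothetical lab target
SAP_PORTS = "3200-3299,3300-3399,8000-8099"   # dispatcher (32NN), gateway (33NN), ICM HTTP (80NN)

# -sV probes the open ports for service banners and versions
result = subprocess.run(
    ["nmap", "-sV", "-p", SAP_PORTS, LAB_HOST],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```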

 

Kali also coined the motto: "The quieter you become, the more you are able to hear." And this is really true, not only for all security matters in the SAP world, but in the rest of corporate IT as well.

 


 

Security needs a very thorough understanding not only of large data center infrastructures and the surrounding networks, but also a lot of patience, listening, and exploring. No tool will replace your knowledge and your ability to map a complex SAP network. And the SAP world adds a big twist to pen testing. I have seen more than one pen tester (usually right out of college, but sold as the security consultant) from outside the SAP world use an open source tool and then ask around: "OK, looks like I am in with SAP_ALL, but what do I do now?"

Things like hacking via RFC and the SMGW/gateway require knowledge of programming, of ABAP, Java, and the like.

 

It is one thing to be a loudspeaker, touting all your hacker experience to the world, going to Black Hat Las Vegas with a tattoo on your forehead, and pretending to be the coolest kid in the universe. As someone said: maybe your little teen sister is impressed, but not the CISO.

I had a longer conversation with some partners about a good way of approaching my customers, and I thought of things like a German blog, Twitter, or weekly reports on threats and new findings. But in the end, this would be just noise. After a short while, nobody would listen anymore. We decided that the quiet way, the conservative but most trustworthy approach, was simply to call, meet, and talk. Talk about their needs, their local threats and findings, and how to handle all these large and small security issues.

Security, especially penetration testing and the discovery of true vulnerabilities that in the worst case could make or break a company (see my blog), makes a trustworthy relationship a base requirement in every customer situation. Showing first and foremost that you are a responsible person, and guiding the customer through the risk assessment by differentiating hype from real risk, is a demanding task in the SAP world of large installations. Knowing the hack is one thing, but weighing the risk, the cost of the process to fix the gaps, and making everything fit into an overall security strategy is a completely different world.

 

I like the challenge of this professional spread: between the fun of serious hacking and testing on the one side, and the serious presentation on the other, where you put on your black suit and place the findings in a real perspective.

(Edited for content, grammar, and political correctness.)


Designing for Security

Posted by Frank Koehntopp Aug 27, 2014

There are two distinct ways to build security into your software:

  • have your software tested and/or hacked, and start applying technology to plug the holes and keep the bad guys out
  • think about how your software could be misused, and make sure your design prevents that

 

Or, as Gary McGraw just wrote, in much better words:

 

[Screenshot: Gary McGraw's quote]

 

Unfortunately, the concept of "anticipating attacks" seems to be quite alien to the average developer, recognizable by the typical response to a threat scenario: "But why would someone do that?"

It also seems to be hard to teach. There is a new effort that I think has lots of promise: the IEEE Center for Secure Design tries to tackle the problem from the design angle. This is their mission statement:

 

The IEEE Computer Society's CSD will gather software security expertise from industry, academia and government. The CSD provides guidance on:

  1. Recognizing software system designs that are likely vulnerable to compromise.
  2. Designing and building software systems with strong, identifiable security properties.

The CSD is part of the IEEE Computer Society's larger cybersecurity initiative, launched in 2014.

 

If you're interested in the topic, I would encourage you to read their document. It tries to explain the most common design flaws that lead to vulnerabilities. Ideally, every security architect in your team should have read (and understood) them. These are the topics, explained in more detail in the PDF:

 

  • EARN OR GIVE, BUT NEVER ASSUME, TRUST
  • USE AN AUTHENTICATION MECHANISM THAT CANNOT BE BYPASSED OR TAMPERED WITH
  • AUTHORIZE AFTER YOU AUTHENTICATE
  • STRICTLY SEPARATE DATA AND CONTROL INSTRUCTIONS, AND NEVER PROCESS CONTROL INSTRUCTIONS RECEIVED FROM UNTRUSTED SOURCES (see the sketch after this list)
  • DEFINE AN APPROACH THAT ENSURES ALL DATA ARE EXPLICITLY VALIDATED
  • USE CRYPTOGRAPHY CORRECTLY
  • IDENTIFY SENSITIVE DATA AND HOW THEY SHOULD BE HANDLED
  • ALWAYS CONSIDER THE USERS
  • UNDERSTAND HOW INTEGRATING EXTERNAL COMPONENTS CHANGES YOUR ATTACK SURFACE
  • BE FLEXIBLE WHEN CONSIDERING FUTURE CHANGES TO OBJECTS AND ACTORS
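To make at least one of these principles concrete: "strictly separate data and control instructions" is exactly the property that parameterized SQL statements enforce. A minimal sketch (the table and the hostile input are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # hostile input trying to smuggle control instructions

# BAD: data is spliced into the control channel (the SQL text itself)
# rows = conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")

# GOOD: the statement is fixed; the input travels only as data via a placeholder
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)   # [] -- the injection attempt matches nothing
```

The same idea generalizes to OS commands, LDAP filters, and dynamic ABAP: keep the instruction fixed, and pass untrusted values only through a data channel.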

Quantum Dawn – What SAP Data Centers can learn from SIFMA war games

In 2012, American institutions, under the lead of SIFMA, ran the first cyber-attack stress test on financial institutions on Wall Street.

One year later it was repeated in London, with a broader approach and more detailed preparation. This stress test and its results are stunning. Everyone who deals with security should look at the scenario and ask whether their organization has an answer to the question it raises:

How would we behave, and how would we address all the issues that surfaced during the organized cyber-attack?

This is not something that only affects Wall Street or the City of London's financial district. This scenario can hit every company in the world.

Since I recently won a prize in a storytelling contest run by Germany's largest IT magazine, c't, let's recount the tale of a cyber-attack war game in a novel way.

And since I am German (as SAP is), let's assume the story happens in SAP's homeland, in Germany, where Carl B. Max, the CEO of AUTOBAHN AG ("Fast is GOOD"), is still asleep in his home near his headquarters in Frankfurt am Main, Germany's financial district.




The sequence of events that led to the disappearance of the German Autobahn AG:


At 6:00 in the morning, Twitter, Facebook, and the German autobahn forum "The Fast and the Faster" show the first posts: how bad the German autobahn is, full of potholes, plagued by too many speed limits and too many traffic jams.

At 6:30, more serious posts and accusations are added: pictures of deadly accidents caused by potholes on the fastest parts of the autobahn. The idea of a class action lawsuit is mentioned.

At 8:00, the posts have piled up into a veritable shitstorm.

At 8:30, the Twitter and Facebook accounts maintained by the PR department of Autobahn AG are hacked and start posting strange and bogus replies to the accusations. The impression that the company is ignoring and downplaying the accusations takes hold.

At 8:45, Carl B. Max, CEO of Autobahn AG, arrives at the office.

At 9:00, rogue high-frequency traders start an attack on the stock of AUTOBAHN AG. Within seconds they short the stock down to a level where regular trading algorithms, due to the high trading volume and dropping prices, suddenly release stop-loss orders. This generates an automatic trading avalanche, resulting in a landslide in the price of the AAG stock.

At 9:30, social media is full of speculation about bad financial deals that threaten the future results of Autobahn AG. The PR account of the company spokesperson is hacked, and false PR statements are sent to the worldwide press. Since nobody knows who was addressed and what was published, counteractions become difficult.

At 10:00, Carl Max calls a press conference at the headquarters in the office tower at the "Frankfurter Kreuz" near the airport. He demands current financial statements from his CFO that he can present to the press as testimony that everything is fine.

In the middle of his calls, the telephones go dead. A massive DDoS attack is run against the VoIP-based telephone center. A special VoIP virus, dedicated to this equipment, eats its way through the Ethernet-based phone infrastructure. Only mobile calls can be made. "Can't be reached for comment" is the phrase of the hour.

At 10:15, the SAP system crashes. A restore from backup is necessary. The IT department discovers that all tapes from the last 4 weeks are damaged, due to an error in the backup procedure. The SAN has stopped working with damaged hardware.

At 10:30, the CFO finds out that all numbers in the SAP Business Warehouse systems are corrupt. It is unclear whether the backup contains non-manipulated figures.

At 11:00, the rogue high-frequency trading continues in London after the London exchange opens. The landslide in the share price goes on.

At 12:00, Carl Max can't present any reliable numbers to the press. The attack is not mentioned.

The plea to the large stock exchanges to suspend trading in the stock is not granted, since AUTOBAHN AG can't present any figures as proof and no one can be reached to comment on the incident.

At 15:00, the NYSE on Wall Street opens. The rogue trading leads to a suspension of trading when the company's value hits one cent and the stock is rated a penny stock.

At 17:00, when the German Stock Exchange in Frankfurt closes, Deutsche Autobahn AG is "pleite": bankrupt.


Do you think this is not for real?


Fiction? You wish, but it is real-life truth. Every single element of this cyber-attack has already happened. Some of them are even common threats, like the manipulation of social media or high-frequency trades. Ever thought about how reliable VoIP is, or how vulnerable a Microsoft Lync server is? Especially in a corporate environment?

Some of them are recent developments, like the new "attack vector" of manipulating BI cubes with the intent of leading the hacked company to false decisions.

And the backup? Guess how often I have seen this happen in 20 years? More often than you would think, and it was always an internal problem of sloppy backups, not even a hacker's attack.


In the end, Quantum Dawn recommended first and foremost establishing fast, clear, and direct communication about attacks. Don't keep such an attack secret. There must be internal and external (governmental, if it is a broad attack) communication channels that react within minutes. These attacks may be criminal, but given the worldwide state of politics, such an attack could even be initiated by governments as part of global warfare.

And you need an alert IT organization that can counter this threat in unison.

Really, think of the company you are in: who would you call if you saw an attack on an SAP system? And who could respond immediately?


P.S.:

More materials:

Deloitte, as the auditing company, was part of the cyber trial. Here are their findings.

And a great video on it, also from Deloitte: Cyber Security. Evolved.

And also check out my first blog in this series of security papers: THINK Security: Towards a new horizon


THINK Security: Towards a new horizon

It is interesting to watch the security world undergoing a dramatic change. The classic world of protecting the good SAP system against evil with a good firewall, relying on the closed SAP ABAP technology (known only to the good guys), no longer lives up to the promise.

The old security assurance, that SAP is so isolated and so exotic in the company network that nobody will enter the premises, has been slowly deteriorating over the last decade. Suddenly, the Internet, the extranet, and the VPNs are all over the place, connected straight to the ECC core system. SAP hacks are a standard program at any Black Hat convention.


While there are so many new security technologies in firewalls, appliances, and software security frameworks, the security world at SAP is still old-fashioned. But this is also a tribute to the ever-growing complexity of the SAP ecosystem. The impression of living behind a secure wall in a secret garden is just a glorified view of the past.

It is easy to say "Fix and harden the SAProuter and Web Dispatcher". But what if you have thousands of routes and dozens of routers and web dispatchers? Just keeping them up to date is a job in itself.

Customers need to learn to manage this complexity in a new way. I know a lot of SAP sites that are discussing continuous patching, upgrading, testing, and enhancing. But this by itself is a daunting task. One of my larger customers has 60 SAP systems in one tier, all related and connected. Multiply this by three and you have Dev, QA, and PROD tiers with 180 machines. Tell me how to make "permanent changes" to this landscape and ensure maximum security while testing all 60 app systems in unison every time after patch day. In theory, you can add unlimited resources, 24x7 uninterrupted strategies, and an unlimited budget. Yes, then you can solve it. But in economic terms it is not feasible. It is the old economic story of limited resources and limited money to spend.


The first step in a new strategy for security is risk assessment. There was a great blog, Balancing Danger and Opportunity in the New World of Cyber Domain, a great summary by Derek Klobucher of the keynote speech of Gen. Michael Hayden (retired NSA chief), who spoke to the attendees of the SAP Retail Forum 2013.

Hayden drastically stated the new security paradigm: "If you have anything of value, you have been penetrated," Hayden said. "You've got to survive while penetrated -- operate while someone else is on your network, wrapping your precious data far more tightly than your other more ordinary data."

He basically stated that security is no longer about vulnerability alone. He introduced the formula that risk is always a relative value for your assets.




Risk = vulnerability x consequence.

This is the most important message for the near future for everyone involved in security: you need to manage risk. Security risk at a point in time and over time.
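As a toy illustration of the formula (all systems and scores below are invented):

```python
# Toy illustration of Risk = vulnerability x consequence.
# Scores are invented; in practice both factors come from your own assessment.
systems = {
    # name: (vulnerability 0-10, consequence 0-10)
    "internet-facing web shop":      (7, 5),
    "central ECC production system": (3, 10),
    "isolated sandbox":              (9, 1),
}

for name, (vuln, consequence) in sorted(
        systems.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{name:32s} risk = {vuln * consequence}")

# The sandbox is full of holes but nearly worthless (risk 9); the much better
# hardened ECC system still outranks it (risk 30), because the consequence
# of a breach dominates. Risk is always relative to the value of the asset.
```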


That must be the goal, even more so for such a critical system as the central SAP system. The new security paradigm lives in real time, defending against frequent attacks and against internal and external threats to capture or manipulate data. Organizations must face the new complexity, new organizational challenges, and security risk management.

And with the risk, you need to change your security thinking from "defending walls", as in medieval castles, to "pattern recognition": an approach where you anticipate the next attack while it is building up. Here, the technologies of big data, SIEM, and artificial intelligence are emerging. In Germany, T-Systems and Telekom have a great real-life showcase: the Advanced Cyber Defense Center in Bonn. (Maybe I will do a blog about it one day.)

Yes, this is a very complex and demanding world. And this is why even big companies need to talk, act, and cooperate on security issues.

But this is the topic of my next blog: "Quantum Dawn – What SAP Data Centers can learn from SIFMA war games".


Just relying on your good old firewall is a thing of the past.

For most SSO issues, a logon trace is needed to find the root cause.

 

In an ABAP system, the logon trace is actually the developer trace of the work process. Normally we use the important note:

#495911 - Trace analysis for logon problems

After getting the trace, we can use the Security Audit Log to locate the work process that handled the logon, and so find the real reason why the logon failed.

 

But sometimes, if the Security Audit Log is not active or no entry was logged in the audit log, it becomes difficult to find the work process.

For HTTP logon issues, I found that we can use the ICM trace to locate the work process.

First, raise the ICM trace level to 3. This can be done in SMICM via the menu "Goto -> Trace Level -> Set":


(Also remember to go to SM50 and raise the trace level to 3 for the "Security" component of the DIA work processes.)

Then reproduce the issue, and afterwards change all the trace levels back to their default values.

Now let's go and check the ICM trace. Use the timestamp of the reproduction to find the related trace entries.

(Here I recommend the free software Notepad++: it searches large text files very fast, shows the results in a list, and jumps to the position in the file on double-click.)

Then we can search for the keyword "IcmHandleOOBData"; among the results, the following lines are what we need:


[Thr 140080821593856] IcmHandleOOBData: Received data on 1st MPI (seqno: 1, type=6, reason=Request processed in wp(6)): 42/23079/0

[Thr 140080821593856] IcmHandleOOBData: request will be processed in wp 6

Here "wp 6" means that work process number 6 handled this logon.

 

Then we can go and check dev_w6 to find the related trace entries; we can search using the timestamp or the keyword "note 320991".

In this logon trace we can find the root cause of why the logon failed.
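If you do this regularly, both searches can be scripted. Here is a minimal sketch, assuming the developer traces are in the usual instance work directory and the messages look like the excerpt above; adjust the SID, instance number, and paths to your own system:

```python
import re
from pathlib import Path

WORK_DIR = Path("/usr/sap/SID/D00/work")   # adjust SID/instance to your system

# Step 1: find the work process number in the ICM trace (dev_icm).
icm_text = (WORK_DIR / "dev_icm").read_text(errors="replace")
match = re.search(r"IcmHandleOOBData: request will be processed in wp (\d+)", icm_text)
if not match:
    raise SystemExit("no IcmHandleOOBData entry found - was the trace level raised?")
wp = match.group(1)
print(f"logon was handled by work process {wp}")

# Step 2: pull the logon-relevant lines out of dev_w<N>.
wp_trace = (WORK_DIR / f"dev_w{wp}").read_text(errors="replace")
for line in wp_trace.splitlines():
    if "note 320991" in line.lower():
        print(line)
```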

Segregating Warehouse Responsibilities using standard Inventory Management and Warehouse management authorizations


Background/Situation


In certain situations there can be a requirement to separate logistical processes in an SAP system on a detailed level. This is usually the case when different parties are responsible for performing different logistical processes and/or are responsible for different parts of the same warehouse.


Examples of the situations where the requirements could occur are:

  • A third party executes logistical activities and manages part of the plant and warehouse. In these parts of the plant and warehouse, the third party is responsible for the stock.
  • 'Special' materials are stored in certain parts of the warehouse and should only be handled by a certain set of users.

This separation of responsibilities can be depicted in SAP by setting up different plants and warehouses, which can subsequently be authorized on. But such a solution would mean a redesign of the logistical landscape, and additional administrative activities would be needed during day-to-day operations. Avoiding this redesign and administrative burden requires effective authorization restrictions on organizational elements lower than plant and warehouse. The requirement of controlling who executes IM and WM processes on a detailed level can be met using standard SAP authorizations in combination with IM/WM customizing, without setting up additional plants and warehouses. This blog discusses this solution for segregating warehouse responsibilities.


Content of this blog


This blog explains when this solution can be used, when it should not be used, how it works, and what it can and cannot do. It also gives an overview of the activities that need to be performed to implement the solution. The solution is based on my own investigation and experience, but information from several notes, knowledge base articles, and threads was also used and combined to create a complete solution.


The solution and when to use it


You can use the solution when you need to differentiate between groups of users who can perform IM/WM activities within parts of the same plant and warehouse.

SAP WM customizing and the authorization elements 'storage location' and 'storage type' form the basis of the solution. By properly defining the WM customizing and authorizing on these elements, you can:

  • Restrict IM movements based on storage location to certain groups of users (next to the normal restriction on movement type and plant)
  • Ensure that 'allowed processes' are defined in WM customizing (like storage type search settings), so that users who need to execute WM processes are not hampered by authorization checks
  • Restrict 'manual' WM movements based on the 'source' and 'destination' storage type to certain groups of users (next to the normal restriction on warehouse and WM movement type)

By authorizing on these two elements (storage location and storage type), you can create an authorization setup that only allows users with certain roles to perform specific IM movements, and the resulting WM movements, for specific storage locations, and that restricts who can make 'manual' WM movements for specific storage types. Here, 'manual' WM movements refer to transfer orders that are not triggered by an IM movement or another specific logistical action, for example transfer orders of movement type 999, which can be created manually via transaction LT01.

With such an authorization setup, only the party that is responsible for the storage locations and storage types keeps control over the movements of the stock located there, while normal 'allowed' warehouse processes are performed in a regulated manner and are not hampered by authorization restrictions.


When not to use it


Only use it when there is a hard requirement that these restrictions be enforced by the system. Implementing and maintaining the solution (for WM) can be complex. If there is no hard requirement to enforce these restrictions in the system on such a detailed level, don't do it. If it is sufficient to check whether procedural agreements are adhered to, do not use authorizations for it. It also makes no sense to put restrictions into effect in SAP if there are no physical restrictions as well: if SAP blocks a user from moving materials from one part of the warehouse to another, but there is no physical restriction (like a locked door or a fence), the person can still simply move the materials and not register it.


Prerequisites


Before this solution can be implemented, a number of things need to be clear. If these aspects are not clear, the solution cannot be implemented correctly and will only work partly or not at all. The following must be determined:

  • Ownership of all storage locations
  • Ownership of all storage types
  • Clearly defined logistical processes
  • Which party executes which steps in these processes

Combined ownership of storage locations and storage types should be avoided as much as possible, as it complicates and can (partially) undermine the solution. Wherever possible, ownership of the storage types for interim storage bins should be determined as well.


The concept


Inventory Management


When an IM movement is made, an authorization check on plant and movement type is executed. If the user is not authorized, the movement cannot be made. Through settings made in customizing, a subsequent check can be activated whenever a movement is made for a certain storage location. This customizing switch is set per storage location and is off by default. When it is activated for a storage location, it triggers an authorization check on the combination of movement type, plant, storage location (and, of course, activity) whenever an IM movement uses this storage location. The authorization object checked is M_MSEG_LGO. See also SAP Knowledge Base Article 1668678.

So, by granting a certain party roles containing only the storage locations/plants they are responsible for, in combination with the movement types they are allowed to perform, the required segregation of responsibilities can be achieved.
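To visualize the check logic just described, here is a toy model in Python-style pseudocode (illustration only: the role names and values are invented, and the real check is of course performed by the SAP kernel against the user's authorization buffer):

```python
# Toy model of the M_MSEG_LGO check (illustration only, not actual SAP code).
# Each authorization is a tuple: (activity, movement type, plant, storage location).
user_auths = {
    "3PL_CLERK": [("01", "311", "1000", "3PL1"),   # may post 311 into 'their' sloc
                  ("03", "311", "1000", "3PL1")],  # and display those documents
    "OWN_CLERK": [("01", "311", "1000", "OWN1")],
}

def im_movement_allowed(role, actvt, bwart, werks, lgort, check_active=True):
    """Mimics the storage-location check: it is only executed when the
    customizing switch for the storage location is active."""
    if not check_active:      # switch off -> only the plant/movement type
        return True           # checks (not modeled here) apply
    return (actvt, bwart, werks, lgort) in user_auths[role]

print(im_movement_allowed("3PL_CLERK", "01", "311", "1000", "3PL1"))  # True
print(im_movement_allowed("OWN_CLERK", "01", "311", "1000", "3PL1"))  # False: foreign sloc
```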


When a storage-location-to-storage-location movement is made, both the 'source' and the 'destination' storage location are checked, provided the customizing check is set for both. This would mean that a movement between storage locations 'owned' by different parties is blocked by authorizations. In those cases, a 'two-step' storage-location-to-storage-location movement can be made, wherein the sending party executes the first step and the receiving party executes the second step. See also SAP Note 205448.


Warehouse management


The solution for warehouse management is more complicated and is based on SAP WM customizing, in particular the concept of storage type search (strategies).


Authorization check for all transfer orders:


During the creation of a TO, an authorization check on the warehouse is performed in all cases (field LGNUM of object L_LGNUM). At that point, no check on the storage type is performed (LGTYP is checked with DUMMY); see also Knowledge Base Article 1803389. If the user is not authorized for the warehouse, the TO cannot be created.


Authorization checks in relation to WM customizing:


When a transfer order is created, SAP will try to determine from which storage type to pick the material (source), or into which storage type to put the material (destination).

To determine where to pick from, SAP checks whether it can find a suitable source storage type for stock removal by searching the 'storage type search' table defined in WM customizing. This search uses a number of variables, like the reference movement type, the warehouse, the stock removal strategy indicator in the material master, and the special stock indicator, to find a suitable storage type. If a suitable source storage type is found and used in the transfer order, no extra check is performed.

The same method is used to determine the storage type in which to put away the material. In that case, a suitable destination storage type is searched for in the 'storage type search' table in WM customizing. If one is found, no extra authorization check is performed.

In a lot of cases, WM movements are triggered by logistical activities like IM movements. Under normal circumstances, the 'storage type search' WM customizing is properly defined for the logistical process, the necessary material master data is set up, and the TO can be created without issues and without explicit authorization for the source or destination storage types. This is because it is an 'allowed' process, and as such the extra authorization checks are not needed.

If no suitable source or destination storage type is found in the 'storage type search' table and the user creates the transfer order in the foreground, the user can enter a source or destination storage type manually. In that case, an extra authorization check is executed. This check is on the combination of storage type and warehouse. The same object L_LGNUM is used for this check, but now the field LGTYP is not checked with DUMMY but with the storage type (see FORM BERECHTIGUNG_LGTYP of include FL000F00). This check is performed because the entered storage type is not found as a suitable storage type in the search strategy (see include LL03AF6I). The check on object L_LGNUM is executed separately for the destination and the source storage type. Likewise, when the user creates the transfer order in the foreground and changes the source or destination storage type to a storage type that is not part of the applicable 'storage type search' table entry, this extra authorization check on the source and/or destination storage type is executed. See also Knowledge Base Article 1803389. A thread that also mentions this is http://scn.sap.com/thread/775605


Using what is explained above, this extra authorization check can be used to restrict the deviations a user can make from the 'allowed' processes defined in the WM customizing. By granting authorization only for the storage types the user is responsible for, the user can only deviate to those storage types. This can be considered technically correct, as the stock located there is under this user's responsibility.
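The decision flow described above can be condensed into a small toy model (again Python-style pseudocode with invented table contents, not the real kernel logic):

```python
# Toy model of TO creation checks (illustration only, not actual SAP code).
storage_type_search = {
    # (warehouse, reference movement type) -> storage types proposed by customizing
    ("W01", "601"): ["PCK", "SHP"],
}
user_storage_types = {"3PL_PICKER": {"W01": ["3PL"]}}   # L_LGNUM: LGNUM + LGTYP

def create_to_allowed(role, warehouse, ref_mvt, entered_lgtyp=None):
    # 1) General check on the warehouse (L_LGNUM with LGTYP = DUMMY).
    if warehouse not in user_storage_types[role]:
        return False
    proposed = storage_type_search.get((warehouse, ref_mvt), [])
    # 2) Storage type found via the search table: no extra check needed.
    if entered_lgtyp is None or entered_lgtyp in proposed:
        return True
    # 3) Manual deviation from the search table: extra L_LGNUM check on LGTYP.
    return entered_lgtyp in user_storage_types[role][warehouse]

print(create_to_allowed("3PL_PICKER", "W01", "601"))          # True: 'allowed' process
print(create_to_allowed("3PL_PICKER", "W01", "601", "3PL"))   # True: deviation to own type
print(create_to_allowed("3PL_PICKER", "W01", "601", "HRC"))   # False: foreign storage type
```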


Authorization checks for ‘manual’ transfer orders


Some WM movements can be created manually and are not triggered by other activities like IM; for instance, transaction code LT01 can be used to create a TO manually. Normally these are WM supervision movement types like 999. Not all WM movements can be created manually; which WM movement types can be used to create TOs manually depends on customizing. For all movements that are created manually, an authorization check on the WM movement type in combination with the warehouse is executed. The object checked is L_BWLVS. The general check on the warehouse is executed as well. During the creation of manual transfer orders, the concept of 'storage type search' and authorizations also applies. By not setting up 'storage type search' customizing for those movements, the extra authorization check is always executed. By providing authorization only for specific storage types, users can only move stock between the storage types they control when using these 'manual' movements.


Conclusion:

  1. By restricting access on the IM level (movement type, plant, and storage location), or to other actions that trigger a transfer order, the authorization for the subsequent WM movement is restricted as well. If the user has authorization for the action, the user also has authorization for the subsequent TO, but manipulation of the storage types from which the material is picked or into which it is put away can be restricted to those defined as applicable in the storage type search (WM customizing) and those controlled by the user's authorizations (via roles).
  2. Manual WM movements can be restricted based on movement types, and to those storage types that are controlled by the user's authorizations (via roles).


What it cannot do


Warehouse management:


No authorization check on the storage type is performed when a TO is confirmed. The warehouse is checked, but the storage type is not (object L_LGNUM with DUMMY). This means that anybody with authorization for the warehouse can confirm any TO for that warehouse; there is no way to restrict on storage type during TO confirmation using standard SAP. Because a transfer order needs to have been created before it can be confirmed, and the creation of the TO is controlled, this gap is not crucial for the solution. Also, the storage type cannot be altered during confirmation.


Inventory Management:


In almost all situations, a material document will contain a storage location. There are, however, a few situations where it does not: when a goods receipt is performed and the materials are consumed upon receipt, for instance when a PO has a cost center as account assignment. You must determine whether these situations are relevant and whether this gap matters for your situation. If, for example, goods receipts are always performed by one of the parties, then only that party should have the authorization to do goods receipts. Although this party could potentially do a goods receipt while the PO erroneously contains a storage location which is not 'owned' by them, this is not an issue, as they are responsible for all goods receipts. If multiple parties need to be able to perform goods receipts for different storage locations, you can include an authorization check (e.g., on the storage location in the PO) using BAdI MB_CHECK_LINE_BADI. This is, however, not standard SAP.


How to set it up


Inventory Management:

The easier part is the authorization restriction for Inventory Management. This can be done in four steps:

1) Activate the check on storage location:

Activate the check on object M_MSEG_LGO in customizing (menu path "Materials Management -> Inventory Management and Physical Inventory -> Authorization Management -> Authorization Check for Storage Locations"). See also SAP Knowledge Base Article 1668678.




2) Make storage location an organizational level:

Use program PFCG_ORG_FIELD_CREATE to make the field LGORT an organizational level. See SAP Note 727536.


3) Update SU24 for the relevant transaction codes:

All transactions that create, change, or display IM movements need to be updated so that object M_MSEG_LGO is set to 'proposed = Y' and is therefore populated in PFCG during role maintenance.

4) All roles that contain these transactions need to be updated to contain the M_MSEG_LGO object with the right plants, storage locations, movement types, and activity. It is important to know that the check on M_MSEG_LGO is also performed when a material document is displayed. This means that roles that provide display access to material documents (like MB51) also need to be updated to include the authorizations with activity '03'.


Warehouse management


Setting up the solution for warehouse management is the trickier part and consists of three steps:

1) Set up all necessary storage type search strategies to cover ALL 'allowed' processes:

Stock removal and stock placement storage type search entries have to be set up in WM customizing for all 'allowed' processes for which no additional authorization check on the storage type is wanted.

2) Make sure that the necessary master data (material master data, etc.) is set up correctly, so that the correct storage type search entry can be found and used during 'allowed' processes.

 

3) Update the roles:

 

All roles that contain the object L_LGNUM need to be updated so that they contain the authorization for the storage types belonging to the parties they are intended for. Please note that the object has no activity field, and that some display transactions related to WM also check this object, with DUMMY for the field LGTYP.

 

What to consider during implementation


Please keep in mind the below aspects in order to successfully deploy this solution:

  1. WM storage type search (strategies/sequences): all 'allowed' scenarios must be covered by stock removal and stock placement strategies; otherwise, authorization checks on the storage type will be triggered, which can fail even though the user should be able to perform that step of the process. Considering how many variables are involved, there are many strategies to be maintained. Having the processes clear and involving an SAP WM specialist is essential in order to cover everything needed.
  2. Material master data: in order for SAP to find the correct storage type in the 'storage type search' table, material master data fields like the stock placement and stock removal strategy indicators need to be set correctly. This is crucial for the solution to work. As there are a lot of material master records, this can be quite some work. Most issues after introducing this solution will most probably be caused by incorrect or missing material master WM data.
  3. Training (of key users): especially the WM part of the solution can be complex. Training of (key) users is important in order for them to understand the concept and to find the right solution when goods 'get stuck'.
  4. (Temporary) super role: it can be very useful to (temporarily) have a sort of 'super user' role available that can make transfer orders between storage types handled by different parties (including those for dynamic bins). This can be done by granting this role authorization for all storage types, or by creating a WM movement type that has search strategies for all storage types and granting access to that movement type. By assigning this role to a limited number of key users during the first phase after go-live, a workaround is available when a material movement gets 'stuck', while a real solution (like changes to material master data, WM search strategies, or authorization roles) is being investigated and followed up.

Best Practices for Role Transports in AS ABAP Systems

These are guidelines for role transports. I am trying to compile the different scenarios here (as many as possible); please share comments and additions if you have any.

1. Single role

 

For a single role change, transport in the standard way.

2. Parent and child roles

For parent and child roles, the different scenarios are:


Scenario 1: Addition of a T-code and authorization object

We add a T-code to the parent role and distribute it to all child roles. In this case we create a transport containing the parent and all child roles. (If we include all child roles, the parent role gets added automatically.)

Scenario 2: Change of org levels in a child role

A child role, by design, exists only for org level maintenance, so when we change the org levels of a child role, we can transport only that child role. The child role will pull its parent role into the transport automatically.

Important note: To avoid confusion and misbehavior in the case of a large number of changes to parent and child roles, we should include all child roles in the transport.

 

3. Composite and Single roles

 

For single and composite roles, the different scenarios are:

Scenario 1: Addition of a T-code and authorization object

A T-code or authorization object added to a single role which is part of a composite role can be added to a transport individually.

Scenario 2: Creation of a new single role and adding it to an existing composite role

We have created a new role and added it to a composite role. In this case we need to add both the single role and the composite role to the transport, without checking the option 'Also Transport Single Roles from Composite Roles'.


Scenario 3: Creation of a new composite role and all of its new single roles

We have created a new composite role together with its new single roles. In this case we need to add the composite role to the transport with the option 'Also Transport Single Roles from Composite Roles' checked; this option will include all of its single roles.


Scenario 4: Adding or deleting an existing single role in a composite role

In this case we need to add only the composite role, without checking the checkbox 'Also Transport Single Roles from Composite Roles'.

Scenario 5: Composite roles in a BW system

In a BW system, composite roles always need to be moved without checking the checkbox 'Also Transport Single Roles from Composite Roles'. In BW there are roles which query designers and administrators are allowed to edit directly in production: they add new queries to the role menus on a daily basis, which they do not maintain in Dev and Test, and a composite role transported with 'Also Transport Single Roles from Composite Roles' checked would spoil those roles.

 

 

The goal of this document is to help make users aware of role transports in AS ABAP systems. The recommendations are based on my personal experience as a senior SAP consultant on implementation projects. The suggestions provided here should be supplemented with additional information and may vary per project requirement.

Being a Basis consultant, it was a challenge to take up the SAP APO security role building exercise for an implementation project. I knew how to make roles and edit authorization objects for ECC, but that knowledge was not sufficient to find the authorization objects needed to control SAP APO functions. Functional consultants started explaining to me which controls they needed in their functionalities. Checking the SU22 screens was a difficult process because of my lack of domain knowledge; unfamiliar terms and codes were running through my head. Often the object that I had found with much pain turned out not to be the right one when we tested it, and the functional consultants were not always available for our trial-and-error sessions.

 

I found that the "authorization trace" of ST01 is the best and fastest way to find the right authorization objects. I asked the functional consultants to run the functionalities they wanted to put controls on, and I could watch their user IDs with the trace produced in ST01. But ST01 was too tedious; I needed a better tool to move fast and gain more clarity.

STAUTHTRACE provides neater formatting of the trace than ST01. I switched it on and asked the functional consultants to execute the functionalities they needed. By tracing what the functional consultant was doing, I found the authorization objects checked in each functionality.

 

Example of how to use this function: using STAUTHTRACE to customize SU01 functionality for unlock only.

 

  • Create a sample user ID for the functional consultant in the quality system, and provide a role with the desired functionality. Here, as an example, we use SU01.
  • Switch on the trace for this user in transaction STAUTHTRACE: enter the user ID in the section "Trace options -> Trace for user only" and click the button "Activate Trace" in the upper pane.
  • Then log on as the test user (TEST_TRACE) and execute all the functions in SU01 (against another user, TEST_TRACE2). Here I executed the functions assign profile, reset password, lock, and unlock.
  • After that, display the trace by clicking the display button in the upper pane of STAUTHTRACE. In the results you can see that the authorization object S_USER_GRP is checked, and that the activities were 02 and 05. If you edit these activities in a role which has transaction code SU01 assigned to it, you can use that role to control the activities of users.
  • Make sure to put in a copy of the standard node (S_USER_GRP) and not to edit the standard node; this is the best practice.
  • Select activity 05 to provide access for lock/unlock. Disable the standard node and retain only the manual node of S_USER_GRP.
  • Save, generate the profile, and exit.
  • Execute the user comparison in PFCG for the user.
  • Log on as TEST_TRACE again and execute all the functions in SU01, then check the trace log. Failed authorization checks are displayed in red. If it were a Web Dynpro screen, you would see "Webdynpro" in the column "Type".

 

With this method, you can trace the activity of users for any transaction code. It gives you insight into which authorization objects are checked while the functional consultant executes certain functions, and it helps a team of security and functional consultants easily find the required authorization controls. It is a much easier, more accurate, and faster method than breaking your head over the description of each authorization object in SU22. We completed an SAP APO role building project with this method. Kindly do provide your suggestions and questions.

N.B.: Please note that tracing authorizations for SAP BI is different from STAUTHTRACE; for BI, SAP provides additional tools like RSECADMIN and RSSM. The roles which were created using this method are described in this document: click here.

For the first time, let us try to speak only about defense. This article will be about different guidelines which can help to secure your SAP system. But nothing to worry about: this post will nevertheless remain useful and interesting, even though it contains no information about 0-days and has no words like "cyber" or "weapon" in the title. So, let's go.

This blog post is about a new guideline, or standard, for securing - or testing the security of - SAP implementations, which is going to be the first standard of the EAS-SEC standard series. There were two things that pushed us into developing this guideline and gave a second birth to our project. We had thought about making some kind of guideline from the very beginning, and we finally made it when we got a clear idea of how it should be done and what customers really needed.

 

And the reason we decided to make this…
… is as simple as one, two, three.

One. Questions like "why?" and "what for?" are the alpha and omega of every research effort. For us, as sometimes happens, the answer came from one more question. After implementing our security monitoring suite for SAP in huge enterprises, conducting dozens of PoCs, and completing numerous penetration tests against SAP systems (as well as other business-critical systems), the question we were asked more often than any other was: "Guys, you are awesome! And you are doing a great job so far, finding so many problems in our installations. It's absolutely fantastic, but we don't know where we should start solving them. Could you provide us with the top 10/20/50/100/[put your favorite number here] most critical bugs in every area?"

Two. At the same time, we had to do something completely different from just a top 10 of the most critical bugs, like the one you get by selecting the missing SAP Security Notes with the highest CVSS scores. Even if you implement all those notes, lots of problems can remain: for example, you may have SAP_ALL assigned to every user, or your logs may be disabled, so that the next time you forget to apply SAP Notes it will be easy to hack your system, all because of a non-comprehensive approach. So the number one challenge was to cover all security areas of the SAP platform and, for every area, to select a number of the most critical issues. Our research's first aim was to cover all SAP security areas; being simple to implement was the second.

 

Three. We started to analyze the existing guidelines and standards. Currently there are not many that cover SAP security, and all of them are supported by ERPScan. The guidelines we have are as follows: Secure Configuration of SAP NetWeaver® Application Server Using ABAP, by SAP; ISACA Assurance (ITAF), by ISACA; and DSAG, by the German-speaking SAP User Group. All those standards are great, but unfortunately each of them has at least one big disadvantage. But let's be patient and have a better look at those standards:

 

Secure Configuration of SAP NetWeaver® Application Server Using ABAP


This is the first official SAP guide for the technical security of NetWeaver ABAP in general; before it, only dozens of application-specific guidelines existed. The first version of this guide was published in 2010 and was followed by version 1.2 in 2012. As that is almost 2 years ago, we have to bear in mind that in our fast-changing world some critical things may be missing by now. This guideline was created for a rapid assessment of the most common technical misconfigurations of the platform and consists of 9 areas and 82 checks in total.


Advantages: very brief but quite informative (only 9 pages), and it covers application platform issues. It is applicable to every ABAP-based platform, be it ERP, Solution Manager, or HR; it doesn't matter.

Disadvantages: 82 checks is still a lot for a first brief look at secure configuration. More importantly, the standard doesn't cover access control issues or logging, and even in platform security it misses some things. Finally, it gives people a false sense of security if they cover all the checks, which wouldn't be completely justified.


ISACA Assurance (ITAF)

This is probably the first guideline for SAP security, made by the ISACA consortium. Three versions were published: in 2002, 2006, and finally 2009. This means that 5 years have passed since the last release, and many areas are outdated now. In general, the checks cover the configuration and access control areas; application platform security is covered less thoroughly than access control and misses some critical areas. The guideline consists of 4 parts and about 160 checks in total.


Advantages: detailed coverage of access control checks.


Disadvantages: outdated. The technical part is lacking. The guideline consists of too many checks and cannot easily be applied by a non-SAP specialist, nor can it be applied to any system without prior understanding of the business processes. And finally, this guideline is officially available only as part of a book, or you need to be at least an ISACA member to get it.



DSAG (Deutschsprachige SAP-Anwendergruppe)

A set of recommendations from the German-speaking SAP User Group. The checks cover all security areas, from technical configuration and source code to access control and management procedures. It is currently the biggest guideline on SAP security. The last version was released in January 2013. It consists of 8 areas and 200+ checks.


Advantages: ideal as a final step for securing SAP. Great for SAP security administrators; covers almost all possible areas.

Disadvantages: unfortunately, it has the same problem as ISACA. It is too big for a starter and no help at all for security people who are not familiar with SAP. It also can't be directly applied to every system without prior understanding of the business processes. Many checks are recommendations, and users have to decide for themselves whether they are applicable in each case.

[Fig. 1: How SAP security looks with the three existing guidelines]


What goes around comes around

So, we didn't want to make just another security guideline, but we also saw that all of the current approaches miss something.

Finally, we understood that there is a real need for a new guideline, and fortunately we now knew what to do to make it not merely good, but perfect. The existing ones all miss one general thing: they are big on the one hand, yet they still don't cover everything while pretending to do so, which ultimately gives people a false sense of security when they complete all the checks.

The authors' effort was to make this list as brief as possible while still covering the most critical threats for each area. This approach is the main objective of this guide: unlike the best practices by SAP, ISACA, and DSAG, our intention was not to create just another list of issues with no explanation of why a particular issue was (or was not) included in the final list, but to prepare a document that can easily be used not only by SAP security experts but by every security specialist who wants to check whether his SAP system is secure, while still providing comprehensive coverage of all critical areas of SAP security.

At the same time, developing the most complete guide possible would be a never-ending story: at the time of writing, we had more than 7000 checks of security configuration settings for the SAP platform.

We needed a guideline consisting of few, but well-selected, checks and, more importantly, including further steps, so that everybody would know that by implementing the standard they had done just a part of the job - a really critical part, but not everything. So we are talking about the 80/20 rule, and we implement it in SAP security.


Result

This guide is the result of more than 7 years of experience in security assessments of enterprise business applications of different types from different vendors (including, of course, SAP, Oracle, Microsoft, and IBM, but also industry-specific systems like Retalix for retail, MES/SCADA systems for oil and gas, and ABS systems in banking). Our broadly experienced pentest and research team, known for publishing 450+ advisories on different products and for participating in 50+ events on every continent, collected information about the most critical vulnerabilities and misconfigurations to understand the most critical areas. Our auditors, who in previous work were responsible for certifications like ISO, PCI DSS, PA-DSS, SOX, and NIST, analyzed those business applications from a compliance and risk point of view. In the end, we arrived at 9 critical areas which are essential for the security of every enterprise business application and which are sorted by priority (based on a mix of criticality, probability, popularity, and the data needed to conduct an attack).

After that, we picked the most critical vulnerabilities and misconfigurations of SAP NetWeaver ABAP-based applications in each of those 9 areas, and finally arrived at the 33 most critical checks.

 

Those are the major checks that must be implemented first and that can be applied to any system regardless of its type, settings, and custom parameters. It is also important that these checks are equally applicable to production systems and to test and development systems alike.

In addition to the major all-purpose checks, each of the 9 critical areas contains a subsection called "Further steps". This subsection gives major guidelines and instructions on what should be done in the second and third place, and then on how to further securely configure each particular item. The recommended guidelines are not always mandatory and sometimes depend on the specific SAP solution.

 

 

[Fig. 2: How SAP security looks with EAS-SEC]

Wrap-up

On the one hand, with this approach the authors were able to highlight the key security parameters for a quick assessment of any SAP solution based on the NetWeaver ABAP platform (from ERP to Solution Manager or an industry solution); on the other hand, they cover all potential problems and give complete recommendations on them.

In terms of quality, this makes the present guide different from the SAP best practices, which also contain few items but do not cover the overall picture, as well as from the best practices by ISACA and DSAG, which have a lot of items but unclear priorities and are too complicated as a first step. These papers are still highly valuable and absolutely necessary as next steps, and they are mentioned in the "Further steps" areas.

And finally, you are ready to use the guideline itself (click here), made with the help of the overwhelming experience of the ERPScan research team.

Read, learn, stay secure!

You would not be surprised to hear that another retailer has been hacked and that information about many customers has been compromised. We hear this kind of news several times per year, sensationalized in the media. As these incidents occur, there are weeks of investigation to analyze the loss. The team develops an action plan and implements stronger barriers to stop the issue from becoming a frequent event.

 

According to the 2014 Verizon Data Breach Investigations Report, two out of three data breaches are related to stolen or misused credentials. There are many ways to obtain these credentials, both inside and outside your fortress. Some companies educate employees about phishing to reduce the likelihood of a response. You may also need to send periodic reminders about effective password protection. To reduce password hacking success rates, you may also need to increase password strength. Many companies determine the combination of security parameters that will reduce hacking attempts; they implement the values and walk away, hoping to remain out of the news.
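A quick back-of-the-envelope calculation shows why length and character variety are the parameters that matter; the guessing rate below is an invented round number for illustration:

```python
# Back-of-the-envelope brute-force cost of a password policy.
# The guessing rate is an invented round number for illustration.
GUESSES_PER_SECOND = 1e9

def years_to_exhaust(alphabet_size, length):
    keyspace = alphabet_size ** length            # all possible passwords
    return keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)

print(f"8 lowercase letters:    {years_to_exhaust(26, 8):12.6f} years")
print(f"8 mixed case + digits:  {years_to_exhaust(62, 8):12.6f} years")
print(f"12 mixed case + digits: {years_to_exhaust(62, 12):12.1f} years")
```

Even a modest increase in minimum length moves the search space from minutes to millennia, which is why length requirements usually buy more than complexity rules alone.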

 

With firewalls, secure access zones, and strong passwords in place, you may lose sight of the fact that the inside man is estimated to perform over 50% of all data thefts. Do you know if you have a traitor within your walls? A Director of National Sales for your company downloads your annual sales revenue by customer. The following week, the Director downloads your customer master data with contact information. For the Director of Sales, these may be normal business activities when performing data analysis. However, when the Director turns in his leave notice a week later, these may have been the traits of the inside man. Does anyone know what was downloaded? This may have been a Director performing normal business activities, or it could be an undetected data loss by someone who planned their exit.

 

When you think of the inside man, there are many ways we give them access to data.  I recently wrote a blog discussing the authorizations that protect direct table access: http://scn.sap.com/community/security/blog/2014/05/01/reduce-the-risk-of-sap-direct-table-access.  We also assign users access to data through many reports and programs.  These transactions are part of the users' approved role assignments.  It is very important that this review process is not a rubber-stamp exercise to show management approvals.  Requests need to be reviewed for reasonableness as well as proper separation of duties.  Many times users are approved for access to a process simply because they are company employees.  The process should ensure that the access is appropriate for the user's position.  We state in policies that access is assigned on a need-to-know basis, but do you effectively implement a least-privilege assignment model?  Some estimates put data loss by authorized users at two thirds of all incidents.  Users need access to transactions to perform business processes, but this may also enable them to export data into files or spreadsheets at will.

 

Custom programs are your next hurdle when you are protecting data.  It starts with assigning developer keys to developers only.  Once development is complete you need an EFFECTIVE code review before transporting changes.  This is more than a syntax check.  How much time does it take for your experienced developer to review the detailed technical specifications and confirm that what was documented was the only solution delivered?  For small changes this may be quick and inexpensive.  But do they know how to identify cross-site scripting or other vulnerabilities in web-enabled applications?  Are you removing them from productive work to perform manual analysis?  Depending on your environment these code reviews may be ineffective without tools to aid in the review.  This is not about endorsing one tool over another but about making you aware that ineffective code reviews open holes for data breaches.

 

Many times hackers expose flaws in commercial software to access data.  If all of these holes were closed, would you still have vulnerabilities?  Most companies take advantage of enhancement spots within SAP programs or even implement custom code accessing standard SAP tables.  Knowing who has access to data through a business transaction or even direct table access may be easy to answer.  Knowing who has access through a custom program will be more difficult.  How these applications were developed and how they protect access requires an effective code review process.  This may even require training for your code reviewers.  A strong technical review of custom code changes is just as important as the user access review.  Without a review the developer could be the inside man.  Through the use of user exits or function modules that build files, a harmless-looking program may be downloading data under the radar.  Even one-time conversion programs that remain in your production environment can create risk.  You really need to know what tables your custom applications are using.

 

If you have restricted access to tables and transactions and performed an effective code review, is your fortress protected from data loss?  Even after restricting each of the methods of data access discussed above, there is no magic answer that prevents all data loss.  Recently I was made aware of an application used to track data exports from an SAP environment.  I will be installing this as a pilot to see how much data is being downloaded by approved users.  Some people call it the onion approach, since there are many layers of security that must be addressed.  You do need to protect direct table access.  You must restrict access to sensitive transactions.  You need to implement strong security parameters.  You even need an effective code review.  When sensitive custom applications become obsolete they should be removed to reduce the risk of data loss as well.  With these obstacles addressed you still need to monitor the insider threat and understand what data is actually being exported from SAP.  I will report back once my pilot testing is complete.

This blog is written in an effort to raise more awareness of securing your SAP infrastructure, in this case specifically on the topic of securing your SAP password hashes. I will try to avoid being too technical... if I fail, sorry in advance

 

As recently announced, there is a new version of oclHashcat, version 1.20, that now supports password cracking for SAP code versions B and F/G. See the release notes.

 

 

Say what? And why should I care?

 

For the less technical people amongst us: oclHashcat is an advanced password cracking tool that cracks passwords using GPUs (graphics cards). This allows you to crack passwords (SAP passwords included) relatively fast.

 

You should care because it is now possible to crack your SAP passwords very fast, specifically the ones hashed with code version B or F/G. This may allow intruders to gain access to your SAP systems.

 

 

So what about this code version B and F/G thing?

 

SAP passwords are stored in the USR02 table in the database of your SAP system. They are not stored in clear text but in a hashed format, so they cannot be read by just anyone with direct access to the database tables.

 

This hash can be generated by different algorithms (in SAP called code versions). See the weblog of Daniel Berlin for a good overview. The most recent algorithm is "I"; older versions are, for example, H, G, F, E, D, B and A. They all have their characteristics, but in general one can say: the earlier the letter in the alphabet, the weaker the algorithm and the easier (and faster) it is to crack the hashes it creates.

 

 

How does this cracking work?

 

There are many, many ways to crack passwords, but for now we will focus on offline brute forcing. In simple words: generating password hashes for all possible passwords and comparing them to the hashes extracted from SAP to see if there is a match. Good to note is that this happens OUTSIDE the SAP system, so no direct connection to an SAP system is needed.

 

As said before, password cracking is an art in itself and there is a whole world to discover. I will not go into detail as there are tons of sources on the internet on password cracking. Some examples here, here or here.

 

Important to mention is that, with tools supporting GPUs, the processing power is MUCH higher than it used to be with traditional CPUs. Therefore the TIME needed to crack passwords has dropped dramatically, and the RISK that your SAP passwords get compromised in a reasonably short time is HIGHER than it was before.

 

 

A practical example:

 

In my test lab I have set up a simple test environment on a rather standard desktop with a GPU card I had spare (used for bitcoin mining back when that was still profitable with GPUs). This is a rather old card I bought two years ago for around 200 Euros, but they can be purchased second hand for much less nowadays.

 

The installation of oclHashcat is simple: install the needed GPU drivers, download oclHashcat and you are ready to go. See the website for more information.

 

To demonstrate the tooling, I did two runs, one for the Bcode hashes and one for the Gcode hashes. As input I used a download from the USR02 table. The input file needs to have the hashes in the following format:

 

TEST$234CEE8774C3084D

 

Here the username is TEST, followed by a $-sign and then the hash itself. In this example I used a file with several users and hashes, one per line.
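If your USR02 download already sits in a desktop database, a small query can produce this format directly. This is only a sketch and assumes the column names match the SAP originals (BNAME for the user name, BCODE for the code version B hash; PASSCODE holds the F/G hash):

-- build the oclHashcat input for Bcode (-m 7700); select "PASSCODE" instead for Gcode (-m 7800)
select "BNAME" || '$' || "BCODE"
  from "USR02"
 where "BCODE" is not null and "BCODE" <> '';

To run the tool in brute force mode you can use this command: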

 

# oclHashcat64.exe -m 7700 -a 3 input_bcode.txt

 

Where “-m 7700” stands for the Bcode algorithm (Gcode=7800), “-a 3” means a brute force attack and “input_bcode.txt” is the file with Bcode hashes. The current processing is displayed on the screen:

 

[Screenshot: bcode.png, oclHashcat progress for the Bcode run]
In the screenshot it can be seen that the tool is currently brute forcing passwords with a length of 7 characters; passwords of 1-6 characters have already been tried, which in this example took only several minutes. Depending on the speed of your GPU, the total range of all possible passwords for the Bcode algorithm can be brute forced in less than a week. Important to notice is that this only brute forces the first 8 characters (all the Bcode hash covers), so if a password is 10 characters long you will miss the last 2 characters.

 

After only 24 hours of brute forcing I had retrieved more than 75% of the passwords, many of them 8 characters long. The remaining part would have taken several more days, but as I had already retrieved many passwords this was not necessary.

 

All retrieved Bcode passwords are upper case, as there is no case sensitivity in this algorithm. However, with the use of password rules and the Gcode password hashes you can quite easily brute force the remaining characters and the case-sensitive part.

 

Another thing to mention is the parameter login/password_downwards_compatibility. Depending on the value of this parameter, an attacker can use the brute-forced Bcode password to log on, thanks to backwards compatibility.

 

After brute forcing the Bcode hashes, I made a similar attempt on the Gcode hashes from my USR02 table. This is a slower algorithm, as can be seen from the SPEED of ~2100 kH per second (a factor of 6 slower than the Bcode algorithm):

 

[Screenshot: gcode.png, oclHashcat progress for the Gcode run]

 

Brute forcing Gcode hashes is slower than the Bcode equivalent, so if you have access to both hashes a hybrid approach is more efficient: first brute force the (upper-cased) first 8 characters via the Bcode hashes, then recover the case-sensitive part and any remaining characters via the Gcode hashes.
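To illustrate the hybrid idea with concrete commands (a sketch only: the file names are made up, and you should check your oclHashcat version for the exact options and bundled rule files), you could first try case variants of the cracked Bcode passwords against the Gcode hashes:

# oclHashcat64.exe -m 7800 -a 0 input_gcode.txt cracked_bcode.txt -r rules/toggles1.rule

This runs the cracked (upper-case) passwords as a dictionary with case-toggling rules. For passwords longer than 8 characters, the hybrid wordlist+mask mode appends characters to each cracked password, here two of them:

# oclHashcat64.exe -m 7800 -a 6 input_gcode.txt cracked_bcode.txt ?a?a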

 

 

Ok, great stuff, but how do I protect myself against this?

 

Some counter measures that can be taken are:

 

  • Regularly attempt to brute force the password hashes of your own users to test how strong they are (you will probably need approval for this!)
  • When making use of Single Sign-On you can probably delete the password hashes; delete them from tables USR02 and USH02.
  • Set parameter login/password_downwards_compatibility = 0 (this might break communication with systems older than 7.0, so check carefully); see the example profile values below
  • Use a recent password hashing algorithm, see parameter login/password_hash_algorithm
  • Delete old hashes, see ABAP report CLEANUP_PASSWORD_HASH_VALUES
  • Choose strong passwords and enforce them via the password policy parameters
  • Limit access to tables with password hashes, such as USR02 and USH02
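As a minimal sketch, the relevant profile settings could look like this (the login/password_hash_algorithm value shown is the documented default format of the newer iterated, salted algorithm; the minimum length is just an example value, so pick what matches your own policy):

login/password_downwards_compatibility = 0
login/password_hash_algorithm = encoding=RFC2307, algorithm=iSSHA-1, iterations=1024, saltsize=96
login/min_password_lng = 12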

 



For over 15 years I have worked for multiple companies that run their business on SAP software.  In each of these companies, requirements dictated that additional custom tables be created to extend SAP functionality for reports or custom processes.  Many of these companies also convert transportable tables into tables that are directly maintained in production.  Whether a table was delivered by SAP or custom built, the authorization group has been the classic method of controlling access to tables in SAP.  However, many application developers never define an authorization group for their tables and accept the default value &NC& (the "not classified" authorization group).  Even SAP leaves many tables in the authorization group &NC&, as they may never have been designed for direct table maintenance in a production environment.

 

When functional requirements direct a security architect to provide SE16/SE17, SM30/SM31 or other direct table access (SM34) to the authorization group &NC&, what is the risk?  In most ECC environments you will find more than 10,000 tables assigned to the authorization group &NC&.  Providing a user access to table maintenance transactions plus access to the group through authorization object S_TABU_DIS creates excessive risk to potentially sensitive data.
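If you want to measure your own exposure: the table-to-authorization-group mapping lives in table TDDAT (field CCLASS), which you can check via SE16 or, if you have exported it, with a quick query (a sketch, assuming the SAP column names):

select count(*)
  from "TDDAT"
 where "CCLASS" = '&NC&';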

 

Prior to the authorization object S_TABU_NAM, one of the methods to reduce risk was to create a parameter transaction that skips the initial screen.  This allows access to the intended table while preventing access to other tables within the authorization group.  However, if a user receives table maintenance transactions from another role, the parameter transaction protection becomes useless: the user will have access to every table in the assigned authorization groups, based only on the restrictions in the S_TABU_DIS authorization object.

 

What can you do to reduce the risk of direct table access?  The easiest method may be to use SE54 and change the authorization group to a different value.  But what value do you use?  Do you use an existing group or create a new one?  What are the downstream dependencies?  If a user needs access, roles and transaction tables will need to be updated.  Within programs there may even be required logic changes where an authority check is performed.  You move from the data risk of direct table access to potentially unknown side effects within SAP.

 

I believe there are several right answers to this question.  First, if you have custom tables that require direct table maintenance in production, the developer should assign a proper authorization group.  This reduces the exposure from more than 10,000 tables to just the tables within the related group.  I also believe you should never leave tables in the authorization group &NC& when they are maintained outside of a program, since that requires assigning S_TABU_DIS to the user.  With enhancements to the VIEW_AUTHORITY_CHECK function module there is now a hierarchy of authorization checks: if the initial check of S_TABU_DIS against the table's authorization group fails, SAP then checks authorization object S_TABU_NAM for access to the specific table.  With S_TABU_NAM you can eliminate the S_TABU_DIS authorization group assignment in many cases, reducing the risk from all tables within a group to a single table.

 

This object alone does not solve all issues of direct table access, but it is a GIANT step in the right direction.  SAP is even committed to gradually moving authority checks to S_TABU_NAM as maintenance occurs.  I have been working with SAP Global Support to correct transactions that were defined incorrectly: rather than implement S_TABU_DIS in the SU22 values, SAP has delivered the correct values through S_TABU_NAM.  If you want more details on this subject, check out SAP Notes 1434284 and 1481950.

For the two people that have not heard of the OpenSSL Heartbleed bug yet, let me start with a short explanation (taken from Heartbleed Bug):

 

"The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).


The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users."

 

 

So, is that a serious issue? Hell yeah! To quote Bruce Schneier: "'Catastrophic' is the right word. On the scale of 1 to 10, this is an 11."

 

You will see lots of people recommending that you change your passwords on all https:// sites. While that is generally something you want to do now and then (in my case that would probably require me to take a week off...), _right now_ is probably not the time (yet).

 

Let me explain:

 

  • If one of the sites you have an account on is affected by the issue, data from the site may have leaked, including session data, cookies or your password (although in any individual case that is highly unlikely). Also, depending on how their landscape has been set up, their SSL keys may have leaked.
  • This means that it _might_ be the case that an attacker has the SSL keys and can use them to decrypt the communication and sniff your new password, too. In order to fix that, the site has to request & install a new SSL server certificate _and_ declare the old one invalid by revoking it.
  • Unfortunately your browser will ignore that revocation by default, which is why you should check the settings as described in this blog: http://www.macobserver.com/tmo/article/dealing-with-heartbleed-what-you-need-to-know/P5
  • The last step is to wait for the site operator to either notify you, or to check on the web site that they have done the first two things (patched OpenSSL & renewed the SSL certificates). Only then can the site be considered secure again!

 

While you're at it, it's probably also a good idea to renew any OAuth authorizations you may have given on those sites (like allowing your blog to automatically post to Twitter).

 

This is going to be a loooong painful process for everyone. But there's no point running in a blind panic now. There's a lesson in there for everyone, I guess.


Edited on 2014-04-11 to address some of the comments:


There are two main messages I want to get across:


  • If you're changing your password before the site is fixed it won't hurt. HOWEVER, you will not be safe until it _has_ been fixed, and you will have to change it again then.


  • Just as important: STOP RE-USING PASSWORDS ON OTHER SITES!

 

Other recommended blogs to read while you're waiting:

 

  • Creativity is bad.
  • On Passwords

 


Today I was reviewing some events generated for the Security Audit Log and noticed an interesting behavior.

 

For those who are not familiar with it, the Security Audit Log (SAL) allows SAP security administrators to keep track (via a log) of the activities performed in their SAP systems. In a future post we will discuss how to enable and configure this logging.

 

By default the SAL facility logs a "Terminal Name", which is either the terminal name transmitted by the computer that performed the logged action, or the IP address of that computer. The IP address is only logged if the source computer does not transmit a terminal name with its communications.

 

This behavior can be abused by an attacker, since filling in the terminal name value in an RFC call is a task performed by the caller (the user's machine). Having the ability to manipulate the "Terminal Name" means the attacker could run attacks such as brute-force attempts while making each transaction appear to come from a different terminal. Taken even further, the attacker could set an IP address (or cycle through a set of IP addresses) as the terminal name, so each request would appear to have originated from those IP addresses: in the logs it is not possible to distinguish an IP address logged because no terminal name was transmitted from an IP address logged as the terminal name.

 

 

Remediation

To fix this problem it is possible to set the profile parameter "rsau/ip_only" to 1. In this scenario, whenever possible, the source IP address of the event is logged and the terminal name value is ignored. This change must be made in the profile file; it cannot be done using transaction RZ11.
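For example, the corresponding line in the instance profile would simply be (profile path and name depend on your system):

rsau/ip_only = 1

As with any static profile parameter, the change becomes active after an instance restart.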

For more information check SAP Note 1497445.

Here is the situation:

 

  • You come to a new customer.
  • You don't want to change anything in already existing systems.
  • You don't want to depend on anything but your own scripts and tools.
  • You want to get an overview of the security settings quickly.
  • ...or you are simply curious...

 

SUIM is a powerful and flexible tool to determine the effective authorizations a user has. It has its quirks, and if you don't know about them you may come to the wrong conclusions. More importantly, if you have an audit program with hundreds of checks, executing SUIM manually is not feasible.

 

It would be great if you could run a simple SQL statement to determine which users are authorized to perform a certain activity. However, this is not as easy as it sounds. What we would need is a table that contains the username, its assigned authorization objects and their values. If we had this, we could easily retrieve all authorizations assigned to a user and look for the critical ones.

 

Unfortunately such a table doesn't exist (except for the buffer tables, but depending on the system configuration the buffers may not be up to date and get rebuilt as soon as a user logs on). Let's see how we get there. These are the relevant tables:

 

  • USR02: Contains the user logon information, including passwords, lock status, validity dates and so on.
  • UST10S: Describes the single profiles and the authorization objects they contain.
  • UST10C: Describes the composite profiles and which single profiles they contain.
  • UST04: Connects users with their profiles. It can refer to single profiles or contain the names of composite profiles.
  • UST12: Contains the actual values of the authorization fields.

 

And this is how they are related to each other:

 

[Figure: table-relationships.png, how USR02, UST04, UST10C, UST10S and UST12 relate to each other]

 

Let's download the relevant tables into your favorite desktop database. There are tons of options for how to accomplish this, for example: Import tables directly into Access from SAP using RFCs or RFC_READ_TABLE data into MS Access (along with the table structure).

 

Let's start our process by creating a table, denormalized, that contains the client, username, profile name, authorization object and authorization name as stored in the user master record. The SQL statements may need to be adapted for your platform.
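A minimal sketch of the target table; the column lengths are assumptions based on the corresponding SAP fields, so adjust them to your export:

create table denormalized (
  "MANDT" varchar(3),   -- client
  "BNAME" varchar(12),  -- user name
  "PROFN" varchar(12),  -- profile name
  "OBJCT" varchar(10),  -- authorization object
  "AUTH"  varchar(12)   -- authorization name
);

With the table in place, we first add the single profiles that are directly assigned to users: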

 

insert into denormalized ("MANDT", "BNAME", "PROFN", "OBJCT", "AUTH")
select b."MANDT", b."BNAME", a."PROFN", a."OBJCT", a."AUTH"
  from "UST10S" a,
       "UST04" b
 where b."MANDT" = a."MANDT"
   and b."PROFILE" = a."PROFN"
   and a."AKTPS" = 'A';

 

This fills our new table with the values we need. The next step is to resolve the composite profiles into single profiles and add these values as well:

 

insert into denormalized ("MANDT", "BNAME", "PROFN", "OBJCT", "AUTH")
select a."MANDT", a."BNAME", c."PROFN", c."OBJCT", c."AUTH"
  from "UST10S" c,
       "UST10C" b,
       "UST04" a
where a."MANDT" = b."MANDT"
  and a."MANDT" = c."MANDT"
  and a."PROFILE" = b."PROFN"
  and b."SUBPROF" = c."PROFN"
  and c."AKTPS" = 'A'
  and b."AKTPS" = 'A';

 

Composite profiles in UST10C may refer to other composite profiles; in that case the field SUBPROF contains another composite profile instead of a single profile. So we need to add an additional level:

 

insert into denormalized ("MANDT", "BNAME", "PROFN", "OBJCT", "AUTH")
select a."MANDT", a."BNAME", c."PROFN", c."OBJCT", c."AUTH"
  from "UST10S" c,
       "UST10C" b,
       "UST10C" d,
       "UST04" a
where a."MANDT" = b."MANDT"
   and a."MANDT" = c."MANDT"
   and a."MANDT" = d."MANDT"
   and a."PROFILE" = b."PROFN"
   and b."SUBPROF" = d."PROFN"
   and d."SUBPROF" = c."PROFN"
   and c."AKTPS" = 'A'
   and b."AKTPS" = 'A'
   and d."AKTPS" = 'A';

 

We need to proceed and add additional levels like this until no further records can be found.
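Alternatively, if your desktop database supports recursive common table expressions (SQLite and PostgreSQL do, for example), the whole resolution can be done in one statement. The following sketch replaces all of the INSERTs above, so run it against an empty table; using UNION instead of UNION ALL also keeps a cyclic composite-profile definition from looping forever:

insert into denormalized ("MANDT", "BNAME", "PROFN", "OBJCT", "AUTH")
with recursive prof ("MANDT", "BNAME", "PROFN") as (
    -- level 0: profiles assigned directly to the user
    select u."MANDT", u."BNAME", u."PROFILE"
      from "UST04" u
    union
    -- level n+1: resolve composite profiles into their sub-profiles
    select p."MANDT", p."BNAME", c."SUBPROF"
      from prof p
      join "UST10C" c
        on c."MANDT" = p."MANDT"
       and c."PROFN" = p."PROFN"
       and c."AKTPS" = 'A'
)
select p."MANDT", p."BNAME", s."PROFN", s."OBJCT", s."AUTH"
  from prof p
  join "UST10S" s
    on s."MANDT" = p."MANDT"
   and s."PROFN" = p."PROFN"
   and s."AKTPS" = 'A';

As a next step we need to map the records in our table denormalized to the actual field values in table UST12. A view would be the easiest and fastest: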

 

CREATE OR REPLACE VIEW "V_USR_UST12" as
    SELECT denormalized."BNAME",
           "UST12"."AKTPS",
           denormalized."OBJCT",
           denormalized."AUTH",
           "UST12"."FIELD",
           "UST12"."VON",
           "UST12"."BIS"
      FROM denormalized INNER JOIN "UST12"
        ON denormalized."MANDT" = "UST12"."MANDT"
       AND denormalized."OBJCT" = "UST12"."OBJCT"
       AND denormalized."AUTH" = "UST12"."AUTH";

 

And there it is: the view that allows us to easily retrieve the users that have a certain combination of authorization objects and values using SQL. This statement shows the usernames with access to transaction SE16:

 

SELECT *
  from "V_USR_UST12"
 where "OBJCT" = 'S_TCODE'
   and "FIELD" = 'TCD'
   and "VON" = 'SE16';

 

However, nothing is ever as easy as it seems... Let's say we have a user who is allowed to execute any transaction and thus has a "*" in the VON field. Our simple SQL statement from above will not return that user.
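One possible sketch of a wildcard-aware query: it catches "*" and trailing-asterisk patterns via LIKE and also checks VON/BIS intervals. The exact string functions, and whether LIKE is case sensitive, vary per database:

SELECT *
  from "V_USR_UST12"
 where "OBJCT" = 'S_TCODE'
   and "FIELD" = 'TCD'
   and (   'SE16' like replace("VON", '*', '%')              -- exact value, '*' and patterns like 'SE1*'
        or ("BIS" <> '' and 'SE16' between "VON" and "BIS")  -- VON-BIS intervals
       );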

 

Another challenge is querying an authorization object that consists of multiple fields, for example S_DEVELOP with ACTVT = 02 (change) and OBJTYPE = DEBUG.
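Since both field values must belong to the same authorization, the view can be joined to itself on the user, the object and the authorization name. A sketch for the S_DEVELOP example (wildcards handled only crudely here; in a multi-client export you would also want MANDT in the view and in the join):

SELECT distinct a."BNAME"
  from "V_USR_UST12" a
  join "V_USR_UST12" b
    on a."BNAME" = b."BNAME"
   and a."OBJCT" = b."OBJCT"
   and a."AUTH"  = b."AUTH"
 where a."OBJCT" = 'S_DEVELOP'
   and a."FIELD" = 'ACTVT'   and (a."VON" = '02'    or a."VON" = '*')
   and b."FIELD" = 'OBJTYPE' and (b."VON" = 'DEBUG' or b."VON" = '*');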

 

To get around these issues you need to get creative with your SQL statements. It's not too hard, and the sketches above are only a starting point; I don't want to rob you of all the fun of figuring the rest out

 

You may want to join the results to other USR* tables, for example to select only unlocked users or to retrieve first names, last names, departments, office locations and so on.

 
