
Java Development

The purpose of this document is to provide best practices for writing useful logging and tracing messages.

1.    Definitions

1.1  Logs

Logs target administrators who encounter a problem. Log messages should therefore describe the problem in an easily understandable form; avoid internal terms and abbreviations. Examples:
Checking component versions
The network connection has been broken
Use com.sap.tc.logging.Category to log information, warnings, or errors to an administrator of a customer system.

1.2  Trace Messages

Traces are meant for developers or support engineers trying to identify a problem, so they should be as detailed as possible. Additional trace statements have no performance impact when switched off, since traces can be enabled or disabled by severity. The goal is to provide nearly the same amount of information that could be collected with a debugger. Example:
New change request cannot be created when there is one with status CR_CREATED, CR_INPROCESS, CR_ACCEPTED for the same sales order id and item id
Use com.sap.tc.logging.Location to emit trace messages.

2.    Implementation

2.1  Logging

In order to perform logging, the following steps should be followed:
1: Retrieve the Category object for logging:
private static final Category category = Category.getCategory(Category.APPLICATIONS, "<APPLICATION NAME>");
For convenience and to avoid unnecessary object creation, the following static variables are defined:
  • com.sap.tc.logging.Category.SYS_DATABASE
  • com.sap.tc.logging.Category.SYS_NETWORK
  • com.sap.tc.logging.Category.SYS_SERVER
  • com.sap.tc.logging.Category.SYS_SECURITY
  • com.sap.tc.logging.Category.APPLICATIONS
  • com.sap.tc.logging.Category.SYS_USER_INTERFACE
Security-related logging should be performed using Category.APPS_COMMON_SECURITY.
2: Log the message using one of the available methods according to the severity (e.g. infoT, debugT, warningT, errorT, fatalT):
category.infoT(location, method, "User {0} logged in", new Object[] { user });

2.2  Tracing

Proper tracing should be implemented as follows:
1. Create a reference to a location object for tracing like this
private static final Location location = Location.getLocation(<class name>.class);

String method = "<method name>";
Create a String variable containing the method name in order to enable filtering for the logs of a given method in the log viewer. This must be the first statement in every method, and the second in constructors. Do not set the method name multiple times within one method. If there are multiple methods with the same name, distinguish them by their parameter lists, like this:
String method = "<method name> (String)";

String method = "<method name> (String, String)";
2. Output the trace message using one of the available methods according to the severity (e.g. infoT, debugT, pathT, warningT, errorT):
location.infoT(method, "User {0} started a new transaction", new Object[] { user });
For performance reasons, parameters should be passed in an object array. Elements of the array are inserted into the text as follows:
"This is an example text. Parameter 1 is {0}, the next parameter is {1}, the last parameter is {2}", new Object[] { variable1, variable2, variable3 }
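The {n} placeholders behave like java.text.MessageFormat patterns. As a small self-contained demo (using plain MessageFormat rather than the SAP logging API, with invented example values):

```java
import java.text.MessageFormat;

public class PlaceholderDemo {
    // Resolves the {n} placeholders against the object array by index.
    static String format(Object[] params) {
        return MessageFormat.format(
                "Parameter 1 is {0}, the next parameter is {1}, the last parameter is {2}",
                params);
    }

    public static void main(String[] args) {
        System.out.println(format(new Object[] { "A", "B", "C" }));
        // prints: Parameter 1 is A, the next parameter is B, the last parameter is C
    }
}
```

The string concatenation is deferred to the logging framework, which only performs it when the message is actually emitted.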
3. Trace the entry and the exit of the method in nontrivial cases. For methods throwing exceptions, place the exit call in the finally {...} clause:
try {
    ...
} finally {
    location.exiting(method);
}
The entering/exiting pair should be used for all methods with a main business role and for important utility methods as well. Methods with limited functionality, or with only one or two log entries, can be skipped.
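The pattern above uses the SAP Location API. As a self-contained illustration, here is the same try/finally idea with java.util.logging, whose Logger also provides entering and exiting methods (class, method, and values are invented for the example):

```java
import java.util.logging.Logger;

public class TraceDemo {
    private static final Logger LOG = Logger.getLogger(TraceDemo.class.getName());

    static int riskyOperation(int input) {
        String method = "riskyOperation";
        LOG.entering(TraceDemo.class.getName(), method);
        try {
            if (input < 0) {
                throw new IllegalArgumentException("negative input: " + input);
            }
            return input * 2;
        } finally {
            // Placed in finally so the exit trace is written even when
            // the method leaves via an exception.
            LOG.exiting(TraceDemo.class.getName(), method);
        }
    }

    public static void main(String[] args) {
        System.out.println(riskyOperation(21)); // prints 42
    }
}
```

With the exit call in finally, a filtered trace always shows matching entry/exit pairs, which makes broken call chains easy to spot.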

4. Exceptions should be traced using the traceThrowableT method of the Location class:
try {
    ...
} catch (Exception e) {
    location.traceThrowableT(Severity.ERROR, method, "Error occurred while calling business operation with parameter {0}", new Object[] { variable }, e);
}

Usually only the Error level is enabled in productive systems. That is why the message should contain the most important values that were being processed when the error occurred.
If the current class cannot handle the exception and rethrows it, the exception should be logged at every level of the call chain. Otherwise, when filtering for a given class in the Log Viewer application, the exception would not be visible unless it was logged at that level.
If the exception is related to the configuration or availability of a system component, or of another system in the landscape, it should be reported to a category instead of a location, because such a problem can be solved by a system administrator.

5. Input Parameter Checks

If the given method checks its input parameters, the result can be reported in a warning message:
if (inputParameterOrderId > 5) {
    location.warningT(method, "Wrong input parameter Order Id: {0}", new Object[] { inputParameterOrderId });
}



6. UI Navigation

If you navigate between pages in the Web Dynpro UI layer, you can log it with Path severity:
location.pathT(method, "Navigate to Clientselection component");
wdThis.wdFireEventNavigationEvent(EventSources.ES_CLIENTSELECTION, eventId);
If another layer, application, or system is called from the code, it should be logged at Path level as well (web service calls, JCo calls, etc.). This way the connections between layers and components can be investigated. If the called component is in the same software component, Info level should be used.
7. Business Code
Developers should decide which steps in the code are important, mainly for developers and support engineers. In some cases the messages can be important for administrators as well.
location.infoT(method, "Get identTypeDropDownKey from context");


ISimpleTypeModifiable myType = wdThis.wdGetAPI().getContext().getModifiableTypeOf("identTypeDropDownKey");


IModifiableSimpleValueSet values = myType.getSVServices().getModifiableSimpleValueSet();
Use infoT for values and process steps that are important to developers.
location.debugT(method, "Put value {0} into the list with key {1}", new Object[] { e.getAttributeValue("value"), e.getAttributeValue("key") });

values.put((String)e.getAttributeValue("key"), (String)e.getAttributeValue("value"));
Use debugT for the detailed information that can help find the exact cause of a problem. If you would like to print the exact value of a variable, use location.debugT; the one exception is calls to other layers, which belong on Path level. Follow the security restrictions as well: do not insert highly confidential information, such as passwords or PIN codes, into log and trace messages. In ambiguous cases, contact the local security expert.

3.    Severity settings

Default severity settings should be:
-          INFO for categories (INFO and higher severity messages will be logged)
-          ERROR for locations (trace everything with severity ERROR or above)
For performance reasons, ERROR level should be configured for categories as well in productive systems.

4.    Performance Considerations

If you have parameters, never build strings yourself; always use the {n} replacement:
location.infoT(method, "User {0} started transaction", new Object[] { user });

If you have to perform calculations only for tracing, first check whether tracing is active:

if (location.beLogged(<severity> /* e.g. Severity.WARNING */)) {
    // perform calculations and do the logging/tracing
}
The same check can be used around expensive error traces (with errorT used instead of traceThrowableT when there is no exception object).
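A standalone sketch of the same guard idea with java.util.logging, where isLoggable plays the role of beLogged (class and method names are invented for the example):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardDemo {
    static final Logger LOG = Logger.getLogger(GuardDemo.class.getName());
    static int expensiveCalls = 0;

    // Stands in for a costly computation done only for tracing.
    static String expensiveDump() {
        expensiveCalls++;
        return "large state dump ...";
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.WARNING); // FINE (debug) is disabled

        // Guarded: the costly argument is only built when the level is active.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine(expensiveDump());
        }
        System.out.println(expensiveCalls); // prints 0 - the dump was never built
    }
}
```

Without the guard, expensiveDump() would run on every call even though its result is discarded.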

5.    Common mistakes regarding Logging and Tracing

Using Error level instead of Debug:
Printing the content of a variable is not an error message; a Debug level trace should be used.
Using Debug level instead of Error:
Exceptions must be logged with ERROR level. If an exception is part of normal business behavior (for example, it indicates that a function call has no result), it should not be logged as an error.
Incorrect log messages:
How does the log message help to understand what happened in the code? A good log message should be understandable by everybody, even without the source code.
The log message should contain as much information about the state of the program as possible; here, an error code and error description would be useful. The timestamp is part of the log message by default.
Unhandled exceptions:
Unhandled exceptions are written to the System.err stream and spam the log file. Always use the traceThrowableT method with Error severity.
Using wrong method to print exception details in the log:
Using printStackTrace is forbidden, because it prints the stack trace into the log in a barely readable format.
The catching method reports the exception at Warning level, and it does not allow adding your own error message.
In both cases, use location.traceThrowableT instead.
Using traceThrowableT instead of errorT:
If there is no exception, the error should be reported with the errorT method.
Calling the exiting method in the middle of a method:
The entering/exiting pair should be called only once, at the very beginning and end of the method. Otherwise, during issue investigation the investigator will think that the method has already ended.
Logging closely related data as several log entries:
If there are multiple related variables to log, they should be logged in one log entry. The related data then appears in one place, and logging is faster because of the reduced number of I/O operations.
location.infoT(method, "System Number Reset \n Sales Order ID: {0}\n EOSE Plan: {1}\n Quote: {2}\n Quote Line Item: {3}\n Sales Order Item: {4}\n SEN: {5}\n SSI: {6}", new Object[] { so.getID(), so.getPlan(), quote.getName(), quote.getLineItem(), soItem.getId(), so.getSEN(), so.getSSI() });
Please note this is just example code. Every value is written on a new line so it remains readable.
Method name abuse:
The method variable should contain ONLY the method name, not the class name. In the Log Viewer application the location field already contains the class name and the method name; if the method name repeats the class name, this field becomes too long and contains the same data twice.
If there are several methods with the same name, the parameter list can be added to distinguish between them.
Sensitive information in the log:
Printing sensitive personal data, such as an account balance or a password, is forbidden.
Null pointer check is missing:
If a printed value is provided by a method of another object, a null pointer check on that object is mandatory.
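A minimal, hypothetical illustration of this null check (the Order class and values are invented for the example):

```java
public class NullCheckDemo {
    static class Order {
        private final String id;
        Order(String id) { this.id = id; }
        String getId() { return id; }
    }

    static String describe(Order order) {
        // Without the null check, order.getId() would throw a
        // NullPointerException from inside the logging statement itself.
        return (order == null) ? "Order: <none>" : "Order: " + order.getId();
    }

    public static void main(String[] args) {
        System.out.println(describe(new Order("4711"))); // prints Order: 4711
        System.out.println(describe(null));              // prints Order: <none>
    }
}
```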
Swallowed exception
try {
    ...
} catch (Exception e) {
    // TODO write exception handling
}
In this case the exception will not be visible in the log at all.

6.    Appendix

Official guide for Logging and Tracing

The reason behind this blog can be found here: [1].


This blog is not about writing a WDJ application for uploading a file, as there is more than enough information on SCN explaining how to do that. It is about how to configure ICM to allow large file uploads (>100 MB) and what this implies. The configuration is valid for NetWeaver Java >= 7.10, as these releases use ICM.


The maximum file size accepted by ICM is limited by the standard configuration: by default, ICM only accepts files no larger than 100 MB. As always, the documentation does not explain why; it is simply given by almighty SAP [2]:

“The default setting for the size of HTTP requests is a maximum of 100 megabytes”


The parameter: icm/<PROT>/max_request_size_KB

To see the current configuration of this parameter, you can use SAP MMC (you secured port 50013 to ensure that not everyone can see everything about your portal, including the logs, right?).




This parameter needs to be changed in the ICM configuration file located on the server. As it is a default parameter, you'll have to create it first to overwrite the default value. After a restart of ICM the change is effective, and you can upload files of gigabyte size. Note that this parameter configures ICM itself and is therefore valid globally. If your portal is accessible by external users, you should use a reverse proxy to make sure people from outside cannot crash your server by automatically sending large files.
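As a sketch, the profile entry could look like this (the profile path, the HTTP protocol variant, and the chosen value are illustrative; 2097152 KB corresponds to 2 GB):

```text
# In the instance profile read by ICM, e.g. /usr/sap/<SID>/SYS/profile/<instance profile>
# Allow HTTP request bodies up to ~2 GB instead of the 100 MB default
icm/HTTP/max_request_size_KB = 2097152
```

After changing the profile, ICM must be restarted for the new limit to take effect.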


It would be nice not to have to look up the possible parameters in SAP Help while editing a configuration file; this is just counterproductive. Why not put all the parameters there, including a description? That is how most configuration files (e.g. in open source software) are done: the file serves as its own documentation.


Example application to try it out


The limitation is valid for every HTTP request sent to AS Java, so you can try it out with a portal application, a servlet, KM file upload, or a WDJ application. The example here is a WDJ application. You can write your own application using the examples provided by SAP here on SCN; in case you do not want to search, here is a short description of how to code the application.


1. Create a view named FileUpload



2. Node element uploadFile



3. Uploading the file is triggered by the action UploadFile


public void onActionUploadFile(com.sap.tc.webdynpro.progmodel.api.IWDCustomEvent wdEvent) {
  IPrivateFileUploadView.IContextElement element = wdContext.currentContextElement();
  IWDResource resource = element.getUploadFile();
  if (resource != null) {
    // do something with the uploaded resource
  }
}



This is all it takes to have a WDJ application that allows uploading a file. Note that the node element handling the file is of type Resource. During the upload, WDJ creates a temporary file on the server that is kept there until the resource is no longer used.





[1] http://scn.sap.com/thread/3278398

[2] SAP Help


Some information from SAP Help and SCN about WDJ and file upload / download





Uploading and Downloading Files in Web Dynpro



Web Dynpro Java Demo Toolkit


>= 7.0

How to Upload and Download Files in an Web Dynpro for Java Application


>= 7.11

SAP Portal KM: Create a resouce



Loading the InputStream at FileDownload on Demand



Uploading and Downloading Files in Web Dynpro Java



Java Connection Pool - Design and Sample Implementation

                                                                                                    -  Aavishkar Bharara



In application design we often need to fetch, update, or delete objects stored in a database, which requires a database connection object. Opening and closing a database connection can be very costly, as it involves creating the network connection to the database and then authenticating with the username and password. Database connections are therefore a scarce resource and need to be utilized efficiently and effectively.



Connection Pooling

Connection pooling helps to utilize and efficiently reuse database connections in a multithreaded environment. The idea is to share a limited number of database connections among an unlimited number of users in the most efficient way.





There are multiple design options for creating an efficient and re-usable connection pool application programming interface.

The design constraints include:

  • A singleton "Connection Pool" class
  • An option to "request" for the connection from the connection pool.
  • An option to "release" the connection to the connection pool.

Apart from these core design constraints, there are some administration requirements as well, which include:

  • Tracking who requested a connection and when. This helps to track "connection leaks" in the code and to take corrective steps accordingly.
  • Tracking "lost" or "idle" connections that were not returned to the connection pool, and "claiming" them back.

Design Options

These requirements are very similar to many frameworks already available in the market. Microsoft COM/DCOM provides a common interface called IUnknown to accomplish a similar set of requirements. ...see link for more details


Java has similar requirements for its garbage collector, which frees up unused Java objects. ...see link for more details


Interface IConnectionPool

An interface IConnectionPool provides the basic methods of the connection pool class.


Interface : IConnectionPoolAdmin


This provides the administrative functions for the connection pool class.

Connection Container Class

The implementation starts with creating a container for the "connection object" first.


This connection container class stores the attributes of the connection like,

  • Connection is in use or idle
  • Who and when accessed the connection

Connection Pool Class

The ConnectionPool class implements the IConnectionPool interface to provide the basic features of the connection pool. It also implements IConnectionPoolAdmin to provide the administrative functions of the connection pool implementation.


        public class ConnectionPool implements IConnectionPool, IConnectionPoolAdmin {


This class implements the "singleton" pattern via the getInstance() function. The getConnection() function contains the complete logic to get an idle connection and set the required attributes on the connection object so that it is marked as currently in use.
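To make the request/release cycle concrete, here is a minimal generic sketch of the pattern (this is not the API of the attached jar; the class and method names are invented, and real pools would also create connections and track ownership as described above):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch: a pool that hands out idle "connections" and takes them back.
public class SimplePool<T> {
    private final Deque<T> idle = new ArrayDeque<>();

    // Return a connection to the pool so others can reuse it.
    public synchronized void release(T conn) {
        idle.push(conn);
    }

    // Hand out an idle connection; a real pool might block or grow instead.
    public synchronized T acquire() {
        if (idle.isEmpty()) {
            throw new IllegalStateException("pool exhausted");
        }
        return idle.pop();
    }

    public static void main(String[] args) {
        SimplePool<String> pool = new SimplePool<>();
        pool.release("conn-1");        // seed the pool with one connection
        String c = pool.acquire();     // "conn-1" is now marked in use
        System.out.println(c);         // prints conn-1
        pool.release(c);               // must be returned when done
    }
}
```

The synchronized methods are what make the pool safe in the multithreaded environment the text describes.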

A Sample Implemented Code


Attached is the complete implemented code in the jar file AviConPool.jar, which you can include in your project and start reusing. The following methods need to be called to use the jar file.


The first (one-time) call should be:


                    String DBDriver, String DBURL, String username, String password, int maxConnections);

This needs to be called from the application central component for once.

Subsequently, the calls to the connection pool would be something like this,


public void functionABC() {

  Connection con = ConnectionPool.getInstance().getConnection();

  try {

    // <<< Write Your Code Here >>>

    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("select ......");
    if (rs.next()) {
      // Read Data
    }

    // Help GC
    rs = null;
    stmt = null;

  } catch (Exception ex) {
    // Log Errors
  } finally {
    // Release the connection back to the pool when done
    // (the "release" operation of IConnectionPool described above).
  }
}








Also, attaching the jar file [ConPool.txt which can be renamed to ConPool.jar] which you can use in building your applications.

In software development, a general rule is not to reinvent the wheel. This means: if there is already a library available that does the job, use it. Without this thinking, even the simplest projects would consume too much time solving already-solved problems. A drawback is that the project will depend on many external libraries, so managing these library dependencies is important. Java projects are no exception: for instance, there are many standard classes available from Apache that everyone uses but that are not part of core Java.


In the Java world, two main tools help take care of this: Maven and Ivy. Both allow you to define the dependencies for each project and download them from a central repository. Publicly accessible Maven repositories are available, and while it is easy to find the needed library there, it raises a simple question: is this the right way for your project?


Once published there, the library can be downloaded by anyone. Most probably, this is not what you want for your internal project. When you have several developers, does it make sense that each one accesses the outside repository and downloads hundreds of MB when resolving the library dependencies? When a developer creates a new internal JAR, how do you distribute it to the other developers? For code quality analysis, how do you get the jars needed for a binary analysis?

There are repository managers available that host internal repositories and also act as a proxy for public repositories. The most famous ones are:

  • Nexus
  • Artifactory


Installation, configuration, and usage of both is simple. If the Java project already uses Maven or Ivy to resolve dependencies, it is only a matter of changing the name of the repository server to the internal one. When a project depends on an external library, the repository manager connects to the public Maven server and downloads it. As all developers use the internal server, caches help to minimize traffic and reduce the overall time needed to resolve a dependency. Libraries created by the Java projects can be uploaded to the internal repository manager and are automatically accessible to other developers; Maven and Ivy take care of this.
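For Maven, pointing all builds at the internal server can be sketched in settings.xml like this (the server id and URL are invented placeholders):

```xml
<!-- ~/.m2/settings.xml: route all dependency downloads through the
     internal repository manager instead of the public servers -->
<settings>
  <mirrors>
    <mirror>
      <id>internal-nexus</id>
      <!-- mirror every repository, including Maven Central -->
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.example.corp/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>
```

With this in place, individual project pom.xml files need no change; the mirror transparently serves cached and internal artifacts.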


What about SAP Java jars? How do you get these and publish them to Nexus/Artifactory? SAP does not make them available in a public repository manager, and there is no other way given by SAP to import the jars. What you get are the SCs and DCs. NWDI is the repository manager for SAP Java, but when you use continuous integration, NWDI is not the best option.


Using NWDS and NWDI it is possible to sync all DCs to a local computer. This sync process creates a folder structure and copies the associated files, including the jar files. DCs have a public part concept, and each defined public part resolves to a jar file.


To get the jars, open the Development Infrastructure perspective in NWDS and select the DC to get the jar from and sync it.


Where are the files stored? In the workspace directory for NWDI projects. In my case, I have a workspace named test and only 1 NWDI connection (resolved to 0): <path>\test.jdi\0\DCs\sap.com


If you do this for all of SAP's DCs, the directory will look like this:


The jar files are stored depending on their DC name.


tc/je/usermanagement/api resolves to: <path>\test.jdi\0\DCs\sap.com\tc\je\usermanagement\api\. The jar file is stored in the standard directory, that is: _comp\gen\default\public\api\lib\java


This is the jar that needs to be uploaded to your repository manager server. This can be done manually or automated with a script; I developed a small script that also replaces the ~ with a . and that I can run from Jenkins.


In Artifactory, the artifact is shown with additional meta data, like age, size and download statistics.


Searching for a class is possible as well as browsing the content of the jar to see the individual class files.


Today a colleague told me about a tip regarding an Eclipse auto-completion setting, which I think is useful in my daily work.



There are lots of standard classes, plus classes and methods I created on my own; however, when I type some characters, no auto-completion dropdown list appears. For example, I have already created a method named "consumeABAPFM", but even after I type "consum", no auto-completion is offered.


However, by choosing "Window->Preferences->Java->Editor->Content Assist", we can achieve what we expect.



Maintain "Auto activation triggers for Java" with the value ".ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz".


After that, whenever we type some characters, the auto-completion dropdown list appears and is refreshed within 200 ms.


In our Java development, if there is no source code attached for a class, as below,


we have no way to view its source code.


However, you can use an open-source tool, JAD, to decompile the class file so that you can view its source code.

There is also a plugin for Eclipse available, which you can download from this link.

1. Download the proper Jad Eclipse plugin according to the version of your Eclipse:


2. Download the proper JAD.exe file according to your OS type:



3. Put the JAD plugin into your Eclipse plugin folder:


4. Put the JAD.exe to the bin folder of your JRE installation:


5. Restart your Eclipse; in the menu Window->Preferences->Java you will find a new option, JadClipse. Maintain the path of your JAD.exe in "Path to decompiler":


6. Now press F3 on a class whose source code you would like to view, and JAD will decompile it for you:


This document will help you create a web service for updating data in a Java table using EJB and JPA.

1.     Create Dictionary DC from Dictionary Perspective.


2.     Create Table in dictionary.


3.     Build and deploy the Dictionary DC.

4.     Create EJB  application-Select DC type as EJB Module.


5.     Give EJB application name and click next.


6.     Select JPA and finish.


7.     Create Enterprise application for EJB application .


8.     Give name of enterprise application.


9.     Add the EJB application (test_app) to the EAR project and finish.


10.     Select the "SAP data source aliases provider" module of the EAR DC from project facets and apply.


11.     Check alias name of data base in the ear app.


12.     Open data-source-aliases.xml from META-INF folder and note the alias name.


13.     Create  connection from EJB application to dictionary.


14.     Define Alias name in persistence.xml.


15.     Generate Entities from Table in EJB.


16.     After creating the entity class, code as below.


17.     Create class for dto in EJB.


18.     Generate the getter and setter for this variable.


19.     Create an interface in the EJB DC and declare save method as below.


20.     Create a session bean in EJB DC.


21.     Include Entity Manager and Persistence Context in session bean.


22.     Define save operation in session bean.


23.     Create a web service for session bean.


24.     Build and deploy the EAR and EJB project.

25.     Test the web service from WSNavigator.
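Step 14 above (defining the alias in persistence.xml) can be sketched like this; the persistence unit name and alias are placeholders, and the alias must match the one noted from data-source-aliases.xml in step 12:

```xml
<!-- META-INF/persistence.xml in the EJB project -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="test_app_pu">
    <!-- JTA data source resolved through the alias from step 12 -->
    <jta-data-source>TEST_ALIAS</jta-data-source>
  </persistence-unit>
</persistence>
```

The session bean from steps 20-22 then injects an EntityManager for this unit via @PersistenceContext.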

Steve Carlton

ABAP Enigma Machine

Posted by Steve Carlton Oct 8, 2013

I got interested in the story of the Enigma machines and Bletchley Park and ended up writing this, just for kicks.

Now, if someone can write something in ABAP to decipher Enigma messages, please let me know.


*  Report  Z_ENIGMA
* This is a 3-wheel Enigma coding machine emulator using genuine wheel settings.
* Sequence is this :-
* Input -> Plugboard -> Wheel 3,2,1 -> Reflector -> Wheel 1,2,3 -> Plugboard -> Output
* The plugboard is a set of connections between the wheel input and either
* the keyboard (i/P) or lightboard (o/p)
* An enigma wheel has a set of 26 i/p connections and 26 o/p connections
* It has a ring around the outside so these connections can be
* shifted relative to the characters that are displayed on its outer edge.
* It has a turnover point where the next wheel in the set moves 1 position
* (think how a car odometer works when it moves from 9 to 10)
* When in use, the initial settings have to be known by the receiving machine
* So the initial settings are encoded using some other method and transmitted
* to the receiver first.
* Text symbols are defined as :
* 001 = "Enigma Settings: Wheel order, start positions and plugboard"
* 002 = "Text to be encrypted / decrypted. Characters A to Z only. No punctuation / numbers."
* Selection Texts are defined as :
* INIT1    = "Initial Position Wheel 1"
* INIT2    = "Initial Position Wheel 2"
* INIT3    = "Initial Position Wheel 3"
* P_PLUGB  = "Use plugboard mapping?"
* W1      = "Wheel 1"
* W2      = "Wheel 2"
* W3      = "Wheel 3"

REPORT  z_enigma .

           c_turn1    TYPE char1  VALUE 'R',
           c_turn2    TYPE char1  VALUE 'F',
           c_turn3    TYPE char1  VALUE 'W',
           c_turn4    TYPE char1  VALUE 'K',
           c_turn5    TYPE char1  VALUE 'A',
           c_turn6    TYPE char2  VALUE 'AN', "Getting sneaky by using 2 turnover positions
           c_turn7    TYPE char2  VALUE 'AN',
           c_turn8    TYPE char2  VALUE 'AN'.

DATA: v_output TYPE string,
      v_source TYPE string,
      v_temp   TYPE string,
      v_length TYPE i,
      source   TYPE string,
      wheel3   TYPE char26,
      wheel2   TYPE char26,
      wheel1   TYPE char26,
      alpha    TYPE char26,
      turn1    TYPE char1,
      turn2    TYPE char1,
      fname    TYPE char8.

TYPES : BEGIN OF ty_wrd ,
         text TYPE char80 ,
       END OF ty_wrd .

DATA: char_in    TYPE char1,
      c1    TYPE char1,
      idx   TYPE i,
      pos   TYPE i,
      char_out TYPE char1,
      lt_text  TYPE TABLE OF ty_wrd WITH HEADER LINE,
      gc_docking        TYPE REF TO cl_gui_docking_container,
      gc_text_editor    TYPE REF TO cl_gui_textedit,
      gt_text           TYPE TABLE OF tline-tdline,
      gs_text           TYPE tline-tdline.


* Choose 3 wheels
PARAMETERS: w1 TYPE ddtabind DEFAULT 1,
            w2 TYPE ddtabind DEFAULT 2,
            w3 TYPE ddtabind DEFAULT 3.

* Set wheel start position
PARAMETERS: init1 TYPE sf_char1 DEFAULT 'M',
            init2 TYPE sf_char1 DEFAULT 'C',
            init3 TYPE sf_char1 DEFAULT 'K'.




  PERFORM initialization.


     w1 > 8 OR w2 > 8 OR w3 > 8.
    MESSAGE s001(00) WITH 'Select 3 wheel numbers from 1 to 8'.

  IF w3 = w2 OR
     w3 = w1 OR
     w2 = w1.
    MESSAGE s001(00) WITH 'Wheels cannot be used in more than 1 position'.

     init1 NA sy-abcde.
    MESSAGE s001(00) WITH 'Set start position wheel 1 A to Z'.
     init2 NA sy-abcde.
    MESSAGE s001(00) WITH 'Set start position wheel 2 A to Z'.
     init3 NA sy-abcde.
    MESSAGE s001(00) WITH 'Set start position wheel 3 A to Z'.


* Load selected wheels into work areas
  ASSIGN  (fname) TO <data>.
  wheel1 = <data>.

  ASSIGN  (fname) TO <data>.
  wheel2 = <data>.
  ASSIGN  (fname) TO <data>.
  turn1 = <data>.

  ASSIGN  (fname) TO <data>.
  wheel3 = <data>.
  ASSIGN  (fname) TO <data>.
  turn2 = <data>.

  PERFORM enigma.

FORM initialization.

* Add free text box to selection screen
  CREATE OBJECT gc_docking
      repid     = sy-repid
      dynnr     = sy-dynnr
      side      = gc_docking->dock_at_bottom
      extension = 200.

  CREATE OBJECT gc_text_editor
      parent = gc_docking.

  IMPORT gt_text FROM MEMORY ID sy-repid.
* Put in sample text
  IF gt_text IS INITIAL.
* Using wheels 123 at MCK with plugboard turned on
    APPEND gs_text TO gt_text.

  CALL METHOD gc_text_editor->set_text_as_r3table
      table  = gt_text
      OTHERS = 1.

ENDFORM.                    " INITIALIZATION
*&      Form  enigma
FORM enigma.

* Get text from selection screen screen editor
  REFRESH gt_text.
  CALL METHOD gc_text_editor->get_text_as_r3table
      table  = gt_text
      OTHERS = 1.

      i_tabline_length = 132
      e_string         = source
      it_table         = gt_text.

* Shift to upper case and remove all spaces
  v_source = source.
  v_length = strlen( source ).

* Order is this :-
* Input -> Plugboard -> Wheel 3,2,1 -> Reflector -> Wheel 1,2,3 -> Plugboard -> Output
  CLEAR : v_temp, v_output.

  DO v_length TIMES.

* get input character from keyboard
    pos = sy-index - 1.
    char_in = source+pos(1).

* Validate only alphabetic characters used
    IF char_in NA sy-abcde.
      MESSAGE s001(00) WITH 'Only letters A to Z allowed'.

* Plugboard mapping inbound
    PERFORM plugboard USING 'I' CHANGING char_in.

* Wheel 3
* Wheel 3 always steps on 1 position before encoding starts
    CLEAR pos.
    SEARCH c_alpha FOR init3.
    pos = sy-fdpos + 1. "11
    PERFORM index_wrap CHANGING pos.
    init3 = c_alpha+pos(1).

* Setup wheel
    ASSIGN  (fname) TO <data>.
    PERFORM set_wheel USING  <data>
                    CHANGING wheel3.

* Look for index of entered character
    SEARCH c_alpha FOR char_in.

* Get wheel character.
    c1 = wheel3+sy-fdpos(1).

* Look for index of o/p character
    PERFORM set_wheel USING  c_alpha
                CHANGING alpha.
    SEARCH alpha FOR c1.
    idx = sy-fdpos.

* Wheel 2 turnover
    IF turn1 CA init3.
      CLEAR pos.
      SEARCH c_alpha FOR init2.
      pos = sy-fdpos + 1.
      PERFORM index_wrap CHANGING pos.
      init2 = c_alpha+pos(1).

* Wheel 2
* Setup wheel
    ASSIGN  (fname) TO <data>.
    PERFORM set_wheel USING  <data>
                    CHANGING wheel2.

    c1 = wheel2+idx(1).

* Look for index of o/p character
    PERFORM set_wheel USING  c_alpha init2
                CHANGING wheel2.
    SEARCH wheel2 FOR c1.
    idx = sy-fdpos.

* Wheel 1 turnover
    IF turn2 CA init2.
      CLEAR pos.
      SEARCH c_alpha FOR init1.
      pos = sy-fdpos + 1.
      PERFORM index_wrap CHANGING pos.
      init1 = c_alpha+pos(1).
    ENDIF.

* Wheel 1
* Setup wheel
    ASSIGN  (fname) TO <data>.
    PERFORM set_wheel USING  <data> init1
                    CHANGING wheel1.

    c1 = wheel1+idx(1).

* Look for index of o/p character
    PERFORM set_wheel USING  c_alpha init1
                CHANGING wheel1.
    SEARCH wheel1 FOR c1.
    idx = sy-fdpos.
    c1 = c_alpha+idx(1).

* Reflector
    SEARCH c_ref_b FOR c1.
    c1 = c_alpha+sy-fdpos(1).
    idx = sy-fdpos.

* wheel 1 again
    PERFORM set_wheel USING  c_alpha init1
             CHANGING wheel1.
    c1 = wheel1+idx(1).

    ASSIGN  (fname) TO <data>.
    PERFORM set_wheel USING  <data> init1
                    CHANGING wheel1.
    SEARCH wheel1 FOR c1.
    idx = sy-fdpos.

* wheel 2 again
    PERFORM set_wheel USING  c_alpha init2
             CHANGING wheel2.
    c1 = wheel2+idx(1).

    ASSIGN  (fname) TO <data>.
    PERFORM set_wheel USING  <data> init2
                    CHANGING wheel2.
    SEARCH wheel2 FOR c1.
    idx = sy-fdpos.

* wheel 3 again
    PERFORM set_wheel USING  c_alpha init3
             CHANGING wheel3.
    c1 = wheel3+idx(1).

    ASSIGN  (fname) TO <data>.
    PERFORM set_wheel USING  <data> init3
                    CHANGING wheel3.
    SEARCH wheel3 FOR c1.
    idx = sy-fdpos.

    char_out = c_alpha+idx(1).

* Plugboard mapping outbound
    PERFORM plugboard USING 'O' CHANGING char_out.

* Build output string
    CONCATENATE v_temp char_out INTO v_temp.

  ENDDO.

* Put spaces back in same place as in selection screen text
  v_length = strlen( v_source ).
  CLEAR: pos, idx.

  DO v_length TIMES.
* get selection screen character
    c1 = v_source+pos(1).

    IF c1 = space.
      CONCATENATE v_output ' ' INTO v_output RESPECTING BLANKS.
      idx = idx + 1.
      pos = pos + 1.
    ELSE.
      sy-fdpos = pos - idx.
      c1 = v_temp+sy-fdpos(1).
      CONCATENATE v_output c1 INTO v_output.
      pos = pos + 1.
    ENDIF.
  ENDDO.


* Write source to result screen
  REFRESH lt_text.
  CALL FUNCTION 'CONVERT_STRING_TO_TABLE'
    EXPORTING
      i_string         = v_source
      i_tabline_length = 80
    TABLES
      et_table         = lt_text.

  WRITE / 'Input text.'.
  LOOP AT lt_text.
    SHIFT lt_text-text LEFT DELETING LEADING space.
    WRITE / lt_text-text.
  ENDLOOP.

* Write result to result screen
  REFRESH lt_text.
  CALL FUNCTION 'CONVERT_STRING_TO_TABLE'
    EXPORTING
      i_string         = v_output
      i_tabline_length = 80
    TABLES
      et_table         = lt_text.
  WRITE /.
  WRITE / 'Output text.'.
  LOOP AT lt_text.
    SHIFT lt_text-text LEFT DELETING LEADING space.
    WRITE / lt_text-text.
  ENDLOOP.

  WRITE /.
  WRITE / 'If you copy the output text and use that as the input text'.
  WRITE / 'with the same initial setup on the selection screen,'.
  WRITE / 'your output will be deciphered or enciphered as required.'.

ENDFORM.                    "enigma
*&      Form  plugboard
FORM plugboard USING p_direction TYPE char1
               CHANGING p_char TYPE char1.
* This is a simple x to y type mapping
*  Not all characters have to be mapped
* EG. You could leave everything as it comes out of the wheels
* except, say, mapping A to G (and therefore G to A)
* The default in this pgm only maps A/Z and G/H in the plugboard
*            ^     ^^                 ^

  CHECK p_plugb = 'X'. " is plugboard mapping ON

  IF p_direction = 'I'.
* Input from keyboard to wheels
    SEARCH c_alpha FOR p_char.
    p_char = c_plugb+sy-fdpos(1).
  ELSE.
* Output from wheels to lightboard
    SEARCH c_plugb FOR p_char.
    p_char = c_alpha+sy-fdpos(1).
  ENDIF.

ENDFORM.                    "plugboard
*&      Form  index_wrap
* This is used to correct the wheel index in case of wrap-around
* required to make the linear wheel string act like a circular wheel
FORM index_wrap CHANGING p_i TYPE i.
  IF p_i > 25.
    p_i = p_i - 26.
  ELSEIF p_i < 0.
    p_i = p_i + 26.
  ENDIF.
ENDFORM.                    "index_wrap
*&      Form  set_wheel
* Set wheel characters so the start position character is at position 1.
* EG :
* Wheel 1 with start position, say, W
* becomes the same 26-character wiring string rotated so that the
* entry aligned with W comes first.
FORM set_wheel USING     p_wheel_in  TYPE char26
                         p_start     TYPE char1
                CHANGING p_wheel_out TYPE char26.

  DATA: l_index TYPE i,
        l_char  TYPE char1.

  CLEAR p_wheel_out.

  SEARCH c_alpha FOR p_start.
  l_index = sy-fdpos.

  DO 26 TIMES.
    IF l_index > 25.
      l_index = 0.
    ENDIF.

    l_char = p_wheel_in+l_index(1).
    CONCATENATE p_wheel_out l_char
    INTO p_wheel_out.

    l_index = l_index + 1.
  ENDDO.

ENDFORM.                    "set_wheel
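For readers coming at this from Java, the heart of set_wheel and index_wrap is just modular rotation over a 26-character string. A minimal sketch in plain Java (the names are mine, not a transcription of the program above):

```java
public class WheelDemo {
    static final String ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // What set_wheel does: rotate the wheel string so the entry
    // aligned with the start letter comes first.
    static String setWheel(String wheelIn, char start) {
        int offset = ALPHA.indexOf(start);
        return wheelIn.substring(offset) + wheelIn.substring(0, offset);
    }

    // What index_wrap does: bring an index back into the range 0..25.
    static int indexWrap(int i) {
        return ((i % 26) + 26) % 26;
    }

    public static void main(String[] args) {
        System.out.println(setWheel(ALPHA, 'W')); // WXYZABCDEFGHIJKLMNOPQRSTUV
        System.out.println(indexWrap(26));        // 0
        System.out.println(indexWrap(-1));        // 25
    }
}
```

The modulo trick replaces the explicit IF/ELSEIF in index_wrap and handles any distance of over- or under-run, not just one step.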

Steve Carlton

Playing with JCo3

Posted by Steve Carlton Oct 8, 2013

So I downloaded the Java connector sapjco3.jar and the examples and started to play.

I soon got put off by the fact that all the examples require a username and password to be stored in a file, and the idoc example I found was a bit overcomplicated for someone coming from an ABAP background.


Much Googling later and here is what I ended up with. This stores the system connection details in a file but NOT the username and password.

The idoc is created by calling an RFC function, and the bit that converts a JCoStructure into a String (to populate the sdata field) is quite useful (I think).


--------- Start Java Code --------------
import java.util.HashMap;
import java.util.Properties;
import java.io.FileInputStream;
import java.io.IOException;

import com.sap.conn.jco.*;
import com.sap.conn.jco.rt.*;
import com.sap.conn.jco.ext.*;

/*
 * This is the example that SAP should have delivered. It keeps the system name
 * etc. in a file but not the user and password.
 * It will call an RFC function to create an idoc and print out the returned data.
 * Create a file called ABAP_AS.jcoDestination as below:
 * jco.client.lang=<language>
 * jco.client.client=<client>
 * jco.client.sysnr=<system number>
 * jco.client.ashost=<hostname>
 * Enter the required values starting lines: 158,185,196
 *
 * Install sapjco3 and set the environment variables: classpath, lib_path and path.
 *
 * Compile with: javac iDocTest.java
 *
 * Run with: java iDocTest
 */
public class iDocTest {

    static String ABAP_AS = "ABAP_AS";

    static class MyDestinationDataProvider implements DestinationDataProvider {

        private DestinationDataEventListener eL;
        private HashMap<String, Properties> secureDBStorage = new HashMap<String, Properties>();

        public Properties getDestinationProperties(String destinationName) {
            try {
                //read the destination from DB
                Properties p = secureDBStorage.get(destinationName);

                if (p != null) {
                    //check if all is correct, for example
                    if (p.isEmpty()) {
                        throw new DataProviderException(DataProviderException.Reason.INVALID_CONFIGURATION, "destination configuration is incorrect", null);
                    }
                    return p;
                }

                return null;
            } catch (RuntimeException re) {
                throw new DataProviderException(DataProviderException.Reason.INTERNAL_ERROR, re);
            }
        }

        //An implementation supporting events has to retain the eventListener instance provided
        //by the JCo runtime. This listener instance shall be used to notify the JCo runtime
        //about all changes in destination configurations.
        public void setDestinationDataEventListener(DestinationDataEventListener eventListener) {
            this.eL = eventListener;
        }

        public boolean supportsEvents() {
            return true;
        }
        //implementation that saves the properties in a very secure way
        void changeProperties(String destName, Properties properties) {
            synchronized (secureDBStorage) {
                if (properties == null) {
                    if (secureDBStorage.remove(destName) != null) {
                } else {
                    secureDBStorage.put(destName, properties);
                    eL.updated(destName); // create or updated
    } // end of MyDestinationDataProvider

    static Properties getDestinationPropertiesFromUI() {

        Properties userProperties;
        Properties sapProperties;
        String JCO_CLIENT;
        String JCO_LANG;
        String JCO_SYSNR;
        String JCO_HOST;

        sapProperties = new Properties();
        userProperties = new Properties();

// Load ABAP_AS.jcoDestination file
        try {
            sapProperties.load(new FileInputStream("ABAP_AS.jcoDestination"));
        } catch (IOException ex) {
            ex.printStackTrace();
        }
        // Get fixed connection details from properties file 
        JCO_LANG = sapProperties.getProperty("jco.client.lang");
        JCO_HOST = sapProperties.getProperty("jco.client.ashost");
        JCO_SYSNR = sapProperties.getProperty("jco.client.sysnr");
        JCO_CLIENT = sapProperties.getProperty("jco.client.client");
// Set connection details from strings retrieved from file above
        userProperties.setProperty(DestinationDataProvider.JCO_ASHOST, JCO_HOST);
        userProperties.setProperty(DestinationDataProvider.JCO_SYSNR, JCO_SYSNR);
        userProperties.setProperty(DestinationDataProvider.JCO_CLIENT, JCO_CLIENT);
        userProperties.setProperty(DestinationDataProvider.JCO_LANG, JCO_LANG);

// Get username and password 
        System.console().printf("Enter user:");
        userProperties.setProperty(DestinationDataProvider.JCO_USER, System.console().readLine());

        System.console().printf("Enter password:");
        userProperties.setProperty(DestinationDataProvider.JCO_PASSWD, System.console().readLine());

        return userProperties;
    }

    public static void iDocCall(String destName) throws JCoException {
        //Declare destination for function call
        JCoDestination destination = JCoDestinationManager.getDestination(destName);
        String client = destination.getClient();

        // Declare function and check it exists
        JCoFunction idoc_input = destination.getRepository().getFunction("IDOC_INBOUND_SINGLE");
        if (idoc_input == null) {
            throw new RuntimeException("IDOC_INBOUND_SINGLE not found in SAP.");
        }
// Declare function fields, structures and tables
//JCoFunction is container for function values. Each function contains separate
//containers for import, export, changing and table parameters.
//To set or get the parameters use the APIS setValue() and getXXX().   

// Declare:       
// Import Fields   
//      idoc_input.getImportParameterList().setValue("MASS_PROCESSING", " ");

// Structures
        JCoStructure idoc_control = idoc_input.getImportParameterList().getStructure("PI_IDOC_CONTROL_REC_40");

        JCoTable idoc_data = idoc_input.getTableParameterList().getTable("PT_IDOC_DATA_RECORDS_40");

// Populate function fields, structures and tables     
// To create the idoc we only need to set the control and data
//iDoc Control
        idoc_control.setValue("MANDT", client);
        idoc_control.setValue("DIRECT", "2");
        idoc_control.setValue("MESTYP", "MBGMCR");
        idoc_control.setValue("IDOCTYP", "MBGMCR02");
        idoc_control.setValue("CIMTYP", "ZMBGMCR02001");
        idoc_control.setValue("SNDPOR", <enter_sndpor>);
        idoc_control.setValue("SNDPRT", <enter_sndprt>);
        idoc_control.setValue("SNDPRN", <enter_sndprn>);
        idoc_control.setValue("RCVPOR", <enter_rcvpor>);
        idoc_control.setValue("RCVPRT", <enter_rcvprt>);
        idoc_control.setValue("RCVPRN", <enter_rcvprn>);

//iDoc Data           
        JCoRepository repository = destination.getRepository();
        JCoStructure str_head, str_item;
        JCoRecordMetaData meta_head, meta_item;
        String sdata = "";

//Create structures for sdata population
        meta_head = repository.getStructureDefinition("E1BP2017_GM_HEAD_01");
        str_head = JCo.createStructure(meta_head);

        meta_item = repository.getStructureDefinition("E1BP2017_GM_ITEM_CREATE");
        str_item = JCo.createStructure(meta_item);

        // Write idoc data
        idoc_data.appendRows(2); // Add 2 rows to the data records table
        idoc_data.setValue("MANDT", client);
        idoc_data.setValue("SEGNUM", "1");
        idoc_data.setValue("PSGNUM", "0");
        idoc_data.setValue("SEGNAM", "E1BP2017_GM_HEAD_01");

        str_head.setValue("HEADER_TXT", "Java test");
        str_head.setValue("PSTNG_DATE", "20131008");
        str_head.setValue("REF_DOC_NO", "12345");
        sdata = ConvertToSDATA(str_head, meta_head);
        idoc_data.setValue("SDATA", sdata);

        idoc_data.nextRow(); // Move to next row
        idoc_data.setValue("MANDT", client);
        idoc_data.setValue("SEGNUM", "2");
        idoc_data.setValue("PSGNUM", "0");
        idoc_data.setValue("SEGNAM", "E1BP2017_GM_ITEM_CREATE");
        str_item.setValue("MATERIAL", <enter_material>);
        str_item.setValue("BATCH", <enter_batch>);
        str_item.setValue("ENTRY_QNT", <enter_qty>);
        str_item.setValue("EXPIRYDATE", <enter_sled_date>);
        sdata = ConvertToSDATA(str_item, meta_item);
        idoc_data.setValue("SDATA", sdata);


//To set or get the parameters use the APIS setValue() and getValue(). 
        idoc_input.getTableParameterList().setValue("PT_IDOC_DATA_RECORDS_40", idoc_data);

// Execute the function call
        try {
            idoc_input.execute(destination);
        } catch (AbapException e) {
            System.out.println(e.toString());
        }

        // Get returned idoc number
        String idoc_number;
        idoc_number = idoc_input.getExportParameterList().getString("PE_IDOC_NUMBER");

        // Remove leading zeros from idoc number
        idoc_number = idoc_number.replaceFirst("^0*", "");
        if (idoc_number.isEmpty()) {
            idoc_number = "0";
        }

        // Write out idoc number
        System.out.println("Created idoc number " + idoc_number);
    }


    public static String ConvertToSDATA(JCoStructure data_in, JCoRecordMetaData meta_in) {
        /*
         * Simply casting the structure to a string will fail (won't compile).
         * Concatenating the structure fields into a string will remove all the spaces,
         * so the field alignment is lost.
         * The solution is to create a StringBuilder object of the correct length,
         * initialize it with all spaces and then, using the structure metadata,
         * insert each field at the correct start position.
         * NB: This works for idocs where all fields are character types.
         */
        String sdata = "";
        int count = 0;
        int Start = 0;
        int End = 0;
        int Len = 0;

        // Declare string builder as 1000 long and fill with spaces
        StringBuilder strB = new StringBuilder(" ");
        ClearString1000(strB, 1000);

        // Get field count in structure
        count = meta_in.getFieldCount();
        count = count - 1;

        for (int i = 0; i <= count; i++) {
            Len = meta_in.getLength(i);
            End = Start + Len;
            strB.insert(Start, data_in.getValue(i));
            Start = End;
        }

        sdata = strB.toString();
        return sdata;
    }

    public static StringBuilder ClearString1000(StringBuilder str, int Len) {
        // Sets all characters to space
        for (int i = 1; i <= Len; i++) {
            str.insert(i, " ");
        }
        return str;
    }

    public static void main(String[] args) throws JCoException {

        MyDestinationDataProvider myProvider = new MyDestinationDataProvider();

        //register the provider with the JCo environment;
        //catch IllegalStateException if an instance is already registered
        try {
            Environment.registerDestinationDataProvider(myProvider);
        } catch (IllegalStateException providerAlreadyRegisteredException) {
            //somebody else registered its implementation,
            //stop the execution
            throw new Error(providerAlreadyRegisteredException);
        }

        String destName = "ABAP_AS";
        iDocTest dest;

        dest = new iDocTest();

        //set properties for the destination and ...
        myProvider.changeProperties(destName, getDestinationPropertiesFromUI());

        //... work with it
        dest.iDocCall(destName);

        //Clear the properties and ...
        myProvider.changeProperties(destName, null);
    }
}

--------- End Java Code --------------
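Stripped of the JCo types, the SDATA trick in ConvertToSDATA is plain fixed-width field assembly: pre-fill a buffer with spaces and insert each field at its cumulative offset, so a short value still leaves the next field at the right position. A self-contained sketch (the value/length arrays are hypothetical stand-ins for JCoStructure and JCoRecordMetaData):

```java
public class SdataDemo {
    // values[i] goes at the offset accumulated from lengths[0..i-1];
    // totalLength is the overall segment width.
    static String toSdata(String[] values, int[] lengths, int totalLength) {
        // Pre-fill with spaces so short or missing values keep the alignment.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < totalLength; i++) {
            sb.append(' ');
        }

        int start = 0;
        for (int i = 0; i < values.length; i++) {
            // insert() pushes the space padding right, so the next field
            // still lands exactly at start + lengths[i].
            sb.insert(start, values[i]);
            start += lengths[i];
        }
        return sb.substring(0, totalLength);
    }

    public static void main(String[] args) {
        String sdata = toSdata(new String[] {"Java test", "20131008"},
                               new int[] {25, 8}, 40);
        System.out.println("[" + sdata + "]");
    }
}
```

In the demo, "20131008" ends up exactly at offset 25, just as the second field of the idoc segment would.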

To deploy a SAP Java application like Web Dynpro Java or SAP Portal application, tools like NWDS, NWDI or telnet are used. This is a very SAP centric setup. But how can you deploy a WDJ application without using NWDI, NWDS, telnet or direct server access? What if you want to deploy it out of your continuous integration (CI) server?




First question to ask and to answer is: what file can be deployed? Normally, SAP Java applications come in SCA or SDA format. Standalone WDJ applications are EAR files and portal files are SDA. The good news is that they are ZIP files. The portal SDA is basically the old PAR file renamed to SDA, and WDJ EAR files are … EAR files. NWDS will create the files correctly; it is just a matter of picking these files up (or creating them manually) and deploying them to NW Java.


Note for WDJ EAR and SDA transformation

On releases 7.1 and higher, .EAR files are automatically converted to SDA format during deployment. [SAP Note 1715441 - How to deploy .SDA / .EAR files on J2EE servers on Netweaver release 7.1 and higher]


The tools


In NWDS 7.1 versions there was a folder with ant scripts provided: http://help.sap.com/saphelp_nwce711/helpdata/en/46/150746f21914dce10000000a155369/content.htm


This folder, and therefore the tools, are gone in newer NWDS versions. Basically, there are no client-side tools provided by SAP to deploy an application without using a SAP tool. SAP Note 1715441 contains all the information regarding how to deploy an application. It contains one entry about a deployment script:


Using filesystem deployment script

  • Open the "deployment script" folder located in /usr/sap/<SID>/J<nr>/j2ee/deployment/scripts.
  • Run the command "deploy.bat <user>:<password>@<host>:<port> <archive location>" (for example on a Windows server, run "deploy.bat hemadm:initial1d@pwdf3341:50004 C:\testsda.sda")


Taking a look at the mentioned deployment directory unveils:


There is an ant directory with an example ant build file. You can take this script and try it out to see if it matches your needs. In my case it didn’t. The SAP Note mentions a script directory; let’s take a look at it:


Running the deploy.csh script shows the application help.


Doing a deploy from the cmd line on the server:


Great, it works! But only from the server. What about the freedom to deploy WDJ / Portal applications from your CI?


Java version


A look inside the file reveals that deploy.csh is a wrapper for calling a Java application. To be able to trigger a deploy from anywhere you want, all it takes is a Java application with the required JAR files. All the JAR files can be found in the deployment dir:

  • /usr/sap/<SID>/J<nr>/j2ee/deployment
  • /usr/sap/<SID>/J<nr>/j2ee/j2eeclient


The JAR files are:


  • lib/javaee_deployment-1_2-fr-class.jar
  • lib/sap.com~tc~je~deployment14~impl.jar
  • /lib/ant.jar
  • /lib/sap.com~tc~bl~jarsap~impl.jar
  • /lib/sap.com~tc~sapxmltoolkit~sapxmltoolkit.jar
  • /lib/sap.com~tc~bl~sl~utility~impl.jar
  • /lib/sap.com~tc~je~adminadapter~impl.jar
  • /lib/deploy_lib~sda.jar
  • /j2eeclient/sap.com~tc~je~clientlib~impl.jar
  • /j2eeclient/sap.com~tc~logging~java~impl.jar
  • /j2eeclient/sap.com~tc~exception~impl.jar


The Java class to call is

  • com.sap.engine.deployment.DMClient


Copy these files to a lib directory on the computer from where the deploy will be done. The command to do a deploy of a custom EAR file from command line using Java is:

java.exe -Dserver.parser.inqmy=true -Dproxy=DC -classpath ".;lib/javaee_deployment-1_2-fr-class.jar;lib/sap.com~tc~je~deployment14~impl.jar;lib/ant.jar;lib/sap.com~tc~bl~jarsap~impl.jar;lib/sap.com~tc~sapxmltoolkit~sapxmltoolkit.jar;lib/sap.com~tc~bl~sl~utility~impl.jar;lib/sap.com~tc~je~adminadapter~impl.jar;lib/sap.com~tc~je~clientlib~impl.jar;lib/sap.com~tc~logging~java~impl.jar;lib/sap.com~tc~exception~impl.jar;lib/deploy_lib~sda.jar" com.sap.engine.deployment.DMClient tobias:<password>@nwce73:50004 ./demo.sap.com~tobias~blog~ear.ear


All the JAR files are in one common lib directory and the name of the ear file is demo.sap.com~tobias~blog~ear.ear.

The deploy application expects a certain file and folder structure that is given on the server, but not on the client. My deploy folder looks like:


  • lib-> JAR files, resolved by ivy
  • deployment/cfg -> contains one of the config files
  • a, b and the rest are created dynamically by the deploy application.


Continuous integration


The above Java call can easily be converted into an (ant) script. As such, it is also very easy to integrate into a CI server like Jenkins. There are five parameters that have to be set:

  • username
  • password
  • server
  • port
  • file to deploy
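Assembling the DMClient arguments from those five parameters is plain string work; a small sketch in Java (the values shown are made up, not real credentials):

```java
public class DeployArgs {
    // Build the <user>:<password>@<host>:<port> argument expected by DMClient.
    static String destinationArg(String user, String password, String host, int port) {
        return user + ":" + password + "@" + host + ":" + port;
    }

    // Build the semicolon-separated classpath, starting with the current dir.
    static String classpath(String[] jars) {
        return ".;" + String.join(";", jars);
    }

    public static void main(String[] args) {
        System.out.println(destinationArg("tobias", "secret", "nwce73", 50004));
        System.out.println(classpath(new String[] {"lib/a.jar", "lib/b.jar"}));
    }
}
```

The same concatenation is what the ant `<arg value=.../>` element shown later does with the Jenkins parameters.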

In Jenkins this is done inside the job configuration by checking “This build is parameterized” and adding parameters:


This gives the option to build the job with parameters:



Inside build.xml these parameters can be referenced using:

<arg value="${env.NWUser}:${env.NWPassword}@${env.NWServer}:${env.NWPort}"/>


This allows you to deploy an EAR or SDA file out of your CI server. After a build, the application can be deployed and the functional / load tests can be run and integrated into the overall CI job result. Transparency for your SAP Java projects.




A problem is, unfortunately, that the Java program always returns a successful execution as long as the application does not hit an error in its own execution flow, like: logon not possible, missing EAR file. When the deploy returns an error, the application still returns success to the ant task, as the execution steps of the deploy application worked. To report a build failure on a failed server-side deploy you'll have to check the logs.

Hello All,


This blog is a compilation of quick steps to develop proxy classes to access remote-enabled SAP functions from non-SAP UI technologies.


[ Inspired by the forum question : http://scn.sap.com/thread/3404002 ]


The steps to be followed are:


1. Install NWDS on your local machine.


     NWDS is available on SDN. For the complete list of NWDS versions use the link: https://nwds.sap.com/swdc/downloads/updates/netweaver/nwds/ce/ (it requires an S-User ID; contact any SAP developer for an S-User ID).


2. Then, create a Java Project.



3. Complete the project creation. Right-click on the project and choose New->Other. In the window select "Enterprise Connector".



4. Click Next. Then, fill the necessary fields.


5. Provide the SAP System details and SAP login credentials to access the remote-enabled SAP functions.



6. On successful login, search for the function to be used by your application, i.e. the function for which the proxy classes are required.




7. Finally, the proxy classes are generated.


However, since the JCo library files/APIs are not available in the Java project, this results in compile errors for all SAP JCo classes in the project.


The following steps will assist you in adding the JCo-related jar files to the Java project.


1. Go to the build path of the Java project. Choose External Jars and navigate to the plugins folder of the NWDS.


2. Look up the plugins for the following jar files:


  • SAPmdi.jar
  • sapjco.jar
  • aii_util_msc.jar
  • aii_proxy_rt.jar



3. Click on OK. Now the Java project and proxy classes are ready to be used in your application.


All the Best.




Hi everybody,


After changing our NetWeaver runtime platform from 7.0 to 7.31, I spent a lot of time trying to migrate some of our Tomcat 7.0-based web applications to SAP NetWeaver 7.31.


Most of these applications use popular components/techniques like MyFaces 2.0 (JSF 2.0), Tomahawk, PrimeFaces, Mobile PrimeFaces, Trinidad, CDI, JPA...


Because NW 7.31 still supports EE 5.0 (not EE 6.0), in most cases the SAP JSF 1.2 implementation was insufficient for our needs.


After reading lots of documents on SDN about "heavy resource loading", I found a way to separate the MyFaces (JSF) 2.0 libraries into a library project, so that I could use the Mobile PrimeFaces components in my dynamic web projects, referencing the MyFaces 2.0 implementation via a hard class loader reference.


    <!-- JSF 2.0: MyFaces 2.0.x implementation -->

    <reference reference-type="hard" prepend="true">

        <reference-target provider-name="oevbs.de"
            target-type="library">MyFaces20x</reference-target>
    </reference>




The most difficult part was to decide which jars to place in the MyFaces 2.0 library project to avoid "class not found" runtime exceptions. But I eventually found a solution that fulfilled our requirements.


Because there is still big interest in using popular Java techniques that exceed the possibilities of the EE 5.0 implementation on NW 7.3, I decided to post my solution on SDN.


The example project is a skeleton of an existing project we use in our insurance company ("Elektronische Versicherungsbestätigung", i.e. electronic insurance confirmation). (Sorry that the field names and comments in the example project are in German.)


If you like, you'll find all sources of the example project in the Code Exchange Repository: "https://code.sdn.sap.com/svn/MobilePrimeFacesExample/"


Important Notice:

  • For entering "Code Exchange", you have to be a registered user!
  • To exclude conflicts with the different licence agreements, I removed the "open source" jars from the project!



If you just want to see the running web-application:


  1. log in to SCN (http://scn.sap.com/community/code-exchange)
  2. download the enterprise application archive "eVBMobilExampleEAR.ear" ("https://code.sdn.sap.com/svn/MobilePrimeFacesExample/eVBMobilExampleEAR/eVBMobilExampleEAR.ear")
  3. download the needed jars (See: "https://code.sdn.sap.com/svn/MobilePrimeFacesExample/eVBMobilExample/WebContent/WEB-INF/Needed_Libraries.txt" for the download-locations!)
  4. place the jars into the zipped "eVBMobilExampleEAR.ear"-file below the lib folder
  5. download the library-sda-file "MyFaces20x.sda" (See: "https://code.sdn.sap.com/svn/MobilePrimeFacesExample/MyFaces20x/MyFaces20x.sda")
  6. download the needed jars (See: "https://code.sdn.sap.com/svn/MobilePrimeFacesExample/MyFaces20x/Needed_Libraries.txt" for the download-locations!)
  7. place the jars into the zipped "MyFaces20x.sda"-file below the lib folder (Notice: The names of the jars must exactly match the names listed in the provider.xml)
  8. deploy the standard library file "MyFaces20x.sda" and then the enterprise archive "eVBMobilExampleEAR.ear"


Call the web-application using the URL: "http://{Your Hostname:Port}/eVBMobilExample".


If everything works fine, you should see the application's dialog pages.




Hi All,


Just go to Window -> Preferences and check 'Show Heap Status'.




Then press the garbage collector button. Here are the before and after results.


Voila! ! !
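Incidentally, the numbers the heap status widget shows can also be read from code; a quick sketch using only the JDK (note that System.gc() is merely a hint to the VM, not a guarantee):

```java
public class HeapPeek {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long beforeUsed = rt.totalMemory() - rt.freeMemory();

        // Same effect as pressing the garbage collector button: a GC hint.
        System.gc();

        long afterUsed = rt.totalMemory() - rt.freeMemory();
        System.out.println("Used before GC: " + beforeUsed / 1024 + " KB");
        System.out.println("Used after GC:  " + afterUsed / 1024 + " KB");
    }
}
```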


Thanks Nick.

Tobias Hofmann

Basic Java security

Posted by Tobias Hofmann Jul 8, 2013

Those who have ever had to work with basic JEE requirements, like web service security and restricting access to business functions in EJB or WDJ applications, know that this isn't easy. It is easy to create the business logic and expose it as a web service. There are wizards in NWDS that guide you. But how do you apply security?


There is no step in the web service creation wizard to guide you in adding a minimum authorization level. There is no wizard that lets you define the UME actions needed to run a function. Finding out how to do this and putting the several development and configuration steps together is up to you, the developer. This also explains why most Java applications you find in a SAP system are not secured. Try it out: take a custom WDJ or EJB web service and see if you can call it as an anonymous user. With some luck, at least the WDJ application requests an authenticated user. But only when the developer explicitly added this at design time. The flag is deactivated by default when creating a new application:


The only thing a user needs to call the application is the URL. Calling an EJB from inside your Java code is not complicated and allows you to circumvent the web service part. How many developers or code reviewers actually check that the interface performs a permission check?
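To make the point concrete, here is a hypothetical sketch in plain Java (deliberately not the UME or EJB API) of the kind of explicit check a business method can perform itself, so that a direct local call cannot bypass the security applied at the URL or web service layer:

```java
import java.util.Set;

public class SecuredOrderService {
    // Hypothetical role store; in a real application the roles would come
    // from the container (e.g. SessionContext.isCallerInRole) or from UME.
    private final Set<String> callerRoles;

    public SecuredOrderService(Set<String> callerRoles) {
        this.callerRoles = callerRoles;
    }

    public String readOrder(String orderId) {
        // The check lives inside the business method, so every caller
        // (web service, WDJ, or direct EJB call) goes through it.
        if (!callerRoles.contains("order.read")) {
            throw new SecurityException("Caller lacks permission order.read");
        }
        return "order data for " + orderId;
    }
}
```

The role name "order.read" is made up; the pattern, not the name, is the point.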


SAP has quite a reputation when it comes to security; as a matter of fact, SAP employees can be proud of it in making software enterprise ready. This reputation does not only come from seeing "not authorized to run transaction xyz" all the time, but from the fact that you can make SAP pretty secure. Now, why is SAP not supporting developers a little bit more in producing secure Java software?


If a developer gets the task to add web service security, this means: searching for the information on SAP Help or SCN and finding a not-so-easy-to-understand example. It is not easy to find permission and security best practices on SAP Help for WDJ or EJB. And the developer has to leave the IDE. As this costs time, it is outsourced to Basis ("add authentication on web service level"), IF it is taken into consideration at all. Letting the portal handle this is not uncommon, ignoring the fact that the direct URL to call the WDJ application is still accessible. As for using UME permissions to control who may call a business function, this means implementing the action in a new DC, adding the action to a role, adding the permission check in your code, etc.


This is where wizards that guide developers to include security features directly into the application would be more than helpful. As the definitions included during design time are only one part of it, tools that ensure easy configuration of these features during runtime would be even better (DevOps). Instead of letting the developer deploy a secure application that no one can use immediately because of the added security, an automated configuration needs to be executed at deploy time.


Until then, just because you need to log on to the SAP Portal to run a WDJ/EJB application does not mean the application is secured.

Building Extensible Composite Applications with SAP - Book Cover

A couple of years back, when I started blogging on SCN, I wrote a series of articles about how to develop extensible Java applications. Inspired by the Business Add-In (BAdI) framework, I even came up with a concept to provide extension points for any application running on a Java EE application server such as the SAP Composition Environment (CE).


Shortly after publishing those articles I was contacted by SAP Press asking me to write a whole book about it and so I did. You can read the whole story here.


Well, a few weeks back the book was put out of print... the good news: You can now download the PDF version for free!


  • [Download] You can now download the PDF version of this book free of charge! (4.4 MB)
  • [Source] Here's the source code (8.7 MB)

Note: While the book focuses on how to develop composites based on the SAP Composition Environment, the underlying concepts purely leverage standard features of Java EE and as such are definitely applicable to other application servers (with minor modifications).
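The extension-point idea itself needs nothing SAP-specific; on any Java runtime it can be sketched with the JDK's ServiceLoader (a minimal illustration under my own naming, not the book's framework):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class ExtensionPointDemo {
    // The extension point: comparable to a BAdI definition.
    public interface InvoiceHook {
        String beforeSave(String invoice);
    }

    // Collect all implementations registered via META-INF/services and let
    // each transform the value; with none registered, the core logic runs
    // unmodified.
    static String runHooks(String invoice) {
        List<InvoiceHook> hooks = new ArrayList<>();
        ServiceLoader.load(InvoiceHook.class).forEach(hooks::add);
        for (InvoiceHook hook : hooks) {
            invoice = hook.beforeSave(invoice);
        }
        return invoice;
    }

    public static void main(String[] args) {
        // No registrations here, so the input simply passes through.
        System.out.println(runHooks("INV-1")); // INV-1
    }
}
```

An extension ships as a jar containing an implementation plus a META-INF/services entry; the core application never has to be touched, which is exactly the BAdI-style decoupling.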


Matter of fact I have been thinking about porting the whole extensibility framework to the 2.x branch of SAP HANA Cloud - stay tuned! 

