
Java Development


I am trying to connect to HANA via Hibernate in Java, using the reverse engineering method.

First, there is no HANA driver among the existing drivers, so I use the generic JDBC driver and set the properties for the HANA database, but an exception is thrown:

com.sap.db.jdbc.exceptions.jdbc40.SQLInvalidAuthorizationSpecException: [10]: authentication failed

    at com.sap.db.jdbc.exceptions.jdbc40.SQLInvalidAuthorizationSpecException.createException(SQLInvalidAuthorizationSpecException.java:40)

    at com.sap.db.jdbc.exceptions.SQLExceptionSapDB.createException(SQLExceptionSapDB.java:301)

    at com.sap.db.jdbc.exceptions.SQLExceptionSapDB.generateDatabaseException(SQLExceptionSapDB.java:185)

    at com.sap.db.jdbc.packet.ReplyPacket.buildExceptionChain(ReplyPacket.java:102)

    at com.sap.db.jdbc.ConnectionSapDB.execute(ConnectionSapDB.java:1030)

    at com.sap.db.jdbc.ConnectionSapDB.execute(ConnectionSapDB.java:820)

    at com.sap.db.util.security.AbstractAuthenticationManager.connect(AbstractAuthenticationManager.java:43)

    at com.sap.db.jdbc.ConnectionSapDB.openSession(ConnectionSapDB.java:569)

    at com.sap.db.jdbc.ConnectionSapDB.doConnect(ConnectionSapDB.java:422)

    at com.sap.db.jdbc.ConnectionSapDB.<init>(ConnectionSapDB.java:174)

    at com.sap.db.jdbc.ConnectionSapDBFinalize.<init>(ConnectionSapDBFinalize.java:13)

    at com.sap.db.jdbc.Driver.connect(Driver.java:307)

    at org.eclipse.datatools.connectivity.drivers.jdbc.JDBCConnection.createConnection(JDBCConnection.java:328)

    at org.eclipse.datatools.connectivity.DriverConnectionBase.internalCreateConnection(DriverConnectionBase.java:105)

    at org.eclipse.datatools.connectivity.DriverConnectionBase.open(DriverConnectionBase.java:54)

    at org.eclipse.datatools.connectivity.drivers.jdbc.JDBCConnection.open(JDBCConnection.java:96)

    at org.eclipse.datatools.connectivity.drivers.jdbc.JDBCConnectionFactory.createConnection(JDBCConnectionFactory.java:53)

    at org.eclipse.datatools.connectivity.internal.ConnectionFactoryProvider.createConnection(ConnectionFactoryProvider.java:83)

    at org.eclipse.datatools.connectivity.internal.ConnectionProfile.createConnection(ConnectionProfile.java:359)

    at org.eclipse.datatools.connectivity.ui.PingJob.createTestConnection(PingJob.java:76)

    at org.eclipse.datatools.connectivity.ui.PingJob.run(PingJob.java:59)

    at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)



Any help is appreciated.

Suppose I have a bean named HelloWorld which has a member attribute that points to another bean, User.



With the annotation @Autowired, as soon as getBean is called at runtime, the returned HelloWorld instance automatically has its user attribute injected with a User instance.


How is this behavior implemented by Spring framework?


1. In the Spring container implementation's refresh method, all singleton beans are initialized by default.


When the HelloWorld bean is initialized:


Since it has the following source code:

@Autowired
private User user;

At runtime, this annotation is available in metadata via reflection. In the metadata structure below, targetClass points to the HelloWorld bean, and injectedElements points to the User class to be injected.



2. In doResolveDependency, the definition for the User bean is searched based on this.beanDefinitionNames ( a list in DefaultListableBeanFactory ):


Once found, the result is added to the array candidateNames:



Then the constructor of the User bean class is called ( still triggered by the getBean call ) and the user instance is created:



The created user instance, together with its name "user", is inserted into the map matchingBeans.



3. Finally the user reference is set to the user attribute of the HelloWorld instance via reflection. Here the variable bean in line 569 points to the HelloWorld instance, and value points to the user instance.


Once field.set(bean, value) is done, we can observe in the debugger that the user attribute of the HelloWorld instance has been injected successfully.
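The injection step described above can be imitated with plain reflection. The sketch below is a simplified re-implementation for illustration only, not Spring's actual source: MyAutowired is a hypothetical stand-in for @Autowired, and the container logic is reduced to a single loop ending in the same field.set(bean, value) call.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;

public class InjectionSketch {

    // hypothetical stand-in for Spring's @Autowired
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface MyAutowired {
    }

    public static class User {
    }

    public static class HelloWorld {
        @MyAutowired
        private User user;

        public User getUser() {
            return user;
        }
    }

    // roughly what the container does after constructing the bean:
    // find annotated fields, create a candidate, then field.set(bean, value)
    public static <T> T createAndInject(Class<T> beanClass) {
        try {
            Constructor<T> ctor = beanClass.getDeclaredConstructor();
            ctor.setAccessible(true);
            T bean = ctor.newInstance();
            for (Field field : beanClass.getDeclaredFields()) {
                if (field.isAnnotationPresent(MyAutowired.class)) {
                    Constructor<?> depCtor = field.getType().getDeclaredConstructor();
                    depCtor.setAccessible(true);
                    field.setAccessible(true);
                    field.set(bean, depCtor.newInstance()); // the step seen in the debugger
                }
            }
            return bean;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        HelloWorld hw = createAndInject(HelloWorld.class);
        System.out.println("user injected: " + (hw.getUser() != null));
    }
}
```

Real Spring additionally resolves the candidate from the bean factory instead of blindly constructing it, but the final reflective write is the same.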


In the Spring configuration XML file, we can define a package for the tag component-scan, which tells the Spring framework to search all classes within this specified package for classes annotated with @Named or @Component.
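For reference, the tag looks like this in the configuration file ( the package name here is just an example, and the context namespace must be declared on the beans root element ):

```xml
<context:component-scan base-package="com.sap.demo"/>
```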


I was very curious about how the Spring framework achieves this scan, so I did some debugging to figure it out.


In the blog How to find the exact location where bean configuration file is parsed in Spring framework I already found the location where the XML configuration file is parsed by the Spring framework, so I can directly set a breakpoint in that source code.

Here the package to be scanned is parsed from xml file:


And the actual scan is performed in line 87:


Here all classes within the specified package and its child packages are extracted as resources. Now I have 7 resources as candidates for the scan, which makes sense since I have 7 classes in total in the package:



The evaluation to check whether the class has a qualified annotation is done in this method:


If the scanned class has at least one annotation ( the annotations written on the class are stored in metadataReader ) which resides in this.includeFilters, then it is considered a candidate.


By inspecting the content of this.includeFilters, we can see that the Spring framework considers @Component and @Named as qualified annotations for the automatic component scan logic.



Back to my example: since my bean class is annotated with @Named,


At runtime, this annotation written in the class source code is extracted via reflection and checked against the Spring framework's pre-defined annotation set. Here is how my bean class is evaluated as a candidate, since it has the @Named annotation.
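The candidate check can be reproduced with plain reflection. Below is a simplified sketch, not Spring's actual scanner implementation: MyComponent is a hypothetical stand-in for @Component/@Named, and the "include filter" is just a set of annotation classes.

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ScanSketch {

    // hypothetical stand-in for @Component / @Named
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface MyComponent {
    }

    @MyComponent
    public static class MyBean {
    }

    public static class PlainClass {
    }

    // the equivalent of this.includeFilters
    static final Set<Class<?>> INCLUDE_FILTERS =
            new HashSet<>(Arrays.asList(MyComponent.class));

    // a class is a candidate if at least one of its class-level
    // annotations is found in the include filters
    public static boolean isCandidate(Class<?> scannedClass) {
        for (Annotation a : scannedClass.getAnnotations()) {
            if (INCLUDE_FILTERS.contains(a.annotationType())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isCandidate(MyBean.class));     // true
        System.out.println(isCandidate(PlainClass.class)); // false
    }
}
```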



We can define a bean configuration in XML and then get the instantiated bean instance with the help of various containers, for example ClassPathXmlApplicationContext, as displayed below:


The content of Beans.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- http://stackoverflow.com/questions/18802982/no-declaration-can-be-found-for-element-contextannotation-config -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="helloWorld" class="main.java.com.sap.HelloWorld">
        <property name="message" value="sss"/>
        <property name="testMin" value="2"/>
        <property name="phone" value="1"/>
    </bean>
</beans>


Where can we set a breakpoint to start? No hint. Here is a tip: we can make Beans.xml invalid by deliberately changing the tag bean to beana, and relaunch the application. Now an exception is raised as expected. Click the hyperlink XmlBeanDefinitionReader.java:399,


Line 399, where the exception is raised, will be located automatically. The core logic that loads the XML file is just above the position where the exception is raised: line 391. So we can set a breakpoint on line 391 now:


Change the tag from beana back to bean, and start the application in debug mode. The code below is the core logic of bean configuration file parsing in the Spring framework. The logic consists of two main steps:


1. parse the XML into a DOM structure in memory ( line 391 )

2. extract the bean information contained in the DOM structure and generate the BeanDefinition structure ( line 392 )
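These two steps can be imitated with the JDK's own DOM parser. The sketch below is a simplified illustration, not Spring's actual code ( the XML string is a trimmed-down Beans.xml, and real Spring also validates against the schema first ):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class BeanXmlParseSketch {

    // returns the class attribute of the bean element with the given id
    public static String beanClass(String xml, String id) {
        try {
            // step 1: parse the XML into a DOM structure in memory
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

            // step 2: extract bean information from the DOM structure
            NodeList beans = doc.getElementsByTagName("bean");
            for (int i = 0; i < beans.getLength(); i++) {
                Element bean = (Element) beans.item(i);
                if (id.equals(bean.getAttribute("id"))) {
                    return bean.getAttribute("class");
                }
            }
            return null;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<beans>"
                + "<bean id=\"helloWorld\" class=\"main.java.com.sap.HelloWorld\"/>"
                + "</beans>";
        System.out.println(beanClass(xml, "helloWorld"));
    }
}
```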


From the screenshot below we can find out that the XML is parsed via a SAX parser:


My "helloWorld" bean is parsed here:



Consider the following example:


package thread;

public class ThreadVerify {
  public static boolean stop = false;

  public static void main(String args[]) throws InterruptedException {
      Thread testThread = new Thread() {
            public void run() {
                int i = 1;
                while (!stop) {
                    i++;
                    //System.out.println("in thread: " + Thread.currentThread() + " i: " + i);
                }
                System.out.println("Thread stop i=" + i);
            }
      };
      testThread.start();
      Thread.sleep(1000);
      stop = true;
      System.out.println("now, in main thread stop is: " + stop);
  }
}

The working thread is started to increment i, and after one second the flag is set to true in the main thread. It is expected that we should see the printout from the working thread: "Thread stop i=". Unfortunately this is NOT the case.


Through process explorer we can find the working thread is still running:


The only way we can terminate it is to click this button in Eclipse:


The reason: every thread in Java has its own thread-local stack at runtime. Every time a thread tries to access the content of a variable, it first locates the variable content in main memory, then loads this content from main memory into its local stack. Once this load is done, the relationship between the thread-local stack and main memory is cut.


Later, when the thread modifies the variable, the change is made directly on the thread-local stack, and at some point ( scheduled by the JVM; the developer has no control over this timeslot ) the change is flushed from the thread-local stack back to main memory. Back to our example: the main thread changes the flag to true one second later ( this is TOO late! ), but when the working thread reads the flag from its own local stack, the flag is still false ( which makes sense, since it was copied from main memory before the main thread's write ), so the working thread can never end. See the following picture for details.


The solution: add the keyword volatile to the flag variable, to force every read access to it in the working thread to go to main memory. Then, after the flag is changed to true in the main thread, the working thread detects this latest change, since it now reads the data from main memory.


After this keyword is added we can get expected output:
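A minimal runnable sketch of the fix ( the sleep and join timeout values are arbitrary ): with volatile on the flag, the worker reliably sees the write from the main thread and terminates.

```java
public class VolatileStopDemo {
    // volatile forces every read of stop to go through main memory
    public static volatile boolean stop = false;

    // starts a worker that spins until stop becomes true;
    // returns true if the worker terminated after the flag was set
    public static boolean runDemo() {
        stop = false;
        Thread worker = new Thread(() -> {
            int i = 1;
            while (!stop) {
                i++;
            }
            System.out.println("Thread stop i=" + i);
        });
        worker.start();
        try {
            Thread.sleep(100);   // let the worker spin for a while
            stop = true;         // immediately visible because stop is volatile
            worker.join(5000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !worker.isAlive();
    }

    public static void main(String[] args) {
        System.out.println("worker terminated: " + runDemo());
    }
}
```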


The definition of AOP in Wikipedia seems a little bit difficult for beginners to understand, so in this blog I use an example to introduce why we need it.

Suppose I have an order command class which performs its core business logic in method doBusiness:

package aop;

import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderCommand {
 public void execute() {
  Logger logger = Logger.getLogger(OrderCommand.class.getName());
  logger.log(Level.INFO, "start processing");
  // authorization check
  logger.log(Level.INFO, "authorization check");
  logger.log(Level.INFO, "begin performance trace");
  // only this line implements real business logic
  doBusiness();
  logger.log(Level.INFO, "end performance trace");
 }

 private void doBusiness() {
  System.out.println("Do business here");
 }

 public static void main(String[] args) {
  new OrderCommand().execute();
 }
}

In the method execute(), the code is flooded with non-functional concerns like logging, authorization check and performance trace.


This is not a good design; we can try to improve it via the template method pattern.


Template method pattern


With this pattern, I create a new parent class BaseCommand, and put all non-functional code inside the execute method.


package aop;

import java.util.logging.Level;
import java.util.logging.Logger;

public abstract class BaseCommand {
 public void execute() {
  Logger logger = Logger.getLogger(this.getClass().getName());
  logger.log(Level.INFO, "start processing");
  // authorization check
  logger.log(Level.INFO, "authorization check");
  logger.log(Level.INFO, "begin performance trace");
  // only this line implements real business logic
  doBusiness();
  logger.log(Level.INFO, "end performance trace");
 }

 protected abstract void doBusiness();
}

Now the real business logic is defined in the child class OrderCommand, whose implementation is very clean:


public class OrderCommand extends BaseCommand {
 public static void main(String[] args) {
  new OrderCommand().execute();
 }

 protected void doBusiness() {
  System.out.println("Do business here");
 }
}

Drawback of this solution: since the parent class has defined the template method execute, it is NOT possible for a child class to adapt it. For example, a child class cannot change the order of the authorization check and the performance trace, and if a child class does not want the authorization check at all, this cannot be achieved with this solution. We have to use the decorator pattern instead.


Decorator pattern


First I need to create an interface:


public interface Command {
 public void execute();
}

And create a decorator to cover the logging and authorization check functions:


package aop;

import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggerDecorator implements Command {
 private Command cmd;

 public LoggerDecorator(Command cmd) {
  this.cmd = cmd;
 }

 public void execute() {
  Logger logger = Logger.getLogger(this.getClass().getName());
  logger.log(Level.INFO, "start processing");
  // authorization check
  logger.log(Level.INFO, "authorization check");
  cmd.execute();
 }
}

And a second decorator for performance trace:


package aop;

import java.util.logging.Level;
import java.util.logging.Logger;

public class PerformanceTraceDecorator implements Command {
 private Command cmd;

 public PerformanceTraceDecorator(Command cmd) {
  this.cmd = cmd;
 }

 public void execute() {
  Logger logger = Logger.getLogger(this.getClass().getName());
  logger.log(Level.INFO, "begin performance trace");
  cmd.execute();
  logger.log(Level.INFO, "end performance trace");
 }
}

And the class implementing the real business logic. Now I have full flexibility to construct the instance according to the real business case, with the help of the different decorators. The following instance fullCmd has both the authorization check log and the performance trace.


public class OrderCommand implements Command {
 public static void main(String[] args) {
  Command fullCmd = new LoggerDecorator( new PerformanceTraceDecorator( new OrderCommand()));
  fullCmd.execute();
 }

 public void execute() {
  System.out.println("Do business here");
 }
}

Suppose in a given scenario only the performance trace is needed; we can then use just the performance trace decorator:


Command cmd = new PerformanceTraceDecorator( new OrderCommand());

Drawback of the decorator pattern: the decorator classes and the business class have to implement the same interface, Command, which is business related. Is there a way for the utility classes holding the non-functional code to work without implementing the same interface as the business class?

AOP solution

I use a Java project implemented with the Spring framework to demonstrate the idea. The whole source code of this project can be found in the git repository.


Suppose I hope to add performance trace on this business method: save.


1. You may have already noticed the annotation @Log(nameI042416="annotation for save method") used in line 10.


This annotation is declared in file Log.java:


2. Now I have to declare an Aspect class which contains a pointcut. A pointcut tells the Spring framework which methods AOP strategies can be applied to, and the class annotated with @Aspect contains the methods the Spring framework should call to "decorate" the methods identified by the annotation. Below I have declared a pointcut "logJerry" via the annotation @Pointcut:


For example, since we have annotated the business method save() with @Log(nameI042416="annotation for save method"), we can define what logic must be executed around it with the help of @Before and @After plus the declared pointcut.


With this approach, I can add performance trace function to save method without modifying its source code.

Set breakpoints on the beforeExec and afterExec methods, launch the project with Tomcat in debug mode, and paste the following url into your browser:



Through the callstack you can understand how the AOP call works in Spring.
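Under the hood, Spring AOP wraps the target bean in a proxy. The sketch below shows the principle with the JDK's own java.lang.reflect.Proxy; it is a simplified, self-contained illustration ( it redefines the Command interface from the decorator example locally ), not Spring's actual proxy code.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class AopProxySketch {

    public interface Command {
        void execute();
    }

    public static class OrderCommand implements Command {
        public void execute() {
            System.out.println("Do business here");
        }
    }

    // wraps any Command in a proxy that runs "advice" around every call
    public static Command withTrace(Command target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("begin performance trace");   // the "before" advice
            Object result = method.invoke(target, args);     // the actual business method
            System.out.println("end performance trace");     // the "after" advice
            return result;
        };
        return (Command) Proxy.newProxyInstance(
                Command.class.getClassLoader(), new Class<?>[]{Command.class}, handler);
    }

    public static void main(String[] args) {
        withTrace(new OrderCommand()).execute();
    }
}
```

Unlike the decorator solution, the advice code here knows nothing about the Command interface's business meaning; it intercepts any method call, which is the essence of what the @Before/@After advice does.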



Why do we say AOP can increase modularity by allowing the separation of cross-cutting concerns?

Suppose we have lots of methods, all of which need common utilities like logging, performance trace and authorization check. Before we use AOP, these utilities are scattered across every method:


After AOP is used, these common parts are extracted into an Aspect class and reusability is achieved. From the picture below we can see the cross-cutting concerns are now separated.


In this tutorial, Java development experts share a way to compare two business entities using JPA and reflection. Read on to discover how they do it.

Use case:

Let's say you want to compare two objects, which of course must be of the same type, since comparing objects of different types makes no sense. If the object contains only a few attributes, we can compare them one by one. But what if the object contains 20, 30, 40 or a hundred attributes? In a real enterprise application we can easily have an entity with hundreds of attributes. Out of all these attributes, some are transient and some are persistent, but we have to compare only the persistent attributes.

How to do it?

Can you write a hundred if-else conditions and compare each attribute individually? That is not good programming at all; if you come up with this solution your team lead will never accept it.

The solution: Java reflection.


Let's say you have an Employee entity like the one below:

Code snippet of the entity:


import java.io.Serializable;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.Table;
import javax.persistence.Transient;

//// Named queries

@Entity
@Table(schema = "EMP", name = "T_EMPLOYEE")
@javax.persistence.SequenceGenerator(name = "EmployeeBE", allocationSize = 1, initialValue = 1, sequenceName = "SEQ_EMPLOYEE")
public class EmployeeBE implements Serializable {

    /** the serialVersionUID */
    private static final long serialVersionUID = 6058067959150204025L;

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "EmployeeBE")
    private Integer id;

    @Column(name = "FIRS_NAME")
    private String firstName;

    @Column(name = "LAST_NAME")
    private String lastName;

    @Column(name = "ADDRESS")
    private String address;

    @Column(name = "EMAIL")
    private String email;

    @Column(name = "MOBILE_NUMBER")
    private String mobileNumber;

    @Column(name = "MANAGER")
    private boolean isManager;

    // field name reconstructed; the original listing omitted it
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "MANAGER_ID")
    private EmployeeBE manager;

    @Column(name = "ROLE")
    private String role;

    @Column(name = "SALARY")
    private Double salary;

    // field name reconstructed; the original listing omitted it
    @Column(name = "MARRIED_STATUS")
    private boolean marriedStatus;

    // a transient attribute, not mapped to a column
    @Transient
    private boolean employeeType;

    /// many other fields

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public void setAddress(String address) {
        this.address = address;
    }

    public void setManager(boolean pisManager) {
        this.isManager = pisManager;
    }

    public void setRole(String role) {
        this.role = role;
    }

    public void setSalary(Double salary) {
        this.salary = salary;
    }

    public void setEmployeeType(boolean employeeType) {
        this.employeeType = employeeType;
    }

    // the matching getters (getFirstName(), isManager(), ...) are omitted here,
    // but the comparison helper relies on them
}




If you observe the above entity, I added many attributes; many of them are persistent and some are transient.

Now there is a need to check whether all the persistent attributes are equal or not.

I wrote a helper class which has the methods to compare the attributes via reflection.

import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;

import javax.persistence.Column;
import javax.persistence.Id;
import javax.persistence.Version;

public class AttributeCompareHelper {

    private AttributeCompareHelper() {
    }

    public static boolean areObjectsEqual(EmployeeBE first, EmployeeBE second, Collection<String> toBeExcludeAttributes) {
        boolean retVal = first == second;
        if (first != null && second != null && first.getClass().equals(second.getClass())) {
            final Field[] fields = getSupportedFields(first.getClass());
            for (final Field field : fields) {
                String fieldName = field.getName();
                if (field.isAnnotationPresent(Column.class) && !field.isAnnotationPresent(Id.class)
                        && !toBeExcludeAttributes.contains(fieldName)) {
                    final Object value1 = getValue(first, field);
                    final Object value2 = getValue(second, field);
                    if (!compareValues(value1, value2)) {
                        return false;
                    }
                }
            }
            retVal = true;
        }
        return retVal;
    }

    public static Object getValue(Object object, Field field) {
        final String fieldName = field.getName();
        return processGetMethod(object, fieldName);
    }

    private static Object processGetMethod(Object object, String fieldName) {
        final Class<?> objClass = object.getClass();
        try {
            final Method method = getGetterMethod(objClass, fieldName);
            if (method == null) {
                throw new NoSuchMethodException();
            }
            method.setAccessible(Boolean.TRUE);
            return method.invoke(object);
        } catch (final Exception e) {
        }
        return objClass;
    }

    public static Method getGetterMethod(Class<?> objClass, String fieldName) throws NoSuchMethodException {
        String methodName = getGetterName(fieldName);
        Method isMethodFound = findGetterMethod(objClass, methodName);
        if (isMethodFound == null) {
            methodName = getBooleanGetterMethod(fieldName);
            isMethodFound = findGetterMethod(objClass, methodName);
            if (isMethodFound == null || !(isMethodFound.getReturnType() == Boolean.class
                    || isMethodFound.getReturnType() == boolean.class)) {
                throw new NoSuchMethodException("No such method in the class " + objClass);
            }
        }
        return isMethodFound;
    }

    public static String getGetterName(String fieldName) {
        return "get" + fieldName.substring(0, 1).toUpperCase() + fieldName.substring(1);
    }

    private static Method findGetterMethod(Class<?> objClass, String methodName) throws SecurityException {
        if (objClass == null) {
            return null;
        }
        for (final Method method : objClass.getDeclaredMethods()) {
            if (isMethodMatched(method, methodName)) {
                return method;
            }
        }
        return findGetterMethod(objClass.getSuperclass(), methodName);
    }

    private static boolean isMethodMatched(Method md, String methodName, Class<?>... paramTypes) {
        if (paramTypes == null) {
            paramTypes = new Class[0];
        }
        final Class<?>[] mdParamTypes = md.getParameterTypes();
        if (mdParamTypes.length != paramTypes.length) {
            return false;
        }
        for (int i = 0; i < mdParamTypes.length; i++) {
            if (!mdParamTypes[i].equals(paramTypes[i])) {
                return false;
            }
        }
        return md.getName().equalsIgnoreCase(methodName);
    }

    public static String getBooleanGetterMethod(String fieldName) {
        return "is" + fieldName.substring(0, 1).toUpperCase() + fieldName.substring(1);
    }

    public static boolean compareValues(Object first, Object second) {
        if (first == null) {
            return second == null;
        }
        return second != null && first.equals(second);
    }

    public static Field[] getSupportedFields(Class<?> cl) {
        final Collection<Field> fieldCol = new ArrayList<Field>();
        for (; cl != null; cl = cl.getSuperclass()) {
            fieldCol.addAll(Arrays.asList(cl.getDeclaredFields()));
        }
        final Field[] fields = new Field[fieldCol.size()];
        return fieldCol.toArray(fields);
    }
}




The above class works like this.

  1. First it checks whether the @Column annotation is present on the attribute.
  2. The @Id column should not be compared; if the ids are equal the objects are the same entity anyway, so skip it.
  3. It skips attributes whose name is present in the excluded columns.
  4. It then gets the corresponding value by calling the getValue() method.
  5. getValue() in turn processes the get method for that attribute, but to execute it we need a getter method for that attribute.
  6. So it looks up the getter name based on the type, because per the Java naming convention getters for boolean attributes start with "is".
  7. If the method is not present in the current class, it checks the superclass.
  8. If it finds the getter method, it invokes it and gets the value for that attribute; the same thing is repeated on the other object for each attribute.
  9. The compareValues method checks the equality of the two getter values.
  10. If any of them are not equal, areObjectsEqual returns false; otherwise it returns true.
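To isolate the reflection idea from JPA, here is an annotation-free sketch that compares every declared field of two objects directly. It is a simplified illustration ( the Person class and its fields are made up for the example ); the real helper above additionally goes through getters and honors @Column/@Id.

```java
import java.lang.reflect.Field;

public class FieldCompareSketch {

    public static class Person {
        String firstName;
        String lastName;

        public Person(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }
    }

    // compares all declared fields of two objects of the same class
    public static boolean fieldsEqual(Object a, Object b) {
        if (a == b) {
            return true;
        }
        if (a == null || b == null || !a.getClass().equals(b.getClass())) {
            return false;
        }
        try {
            for (Field field : a.getClass().getDeclaredFields()) {
                field.setAccessible(true);
                Object v1 = field.get(a);
                Object v2 = field.get(b);
                if (v1 == null ? v2 != null : !v1.equals(v2)) {
                    return false;   // first mismatch ends the comparison
                }
            }
            return true;
        } catch (IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(fieldsEqual(new Person("John", "Doe"), new Person("John", "Doe")));   // true
        System.out.println(fieldsEqual(new Person("John", "Doe"), new Person("John", "Smith"))); // false
    }
}
```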


Now let's test this. For that I wrote a demo class as below.

import java.util.ArrayList;

public class AttributeCompareDemo {

    public static void main(String args[]) {

        EmployeeBE e1 = new EmployeeBE();
        // ... set attributes on e1 (omitted in the original listing)

        EmployeeBE e2 = new EmployeeBE();
        // ... set attributes on e2 (omitted in the original listing)

        boolean areEqual = AttributeCompareHelper.areObjectsEqual(e1, e2, new ArrayList<String>());
        System.out.println("Objects are equal: " + areEqual);
    }
}





If you run the above program, the output will look like this:


Info 1.png

If you observe the output, it's false, because the last names differ;

Info 2.png

If you observe the screenshot, the output is true, because I changed the last name.

In this way you can set many attributes and check them.

The experts of the Java development team have just shared their views about comparing two business entities using JPA and reflection. If you want to ask anything, or any point is left unexplained, please write to the experts and wait for their response.




By using the Java reflection mechanism we can compare many attributes without writing many compare statements. Not only JPA entities: as I said above, we can compare any other objects as well.

But we need to make these things customizable so that we can differentiate them.

Demo On Configuring Graylog input and get messages:

By using Graylog we can get all the information ( logging information, indexing, collecting information ).

If we are not sending any data ( application data, JSON data, etc. ) to Graylog, then we need to configure an input; this input tells Graylog to accept the log messages.


Configuring the Graylog input:


1. Launch the Graylog home page by using its URL.

2. Enter a valid Username and Password; the page then navigates to the Graylog console page.

3. The Graylog console page.

Note: When we launch the Graylog console for the first time, Graylog does not show the Histogram or Messages.

4. To configure the input in Graylog, click System -> Inputs.

Note: The first time we launch Graylog, no Global inputs or Local inputs exist.

5. Then select Syslog UDP and click the Launch New Input button.

6. Give a Title ( e.g. Demo Syslog UDP ), a Bind Address ( local IP address or remote IP address ) and Port 5140, then click the Launch button at the bottom right of the "Launch new input" pop-up.

7. Check if you have messages:

You should see the Syslog UDP input appear on the Graylog console.

8. Click the Show received messages button; you should then see the screen below.

That's it for this demo on configuring a Graylog input and getting messages.

Refer link: Installation Steps of Graylog-Part1

Place the hash password.




Make the following changes in the server.conf file:



















Start the graylog server using the following command.




Check the server startup logs; they will be useful for troubleshooting Graylog in case of any issue.







Install Graylog Web Interface:


To configure the graylog-web-interface, you must have at least one graylog-server node. Install the web interface using the command below.




Edit the configuration file and set the following parameters.




This is the list of graylog-server nodes; you can add multiple nodes, separated by commas.




Set the application secret; you can generate it using pwgen -N 1 -s 96.




Restart the graylog-web-interface using the following command:




Access Graylog Web Interface:


The web interface will listen on port 9000; configure the firewall to allow traffic on port 9000.







Point your browser to http://ip-add-ress:9000. Log in with username “admin” and the password you configured at root_password_sha2 on server.conf.


Point the browser for local access to http://localhost:9000


Point the browser for global/remote access to http://your_remote_ip:9000


We can launch the Graylog welcome page.



1. Launch Graylog using the local URL;

the Graylog page will then appear as shown below.

2. Launch Graylog using the global/remote URL;

the Graylog page will then appear as shown below.







Sign in to Graylog using the login credentials:

Username: admin

Password: initial123 ( the password whose hash we configured earlier )

If login succeeds, we can see the Graylog console page with a "Nothing Found" message.

If login fails, you will see a view like the one below:




Graylog Installation:

Modern server architectures and configurations are managed in many different ways. Some people still put new software somewhere in /opt manually on each server, while others have already jumped on the configuration management train and fully automated reproducible setups.

Graylog can be installed in many different ways, so you can pick whatever works best for you. We recommend starting with the virtual machine appliances for the fastest way to get started, and then picking one of the other, more flexible installation methods to build an easier-to-scale setup. ( Note: the virtual machine appliances are not suitable for production usage, because they are not prepared to scale beyond a certain level. )

The Graylog web interface has the following prerequisites:


  1. Some modern Linux distribution (Debian Linux, Ubuntu Linux, or CentOS recommended)
  2. Oracle Java SE 7 or later (Oracle Java SE 8 is supported, OpenJDK 7 and OpenJDK 8 also work; latest point release is recommended)




              1. MongoDB

              2. ElasticSearch

              3. Graylog

              4. Graylog Web Interface


Installation Steps:

Installing Java:

    1. Elasticsearch runs on Java, so we install OpenJDK.

       To install OpenJDK, use a command like:


       [root@localhost ~]# yum install java



To verify the Java version, use a command like:




Installing EPEL:

Configure the EPEL repository on CentOS 7 / RHEL 7:

This explains how to enable EPEL ( Extra Packages for Enterprise Linux ) on the newly released CentOS 7 / RHEL 7. EPEL is maintained by a special interest group from Fedora that creates, maintains and manages high-quality additional packages for Enterprise Linux variants, including Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL) and Oracle Enterprise Linux (OEL).

Install EPEL repository:

Install the EPEL rpm by using a command like:




Output will look like,




List the installed repos:

You can find the EPEL repo in the list.




Output will look like,




EPEL packages:




Packages list will look like,




Install the package:




Install ElasticSearch:


Elasticsearch is an open source search server; it offers realtime distributed search and analytics with a RESTful web interface. Elasticsearch stores all the logs sent by the Graylog server and returns the messages when the Graylog web interface requests them to fulfill user requests.


Import the GPG key:




Add ElasticSearch repository,




Install the ElasticSearch by using command like,




Configure Elasticsearch to start during system startup.




The only important thing is to set the cluster name to "graylog2", which is used by Graylog. Now edit the configuration file of Elasticsearch.




Disable dynamic scripts to avoid remote execution; this can be done by adding the following line at the end of the above file.




Once that is done, we are good to go. Restart the Elasticsearch service to load the modified configuration.




Wait at least a minute for Elasticsearch to fully restart, otherwise the test will fail. Elasticsearch should now listen on port 9200 for HTTP requests; we can use curl to get the response. Ensure that it returns the cluster name "graylog2".





Optional: use the following command to check the Elasticsearch cluster health; you must get a cluster status of "green" for Graylog to work.




Install MongoDB:

  MongoDB is available in RPM format and can be downloaded from the official website. Add the following repository information to the system to install MongoDB using yum.
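A sketch of /etc/yum.repos.d/mongodb-org.repo, assuming the 3.0 branch that was current at the time (check mongodb.org for the branch you want):

```
[mongodb-org-3.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/x86_64/
gpgcheck=0
enabled=1
```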




Install MongoDB by using the following command:
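The meta-package pulls in the server, shell, and tools:

```shell
sudo yum -y install mongodb-org
```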




If you use SELinux, you must install the package below to configure certain elements of the SELinux policy.
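The package providing the semanage tool on CentOS 7 / RHEL 7:

```shell
sudo yum -y install policycoreutils-python
```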




Run the following command to configure SELinux to allow MongoDB to start.
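A sketch, labeling MongoDB's default port for SELinux:

```shell
# allow MongoDB's default port 27017 under SELinux
sudo semanage port -a -t mongod_port_t -p tcp 27017
```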




Start the MongoDB service and enable it to start automatically during the system start-up.
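On systemd, both steps look like:

```shell
sudo systemctl start mongod
sudo systemctl enable mongod
```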




Install Graylog:

  Graylog-server accepts and processes log messages, and also spawns the REST API for requests that come from the graylog-web-interface. Download the latest version of Graylog from graylog.org.

  Install the Graylog repository by using the following command:
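A sketch, assuming the Graylog 1.x EL7 repository rpm that was current at the time (check graylog.org for the latest URL):

```shell
sudo rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-1.0-repository-el7_latest.rpm
```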




Install the latest graylog-server by using the following command:
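With the repository configured, the server package installs via yum:

```shell
sudo yum -y install graylog-server
```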




Edit the server.conf file.
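The configuration file lives at the path below on a package install:

```shell
sudo vi /etc/graylog/server/server.conf
```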




Configure the following variables in the above file.

Set a secret to secure the user passwords; use the following command to generate a secret of at least 64 characters.
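A sketch using pwgen to produce one 96-character random secret:

```shell
pwgen -N 1 -s 96
```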




Note: Do not forget to configure the EPEL repository on CentOS 7 / RHEL 7, as explained above.

If you get "pwgen: command not found", use the following command to install pwgen.
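pwgen comes from EPEL, which is why the repository was configured first:

```shell
sudo yum -y install pwgen
```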




Place the secret.
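In server.conf the secret goes into password_secret (the value shown is a placeholder; paste your generated string):

```
password_secret = <paste-the-generated-96-character-secret-here>
```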




Next, set a hash password for the root user (not to be confused with the system user; the root user of Graylog is admin). You will use this password to log in to the web interface. The admin password cannot be changed through the web interface; you must edit this variable to set it.
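A sketch: hash the desired admin password with sha256sum and place the result in server.conf. The literal string "password" is used here only to illustrate the command; pick your own.

```shell
echo -n password | sha256sum
# → 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8  -
```

The resulting hex digest goes into the root_password_sha2 variable in server.conf.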




continue... in link Installation Steps of Graylog-Part2


Graylog is a fully integrated open source log management platform for collecting, indexing, and analyzing both structured and unstructured data from almost any source.


If you need to analyze logs, note that there is an open source tool called Graylog which can collect, index, and analyze structured and unstructured data from various sources.

          1. Started by Lennart Koopmann in his free time in 2010 (Graylog2 at that time)

          2. TORCH GmbH founded as company behind Graylog in late 2012

          3. Big rewrite that got released as 0.20 in Feb 2014

          4. New US based company Graylog, Inc. founded in Jan 2015

          5. Renamed from Graylog2 to Graylog

          6. Graylog 1.0 release in Feb 2015

Management tools:

Configuration management tools allow us to manage our computing resources in an effective and consistent way.

They make it easy to run hundreds or thousands of machines without having to manually execute the same tasks over and over again.

By using shared modules/cookbooks it is pretty easy to end up with hundreds of managed resources like files, packages and services per node.

Nodes can be configured to check for updates and to apply new changes automatically.

This helps us to roll out changes to lots of nodes very easily but also makes it possible to quickly break our infrastructure resulting in outages.

So being able to collect, analyze, and monitor all events that happen sounds like a job for Graylog.



Levels of Log Management:

At its most basic, log management means grepping through flat files: the log data is stored on its host computer system as an ordinary "flat file", and you access and manipulate the structure of the data yourself.

Log management can be done on different levels:

Level1: Do not collect logs at all.

Level2: Collect logs. Mostly simple log files from email or HTTP servers.

Level3: Use the logs for forensics and troubleshooting. Why was that email not sent out? Why was that HTTP 500 thrown?

Level4: Save searches. The most basic case would be to save a grep command you used.

Level5: Share searches. Store that search command somewhere so co-workers can find and use it to solve similar problems.

Level6: Reporting. Easily generate reports from your logs: how many exceptions did we have this week compared with past weeks? Reports can use charts and PDFs.

Level7: Alerting. Automate some of your troubleshooting tasks. Be warned automatically instead of waiting for a user to complain.

Level8: Collect more logs. We may need more log sources for some use cases: firewall logs, router logs, even physical access logs.

Level9: Correlation. Manual analysis of all that new data may take too long. Correlate different sources.

Level10: Visual analysis, Pattern detection, interaction visualization, dynamic queries, anomaly detection, sharing and more sharing.

Then we need a central place to send your logs to; for this, Graylog (2)-server was introduced.

Then we need a central place to make use of those logs; for this, Graylog (2)-web-interface was introduced.

How to send logs:

Classic syslog via TCP/UDP


Both can also be sent via AMQP, or you can write your own input plugins.

GELF: Graylog Extended Log Format - lets you structure your logs.

Many libraries for different systems and languages available.



An example GELF payload:

{
  'short_message': 'Something went wrong',
  'facility': 'some subsystem',
  'full_message': 'stacktrace and stuff',
  'file': 'some controller.rb',
  '_user_id': 9001
}



Log messages types:

There are 2 types of log messages.

Type1: Automatically generated by a service. Usually a huge amount of structured but raw data. You have only limited control over what is logged.

Type2: Logs sent directly from within your applications. Triggered, for example, by a log.error() call or an exception catcher. Possible to send highly structured data via GELF.




As presented in the Graylog architecture above, it depends on the following components.

                   1. ElasticSearch

                   2. MongoDB

                   3. Graylog

1. ElasticSearch: ElasticSearch is useful for storing logs and searching text.

2. MongoDB: MongoDB is useful for Metadata Management.

3. Graylog: Graylog can help you better understand the usage within your applications, improve their security, and reduce costs.

Architectural Considerations:

There are a few rules of thumb when scaling resources for Graylog:

1. graylog-server nodes should have a focus on CPU power.

2. Elasticsearch nodes should have as much RAM as possible and the fastest disks you can get. Everything depends on I/O speed here.

3. MongoDB is only being used to store configuration and the dead letter messages, and can be sized fairly small.

4. graylog-web-interface nodes are mostly waiting for HTTP answers of the rest of the system and can also be rather small.

5. graylog-radio nodes act as workers. They don’t know each other and you can shut them down at any point in time without changing the cluster state at all.

Also keep in mind that messages are only stored in Elasticsearch. If you have data loss on Elasticsearch, the messages are gone - except if you have created backups of the indices.

MongoDB is only storing meta information and will be abstracted with a general database layer in future versions. This will allow you to use other databases like MySQL instead.


Minimum Setup:

This is a minimum Graylog setup that can be used for smaller, non-critical, or test setups. None of the components is redundant, but it is easy and quick to set up.



Bigger Production Setup:

This is a setup for bigger production environments. It has several graylog-server nodes behind a load balancer that share the processing load. The load balancer can ping the graylog-server nodes via REST/HTTP to check if they are alive and take dead nodes out of the cluster.




Refer Links:  1.Installation Steps of Graylog-Part1

                    2.Installation Steps of Graylog-Part2

                    3.Demo On Configuring Graylog input and get messages    

                    4.Demo on Sending Data to the Graylog by using GELF and get the Logging data on Graylog console

Sending Data to the Graylog by using GELF and get the Logging data on Graylog console:

The Graylog principle is that it gets logging information from data sent by our application layer.

1.Create the Input in the Graylog and Create the Content pack

2.Export/Download the content pack

3.Upload the Content Pack

4.Configure the GELF library for Logback library

5.Configure the logback.xml file

6.Run the application

7.Check the logging data in the Graylog Console.

1.Create the Input in the Graylog and Create the Content pack:

Configure the input in the Graylog for GELF TCP:

1.Select GELF TCP



2.Click on the "Launch new input" button and enter the required details as in the screen below,



3.Click on the “Launch” button.


Then you should see the Gelfjava (GELF TCP) input appear on the Graylog console.


2.Export/Download the content pack:


Content pack: Content packs are bundles of Graylog input, extractor, stream, dashboard, and output configurations that can provide full support for a data source. Content packs are available in the Graylog marketplace, so required content packs can be imported using the Graylog web interface.


Go to System -> select Content Packs -> click on the Create a content pack button.


The page will then navigate to the "Create a content pack" page; fill in the required fields.


Then click on the "Download my content pack" button located on the same page, i.e. the Create a content pack page. A content-pack.json file will be downloaded.



Save the downloaded "content-pack.json" file to a system drive.


Then go back to Content Packs and click on the "Import content pack" button.



3.Upload the Content Pack:


Click on the Choose File button, select the content_pack.json file from your system, and click the Upload button.



  The created content pack is then located on the same Content Packs page under a category name (here, Operating Systems).

Click on this category name (here, Operating Systems); the content pack name (here, logback-Gelf) that we created will appear. Select the radio button and click on the Apply content button.



  Then you will get a message at the top of the page like "Success! Bundle applied successfully".




4.Configure the GELF library for Logback library:


GELF / Sending from applications:

The Graylog Extended Log Format (GELF) is a log format that avoids the shortcomings of classic plain syslog and is perfect for logging from your application layer. It comes with optional compression, chunking, and, most importantly, a clearly defined structure. There are dozens of GELF libraries for many frameworks and programming languages to get you started.


Here I chose the logback-gelf library.

  Setup with our application:


Add the dependency in the Maven pom.xml file:
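A sketch of the dependency element, assuming the me.moocar logback-gelf artifact from Maven Central (the version shown is illustrative; check Maven Central for the latest release):

```xml
<dependency>
    <groupId>me.moocar</groupId>
    <artifactId>logback-gelf</artifactId>
    <version>0.12</version>
</dependency>
```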







5.Configure the logback.xml file :


Add a logback.xml file to the application.


Configurations in the logback.xml,


  1. Add the RemoteHost
  2. Add the Port Number
  3. Add the Host

  <?xml version="1.0" encoding="UTF-8"?>
  <configuration>

    <!--Use TCP instead of UDP-->
    <appender name="GELF TCP APPENDER" class="me.moocar.logback.net.SocketEncoderAppender">
      <!-- host and port of your Graylog GELF TCP input (placeholder values; adjust to your setup) -->
      <remoteHost>localhost</remoteHost>
      <port>12201</port>

      <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
        <layout class="me.moocar.logbackgelf.GelfLayout">

          <!--An example of overwriting the short message pattern-->
          <shortMessageLayout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%msg</pattern>
          </shortMessageLayout>

          <!-- Use HTML output of the full message. Yes, any layout can be used (please don't actually do this)-->
          <fullMessageLayout class="ch.qos.logback.classic.html.HTMLLayout">
          </fullMessageLayout>

          <!--Facility is not officially supported in GELF anymore, but you can use staticFields to do the same thing-->
          <staticField class="me.moocar.logbackgelf.Field">
            <key>_facility</key>
            <value>GELF</value>
          </staticField>

        </layout>
      </encoder>
    </appender>

    <root level="debug">
      <appender-ref ref="GELF TCP APPENDER" />
    </root>

  </configuration>



  6.Run the application:


Run the application.



Then go to the browser, refresh the Graylog URL, and click System -> Inputs; you should see the screen below.



  Note: In the above screen you can see a round red marker at the top center. It appears because, if a Gelfjava (GELF TCP) input already exists, the Graylog server reports that the particular connection is already in use and marks this one as a failed connection.

A Graylog administrator can then delete the failed connection.


7.Check the logging data in the Graylog Console:


Then you can click on "Show Received Messages"; afterwards you can see the collection of log messages as in the screen below.




Sometimes you will get "Nothing Found" instead of the above screen;

in that case, check that the port numbers are set correctly in the network settings of the remote host.




































Refer Links:1. Overview on the Graylog

                  2.Installation Steps of Graylog-Part1

                  3.Installation Steps of Graylog-Part2

                  4.Demo On Configuring Graylog input and get messages

You can read about the concept of deadlock on Wikipedia.

The picture below gives a common scenario which leads to deadlock.


In this blog, I will share how to detect deadlock situation using JDK standard tool jstack.


First we have to write a Java program which will lead to Deadlock:

package thread;

public class DeadLockExample {

    /*
     * Thread 1: locked resource 1
     * Thread 2: locked resource 2
     */
    public static void main(String[] args) {
        final String resource1 = "ABAP";
        final String resource2 = "Java";

        // t1 tries to lock resource1 then resource2
        Thread t1 = new Thread() {
            public void run() {
                synchronized (resource1) {
                    System.out.println("Thread 1: locked resource 1");
                    try {
                        Thread.sleep(100);
                    } catch (Exception e) {
                    }
                    synchronized (resource2) {
                        System.out.println("Thread 1: locked resource 2");
                    }
                }
            }
        };

        // t2 tries to lock resource2 then resource1
        Thread t2 = new Thread() {
            public void run() {
                synchronized (resource2) {
                    System.out.println("Thread 2: locked resource 2");
                    try {
                        Thread.sleep(100);
                    } catch (Exception e) {
                    }
                    synchronized (resource1) {
                        System.out.println("Thread 2: locked resource 1");
                    }
                }
            }
        };

        t1.start();
        t2.start();
    }
}

Execute this program and you will get the output:



Thread 1: locked resource 1


Thread 2: locked resource 2


Then use command jps -l -m to list the process id of this deadlock program. In my example it is 51476:


Just type jstack + process id, and it will display all detailed information about deadlock:
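The two commands together, as a sketch (the PID 51476 is from this example; yours will differ):

```shell
jps -l -m       # list JVM process ids; find the DeadLockExample entry
jstack 51476    # dump all thread stacks; detected deadlocks are reported at the end
```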


Here the objects 0x00000000d6f64988 and 0x00000000d6f649b8 represent the two resource Strings "ABAP" and "Java".


Based on the learning from An example of building Java project using Maven, we can now use Maven for a more practical task.

I plan to create a Hello World application based on the Spring framework. Instead of manually downloading the Spring framework jar files and configuring those jars in my Java project, I can now leverage Maven to make the whole process run automatically.


Install m2e - Maven integration plugin for Eclipse:


And then create a new Java project, and you can easily convert this project to a Maven project via context menu:


Once converted, you can then declare the dependency by clicking pom.xml and choose "Maven->Add Dependency" from context menu:


Enter group id, artifact id and version accordingly. Once done, the XML content should look as below:


<?xml version="1.0" encoding="UTF-8"?>  
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <!-- project coordinates: placeholder values -->
  <groupId>com.sap</groupId>
  <artifactId>MavenSandbox</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <dependencies>
    <!-- the Spring dependency added via "Maven->Add Dependency"; artifact and version shown as an example -->
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>4.1.6.RELEASE</version>
    </dependency>
  </dependencies>
</project>


Trigger a build via mvn clean install, and Maven will automatically download the necessary jars of Spring framework and store them to .m2 folder.


Now we can start programming in Spring.

My project has the following hierarchy:


All missing imports can now be correctly parsed and easily fixed.


If you would like to do some debugging on Spring framework source code, you can also download the related source code very easily via "Maven->Download Sources".


After that you could just set a breakpoint on the constructor of HelloWorld class and then study how Spring instantiates the instance of this class configured in beans.xml via reflection:


The source code of files used in the project




package main.java.com.sap;
public class HelloWorld {
    private String message;
    public void setMessage(String message){
       this.message = message;
    }
    public HelloWorld(){
       System.out.println("in constructor");
    }
    public void getMessage(){
       System.out.println("Your Message : " + message);
    }
}




package main.java.com.sap;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
public class MavenSandbox {
  public static void main(String[] args) {
    ApplicationContext context = new ClassPathXmlApplicationContext("Beans.xml");
    HelloWorld obj = (HelloWorld) context.getBean("helloWorld");
    obj.getMessage(); // print the injected message (assumed intent of the original example)
  }
  public String hello(){
    return "Hello world";
  }
}




package test.java.com.sap;
import static org.junit.Assert.assertEquals;
import main.java.com.sap.MavenSandbox;
import org.junit.Test;
public class MavenSandboxTest {
  @Test
  public void test() {
    MavenSandbox tool = new MavenSandbox();
    assertEquals(tool.hello(), "Hello world");
  }
}



<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">
   <bean id="helloWorld" class="main.java.com.sap.HelloWorld">
       <property name="message" value="Hello World!"/>
   </bean>
</beans>

Prerequisite: download and configure Maven on your laptop. Once done, type "mvn" in the command line and you should observe the following output from Maven:


Suppose I have a simple Java project with following package hierarchy:


The source code of MavenSandboxTest is also very simple:


package test.java.com.sap;
import static org.junit.Assert.assertEquals;
import main.java.com.sap.MavenSandbox;
import org.junit.Test;
public class MavenSandboxTest {
  @Test
  public void test() {
    MavenSandbox tool = new MavenSandbox();
    assertEquals(tool.hello(), "Hello world");
  }
}

How to build this simple Java project using Maven?


Create a pom.xml under the root folder of your project,


and paste the following source code:


<?xml version="1.0" encoding="UTF-8"?>  
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <!-- project coordinates: placeholder values -->
  <groupId>com.sap</groupId>
  <artifactId>MavenSandbox</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <dependencies>
    <!-- JUnit 4.10, as referenced later in this post -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.10</version>
    </dependency>
  </dependencies>
</project>


Create a new Configuration:


Specify the following settings and click run:


If everything goes well, you should see the Build Success message:


and there will be a new folder "target" generated automatically:


Go to the classes folder, and you can execute the compiled Java class via the command "java main.java.com.sap.MavenSandbox":


or you can also directly execute the jar file via the command below (you should first navigate back to the target folder)


since we have specified the dependency on JUnit version 4.10 in pom.xml:


so when "mvn clean install" is executed, you can observe the corresponding jar file is automatically downloaded by Maven:


Finally you could find the downloaded jar file from this folder:


