Open Source

Logstash is a lightweight component that ships logs from individual servers to a centralized server. On the centralized server, Logstash applies patterns to extract the required information and then sends it to the Elasticsearch server. Logstash can be configured on any machine.

Now let's see how we can do it on Windows.



  • Logstash: The server component of Logstash that processes incoming logs
  • Elasticsearch: Stores all of the logs
  • Kibana: Web interface for searching and visualizing logs


The screenshot below explains the capabilities of each component:





Elasticsearch and Logstash require Java 7, so it needs to be installed first.

Configuration of Logstash is shown below:



As an example on Windows, Grok is used as a filter to parse arbitrary text and structure it.
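To make this concrete, here is a minimal sketch of what a Logstash pipeline with a Grok filter can look like. The file path, log pattern and Elasticsearch settings are illustrative assumptions, not taken from the original setup (option names also vary slightly between Logstash 1.5.x and 2.x):

```
input {
  file {
    path => "C:/logs/app.log"
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  # stdout { codec => rubydebug }   # un-comment to validate the parsed events on the console
}
```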

For additional knowledge on creating patterns, you can go through these:


Grok Constructor

logstash-patterns-core/patterns at master · logstash-plugins/logstash-patterns-core · GitHub



If you expect to see the output on the console, just un-comment stdout; you can then validate whether the output is as expected.



## Install Java JRE

Using /s will do a silent installation without asking you any questions. It should be safe; I haven't had any additional browser toolbars installed afterwards.

jre-windows-x64.exe /s INSTALLDIR=c:\java\jre


## Install NSSM

Just extract the ZIP file to c:\nssm

## Logstash

### Prepare the directory structure

REM Base install dir

md c:\logstash

REM Extract Logstash to this directory

md c:\logstash\install

REM NSSM will save Logstash's stdout/stderr here

md c:\logstash\nssm

REM Let's keep Logstash's config outside the install dir for easier updates

md c:\logstash\conf.d


Component Versions used


Elasticsearch 2.1.0 and 2.3.3

Logstash 1.5.4 and 2.3.2

Kibana 4.3.1-windows (has Sense, which was useful in querying) and 4.5.1-windows (doesn't have Sense)


### Install Logstash as a Windows Service  

cd c:\nssm\win64

nssm install logstash C:\logstash\install\bin\logstash.bat

nssm set logstash AppParameters agent --config c:\logstash\conf.d

nssm set logstash AppDirectory C:\logstash\install

nssm set logstash AppEnvironmentExtra "JAVA_HOME=C:\java\jre"

nssm set logstash AppStdout c:\logstash\nssm\stdout.log

nssm set logstash AppStderr c:\logstash\nssm\stderr.log

REM Replace stdout and stderr files

nssm set logstash AppStdoutCreationDisposition 2

nssm set logstash AppStderrCreationDisposition 2

REM Disable WM_CLOSE, WM_QUIT in the Shutdown options.
REM Without it, NSSM can't stop Logstash properly.

nssm set logstash AppStopMethodSkip 6

REM Let's start Logstash.


Once the above steps are complete, the configuration is in place and the service can be started:


net start logstash

### Remove Logstash's Windows service

net stop logstash

cd c:\nssm\win64

nssm remove logstash

## Troubleshooting

### Have a look at Logstash's stderr/stdout data first

type c:\logstash\nssm\stderr.log

type c:\logstash\nssm\stdout.log


### Is Java (64bit) installed correctly?

c:\java\jre\bin\java -version
java version "1.7.0_60"
Java(TM) SE Runtime Environment (build 1.7.0_60-b19)
Java HotSpot(TM) 64-Bit Server VM (build 24.60-b09, mixed mode)


### Test reading Security event logs on Windows

input {
  eventlog {
    type    => 'Win32-EventLog'
    logfile => 'Security'
  }
}
output {
  stdout {}
}





Now let's see how we can do it in Monsoon (SAP cloud server).


In addition to ELK (versions can vary), Logstash Forwarder needs to be installed:

  • Logstash Forwarder: Installed on servers that will send their logs to Logstash; it serves as a log-forwarding agent that uses the lumberjack networking protocol to communicate with Logstash

The Logstash Forwarder will be installed on all of the servers that we want to gather logs from, which we will refer to collectively as our servers.


My area of work was on Chef cookbooks, developing a performance service in HCP for HTTP and LJS logs using ELK and Ruby.

Example in Monsoon



Screenshot of a working filter







Here is a quick demo of the steps involved on Windows (also available on YouTube):




ELK - Elasticsearch Logstash Kibana - Introduction on Windows - YouTube


Remember that you can send pretty much any type of log to Logstash, but the data becomes even more useful when it is parsed and structured with Grok.






Drools is the best-known open-source rules engine, and it supports jBPM as a business process engine. In Drools, both engines are combined into a KnowledgeBase and can trigger each other at runtime. In this blog we will see how to run them with a small example.




Before starting you need to have a JDK/JRE, Eclipse, the Drools plugin and the Drools runtime.



Create a New Project

In Eclipse, create a new Drools project and copy the following code into your main method.

public static final void main(String[] args) throws Exception {
        // load up the knowledge base
        KnowledgeBase kbase = readKnowledgeBase();
        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
}

private static KnowledgeBase readKnowledgeBase() throws Exception {
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        return kbuilder.newKnowledgeBase();
}

Now you have the basic structure of an engine. In our approval example we want to support the following use cases:

  • If price <= 100, approval process 1 will be triggered.
  • If price > 100, approval process 2 and 3 will be triggered in sequence.



The Rule Part

Before defining the DRL file, we need the following data model:

public class Approval {
    public int processId;
    public int price;

    public void setProcessId(int id) {
        this.processId = id;
    }
    public int getProcessId() {
        return this.processId;
    }
    public void setPrice(int price) {
        this.price = price;
    }
    public int getPrice() {
        return this.price;
    }
}

Then we can create a rule file approval.drl in the resources folder:

package com.sap.ngom
import com.sap.ngom.dataModel.Approval;
rule "approval1"
  no-loop true
  when
    approval : Approval( price <= 100 )
  then
    System.out.println("Process 1 is triggered.");
end
rule "approval2"
  no-loop true
  when
    approval : Approval( price > 100 )
  then
    System.out.println("Process 2 is triggered.");
    approval.setProcessId(2);
    update(approval);
end

In this rule file we defined two rules, which reflect our two use cases respectively. An Approval instance will be inserted into the ksession. Let's take rule "approval2" as an example: if the price in the Approval instance is larger than 100, we print a message to the console and set the selected processId to 2. Then we need to update the Approval instance to confirm the change.


In our main class we can add the rule file into KnowledgeBase.

public static final void main(String[] args) throws Exception {
        // load up the knowledge base
        KnowledgeBase kbase = readKnowledgeBase();
        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        Approval approval = new Approval();
        approval.setPrice(150);
        ksession.insert(approval);
        ksession.fireAllRules();
}

private static KnowledgeBase readKnowledgeBase() throws Exception {
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("rule/approval.drl"), ResourceType.DRL);
        return kbuilder.newKnowledgeBase();
}

When we run the code with the price set to 150, you can see that process 2 is triggered.


The Process Part

For the process part, we want to run processes 2 and 3 in sequence when the second process is triggered. We can create a new .bpmn file in our resources folder. Its structure is as follows:


In this diagram we have a Start Event and an End Event as the process entrance and exit, three Script Task blocks as the three processes, and diverging and converging Gateways to handle the choice of process.


For the three processes we need to define three classes, which only print a message to the console:

public class Process1 {
    public void runProcess() {
        System.out.println("Now in process 1.");
    }
}

Then in the bpmn file we let each Script Task block invoke the corresponding runProcess() method.

  • Click on the "Process_1" block.
  • Modify the action in the Properties view.


  • Copy the following code into the text editor:

import com.sap.ngom.process.Process1;

Process1 process = new Process1();
process.runProcess();


  • Click OK to save the action.
  • Make similar changes in the other two process blocks.


The Gateway is used to make a decision between two branches. In our case we need a flag variable called processId to indicate which process is chosen. Click on the blank part of the diagram and add processId to the variables.


Click on the diverging Gateway and set the Type to XOR, which indicates that only one process will be run. Then we can edit the Constraints. Pay attention that the constraint Type should be "code", otherwise the code cannot be executed properly.



Set the Type of the converging Gateway to XOR as well.


Make changes in your main class to run the process.

    public static final void main(String[] args) throws Exception {
        // load up the knowledge base
        KnowledgeBase kbase = readKnowledgeBase();
        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        Approval approval = new Approval();
        approval.setPrice(150);
        ksession.insert(approval);
        ksession.fireAllRules();
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("processId", approval.processId);
        ksession.startProcess("com.sample.bpmn.hello", params);
    }

    private static KnowledgeBase readKnowledgeBase() throws Exception {
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("process/approval.bpmn"), ResourceType.BPMN2);
        kbuilder.add(ResourceFactory.newClassPathResource("rule/approval.drl"), ResourceType.DRL);
        return kbuilder.newKnowledgeBase();
    }

In the code we can see that we first use the rule engine to determine which process should be triggered, and then pass the process ID into the process engine. The process ID indicates which process will be run.


If we set the price to 150 and run the program, we get the following result:



Troubleshooting

  • If you get an error that class bpmn could not be found, you should add jbpm-bpmn2 as a dependency.
  • If you get an error that BPMN2ProcessProviderImpl cannot be cast to BPMN2ProcessProvider, that is usually a mismatch of code versions, and we should change the Drools runtime. In Eclipse go to Windows -> Preferences -> Drools -> Installed Drools Runtimes. Click the Add button and create a new Drools runtime. Use the new runtime instead of the old one, and the issue should be solved.


Hello community, hello Ivan,


today I published in my GitHub repository a formatter class for abap2xlsx. The class offers six methods to format a range in an Excel spreadsheet. For example, with the method set_border_outline_range you can set a border around a range. You initialize the class with a zcl_excel object and, optionally, the number of the worksheet. Then you can set a border by specifying the start and end column and row, as well as the border style and color. The other five methods offer the possibility to set the font size and color, the bold style, the background color of a cell and, last but not least, a grid border. These methods are a good pattern for building your own format methods for ranges of cells easily. I use the abap2xlsx method change_cell_style, which offers the possibility to change specific attributes. I hope it is a good base for other abap2xlsx programmers.


Enjoy it.





We are excited to announce that SAP is a Partner Sponsor at Developer Week in San Francisco on February 7-12, 2015.

The week will start with the Hackathon (February 7-8, 2015), where SAP will offer participants 3 challenges:

1. Housing and Affordability in San Francisco

2. Renter's Rights in San Francisco

3. Landlords & Tenants in San Francisco

Technology to be used to solve these challenges:


More details on the challenges and technology can be found here: http://accelerate.im/technologies/12

Would you like to participate in the hackathon and try your luck in winning a prize? Then come and join us at the Accelerate SF Hackathon on Feb 7-8.

This is followed by the boot camp on Monday, February 9, where our own Alexander Graebe will give a workshop, "Build Your First App with OpenUI5". More details: http://developerweek.com/conference/conference-schedule/

And finally, the conference exhibition runs February 10-12. Don't miss the keynote by our own Thomas Grassl, "Enterprise Apps Are Not Boring As You Might Think". More details here: http://developerweek.com/conference/conference-schedule/


Stop by the SAP booth to find out more about OpenUI5 and meet the SAP Developer Relations Team!


We are looking forward to meeting you,


Inga on behalf of Developer Relations Team

Ivan Femia

ABAP URL shortener

Posted by Ivan Femia Dec 24, 2014

Nowadays, everyone knows URL shortener services, and even if you don't know it, you are using them.

Twitter automatically translates every URL into a shorter one, Google has its own service goo.gl, and there are so many others...


In my last project I had to configure NWBC and I faced a problem. Did you know that the URL for menu entries in PFCG is limited to just 132 characters?

Usually this is enough, but in my case it was not...


I had to link Cognos reports, which, believe me, are really long URLs; and on top of that, due to a bug with SAP Fiori Launchpad, I had to apply a workaround to have my theme applied, as described in note 2092412, which generates long URLs.


So I decided to use shortened URLs, but I had to face two problems:

  1. Users have no access to URL shortening services (Internet access is limited)
  2. The company's security policy does not allow using an external service for shortening


The idea


Why not use a URL shortening service from ABAP?

I checked the SAP help and I didn't find any solution. In 2010 Roel van den Berge wrote a blog about integrating a public URL shortening service with ABAP (URL Shortener Service in ABAP), but it can't be used in my scenario due to constraint number 2.


Why not create a URL shortening service in our ABAP system?


The approach


The URL shortening concept is really simple: find a short string that identifies a long URL. Usually this short URL is a 6-character string...


So, having this in mind, I mapped the alphanumeric characters using this scale:



0 → a
1 → b
...
25 → z
26 → A
...
51 → Z
52 → 0
...
61 → 9


Having a 6-character string as unique identifier and using this scale, it is like having 56,800,235,584 (62^6) unique short URLs. Is that enough?


Where does this number come from?


Each short URL can be considered a unique ID: ID 0 is the short URL aaaaaa, ID 1 is aaaaab, and so on up to ID 56,800,235,583, which is short URL 999999.
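The conversion between IDs and short URLs is plain base-62 arithmetic. Here is a quick sketch in Python (the original implementation is ABAP in ZCL_T3G_AUS_MODEL; the function names here are just for illustration):

```python
import string

# a..z -> 0..25, A..Z -> 26..51, 0..9 -> 52..61, matching the scale above
ALPHABET = string.ascii_lowercase + string.ascii_uppercase + string.digits

def encode(n, width=6):
    """Convert a numeric ID into its fixed-width base-62 short URL string."""
    chars = []
    for _ in range(width):
        chars.append(ALPHABET[n % 62])
        n //= 62
    return "".join(reversed(chars))

def decode(s):
    """Convert a short URL string back into its numeric ID."""
    n = 0
    for c in s:
        n = n * 62 + ALPHABET.index(c)
    return n

# ID 0 -> 'aaaaaa', ID 5 -> 'aaaaaf', ID 62**6 - 1 -> '999999'
```

Since 62**6 = 56,800,235,584, IDs 0 through 56,800,235,583 are representable in six characters.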


The solution


I have created a model class ZCL_T3G_AUS_MODEL that contains the conversion logic from short URL to ID and vice versa, a transparent table ZT3G_AUS_LINKS that stores the generated short URLs, and a blacklisted/reserved table ZT3G_AUS_SLINKS.

Why a blacklisted/reserved list? You don't want a short URL to turn out to be the name of your biggest competitor or an impolite word; moreover, you may want some friendly short URLs like <mycompanyname> or <myproductname> kept free for specific links.

In SICF I created a new independent service named s (this can be whatever you want)


An HTTP handler ZCL_T3G_AUS_HANDLER is associated with this service and is responsible for translating the short URL to the original URL.


Short URL example



Below is an example of URL translation from the short URL http://yukon.techedge.corp:xxxx/s/aaaaaf to my personal blog http://www.plinky.it




If you request an unknown short URL, an HTTP 404 error is returned.




Here is my Christmas present to the SCN Community.

The code is shared as open source on GitHub using SAPLink. It should also be compatible with lower releases, though some syntax may have to be adapted. Use it, and if you have any suggestions, just collaborate!


The code has been checked with DoctorZedGe and it has an A+++ score.



Before I answer the question, let me step back and set the stage for my arguments! I was invited by Leigh Jin, an associate professor at San Francisco State University, to give a guest lecture on OpenUI5 two weeks ago. I was obviously very excited and prepared some code examples and a presentation for the lecture. Here's what I did: I created all my code examples on jsbin.com, a tool/website for coding collaboratively in JavaScript. It's an amazing website, allowing you to develop a page using HTML, JS and CSS. But here are the two best features from my point of view: you can see the outcome of your code and interact with it, and you can use libraries like jQuery, Twitter's Bootstrap, AngularJS and .. OpenUI5!



Let's jump into it directly to see what it looks like. Here's one of my code examples: http://jsbin.com/howoyeqoki/1/



During my presentation, I displayed just this website and did some live coding to demonstrate how fast you can get things done with OpenUI5. I also shared the URL and let the students live-code by themselves. The outcome was fantastic! Leigh and the students liked it a lot! For me, it was kind of a big surprise: I've been in the SAP ecosystem for less than two years and I've heard a lot about how complex things can get. Leigh is experienced in conducting SAP student classes and she also knows many SAP solutions. She explained to me that this live coding approach is exactly what she was looking for! By the way, she had not even considered teaching OpenUI5 before I showed her how I teach people about it. She knows how much preparation SAP classes require: setting up the servers, getting and uploading sample data, setting up the students' PCs with the necessary development tools, .. the list goes on and on. But with this approach, you don't need any of that. You can throw away the overhead and jump into code directly, within the browser of your choice!




And finally, here's the list you were probably looking for after reading the title:

  • No need to download or install any sources, nor deal with folder structures, correct paths and so on
  • No need to download and set up any IDEs, plugins, SDKs or whatsoever - just open your pre-installed browser
  • No need to start from scratch anymore. Simply set up a greenfield template like this one: OpenUI5 Mobile Greenfield Example
  • No need to create a system for students' submissions. Share your URL and let them code. At the end, let them send their final URL over to you.
  • No need to set up any server-side components or APIs - just use existing public APIs for teaching purposes
  • No need to understand and deal with complex debugging modes of IDEs. You can use OpenUI5's "Diagnostics" popup (Ctrl + Alt + Shift + S) or simply print to the console and open the console view on jsbin
  • Simply use copy & paste to prototype your desired application using the code examples provided here: OpenUI5 SDK - Demo Kit


And now it's your turn: What do you think? Try out that approach and share your experience in the comments!


If you're looking for some code examples or just would like to see my slides, click here. And if you're interested in the SFSU class and our collaboration with them, please check out this site. Also, it's nice to see that by adding this web development class with OpenUI5, SFSU is bringing SAP's Student Recognition Award to their MBA program! Congrats!

It's been a while since I blogged about our monthly Open Source meetups in the Bay Area! We just had our fourth meetup last week, with 40 participants (out of 74 RSVPs)! As usual, we had a theme wrapped around the whole meetup, which was this time ... (surprise): Christmas! I was very excited about it and we decorated our facilities with christmassy things (see the pictures below). But that was not all! Our first speaker, Aaron Williams, talked about how to prototype with IoT, and he came up with software-controlled Christmas lights running on an Arduino, with sensors and a UI for maintenance purposes!



In the meantime, our Open Source Bay Area community grew to 280 members, with 162 participants overall across 4 meetups over the last 5 months. But that was not the only change! Based on the feedback we received from our earlier meetups, our participants really enjoy Q&A sessions to get their more detailed questions answered. In the end, participants join our meetup to gain new knowledge, and our survey results underline that content is king! Based on that, we changed the format: our speeches are now only 15 minutes long, followed by 30 minutes of Q&A. This also leaves us more time to mingle and connect with like-minded people!


As for the last meetup, we had two speakers joining the speakers panel plus one spontaneous lightning talk by Ralf Pieper:


Here are some pictures - just to let you know what you missed


I was really glad to see such amazing engagement within our community. Ralf came up with the idea for a lightning talk just before the meetup. We had been thinking about spontaneous talks a few times already, and we figured it could be very nice to give our community members the possibility to talk to the community and bring up issues or thoughts they have!


We also introduced online surveys this time, so let me share some results with you:

  • 100% said attending the meetup was worth their valuable time!
  • Overall, the meetup was rated 5.7/7 stars - which is a great result
  • 64% said that the content was key for their decision to participate
  • Here's what people liked most: Talks & selection of speaker, Networking opportunities, Engagement & discussions, Gained knowledge, Organization
  • And here's what our community is most interested in right now: Big Data and cloud security, IoT everything (e.g. Connected Cars) - but live demos!, NoSQL databases and Big Data infrastructures, Crypto-currency in the cloud, OSS business models & success factors, Configuration Management

We received very good feedback, and people would like us to keep it going - which we will obviously do! So, if you are interested, join our community and RSVP for the next meetup using the link at the bottom of this blog! Also, if you know any cool speakers who would fit perfectly into one of our meetups, please let us know!



Thanks everyone for participation and also thanks to Inga Bereza and Garick Chan for co-organizing all of our meetups

If you are an Open Source enthusiast, please join our meet up group and RSVP for the next meet up on January 21st!


Open Source Bay Area Meet up Group

Learn and practice the following open source technologies during 4 days

Get $200 discount



  • Systemd, Btrfs, and kernel crash infrastructure
  • Samba and Btrfs - A Snapshot of Progress
  • Achieve best server/storage performance with NVMe devices
  • UEFI Secure Boot
  • Full-system Rollback - Myth and Truth
  • OS Lifecycle Management from the Datacenter to the Cloud
  • Hardening and tweaking your Linux

High Availability

  • Geo redundancy, including database replication, filesystem replication, and geo cluster overlay
  • Create a highly available 2 node virtual environment using DRBD and KVM
  • Choices in designing HA clusters from a reliability, scalability, and performance perspective (e.g., such as when to use network bonding, OCFS2 versus file-system fail-over, DRBD)

OpenStack, KVM and PaaS

  • OpenStack deployments and troubleshooting
  • KVM on a grid enables dynamic management and resource allocation of virtual machines in large scale high-performance environments
  • Build Platform as a Service (PaaS) with WSO2 Middleware and EC2

Big Data (Apache Hadoop)

  • Deploy an elastic auto-scalable cluster with OpenStack to consume and process business data on demand

Ceph Storage

  • Sizing and performance of Ceph storage
  • Ceph for Cloud and Virtualization use cases, including thin provisioning to make your storage cluster go further

SAP on Linux

  • How T-Systems leverages Linux and SAP LVM capabilities within their data center
  • Optimized Linux for SAP applications
  • Automate SAP HANA System Replication
  • Manage SAP HANA Scale-Out Linux systems

Register Today

All of the above open source technical sessions are available at the annual user conference SUSECon 2014 (Nov 17-21, 2014, Orlando). Interested in attending? Request your $200 discount off the current full conference pass and meet with SAP & Open Source architects (email).

I wrote a blog last month (in July) on just how much I have been enjoying Ubuntu as a desktop machine.

I can't see myself going back - I am a total convert.

So, just on the chance that I might win some more converts to the cause, here are 7 more reasons why you might find Ubuntu to be your next OS choice.


7. Wobbly Windows

Oh, this is available for other operating systems, but having a few great window effects makes development life much more fun. Given that I am running a pretty standard Gnome desktop, I use the Compiz plugin to get this working for me. Wobbly Windows adds a little stretchy, snappy effect to desktop windows, but the best part is that Compiz comes with other effects to snap windows into different parts of your desktop. So with a couple of keystrokes I can set up a browser window on one half of the screen and an editor (like Sublime) on the right-hand side of my screen.

I can also quickly get several terminals up and snap them into the four quarters of the screen and ssh into a different server in each one. Although you can achieve the same thing with ...


6. Terminator

These things can start to be a little 'my-terminal-is-better-than-your-terminal' but after I was introduced to Terminator I rarely use the standard terminal.

The best feature of Terminator is that you can have many windows and tabs open within the app, replicating what I was doing with separate terminal sessions snapped to different corners of the screen. To take this to another level (because that is not the killer use case), you can link windows together and issue an identical command to all windows. This was invaluable on a recent project where I was managing a cluster of servers and wanted to issue the same SQL query to each of them simultaneously to determine if they were all in sync.


5. Cowsay

Again this is a pretty minor item, all things considered, but it does make logging messages that much more fun.

I have been doing a lot of work with Ansible, an open source provisioning tool. I will have more to say on this in a future blog, but for now, if you are not familiar with it, consider it a way to script your server deployments so that it is easy to deploy new servers with identical configuration.

Ansible uses cowsay to output many of its messages to the screen as the playbooks run. Given that some playbooks take time, it helps break up the monotony as the cows mooove across the screen.

Just to give you a feel for how cowsay can immediately improve your life:


4. Multiple virtual desktops

I don't know how I will be able to work on 1080x768 again. I have grown used to multiple screens, and not just multiple screens but multiple virtual screens. Ubuntu and Gnome make this a no-brainer, and you can easily have 4 virtual screens with real estate of 3840x2160. By splitting this into virtual screens of 1920x1080 I can put different types of work on different virtual screens and focus on one particular type of work at a time. I like to have a little PHP going on here, a little SAP UI5 there, and perhaps email and other messaging on another. I can set up each screen with all the resources I need for that work and then leave it until I want to come back to it. This saves on context-switch time, as it is no trouble for my machine to leave a project or two open with the virtual machines they need while I come back to them when I can.

I might restart my machine once a week if I need to, so to be able to set this up and then leave it all running is a great timesaver.

3. A Better understanding of your computer

In my last blog I mentioned the command line as one of the great benefits of Ubuntu. Now, I love great user interfaces as much as the next person. In fact, I am passionate about creating great user experiences for my clients. One of the best ways to do this is to simplify, simplify, simplify and remove all the complexity that does not affect the transaction at hand.

As a developer and as a DevOps'er you need to be familiar with what is going on with your servers. There are many great graphical programs that enable this, but by using the command line I feel like I am operating at a much closer level to the computer, and after a while the muscle memory kicks in and it becomes second nature. Also, things like $ and ^ that are part of regular expressions have identical meaning in vi (yes, vi). Knowing basic vi is also handy for when you are ssh'd into your headless server and, guess what, Sublime isn't installed but vi is. Nano probably is too, but I'd rather not talk about that.

2. Alias you, Alias me.

Whilst Jason Bourne is flying around the world with half a dozen passports, an alias or two can be a great thing to stick in your back pocket, or your .bash_aliases file.

An alias can take a long command-line sequence and reduce it to a couple of easily typed letters. For example, I have a scripted Vagrant box that used to require changing to the correct directory before starting it; with a simple short script aliased to two letters I can start that virtual machine, and with another alias I can ssh into the machine and I am away.
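As a sketch, entries like the following could live in a ~/.bash_aliases file; the paths and alias names here are invented for illustration:

```
# Bring the project's Vagrant box up from anywhere
alias vu='cd ~/projects/client-app && vagrant up'

# SSH straight into the running box
alias vs='cd ~/projects/client-app && vagrant ssh'
```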

The ~/.ssh/config file is also a winner, as you can define easy-to-remember aliases for all those servers you are managing and specify which user you want to log in as. There is also a trick of using ssh config to differentiate your GitHub accounts if you have multiple accounts for multiple clients.
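To illustrate, a ~/.ssh/config along these lines gives you short host aliases and per-client GitHub identities; the host names and key paths are made up:

```
# Easy-to-remember alias for a managed server
Host web1
    HostName web1.example.com
    User deploy

# Two GitHub accounts, distinguished by key
Host github-clientA
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_clientA

Host github-clientB
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_clientB
```

With this in place, `ssh web1` logs in as the deploy user, and `git clone git@github-clientA:org/repo.git` picks up the clientA key.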

1. Configurability

Yes, I saved the best for last. The best part of Ubuntu, or any other variant of Linux, is its configurability. If you don't like the UI, or pretty much any part of the OS, you can switch it out for another. This is one thing that MacOS and Windows don't offer to any large extent. While most of the tips and tools mentioned here can be applied on those OS's, you can't swap out your UI.


This sort of discussion about editors, terminals and OS's can get a little heated for no good reason, and I am not saying that what I run is best and there is no other. Work out what works for you. In fact, I run all these OS's (Linux, MacOS and Windows) and they all have advantages, but for my main machine, Ubuntu is where I am staying.


So why do you run what you run?

Last week, the Developer Relations team organized an Open Source meetup around success stories, failures and best practices of Open Source initiatives in bigger organizations. This time - it was our second meetup - we had 75 people attending. Our community grew from 18 to 75 within a month - that is pretty impressive! I was obviously really excited to welcome all participants and our 3 speakers: Zach Chandler from Stanford University, SAP's Tools Team (Ben Boeser, Dominik Tornow, David Farr), and Mark Hinkle from Citrix! Based on the feedback from our participants, most people really enjoyed the variety of speakers and valued the different perspectives on Open Source initiatives in bigger organizations.

If you were not able to attend, please find the slides of the talks below:


I also wanted to thank Inga Bereza and Garick Chan for their support as Co-Organizers - they did a great job!


For this second meetup, we actually changed our format based on the feedback we received from our early community members. This time we had one more speaker, shorter speaking slots and more focused talks. For the next meetup the speaking slots will get shorter again - our members like to discuss in more detail and exchange their experiences. They also want to have small "pitching slots" to talk about their projects within the community! Oh yeah - also, we will have more pizza - that was requested most.


Overall, this was an amazing experience and I was really glad to have so many people joining our community! Check out the pictures below (or the meetup description directly) to see what you missed.



Here are some impressions from our community members:

  • Rick A: "Great talk from a wide variety of speakers regarding use of open source at their workplace. Very informative, exactly what I was hoping to hear about. Pragmatic talks about open source in general, specific discussions on Drupal, Git, OpenSSL, and others."
  • Daniel K.: "It was great! Very reassuring that other big organizations are having similar experiences...and overcoming them."
  • Jack P.: "Very useful meeting, great hosts and talks."
  • Greg P.: "All the speakers were great. Big thanks to the organizers & SAP for hosting."


If you are an Open Source enthusiast, please join our meetup group and RSVP for the next meetup on September 24th!


Open Source Bay Area Meet up Group

Nigel James

Going a little Ubuntu

Posted by Nigel James Jul 14, 2014

Earlier this year I was about to take on a new client and it was very clear that I would need to upgrade my computer.


The fun part about this new client is that there was no SAP technology to be seen and it was a very open source house. Open source in the sense that it used a lot of open source technologies and open source thinking.


The fun part for me at the start of this assignment was picking out a new beast on which to practise my craft.


After looking into the various options available I went for a Dell Latitude with stacks of RAM and an SSD, and chose Ubuntu for the OS.


WOW!  I can almost hear you drop off your chairs.


I have been a Windows guy for all my career. Not that I have particularly enjoyed that. Windows can be a right pain in the neck at times, but at the end of the day it works most of the time and had everything I needed. I saw a lot of my developer colleagues heading down the shiny iMBP or iAir path, and while that looked very shiny and attractive, here are my reasons for going with Ubuntu, enjoying it and never going back to Windows again (unless I am forced to).


  1. Everything I need is available on Ubuntu.
    There is nothing that I need that is not on Ubuntu. Actually, that is not strictly true in the most pedantic sense of the word, but for everything I need to do there is an option on Ubuntu.

  2. What's good for the server is good for the desktop.
    The great thing about working with Ubuntu on the desktop is muscle memory. The servers run Ubuntu, be they web servers, database servers, monitoring servers or email servers. Not that all those processes are running on the desktop, but it does mean that when you are working on the production servers all the same commands work exactly the same way. Need to work out if your server is running out of disk space? Using the same df or du commands makes it easy to remember.

  3. Embrace your inner command line.
    I loved Windows because I could avoid the command line. Even though Windows now has PowerShell, and it is powerful, I used to avoid the DOS command line because it was a real pain in the neck. With Ubuntu, and even with the MacOS systems in my life, I love the command line. A lot of the time it is easier to type a command than use a GUI equivalent. Also, because tools like grep become part of everyday work, regular expressions become (slightly) less daunting. They just become part of your muscle memory.

  4. Do I need to mention Windows 8?
    The short answer is no. I have used Windows 8 a little on some machines where I had to, and I can't say it was a pleasant experience. It really is two user interface paradigms nailed together badly.

  5. Installing software is a snap
    I had this impression that installing software on Linux systems meant compile, make and so on, but because Ubuntu and similar Debian-based systems have critical mass, the software repositories are up to date and it is easy to sudo apt-get install <program>. Pretty much anything you need is an apt-get away.

  6. The performance is awesome
    This is perhaps down to Dell and the fact that I have all the memory and SSD that I do, but to be up and running from a cold start in 30 seconds is fantastic. My old clunky, creaking Windows machine was literally a "come back after you have made your second coffee" affair. I know I am not comparing apples to apples here, but I haven't yet made this machine creak.

  7. Virtual machines rock
    VirtualBox is the best. Teamed with Vagrant and Ansible, it makes a great combination for local servers that can be easily created, provisioned, deployed and destroyed. They make it easy to work on similar setups right across the software landscape.


Seven good reasons to leave the realm of Windows and not get dragged over to the expensive side of the force.


If you are looking to replace your machine soon take another look at Ubuntu. It is not as scary as you might think.


I was first introduced to Ubuntu by a basis consultant years ago. Now I look back and wonder why it took so long to get on board.


I would love to hear of your feedback and how SAP software can be made more Linux friendly.

FISL (International Free Software Forum) is one of the biggest events aimed to promote and adopt free software. It takes place every year in Porto Alegre, the capital of Rio Grande Do Sul, the southernmost state of Brazil and the state where the SAP Labs Latin America is located.


The event is a good place to exchange ideas and knowledge and there you find students, researchers, social movements for freedom of information, entrepreneurs, Information Technology (IT) enterprises, governments, and other interested people. It gathers discussions, speeches, personalities and novelties both national and international in the free software world.


I have been going to FISL every year since 2009 (its 10th edition at the time), and in 2010 SAP made its first partnership with the event. That is where I got to know more about SAP and had an interview for a developer job position during the event. Less than a month after that I was working at SAP.


This year SAP participated again in the event and I was able to give something back by being at FISL representing SAP.




I was there talking about our Open Source contributions (OpenUI5, Eclipse, Apache projects, etc.) and sharing my experience as an SAP employee. The results of the event were great: many people came by our stand (not only for gifts) and we had many good conversations. In the end, I think the most important thing for me is that I may have inspired others like I got inspired 4 years ago.

Besides me, many people made SAP's participation at FISL15 a success, among them: Allan Silva, Ana Pletsh, Andre Leitzke, Debora Alves, Douglas Maitelli, Edgar Prufer, Fabio Serrano, Jucieli Baschirotto, Lucas Escouto and Matias Schertel.

As a continuation of the blog SAP OData Library Contributed to Apache Olingo (Incubator) I wanted to share some further insights into the Apache Olingo Incubator project.


About two years ago SAP started to invest into a new OData Library (Java). Goals for this effort were to implement a library which supports the OData Specification Version 2, which has nearly the same feature set one can find in SAP NetWeaver Gateway and to open source the library at Apache in order to build a developer community around OData.


In mid-2013 SAP did a software grant of the library and contributed the source code to the newly formed Apache Olingo Incubator project. Shortly after, the project released version 1.0.0 in October 2013 and version 1.1.0 in February 2014. The next version, 1.2.0, is already on its way and currently available as a snapshot on Apache Olingo Incubator, where you can also find the release notes. The releases cover the OData Specification Version 2. The committers of the project work constantly on the documentation for users of the open source library and are happy to answer questions via the dev mailing list or via Jira.


In the meantime, OData is evolving into an OASIS standard, so watch out for news from the OASIS OData Technical Committee. The community work now focuses on implementing both client and server libraries for the OASIS OData Standard (Version 4). These efforts are supported by new contributions for Java (ODataClient) and JavaScript (datajs), both client libraries for consuming OData services.


Apache Olingo is evolving into a project hosting OData implementations in different languages and technologies, which is already a great success, but the community also has some more milestones to focus on:


  • Graduation, which means that the project leaves the incubator behind and becomes a top level project within the Apache Software Foundation
  • Agreement within the community for a common roadmap of V4 feature development
  • Merge the contributions into a common code base to go forward with the OData OASIS Standard (Version 4) feature development
  • Release a first version of an OData Java Library supporting V4
  • Release a first version of datajs supporting V4


Last but not least I also wanted to share some short facts around Apache Olingo (Incubator):


  • 2 releases, the third one is on its way
  • 19 initial committers
  • 7 new committers
  • 75 persons active on the mailing list
  • 1025 commits in the git repositories
  • more than 1500 mails via dev mailing list
  • more than 150 Jira Issues closed / resolved
  • about 20 tutorials available


With that I think there will be interesting times ahead of us in shaping the future of the Apache Olingo project.


We are interested to know what are your thoughts. So please share your comments, feedback with us by commenting to this post or if you already have more detailed questions or feature requests you may also use the dev mailing list for Apache Olingo directly. We, that is Christian Amend, Tamara Boehm, Michael Bolz, Jens Huesken, Stephan Klevenz, Sven Kobler-Morris and Chandan V.A. as the main initial committers, are happy to answer your questions.



Source code for the application is available on GitHub.


An application using OpenUI5 at the front-end will sooner or later need to connect to back-end services for some business logic processing. In this blog entry we'll show how we can use the popular Spring MVC framework to expose REST-like endpoints for such server-side processing. Spring MVC makes it very simple to set up and configure an interface which will handle requests with Json payload, converting all domain model objects from Json to Java and back for us.

  • Simple Maven project with embedded Tomcat for testing locally
  • Servlet 3.0, no-XML, set-up for the web application using Spring's annotation based configuration
  • JSR-303, Bean Validation through annotations, used on the model POJOs
  • Spring MVC set up with a web jar for OpenUI5 runtime and automatic serialization of the model to Json
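Before diving into the Spring set-up, it may help to see how much boilerplate the framework removes. The following is a deliberately bare, self-contained sketch of a Json endpoint using only the JDK's built-in HttpServer (the class name BareJsonEndpoint and the hard-coded payload are ours, purely for illustration); everything done by hand here is roughly what Spring MVC's annotations and message converters handle for us:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class BareJsonEndpoint {
    public static void main(String[] args) throws Exception {
        // a throwaway server on a random free port
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        // hand-rolled routing and serialization -- roughly what @RequestMapping and the
        // Jackson message converter automate for us in Spring MVC
        server.createContext("/home", exchange -> {
            byte[] body = "{\"listOfFruit\":[{\"name\":\"apple\",\"quantity\":1}]}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        // quick self-check: call the endpoint and print the Json payload
        URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/home");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String json = new String(conn.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
        System.out.println(json);
        server.stop(0);
    }
}
```

With Spring MVC, the routing, content negotiation and Json conversion in this sketch collapse into a few annotations, as the rest of this post shows.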

Useful links

Here are some useful links.




This is a very simple single-page application which has a table of fruit, each having a name (String) and a quantity (integer). One can add a new fruit, delete an existing entry from the table or update an existing fruit using an inline-edit.




Taking just the "add" operation as an example, we can see that the home view, home.view.js, calls the controller with a JavaScript object constructed so as to represent a Fruit when it is serialized as part of the Ajax request by the controller.



// add button
// (oController is the view's controller reference)
var oButton = new sap.ui.commons.Button({
    text: "Add",
    press: function () {
        // check if quantity is a number
        if (oInput2.getValueState() !== sap.ui.core.ValueState.Error) {
            oController.add({
                // id attribute can be ignored
                name: oInput1.getValue(),
                quantity: oInput2.getValue()
            });
        }
    }
});

The controller, home.controller.js, then simply sends the serialized Fruit object as the content of a POST request to the appropriate endpoint (/home/add) made available by the Spring MVC controller. Once the Ajax call returns the updated model data, it is rebound to the JSONModel associated with the view.


add: function (fruit) {
    this.doAjax("/home/add", fruit).done(this.updateModelData);
},

updateModelData: function (modelData) {
    console.debug("Ajax response: ", modelData);
    var model = this.getView().getModel();
    if (model == null) {
        // create new JSON model
        this.getView().setModel(new sap.ui.model.json.JSONModel(modelData));
    } else {
        // update existing view model
        model.setData(modelData);
    }
}

In what follows we'll look in detail at how to implement a REST-like endpoint handling Json payloads using the Spring MVC framework.


Spring MVC set-up


We are using the Servlet 3.0, no-web.xml approach based on Java annotations to set up a simple Spring MVC web application. For this we need an implementation of org.springframework.web.WebApplicationInitializer where we specify the class which will be used when constructing an instance of org.springframework.web.context.support.AnnotationConfigWebApplicationContext and where we declare a dispatcher servlet. Here is our implementation, com.github.springui5.conf.WebAppInitializer.


public class WebAppInitializer implements WebApplicationInitializer {

    private static final Logger logger = LoggerFactory.getLogger(WebAppInitializer.class);

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        logger.info("Initializing web application with context configuration class {}",
                WebAppConfigurer.class.getCanonicalName());
        // create annotation based web application context
        AnnotationConfigWebApplicationContext webAppContext = new AnnotationConfigWebApplicationContext();
        webAppContext.register(WebAppConfigurer.class);
        // create and register Spring MVC dispatcher servlet
        ServletRegistration.Dynamic dispatcher = servletContext.addServlet("dispatcher",
                new DispatcherServlet(webAppContext));
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }
}
The actual configuration is then given by the com.github.springui5.conf.WebAppConfigurer class.


@Configuration
@EnableWebMvc
@ComponentScan(basePackages = {"com.github.springui5.web"})
public class WebAppConfigurer extends WebMvcConfigurerAdapter {

    /** Enable default view ("index.html") mapped under "/". */
    @Override
    public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }

    /**
     * Set up the cached resource handling for the OpenUI5 runtime served from the webjar in {@code /WEB-INF/lib}
     * and local JavaScript files in {@code /resources}.
     */
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/resources/**").addResourceLocations("classpath:/resources/", "/resources/");
    }

    /** Session-scoped view-model bean for {@code home.view.js} view persisting in between successive Ajax requests. */
    @Bean
    @Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
    public HomeViewModel homeModel() {
        return new HomeViewModel();
    }
}
We use the helpful @EnableWebMvc annotation, which configures our application with some useful defaults. For example, Spring will automatically register an instance of the org.springframework.http.converter.json.MappingJackson2HttpMessageConverter message converter, which uses Jackson to serialize the model returned by the Ajax-handling methods of the controller to Json (and to deserialize incoming Json to Java).


Another interesting thing to notice is that we are using Spring's static resource handling to serve the OpenUI5 runtime JavaScript from the web JAR available on the classpath of the application. To create the web JAR, we can simply package the OpenUI5 runtime JavaScript, available for download, into a JAR and add it to the WEB-INF/lib directory of our project.


The session-scoped bean, com.github.springui5.model.HomeViewModel, is responsible for maintaining the reference to the model object corresponding to the client's view.


public class HomeViewModel {

    private HomeModel homeModel;

    /** Initializes and returns a new model. */
    public HomeModel getNewHomeModel() {
        homeModel = new HomeModel();
        return homeModel;
    }

    /** Returns the model for this view-model. */
    public HomeModel getHomeModel() {
        if (homeModel == null) {
            throw new RuntimeException("HomeModel has not been initialized yet.");
        }
        return homeModel;
    }
}
The @ComponentScan annotation specifies where to look for the controllers of the application. The single controller for the home view is com.github.springui5.web.HomeController.


@Controller
@RequestMapping(value = "/home", method = RequestMethod.POST, consumes = "application/json", produces = "application/json")
public class HomeController {

    private static final Logger logger = LoggerFactory.getLogger(HomeController.class);

    /** Session-scoped view-model bean. */
    @Autowired
    private HomeViewModel vm;

    /** Initializes the model for the view. */
    @RequestMapping
    @ResponseBody
    HomeModel handleInit() {
        return vm.getNewHomeModel().show();
    }

    /** Adds the {@linkplain com.github.springui5.domain.Fruit} parsed from the request body to the list of fruit in the model. */
    @RequestMapping("/add")
    @ResponseBody
    HomeModel handleAdd(@Valid @RequestBody Fruit fruit, BindingResult errors) {
        if (errors.hasErrors()) {
            throw new FruitValidationException(errors);
        }
        return vm.getHomeModel().add(fruit).clearError().show();
    }

    /** Deletes the {@linkplain com.github.springui5.domain.Fruit} with matching {@code id} from the list of fruit in the model. */
    @RequestMapping("/delete/{id}")
    @ResponseBody
    HomeModel handleDelete(@PathVariable long id) {
        return vm.getHomeModel().delete(id).clearError().show();
    }

    /** Updates the {@linkplain com.github.springui5.domain.Fruit} with matching {@code id} in the list of fruit in the model. */
    @RequestMapping("/update")
    @ResponseBody
    HomeModel handleUpdate(@Valid @RequestBody Fruit fruit, BindingResult errors) {
        if (errors.hasErrors()) {
            throw new FruitValidationException(errors);
        }
        return vm.getHomeModel().update(fruit).clearError().show();
    }

    /**
     * Custom exception handler for {@linkplain FruitValidationException} exceptions which produces a response with the
     * status {@linkplain HttpStatus#BAD_REQUEST} and the body string which contains the reason for the first field
     * error.
     */
    @ExceptionHandler(FruitValidationException.class)
    @ResponseStatus(HttpStatus.BAD_REQUEST)
    @ResponseBody
    HomeModel handleException(FruitValidationException ex) {
        String error = String.format("%s %s", ex.getRejectedField(), ex.getRejectedMessage());
        logger.debug("Validation error: {}", error);
        return vm.getHomeModel().storeError(error);
    }
}
We are autowiring the view-model bean into the controller. It will be reinitialized by Spring automatically for each new client of the application (a new browser session, for example). Ajax request handling is configured on the class and method levels via @RequestMapping annotations specifying the URL paths available in the form /home or /home/add. Some methods accept a model object (Fruit) deserialized, or unmarshalled, from the Json in the body of the POST request via the @RequestBody annotation.


Each controller method returns an instance of HomeModel which will be automatically serialized, or marshalled, to Json and later bound to the JSONModel on the client side.


Model and validation


The domain model used on the server is a couple of simple POJOs annotated with JSR-303 annotations (using Hibernate Validator implementation). Here is the class for com.github.springui5.model.HomeModel.


public class HomeModel implements Serializable {

    private static final Logger logger = LoggerFactory.getLogger(HomeModel.class);

    private List<Fruit> listOfFruit;
    private String error;

    public List<Fruit> getListOfFruit() {
        return listOfFruit;
    }

    public void setListOfFruit(List<Fruit> listOfFruit) {
        this.listOfFruit = listOfFruit;
    }

    public String getError() {
        return error;
    }

    public void setError(String error) {
        this.error = error;
    }

    public HomeModel() {
        listOfFruit = new ArrayList<>(Arrays.asList(new Fruit("apple", 1), new Fruit("orange", 2)));
    }

    public HomeModel add(Fruit fruit) {
        // set id, it is 0 after deserializing from Json
        fruit.setId(Fruit.newId());
        listOfFruit.add(fruit);
        return this;
    }

    public HomeModel delete(final long id) {
        CollectionUtils.filter(listOfFruit, new Predicate() {
            @Override
            public boolean evaluate(Object object) {
                return ((Fruit) object).getId() != id;
            }
        });
        return this;
    }

    public HomeModel update(final Fruit fruit) {
        // find the fruit with the same id
        Fruit oldFruit = (Fruit) CollectionUtils.find(listOfFruit, new Predicate() {
            @Override
            public boolean evaluate(Object object) {
                return ((Fruit) object).getId() == fruit.getId();
            }
        });
        // update the fruit
        if (oldFruit != null) {
            oldFruit.setName(fruit.getName());
            oldFruit.setQuantity(fruit.getQuantity());
        }
        return this;
    }

    public HomeModel storeError(String error) {
        this.error = error;
        return this;
    }

    public HomeModel clearError() {
        this.error = null;
        return this;
    }

    public HomeModel show() {
        logger.debug("Current model: {}", this);
        return this;
    }
}
And here is the com.github.springui5.domain.Fruit class.


public class Fruit implements Serializable {

    private static long offset = 0L;

    private long id;

    // JSR-303 validation constraints
    @NotNull
    private String name;

    @Min(1)
    private int quantity;

    /**
     * Returns a new value for {@code id} attribute. Uses timestamp adjusted with the static offset. Used only for
     * illustration.
     */
    public static long newId() {
        return System.currentTimeMillis() + offset++;
    }

    public Fruit() {
        // default constructor
    }

    public Fruit(String name, int quantity) {
        this.id = Fruit.newId();
        this.name = name;
        this.quantity = quantity;
    }

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getQuantity() {
        return quantity;
    }

    public void setQuantity(int quantity) {
        this.quantity = quantity;
    }

    @Override
    public boolean equals(Object obj) {
        return obj instanceof Fruit && ((Fruit) obj).getId() == id;
    }

    @Override
    public String toString() {
        return "Fruit [id: " + id + ", name: " + name + ", quantity: " + quantity + "]";
    }
}

Upon the initial request for the model data (/home) this is what the controller returns. Notice how the list of Fruit domain objects was automatically serialized to Json for us.
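The screenshot of the response is not reproduced here, but given the HomeModel and Fruit classes above, the Json returned for the initial model (two fruit, no error) would look roughly like this; the id values are illustrative timestamps:

```json
{
  "listOfFruit": [
    { "id": 1402568877000, "name": "apple", "quantity": 1 },
    { "id": 1402568877001, "name": "orange", "quantity": 2 }
  ],
  "error": null
}
```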




If an invalid value is submitted as part of the request body (for example a quantity of 0 when adding a new fruit), it is automatically picked up by Spring and assigned to the org.springframework.validation.BindingResult parameter of the corresponding request handling method. The application then exposes the validation error message as the value of the model's "error" attribute.
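Conceptually, the @Valid/BindingResult pair spares us hand-written checks like the sketch below (plain Java; validateQuantity is a hypothetical helper mirroring a @Min(1) constraint, not part of the application):

```java
public class ValidationSketch {

    // hypothetical stand-in for a JSR-303 @Min(1) constraint on Fruit.quantity:
    // returns the error message, or null when the value is valid
    static String validateQuantity(int quantity) {
        return quantity >= 1 ? null : "quantity must be greater than or equal to 1";
    }

    public static void main(String[] args) {
        System.out.println(validateQuantity(0)); // invalid: prints the constraint message
        System.out.println(validateQuantity(3)); // valid: prints null
    }
}
```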


Testing the application


This is a standard Maven application which needs some mandatory dependencies to compile and run.


<!-- all of the necessary Spring MVC libraries will be automatically included -->
<!-- need this for Jackson Json to Java conversion -->
<!-- need this to use JSR 303 Bean validation -->

It also uses a Tomcat Maven plugin for running the project in an embedded Tomcat 7 using: mvn tomcat7:run from the command line.
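For reference, the dependency comments above generally correspond to Maven coordinates along these lines; the version numbers below are illustrative, so check the project's pom.xml on GitHub for the exact ones:

```xml
<dependencies>
    <!-- all of the necessary Spring MVC libraries will be automatically included -->
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>4.0.5.RELEASE</version>
    </dependency>
    <!-- need this for Jackson Json to Java conversion -->
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.3.3</version>
    </dependency>
    <!-- need this to use JSR 303 Bean validation -->
    <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-validator</artifactId>
        <version>5.1.1.Final</version>
    </dependency>
</dependencies>
```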




Using Spring MVC with OpenUI5 in the way we have described here has some advantages. We can easily set up a REST-like endpoint which will automatically convert Json payloads to Java domain objects, allowing us to concentrate on manipulating the model in Java without worrying about how the changes will be reflected in the JavaScript on the client side. We can also plug in annotation-based domain object validation (JSR 303) using Spring's validation mechanism. This allows us to process all business logic validation on the server side in a declarative and transparent manner, leaving only checks for formatting errors on the client side.


There are some disadvantages to this approach, however, the main one being, of course, that we are returning the entire model for each request, which results in unnecessarily large data transfers. This should not be a limitation for relatively simple views, but it can become a problem for complicated views with a lot of data.

Open source is changing the way software is being developed and consumed, and it is SAP's intention to contribute to open source and integrate open source into its product line. With the same intention, the OData JPA Processor Library headed off the open source way a few months back, and yes, it is now open source software alongside the OData Library (Java) at the Apache Software Foundation (ASF); see the Apache Olingo project for details.



The OData JPA Processor Library is a Java library for transforming Java Persistence API (JPA) models based on JPA specification into OData services. It is an extension of the OData Library (Java) to enable Java developers to convert JPA models into OData services. For more details check SAP OData Library Contributed to Apache Olingo (Incubator) which gives you an introduction on why OData and the features of the OData Library (Java).


The artifacts to get started with the OData JPA Processor Library, the documentation and the code are all available on Apache Olingo. The requirements for building an OData service based on a JPA model are quite low; for a quick start you can refer to the following tutorial. In short, you just have to create a web application project in Eclipse (both Kepler and Juno versions are supported), implement a factory to link to the JPA model and register the factory class within the web.xml file; it's that simple. The OData JPA Processor Library also supports more enhanced features like the ability to redefine the metadata of the OData services (for example, renaming entity type names and their properties) and to add additional artifacts like function imports to the OData service.
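For orientation, the web.xml registration typically looks something like the fragment below. The servlet class and init-param names follow the Olingo OData 2 documentation as we understand it, and com.example.MyJpaServiceFactory is a hypothetical class standing in for your own factory implementation, so treat this as a sketch rather than a copy-paste recipe:

```xml
<!-- sketch of the web.xml registration; adapt class names to your project -->
<servlet>
  <servlet-name>ODataServlet</servlet-name>
  <servlet-class>org.apache.cxf.jaxrs.servlet.CXFNonSpringJaxrsServlet</servlet-class>
  <init-param>
    <param-name>javax.ws.rs.Application</param-name>
    <param-value>org.apache.olingo.odata2.core.rest.app.ODataApplication</param-value>
  </init-param>
  <init-param>
    <!-- points Olingo at the factory that exposes the JPA model -->
    <param-name>org.apache.olingo.odata2.service.factory</param-name>
    <param-value>com.example.MyJpaServiceFactory</param-value>
  </init-param>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>ODataServlet</servlet-name>
  <url-pattern>/odata.svc/*</url-pattern>
</servlet-mapping>
```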


So if you are on the lookout for a reliable and easy-to-use library for transforming JPA models into OData services, you now know where to go. The libraries are out there in the open; please do explore them, extend them and let us know the new faces you give them. Use the mailing list available here not just to let us know how you have used the libraries, but also to report bugs and ask questions; the team will be glad to hear from you and to answer your queries.


With that, we can say we have arrived in the Open Source Software world! And this is just the beginning; there is more to come (or at least that is the intent), so keep an eye on the Apache Olingo project.


Related Information:

