
Learn and practice the following open source technologies over 4 days

Get a $200 discount

 

Linux

  • Systemd, Btrfs, and kernel crash infrastructure
  • Samba and Btrfs - A Snapshot of Progress
  • Achieve best server/storage performance with NVMe devices
  • UEFI Secure Boot
  • Full-system Rollback - Myth and Truth
  • OS Lifecycle Management from the Datacenter to the Cloud
  • Hardening and tweaking your Linux

High Availability

  • Geo redundancy, including database replication, filesystem replication, and geo cluster overlay
  • Create a highly available two-node virtual environment using DRBD and KVM
  • Choices in designing HA clusters from a reliability, scalability, and performance perspective (e.g., when to use network bonding, OCFS2 versus file-system failover, or DRBD)

OpenStack, KVM and PaaS

  • OpenStack deployments and troubleshooting
  • KVM on a grid enables dynamic management and resource allocation of virtual machines in large scale high-performance environments
  • Build Platform as a Service (PaaS) with WSO2 Middleware and EC2

Big Data (Apache Hadoop)

  • Deploy an elastic auto-scalable cluster with OpenStack to consume and process business data on demand

Ceph Storage

  • Sizing and performance of Ceph storage
  • Ceph for Cloud and Virtualization use cases, including thin provisioning to make your storage cluster go further

SAP on Linux

  • How T-Systems leverages Linux and SAP LVM capabilities within their data center
  • Optimized Linux for SAP applications
  • Automate SAP HANA System Replication
  • Manage SAP HANA Scale-Out Linux systems

Register Today

All of the above open source technical sessions are available at the annual user conference SUSECon 2014 (November 17-21, 2014, Orlando). Interested in attending? Request your $200 discount off the current full conference pass and meet with SAP and open source architects (email).

I wrote a blog last month (in July) on just how much I have been enjoying Ubuntu on my desktop machine.

I can't see myself going back - I am a total convert.

So, just on the chance that I might win some more converts to the cause, here are 7 more reasons why you might find Ubuntu to be your next OS choice.

 

7. Wobbly Windows

Oh, this is available for other operating systems too, but having a few great window effects makes development life much more fun. Given that I am running a pretty standard GNOME desktop, I use the Compiz plugin to get this working for me. Wobbly Windows adds a little stretchy, snappy effect to desktop windows, but the best part is that Compiz comes with other effects to snap windows into different parts of your desktop. So with a couple of keystrokes I can set up a browser window on one half of the screen and an editor (like Sublime) on the right-hand side of my screen.

I can also quickly get several terminals up, snap them into the four quarters of the screen and ssh into a different server in each one. Although you can achieve the same thing with ...

 

6. Terminator

These things can start to be a little 'my-terminal-is-better-than-your-terminal', but since I was introduced to Terminator I rarely use the standard terminal.

The best feature of Terminator is that you can have many split terminals and tabs open within the app, replicating what I was doing with separate terminal sessions snapped to different corners of the screen. To take this to another level (because that is not even the killer use case), you can group terminals together and issue the identical command to all of them. This was invaluable to me on a recent project where I was managing a cluster of servers and I wanted to issue the same SQL query to each of them simultaneously to determine if they were all in sync.

 

5. Cowsay

Again this is a pretty minor item, all things considered, but it does make logging messages that much more fun.

I have been doing a lot of work with Ansible, an open source provisioning tool. I will have more to say on this in a future blog, but for now, if you are not familiar with it, consider it a way to script your server deployments so that it is easy to deploy new servers with identical configurations.

Ansible uses cowsay to output many of its messages to the screen as the playbooks run. Given that some playbooks take time, it helps break up the monotony as the cows mooove across the screen.

Just to give you a feel for how cowsay can immediately improve your life:
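A rough rendering of what it looks like (the message here is just an example):

$ cowsay "All tasks complete. Moo."
 __________________________
< All tasks complete. Moo. >
 --------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||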

 

4. Multiple virtual desktops

I don't know how I will be able to work on 1080x768 again. I have grown used to multiple screens, and not just multiple screens but multiple virtual screens. Ubuntu and GNOME make this a no-brainer, and you can easily have 4 virtual screens with real estate of 3840x2160. By splitting this into virtual screens of 1920x1080 I can easily put different types of work on different virtual screens and focus on one particular type of work at a time. I like to have a little PHP going on in one, a little SAPUI5 in another, and perhaps email and other messaging on a third. I can set up each screen with all the resources I need for that work and then leave it until I want to come back to it. This saves on context-switch time, as it is no trouble for my machine to leave a project or two open with the virtual machines they need so I can come back to them when I can.

I might restart my machine once a week if I need to, so to be able to set this up and then leave it all running is a great timesaver.

3. A better understanding of your computer

In my last blog I mentioned the command line as one of the great benefits of Ubuntu. Now, I love great user interfaces as much as the next person. In fact, I am passionate about creating great user experiences for my clients. One of the best ways to do this is to simplify, simplify, simplify and remove all the complexity that does not affect the transaction at hand.

As a developer and as a DevOps'er you need to be familiar with what is going on with your servers. There are many great graphical programs that enable this, but by using the command line I feel like I am operating at a much closer level to the computer, and after a while of doing this the muscle memory kicks in and it becomes second nature. Also, things like $ and ^ that are part of regular expressions have identical meanings in vi (yes, vi). Knowing basic vi is also handy for when you are ssh'd into your headless server and, guess what, Sublime isn't installed but vi is. Nano probably is too, but I'd rather not talk about that.

2. Alias you, Alias me.

Whilst Jason Bourne is flying around the world with half a dozen passports, an alias or two can be a great thing to stick in your back pocket or your .bash_aliases file.

An alias can take a long command-line sequence and reduce it to a couple of easily typed letters. For example, I have a scripted Vagrant box; I used to have to change to the correct directory before starting it, but with a simple, short script that I have aliased to two letters I can start that virtual machine, and with another alias I can ssh into the machine and I am away.
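A minimal sketch of what such entries might look like in ~/.bash_aliases (the directory and alias names here are made up for illustration):

alias vu='cd ~/projects/client-vm && vagrant up'    # bring the box up
alias vs='cd ~/projects/client-vm && vagrant ssh'   # and ssh straight into it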

Also, the ~/.ssh/config file is a winner, as you can define easy-to-remember aliases for all those servers you are managing and specify which user you want to log in as. There is also a trick using ssh config to differentiate your GitHub accounts if you have multiple accounts for multiple clients.
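For example, an entry along these lines in ~/.ssh/config (the host name, address and user here are illustrative) lets you type ssh web1 instead of the full incantation:

Host web1
    HostName web1.example.com
    User deploy
    IdentityFile ~/.ssh/id_rsa_client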

1. Configurability

Yes, I saved the best for last. The best part of Ubuntu, or any other variant of Linux, is its configurability. If you don't like the UI, or pretty much any part of the OS, you can switch it out for another. This is one thing that macOS and Windows don't offer to any large extent. While most of the tips and tools mentioned here can be applied on those OSes, you can't swap out your UI.

 

This sort of discussion about editors, terminals and OSes can get a little heated for no good reason, and I am not saying that what I run is best and there is no other. Work out what works for you. In fact, I run all these OSes (Linux, macOS and Windows) and they all have advantages, but for my main machine, Ubuntu is where I am staying.

 

So why do you run what you run?

Last week, the Developer Relations team organized an Open Source meetup around success stories, failures and best practices of Open Source initiatives in bigger organizations. This time - it was our second meetup - we had 75 people attending. Our community grew from 18 to 75 within a month - that is pretty impressive! I was obviously really excited to welcome all participants and our 3 speakers: Zach Chandler from Stanford University, SAP's Tools Team (Ben Boeser, Dominik Tornow, David Farr), and Mark Hinkle from Citrix! Based on the feedback from our participants, most people really enjoyed the variety of speakers and valued the different perspectives on Open Source initiatives in bigger organizations.


If you were not able to attend, please find the slides of the talks below:

 

I also wanted to thank Inga Bereza and Garick Chan for their support as Co-Organizers - they did a great job!

 

For this second meetup, we actually changed our format based on the feedback we received from our early community members. This time we had one more speaker, shorter speaking slots and more focused talks. For the next meetup the speaking slots will get shorter again - our members like to discuss in more detail and exchange their experiences. They also want to have small "pitching slots" to talk about their projects within the community! Oh yeah - also, we will have more pizza - that was requested most.

 

Overall, this was an amazing experience and I was really glad to have so many people joining our community! Check out the pictures below (or check out the meetup description directly) to see what you missed.

[Event photos]

 

Here are some impressions from our community members:

  • Rick A: "Great talk from a wide variety of speakers regarding use of open source at their workplace. Very informative, exactly what I was hoping to hear about. Pragmatic talks about open source in general, specific discussions on Drupal, Git, OpenSSL, and others."
  • Daniel K.: "It was great! Very reassuring that other big organizations are having similar experiences...and overcoming them."
  • Jack P.: "Very useful meeting, great hosts and talks."
  • Greg P.: "All the speakers were great. Big thanks to the organizers & SAP for hosting."

 

If you are an Open Source enthusiast, please join our meetup group and RSVP for the next meetup on September 24th!

 

Open Source Bay Area Meetup Group


Going a little Ubuntu

Posted by Nigel James Jul 14, 2014

Earlier this year I was about to take on a new client and it was very clear that I would need to upgrade my computer.

 

The fun part about this new client is that there was no SAP technology to be seen, and it was a very open source house - open source in the sense that it used a lot of open source technologies and open source thinking.

 

The fun part for me at the start of this assignment was picking out a new beast on which to practise my craft.

 

After looking into the various options available I went for a Dell Latitude with stacks of RAM and an SSD, and chose Ubuntu for the OS.

 

WOW!  I can almost hear you drop off your chairs.

 

I have been a Windows guy for all my career. Not that I have particularly enjoyed that. Windows can be a right pain in the neck at times, but at the end of the day it works most of the time and had everything I needed. I saw a lot of my developer colleagues heading down the shiny iMBP or iAir path, and while that looked very shiny and attractive, here are my reasons for going with Ubuntu, enjoying it and never going back to Windows again (unless I am forced to).

 

  1. Everything I need is available on Ubuntu.
    There is nothing that I need that is not on Ubuntu. Actually, that is not quite strictly true in the most pedantic sense, but for everything I need to do there is an option on Ubuntu.


  2. What's good for the server is good for the desktop.
    The great thing about working with Ubuntu on the desktop is muscle memory. The servers run Ubuntu - web servers, database servers, monitoring servers, email servers are all running Ubuntu. Not that all those processes are running on the desktop, but it does mean that when you are working on the production servers all the same commands work exactly the same way. Need to work out if your server is running out of disk? Using the same df or du commands makes it easy to remember (see the short example after this list).

  3. Embrace your inner command line.
    I loved Windows because I could avoid the command line. Even though Windows does now have PowerShell, and it is powerful, I used to avoid getting into the DOS command line because it was really a pain in the neck. With Ubuntu, and even with the macOS systems in my life, I love the command line. A lot of the time it is easier to type a command than use a GUI equivalent. Also, because tools like grep become part of the everyday, working with regular expressions becomes (slightly) less daunting. They just become part of your muscle memory.


  4. Do I need to mention Windows 8?
    The short answer is no. I have used Windows 8 a little bit on some machines that I had to, and I can't say that it was a pleasant experience. It really is two user interface paradigms nailed together badly.

  5. Installing software is a snap.
    I had the impression that installing software on Linux systems was all compile, make, etc., but because Ubuntu and similar Debian-based systems have critical mass, the software repositories are up to date and it is easy to sudo apt-get install <program>. Pretty much anything you need is an apt-get away.

  6. The performance is awesome.
    This is perhaps down to Dell and the fact that I have all the memory and SSD that I do, but to be up and running from a cold start in 30 seconds is fantastic. My old clunky, creaking Windows machine was literally a case of come back after you have made your second coffee. I know I am not comparing apples to apples here, but I haven't yet really made this machine creak.

  7. Virtual machines rock.
    VirtualBox is the best. Teamed with Vagrant and Ansible, it makes a great combination for creating local servers that can easily be created, provisioned, deployed and destroyed. They make it easy to work on similar setups right across the software landscape.
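To illustrate point 2: the same disk-space checks work verbatim on the desktop and on the servers, for example:

df -h            # how full are the mounted filesystems?
du -sh /var/log  # how much space is a given directory using?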

 

Seven good reasons to leave the realm of Windows and not get dragged over to the expensive side of the force.

 

If you are looking to replace your machine soon take another look at Ubuntu. It is not as scary as you might think.

 

I was first introduced to Ubuntu by a Basis consultant years ago. Now I look back and wonder why it took so long to get on board.

 

I would love to hear your feedback and how SAP software can be made more Linux-friendly.

FISL (International Free Software Forum) is one of the biggest events aimed at promoting and adopting free software. It takes place every year in Porto Alegre, the capital of Rio Grande do Sul, the southernmost state of Brazil and the state where SAP Labs Latin America is located.

 

The event is a good place to exchange ideas and knowledge, and there you find students, researchers, social movements for freedom of information, entrepreneurs, Information Technology (IT) enterprises, governments, and other interested people. It gathers discussions, speeches, personalities and novelties, both national and international, from the free software world.

 

I have been going to FISL every year since 2009 (its 10th edition at that time), and in 2010 SAP made its first partnership with the event. That year I got to know SAP better and had an interview for a developer job position during the event. Less than a month after that I was working at SAP.

 

This year SAP participated again in the event, and I was able to give back by being at FISL representing SAP.

[Photo: SAP @ FISL15]

 

I was there talking about our Open Source contributions (OpenUI5, Eclipse, Apache projects, etc.) and sharing my experience as an SAP employee. The results of the event were great: many people came by our stand (not only for the gifts) and we had many good conversations. But in the end, I think the most important thing for me is that I may have inspired others, like I was inspired 4 years ago.


Besides me, many people made SAP's participation at FISL15 a success, among them: Allan Silva, Ana Pletsh, Andre Leitzke, Debora Alves, Douglas Maitelli, Edgar Prufer, Fabio Serrano, Jucieli Baschirotto, Lucas Escouto and Matias Schertel.

As a continuation of the blog SAP OData Library Contributed to Apache Olingo (Incubator) I wanted to share some further insights into the Apache Olingo Incubator project.

 

About two years ago SAP started to invest in a new OData Library (Java). The goals for this effort were to implement a library supporting the OData specification version 2, with nearly the same feature set one can find in SAP NetWeaver Gateway, and to open source the library at Apache in order to build a developer community around OData.

 

In mid-2013 SAP did a software grant of the library and contributed the source code to the newly formed Apache Olingo Incubator project. Shortly after, the project released version 1.0.0 in October 2013 and version 1.1.0 in February 2014. The next version, 1.2.0, is already on its way and currently available as a snapshot on Apache Olingo Incubator, where you can also find the release notes. The releases cover the OData specification version 2. The committers of the project work constantly on the documentation for users of the open source library and are happy to answer questions via the dev mailing list or via Jira.

 

In the meantime, OData is evolving into an OASIS standard, so you can watch out for news from the OASIS OData Technical Committee. The community work now focuses on implementing both client and server libraries for the OASIS OData standard (version 4). These efforts are supported by new contributions for Java (ODataClient) and JavaScript (datajs), both client libraries for consuming OData services.

 

Apache Olingo is evolving into a project hosting OData implementations in different languages and technologies, which is already a great success, but the community also has some more milestones to focus on:

 

  • Graduation, which means that the project leaves the incubator behind and becomes a top level project within the Apache Software Foundation
  • Agreement within the community for a common roadmap of V4 feature development
  • Merge the contributions into a common code base to go forward with the OData OASIS Standard (Version 4) feature development
  • Release a first version of an OData Java Library supporting V4
  • Release a first version of datajs supporting V4

 

Last but not least, I also wanted to share some short facts about Apache Olingo (Incubator):

 

  • 2 releases, the third one is on its way
  • 19 initial committers
  • 7 new committers
  • 75 persons active on the mailing list
  • 1025 commits in the git repositories
  • more than 1500 mails via dev mailing list
  • more than 150 Jira Issues closed / resolved
  • about 20 tutorials available

 

With that I think there will be interesting times ahead of us in shaping the future of the Apache Olingo project.

 

We are interested in your thoughts, so please share your comments and feedback with us by commenting on this post. If you already have more detailed questions or feature requests, you may also use the dev mailing list for Apache Olingo directly. We - that is Christian Amend, Tamara Boehm, Michael Bolz, Jens Huesken, Stephan Klevenz, Sven Kobler-Morris and Chandan V.A. as the main initial committers - are happy to answer your questions.

Introduction

 

Source code for the application is available on GitHub.

 

An application using OpenUI5 on the front end will sooner or later need to connect to back-end services for some business logic processing. In this blog entry we'll show how we can use the popular Spring MVC framework to expose REST-like endpoints for such server-side processing. Spring MVC makes it very simple to set up and configure an interface which will handle requests with Json payloads, converting all domain model objects from Json to Java and back for us.


  • Simple Maven project with embedded Tomcat for testing locally
  • Servlet 3.0, no-XML, set-up for the web application using Spring's annotation based configuration
  • JSR-303, Bean Validation through annotations, used on the model POJOs
  • Spring MVC set up with a web jar for OpenUI5 runtime and automatic serialization of the model to Json


Useful links


Here are some useful links.


 

Application

 

This is a very simple single-page application which has a table of fruit, each entry having a name (String) and a quantity (integer). One can add a new fruit, delete an existing entry from the table, or update an existing fruit using inline editing.

 

[Screenshot: the fruit table view]

 

Taking just the "add" operation as an example, we can see that the home view, home.view.js, calls the controller with a JavaScript object constructed to represent a Fruit when it is serialized as part of the Ajax request by the controller.

 

 

// add button
        var oButton = new sap.ui.commons.Button({
            text: "Add",
            press: function () {
                // check if quantity is a number
                if (oInput2.getValueState() !== sap.ui.core.ValueState.Error) {
                    oController.add({
                            // id attribute can be ignored
                            name: oInput1.getValue(),
                            quantity: oInput2.getValue()
                        }
                    );
                }
            }
        });

 

The controller, home.controller.js, then simply sends the serialized Fruit object as the content of a POST request to the appropriate endpoint (/home/add) made available by the Spring MVC controller. Once the Ajax call returns the updated model data, it is simply rebound to the JSONModel associated with the view.

 

add: function (fruit) {
        this.doAjax("/home/add", fruit).done(this.updateModelData)
            .fail(this.handleAjaxError);
    },
updateModelData: function (modelData) {
        console.debug("Ajax response: ", modelData);
        var model = this.getView().getModel();
        if (model == null) {
            // create new JSON model
            this.getView().setModel(new sap.ui.model.json.JSONModel(modelData));
        }
        else {
            // update existing view model
            model.setData(modelData);
            model.refresh();
        }
    }
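The doAjax helper belongs to controller code not shown here; a minimal sketch of what such a helper might look like, assuming the jQuery that ships with OpenUI5 (the context option keeps this pointing at the controller inside the .done and .fail callbacks):

doAjax: function (url, data) {
        // POST the payload as Json and expect Json back
        return jQuery.ajax({
            type: "POST",
            url: url,
            context: this,
            contentType: "application/json",
            data: JSON.stringify(data),
            dataType: "json"
        });
    },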

 

In what follows we'll look in detail at how to implement a REST-like endpoint handling Json payloads using the Spring MVC framework.

 

Spring MVC set-up

 

We are using the Servlet 3.0, no-web.xml approach based on Java annotations to set up a simple Spring MVC web application. For this we need an implementation of org.springframework.web.WebApplicationInitializer, where we specify the class which will be used when constructing an instance of org.springframework.web.context.support.AnnotationConfigWebApplicationContext and where we declare a dispatcher servlet. Here is our implementation, com.github.springui5.conf.WebAppInitializer.

 

 
public class WebAppInitializer implements WebApplicationInitializer {
    private static final Logger logger = LoggerFactory.getLogger(WebAppInitializer.class);
    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        logger.info("Initializing web application with context configuration class {}", WebAppConfigurer.class.getCanonicalName());
        // create annotation based web application context
        AnnotationConfigWebApplicationContext webAppContext = new AnnotationConfigWebApplicationContext();
        webAppContext.register(WebAppConfigurer.class);
        // create and register Spring MVC dispatcher servlet
        ServletRegistration.Dynamic dispatcher = servletContext.addServlet("dispatcher",
                new DispatcherServlet(webAppContext));
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }
}

The actual configuration is then given by the com.github.springui5.conf.WebAppConfigurer class.

 

@Configuration
@EnableWebMvc
@ComponentScan(basePackages = {"com.github.springui5.web"})
public class WebAppConfigurer extends WebMvcConfigurerAdapter {
    /**
     * Enable default view ("index.html") mapped under "/".
     */
    @Override
    public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }
    /**
     * Set up the cached resource handling for OpenUI5 runtime served from the webjar in {@code /WEB-INF/lib} directory
     * and local JavaScript files in {@code /resources} directory.
     */
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/resources/**").addResourceLocations("classpath:/resources/", "/resources/")
                .setCachePeriod(31556926);
    }
    /**
     * Session-scoped view-model bean for {@code home.view.js} view persisting in between successive Ajax requests.
     */
    @Bean
    @Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
    public HomeViewModel homeModel() {
        return new HomeViewModel();
    }
}

We use the helpful @EnableWebMvc annotation, which configures our application with some useful defaults. For example, Spring will automatically configure an instance of the org.springframework.http.converter.json.MappingJackson2HttpMessageConverter message converter, which uses Jackson to serialize the model returned by the Ajax-handling methods of the controller.

 

Another interesting thing to notice is that we are using Spring's resource servlet to serve the static JavaScript (OpenUI5 runtime) from the web JAR available on the classpath of the application. To create the web JAR, we can simply package the OpenUI5 runtime JavaScript, available for download, into a JAR and add it to the WEB-INF/lib directory of our project.
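As a rough sketch of that packaging step, assuming the downloaded runtime has been extracted so that its resources/ folder sits in the current directory (directory and file names here are assumptions):

jar cf openui5.jar resources/
cp openui5.jar path/to/project/src/main/webapp/WEB-INF/lib/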

 

The session-scoped bean, com.github.springui5.model.HomeViewModel, is responsible for maintaining the reference to the model object corresponding to the client's view.

 

public class HomeViewModel {
    private HomeModel homeModel;
    /**
     * Initializes and returns a new model.
     */
    public HomeModel getNewHomeModel() {
        homeModel = new HomeModel();
        return homeModel;
    }
    /**
     * Returns the model for this view-model.
     */
    public HomeModel getHomeModel() {
        if (homeModel == null) {
            throw new RuntimeException("HomeModel has not been initialized yet.");
        }
        return homeModel;
    }
}

The @ComponentScan annotation specifies where to look for the controllers of the application. The single controller for the home view is com.github.springui5.web.HomeController.

 

@Controller
@RequestMapping(value = "/home", method = RequestMethod.POST, consumes = "application/json", produces = "application/json")
public class HomeController {
    private static final Logger logger = LoggerFactory.getLogger(HomeController.class);
    /**
     * Session-scoped view-model bean.
     */
    @Autowired
    private HomeViewModel vm;
    /**
     * Initializes the model for the view.
     */
    @RequestMapping
    public
    @ResponseBody
    HomeModel handleInit() {
        return vm.getNewHomeModel().show();
    }
    /**
     * Adds the {@linkplain com.github.springui5.domain.Fruit} parsed from the request body to the list of fruit in the
     * model.
     */
    @RequestMapping("/add")
    public
    @ResponseBody
    HomeModel handleAdd(@Valid @RequestBody Fruit fruit, BindingResult errors) {
        if (errors.hasErrors()) {
            throw new FruitValidationException(errors);
        }
        return vm.getHomeModel().add(fruit).clearError().show();
    }
    /**
     * Deletes the {@linkplain com.github.springui5.domain.Fruit} with matching {@code id} from the list of fruit in
     * the model.
     */
    @RequestMapping("/delete/{id}")
    public
    @ResponseBody
    HomeModel handleDelete(@PathVariable long id) {
        return vm.getHomeModel().delete(id).clearError().show();
    }
    /**
     * Updates the {@linkplain com.github.springui5.domain.Fruit} with matching {@code id} from the list of fruit in
     * the model.
     */
    @RequestMapping("/update")
    public
    @ResponseBody
    HomeModel handleUpdate(@Valid @RequestBody Fruit fruit, BindingResult errors) {
        if (errors.hasErrors()) {
            throw new FruitValidationException(errors);
        }
        return vm.getHomeModel().update(fruit).clearError().show();
    }
    /**
     * Custom exception handler for {@linkplain FruitValidationException} exceptions which produces a response with the
     * status {@linkplain HttpStatus#BAD_REQUEST} and the body string which contains the reason for the first field
     * error.
     */
    @ExceptionHandler
    @ResponseStatus(HttpStatus.BAD_REQUEST)
    public
    @ResponseBody
    HomeModel handleException(FruitValidationException ex) {
        String error = String.format("%s %s", ex.getRejectedField(), ex.getRejectedMessage());
        logger.debug("Validation error: {}", error);
        return vm.getHomeModel().storeError(error);
    }
}

We are autowiring the view-model bean into the controller. It will be reinitialized by Spring automatically for each new client of the application (a new browser, for example). Ajax request handling is configured on the class and method levels via @RequestMapping annotations specifying the URL paths available, in the form /home or /home/add. Some methods accept a model object (Fruit) deserialized, or unmarshalled, from the Json in the body of the POST request via the @RequestBody annotation.

 

Each controller method returns an instance of HomeModel, which will be automatically serialized, or marshalled, to Json and later bound to the JSONModel on the client side.

 

Model and validation

 

The domain model used on the server is a couple of simple POJOs annotated with JSR-303 annotations (using Hibernate Validator implementation). Here is the class for com.github.springui5.model.HomeModel.

 

public class HomeModel implements Serializable {
    private static final Logger logger = LoggerFactory.getLogger(HomeModel.class);
    private List<Fruit> listOfFruit;
    private String error;
    public List<Fruit> getListOfFruit() {
        return listOfFruit;
    }
    public void setListOfFruit(List<Fruit> listOfFruit) {
        this.listOfFruit = listOfFruit;
    }
    public String getError() {
        return error;
    }
    public void setError(String error) {
        this.error = error;
    }
    public HomeModel() {
        listOfFruit = new ArrayList<>(Arrays.asList(new Fruit("apple", 1), new Fruit("orange", 2)));
    }
    public HomeModel add(Fruit fruit) {
        // set id, it is 0 after deserializing from Json
        fruit.setId(Fruit.newId());
        listOfFruit.add(fruit);
        return this;
    }
    public HomeModel delete(final long id) {
        CollectionUtils.filter(listOfFruit, new Predicate() {
            @Override
            public boolean evaluate(Object object) {
                return ((Fruit) object).getId() != id;
            }
        });
        return this;
    }
    public HomeModel update(final Fruit fruit) {
        // find the fruit with the same id
        Fruit oldFruit = (Fruit) CollectionUtils.find(listOfFruit, new Predicate() {
            @Override
            public boolean evaluate(Object object) {
                return ((Fruit) object).getId() == fruit.getId();
            }
        });
        // update the fruit
        oldFruit.setName(fruit.getName());
        oldFruit.setQuantity(fruit.getQuantity());
        return this;
    }
    public HomeModel storeError(String error) {
        this.error = error;
        return this;
    }
    public HomeModel clearError() {
        this.error = null;
        return this;
    }
    public HomeModel show() {
        logger.debug(Arrays.toString(listOfFruit.toArray()));
        return this;
    }
}

And here is the com.github.springui5.domain.Fruit class.

 

public class Fruit implements Serializable {
    private static long offset = 0L;
    private long id;
    @NotNull
    @NotBlank
    private String name;
    @NotNull
    @Min(1)
    private int quantity;
    /**
     * Returns a new value for {@code id} attribute. Uses timestamp adjusted with the static offset. Used only for
     * illustration.
     */
    public static long newId() {
        return System.currentTimeMillis() + offset++;
    }
    public Fruit() {
        // default constructor
    }
    public Fruit(String name, int quantity) {
        this.id = Fruit.newId();
        this.name = name;
        this.quantity = quantity;
    }
    public long getId() {
        return id;
    }
    public void setId(long id) {
        this.id = id;
    }
    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public int getQuantity() {
        return quantity;
    }
    public void setQuantity(int quantity) {
        this.quantity = quantity;
    }
    @Override
    public boolean equals(Object obj) {
        return obj instanceof Fruit && ((Fruit) obj).getId() == id;
    }
    @Override
    public String toString() {
        return "Fruit [id: " +
                id +
                ", name: " +
                name +
                ", quantity: " +
                quantity +
                "]";
    }
}

 

Upon the initial request for the model data (/home) this is what the controller returns. Notice how the list of Fruit domain objects was automatically serialized to Json for us.

 

[Screenshot: Json response to the initial /home request]
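Based on the HomeModel constructor shown above, the body of that response looks roughly like this (the id values are illustrative - they are timestamps generated by Fruit.newId()):

{
  "listOfFruit": [
    { "id": 1385468400001, "name": "apple", "quantity": 1 },
    { "id": 1385468400002, "name": "orange", "quantity": 2 }
  ],
  "error": null
}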

 

If an invalid value is submitted as part of the request body (for example, a quantity of 0 when adding a new fruit), it is automatically picked up by Spring and assigned to the org.springframework.validation.BindingResult parameter of the corresponding request-handling method. The application then exposes the validation error message as the value of the model's "error" attribute.

 

Testing the application

 

This is a standard Maven application which needs some mandatory dependencies to compile and run.

 

<!-- all of the necessary Spring MVC libraries will be automatically included -->
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-webmvc</artifactId>
  <version>4.0.0.RELEASE</version>
</dependency>
<!-- need this for Jackson Json to Java conversion -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.3.0</version>
</dependency>
<!-- need this to use JSR 303 Bean validation -->
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-validator</artifactId>
  <version>5.0.2.Final</version>
</dependency>

It also uses the Tomcat Maven plugin for running the project in an embedded Tomcat 7, via mvn tomcat7:run from the command line (see the snippet below).
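For reference, a minimal plugin declaration along these lines would go into the pom.xml (the version shown is an assumption - use whatever the project actually declares):

<plugin>
  <groupId>org.apache.tomcat.maven</groupId>
  <artifactId>tomcat7-maven-plugin</artifactId>
  <version>2.2</version>
</plugin>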

 

Conclusion

 

Using Spring MVC with OpenUI5 in the way we have described here has some advantages. We can easily set up a REST-like endpoint which will automatically convert Json payloads to Java domain objects, allowing us to concentrate on manipulating the model in Java without worrying about how the changes will be reflected in the JavaScript on the client side. We can also plug in domain object validation based on annotations (JSR-303), using Spring's validation mechanism. This allows us to process all business logic validation on the server side in a declarative and transparent manner, leaving only checks for formatting errors on the client side.

 

There are some disadvantages to this approach, however, the main one, of course, being that we return the entire model for each request, which results in an unnecessarily large data transfer. This should not be a limitation for relatively simple views, but it can represent a problem for complicated views with a lot of data.

Open source is changing the way software is developed and consumed. It is also SAP's intention to contribute to open source and integrate open source into the product line. With that intention, the OData JPA Processor Library headed off the open source way a few months back, and yes, it is now open source software, along with the OData Library (Java), at the Apache Software Foundation (ASF); see the Apache Olingo project for details.

 

 

The OData JPA Processor Library is a Java library for transforming Java Persistence API (JPA) models based on JPA specification into OData services. It is an extension of the OData Library (Java) to enable Java developers to convert JPA models into OData services. For more details check SAP OData Library Contributed to Apache Olingo (Incubator) which gives you an introduction on why OData and the features of the OData Library (Java).

 

The artifacts to get started with the OData JPA Processor Library, the documentation and the code are all available on Apache Olingo. The requirements for building an OData service based on a JPA model are quite low; for a quick start you can refer to the following tutorial. In short, you just have to create a web application project in Eclipse (both Kepler and Juno versions are supported), implement a factory to link to the JPA model and register the factory class within the web.xml file - it's that simple (see the sketch after this paragraph). The OData JPA Processor Library also supports more enhanced features, like the ability to redefine the metadata of the OData services (for example, renaming entity type names and their properties) and to add additional artifacts like function imports to the OData service.
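To make the factory step concrete, here is a rough sketch based on the incubator-era API (package imports and the exact servlet registration changed between releases, and the persistence unit name here is hypothetical - check the Olingo tutorial for the details of your version):

public class MyODataJPAServiceFactory extends ODataJPAServiceFactory {

    private static final String PERSISTENCE_UNIT = "MyPersistenceUnit"; // hypothetical name

    @Override
    public ODataJPAContext initializeODataJPAContext() throws ODataJPARuntimeException {
        ODataJPAContext context = getODataJPAContext();
        // link the OData service to the JPA model via its EntityManagerFactory
        context.setEntityManagerFactory(Persistence.createEntityManagerFactory(PERSISTENCE_UNIT));
        context.setPersistenceUnitName(PERSISTENCE_UNIT);
        return context;
    }
}

This factory class is then registered as an init parameter of the OData servlet in web.xml, as the tutorial describes.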

 

So if you are on the lookout for reliable and easy-to-use software for transforming JPA models into OData services, you now know where to go. The libraries are out there in the open; please do explore them, extend them and let us know the new faces you give them. Use the mailing list available here not just to let us know how you have used the libraries, but also to report bugs and ask questions - the team will be glad to hear from you and to answer your queries.

 

With that, we say we have arrived in the Open Source Software world! And this is just the beginning; there is more to come (or at least that is the intent), so keep an eye on the Apache Olingo project.

 

Related Information:

In today's mobile and agile business environment, it is important to unlock the enterprise data held by applications and other systems and enable its consumption from anywhere. The Open Data Protocol (OData), on its way to being standardized by Microsoft, IBM, SAP and many other companies within OASIS (an international standards body for advancing open standards for the information society), provides a solution to simplify data sharing across applications in enterprises, in the cloud, and on mobile devices. OData leverages proven Internet technologies such as REST, ATOM and JSON, and provides a uniform way to access data as well as data models.

 

SAP recently contributed the Java OData Library to Apache Olingo (Incubator). After just a few days in public we got a lot of interest from other companies. That makes us confident we can build up a community working on evolving this library toward the latest version: the upcoming OData standard resulting from the standardization process at OASIS.

 

Talking about features, there is already a lot to be discovered in the library. Since the Entity Data Model, URI parsing (including all system query options) and (de)serialization for ATOM/XML and JSON are already supported, one can build OData services supporting advanced read/write scenarios. Features like $batch are currently being added; conditional handling, advanced client support and detailed documentation are on the roadmap for the upcoming months.

 

The guiding principles during the implementation of the OData Library were to be OData 2.0 specification compliant and to have an architecture in place that allows the library to be enhanced in a compatible manner as much as possible. The clear separation between Core and API, and keeping dependencies down to a minimum, is important. The community should have the option to build extensions for various data sources on top of the library. The JPA Processor, provided as one additional module, is an excellent example of such an extension.

 

Besides the Core and API packages, there is also an example provided in the ref and ref.web packages, in order to show the features in an OData service implementation and to enable integrating new features into that service, also for full integration tests (fit).

 

We'll keep you posted once the first release is available to digest. You can already dig into the code and provide bug reports, feature requests and questions via Jira or by using the mailing list. All the information is available in the support section of the web site.

 

Further Information:

Hi

 

SAP's software is known for its role running many of the world's largest companies, but not necessarily for its user-friendliness. As part of an ongoing effort to change this perception, SAP unveiled Fiori on Wednesday at the Sapphire conference in Orlando: a set of 25 lightweight "consumer-friendly" applications that can run on desktops, tablets and mobile devices.

Fiori applications are written in HTML5, which makes multiplatform deployments possible. They also target some of the most common business processes a user might perform, such as creating sales orders or getting travel expenses approved, according to SAP's announcement.

SAP has grouped the initial Fiori applications into four separate employee types: manager, sales representative, employee and purchasing agent. Fiori is priced per user and available now, but specific costs weren't disclosed Wednesday. It's possible to deploy Fiori as a single group of applications, as well as separate Web applications and within portals, according to a statement. Some 250 customers helped SAP develop Fiori and make the apps more user-friendly, SAP said.

SAP has basically been compelled to develop something like Fiori, according to one observer. "Customers want enterprise-class apps with consumer-grade experiences," said analyst Ray Wang, CEO of Constellation Research. "Fiori is one of the ways SAP customers can pull the data out of their existing systems, and democratize that information so that everyone can benefit from access to the SAP system."

"For years, the issue was that SAP data was hidden or not easily accessed," Wang added. "This is one small step to make that change."

SAP's App Haus, a startup-like development group within the company, has been working to create more usable and appealing application interfaces. It wasn't immediately clear Wednesday whether the App Haus team is involved with Fiori.

The vendor has also launched a product called Screen Personas, which gives users the ability to rejigger SAP software screens to better fit their job roles and personal preferences. There's plenty more to come, SAP co-CEO Jim Hagemann Snabe said during a keynote.

 

Thank You

As some of you might know, SAP is a contributor to the open source project Eclipse. As part of that engagement we also organize so-called "Eclipse DemoCamps" to show what one can do with this great development platform, which is used a lot in the IT industry and is also the IDE of choice for the SAP HANA Cloud Platform.

 

This year's Eclipse DemoCamp will be held on the planned release date for Kepler, the release name of Eclipse v4.3.

 

In case you are interested in joining the event, you can register for free or even propose a speaking slot at the Eclipse DemoCamp and join speakers like Mike Milinkovich, the Executive Director of the Eclipse Foundation.

You'll be able to listen to interesting talks, get free drinks and food, and have a lot of opportunities to connect with other developers during the event.

 

So register today and join us in Walldorf for the Eclipse DemoCamp.

 

Best,

Rui

Welcome to the last episode about the SAP Open Source Summit 2012. In the first three episodes, I shared my overall impressions, key parts of my presentation on the corporate open source strategy, and insights from guest keynotes, respectively. Now I want to finish with a few areas that we are focusing on next.

 

Given that we now have an open source strategy in place at SAP, the focus is now much more on execution. Whenever you want to execute a strategy, you may, however, run into a number of challenges - be they of a technical, organizational or procedural nature. Specifically with regard to the much stronger use of open source and contribution to open source projects, we have found four major types of challenges: reuse and versioning, alignment of release schedules, product security, and long-term support, as shown in the slide below. I used this slide also in my keynote last week at ApacheCon Europe 2012 to illustrate key challenges for managing open source from an enterprise perspective.

 

[Slide: key challenges for managing open source from an enterprise perspective]

 

  • Reuse and Versioning
    If you focus on a given open source foundation like the Apache Software Foundation or the Eclipse Foundation, it is very likely that due to the nature of the foundation's development processes there is not much overlap between the various open source technologies. But open source projects today in many cases start somewhere else and don't necessarily follow a defined governance model. This is good since it is more flexible and allows the emergence of many different ideas. For example, GitHub today has almost 4.3 million projects. But the lack of coordination is also a challenge in that it becomes more difficult to find the right technologies and to understand their level of maturity and adoption. And it increases the likelihood of overlapping and similar technologies. This is not necessarily bad, but it can become a management challenge of open source adoption in the enterprise.

    One particular challenge in this regard is that different product teams might choose different technologies for the same purpose. The integration of, for example, four different open source XML parsers into our product line can result in maintenance overhead - four different technologies need to be maintained over the course of the products' support schedule. Another challenge is the selection of the right version of the open source technology. Even if all SAP product teams agree to select the same open source XML parser, they may have done so over the course of a few years and have chosen different versions of that XML parser. It is preferable to use only one version - and actually the most recent stable one - for all products, because it also minimizes the risk of missing important security bug fixes in newer versions. But the upgrade might get complicated, for example, if the XML parser's APIs that were used to integrate it into the SAP product have changed incompatibly.
  • Release Schedules
    The adoption of the most recent stable version of an open source technology is a reasonable goal to pursue on its own. But since the release schedules of the open source technology and the SAP product are not the same, this is not necessarily an easy task, and some updates of the embedded open source technology might become necessary after the SAP product has already been shipped. There are some exceptions, like the Eclipse Foundation, which has decided to deliver one stable release of the Eclipse Platform once per year - this simplifies the planning exercise and allows us, for example, to develop a stable Eclipse release train SAP-internally on a yearly basis. The whole exercise can get more complicated when SAP is an active contributor to the open source project. By no means is there a guarantee that the extensions we developed SAP-internally will be adopted without any changes by the open source project lead, and in time before the SAP product is released to the market. This means that we sometimes need to live with a fork of the open source technology. The operational recommendation is to avoid such forks and to always seek a close alignment between the open source standard version and the version embedded in the SAP product.
  • Security
    Product security and security response management for SAP products clearly needs to include responsibility for fixing security vulnerabilities in embedded open source technologies. On the one side - since the source code of the open source technology is openly available and used by many other firms - there are more sources for finding potential security vulnerabilities. On the other side, this results in an obligation to respond even more quickly to such findings or adopt known resolutions. Enterprise customers need to be able to patch their systems with respective bug fixes before the vulnerability is discussed in public. This may sound like a paradox, since further development of the open source technology happens "in the open." But there are resolutions - the Apache Software Foundation, for example, has a process by which security vulnerabilities can be reported on a mailing list that is only accessible to the leads of the respective open source project. Until a resolution is available, the conversation continues in a private environment. The vulnerability is finally reported publicly only once a bug fix has been made available. This significantly limits the risk that IT user organizations continue to run systems that are exploitable due to known vulnerabilities.
  • Long-Term Support
    SAP products are typically supported for at least seven years. Customers expect support for the complete solution, not just for the software components that were developed SAP-internally. From a support perspective, customers actually shouldn't need to know which open source technologies have been embedded. Consequently, we need to provide a means to support the respective open source technologies, including the application of bug fixes as necessary. Fortunately, the various quality checks that are applied when selecting open source technologies at SAP radically reduce the number of necessary bug fixes. But bugs can never be completely avoided, and sometimes a fix is necessary for version 3.2 when the open source project has already released version 6.0.
    In principle, there are two main options: either upgrade the embedded open source technology to version 6.0, or apply a local bug fix to version 3.2. Both options can be rather complicated. An alternative is to join forces with other interested open source developers and users and to share the cost of maintaining older versions of the open source technology. SAP has long advocated such an approach for Eclipse projects. The recent establishment of the Long Term Support Industry Working Group (LTS IWG) is a good step in this direction, and fortunately a number of other Eclipse members see the same need and have joined the LTS IWG to solve this important dilemma.

 

So I hope that this blog post series about the SAP Open Source Summit 2012 has been useful for you. From my perspective it was good to see a strong focus on operational excellence, knowledge sharing and applying open source development principles SAP-internally. Let me conclude with Mike Milinkovich's words: "Open source software is really really mainstream." And the journey will continue.

 

Here is the overview of the blog post series again:

  1. Overall Impressions
  2. What we think - SAP Open Source Strategy
  3. Insights from keynotes
  4. What we do next (this post)

In this blog post we'll look at how one can use the Spring Security framework together with the usual way of securing a JEE application on NetWeaver 7.3, and what this can bring us. Here are the useful links:

 

 

For reference, I've tested this setup with Spring Security 3.1.3.RELEASE (which in turn depends on Spring MVC 3.0.7.RELEASE).

 

Spring Security is a popular and very flexible framework which allows you to configure and manage all aspects of securing a web application: authentication, authorization, and access control to domain objects. One of the many useful things this framework provides is the spring-security-taglibs module, which allows one to protect various pieces of a JSP page using security tags tied to the user's role membership (authorization). For example, we can have a JSP like this:

 

<%@ taglib prefix="sec" uri="http://www.springframework.org/security/tags"%>
<p>Hello, this page is accessible to all users with ROLE_EVERYONE</p>
<sec:authorize access="hasRole('ROLE_SFLIGHT_USER')">
    <p>This text and link below should only be visible to users with
    ROLE_SFLIGHT_USER</p>
    <a href="<%=request.getContextPath() %>/secure">secure page</a>
</sec:authorize>

 

Notice the use of the sec:authorize tag, which protects access to part of the page depending on whether or not the current user has the role SFLIGHT_USER. We'll discuss below how we can configure Spring Security to work seamlessly with the security services provided by our JEE container. The idea behind this integration is quite simple: NetWeaver already handles the user authentication for us, and there is also a mechanism to map the UME roles of the portal user to the roles referenced in the web.xml of our application. All we have to do is find a way to make the Spring Security framework recognize these roles as the "granted authorities" associated with the authenticated user.

 

We start with a simple web application setup. Here are the interesting parts of our starting web.xml:

 

<login-config>
    <auth-method>TICKET</auth-method>
</login-config>
<security-role>
    <role-name>EVERYONE</role-name>
</security-role>
<security-role>
    <role-name>SFLIGHT_USER</role-name>
</security-role>
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Spring Security Integration Test Application</web-resource-name>
        <url-pattern>*</url-pattern>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>EVERYONE</role-name>
    </auth-constraint>
    <user-data-constraint>
        <transport-guarantee>NONE</transport-guarantee>
    </user-data-constraint>
</security-constraint>

 

We chose the "ticket" authentication method for our web application, meaning that the user will be considered authenticated if he or she previously logged in on the portal and the JSESSIONID and MYSAPSSO2 cookies are present in his or her browser session (the usual SAP SSO via SAP logon ticket). We declare two security roles, "EVERYONE" and "SFLIGHT_USER", for our example. The first one is a general role assigned to every UME user. The other one is an example role which we can create and assign to some test user. We then protect any access to our web application by setting the url-pattern of the security constraint to "*". The idea is that we let the JEE container manage the authentication of the user, but delegate the finer-granularity access protection within the application to Spring Security. For the mapping between the UME roles and the web application security roles we also need this in the web-j2ee-engine.xml file of the web application module:

 

<security-role-map>
    <role-name>EVERYONE</role-name>
    <server-role-name>EVERYONE</server-role-name>
</security-role-map>
<security-role-map>
    <role-name>SFLIGHT_USER</role-name>
    <server-role-name>SFLIGHT_USER</server-role-name>
</security-role-map>

 

Now we need to set up and configure the filter chain used by Spring Security. This is done by specifying a Spring application context file containing all the security configuration in our web.xml file:

 

<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/context/security-config.xml</param-value>
</context-param>
<filter>
    <filter-name>springSecurityFilterChain</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

 

The security configuration file, security-config.xml, uses the Spring Security namespace for convenience. It implements the "pre-authentication" security scenario and relies on two classes provided by the framework: J2eePreAuthenticatedProcessingFilter, which integrates with the container authentication process by extracting the user principal from the HttpServletRequest, and J2eeBasedPreAuthenticatedWebAuthenticationDetailsSource, which is responsible for mapping a configured set of the security roles specified in the web application descriptor to a set of GrantedAuthorities, provided the user's membership in these roles has been established. Here is the security-config.xml file:

 

<beans:beans xmlns:beans="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.springframework.org/schema/security"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
        http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.1.xsd">
    <http auto-config="true" use-expressions="true">
        <jee mappable-roles="EVERYONE,SFLIGHT_USER" />
        <intercept-url pattern="/**" access="hasRole('ROLE_EVERYONE')" />
        <intercept-url pattern="/secure/**" access="hasRole('ROLE_EVERYONE')" />
    </http>
    <authentication-manager>
        <authentication-provider ref="preAuthAuthenticationProvider"></authentication-provider>
    </authentication-manager>
    <beans:bean id="preAuthAuthenticationProvider"
        class="org.springframework.security.web.authentication.preauth.PreAuthenticatedAuthenticationProvider">
        <beans:property name="preAuthenticatedUserDetailsService">
            <beans:bean
                class="org.springframework.security.web.authentication.preauth.PreAuthenticatedGrantedAuthoritiesUserDetailsService"></beans:bean>
        </beans:property>
    </beans:bean>
</beans:beans>

 

Notice the use of "jee" element in the "http" configuration, it is a shortcut for configuring an instance of J2eePreAuthenticatedProcessingFilter filter and registering it with the defaulf filter chain. Also, by default all the JEE security roles specified in the "mappable-roles" attribute will be mapped to the GrantedAuthorities with the names prefixed by "ROLE_".

 

Once the pre-authentication mechanism is successfully configured, we can protect URL access using expressions of the kind "hasRole('ROLE_FROM_UME_HERE')" in the global "http" configuration element and in the security tags in our JSPs. For reference, this is the Spring MVC configuration file, mvc-config.xml, which I've used for the web application; it uses the "mvc" namespace for convenience:

 

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans 
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/mvc 
        http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">
    <mvc:annotation-driven />
    <bean id="viewResolver"
        class="org.springframework.web.servlet.view.UrlBasedViewResolver">
        <property name="viewClass"
            value="org.springframework.web.servlet.view.JstlView" />
        <property name="prefix" value="/WEB-INF/jsp/" />
        <property name="suffix" value=".jsp" />
    </bean>
    <mvc:view-controller path="/" view-name="index"/>
    <mvc:view-controller path="/secure" view-name="secure"/>
</beans>

 

It can be referenced in the web.xml file in the standard way:

 

<servlet>
    <servlet-name>springMvcServlet</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/context/mvc-config.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>springMvcServlet</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>

 

Using Spring Security in a standard JEE web application can be very useful. Beyond the use of the security tags in JSPs described in this post, one can also think of securing method access in Java beans or using Access Control List (ACL) management, for example.
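
For example, method security becomes a one-line annotation per method once it is enabled with <global-method-security pre-post-annotations="enabled" /> in security-config.xml; the service below is a hypothetical sketch, not part of the example application:

import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

// Hypothetical service bean: the method can only be invoked by users whose
// UME role SFLIGHT_USER was mapped to the ROLE_SFLIGHT_USER authority.
@Service
public class FlightService {

    @PreAuthorize("hasRole('ROLE_SFLIGHT_USER')")
    public void bookFlight(String flightNumber) {
        // business logic goes here
    }
}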

In the first two episodes of this blog post series about the SAP Open Source Summit 2012, I shared my overall impressions and key parts of my own presentation on the corporate open source strategy, respectively. Now I want to touch upon insights from the keynotes that our guest speakers delivered.

 

Mike Milinkovich, Executive Director, Eclipse Foundation, presented on Foundations 2.0. He offered his slides to be posted on SCN, see here. As already mentioned in my first blog post in this series, Mike stressed the point that it is not commonly known that SAP is a strong contributor to the Eclipse Foundation. The Eclipse ecosystem is still growing - today, there are more than 2 million downloads per month. Also, Eclipse is known for its predictable, yearly releases and its level of corporate engagement, which was one of the differentiators when the foundation was created more than 10 years ago.

Recognizing a few trends in the software industry such as "software is everywhere" and "open source is really really mainstream", he characterized the next generation of open source foundations, which is also where Eclipse is going:

  1. Technology-agnostic: Eclipse is already much more than just an IDE and applies to practically all programming languages (including ABAP ...). The foundation will continue to serve additional purposes and welcome other technologies that can make use of the development and IP management principles it is known for.
  2. Git-based: The adoption of Git as a version control and source code management system for distributed development is still growing, and Eclipse has decided to migrate to a Git-based common build infrastructure, which will presumably simplify the daily life of Eclipse committers.
  3. Long-term support: Particularly in enterprise environments, the requirement for long-term support (or sometimes long long-term support ...) is obvious, and the establishment of the Long Term Support Industry Working Group supports the development of corresponding development and support models.
  4. User-led: More and more end-user organizations from different industries see opportunities (or sometimes the need) to increase their level of collaboration, including joint work on software projects. Eclipse already has a number of industry working groups, such as Polarsys for embedded systems or LocationTech for location-aware software, and Mike believes that the trend of more end-user involvement will continue.

 

Andrew Aitken is SVP, Olliance Group, a Black Duck company and an open source consulting firm. He has consulted numerous software companies on how to embed open source development and licensing approaches in their corporate strategy. From his point of view, open source is already in its fourth generation. In the beginning, it was a rather extreme movement with rather extreme characters like Richard Stallman and Bruce Perens. Then, open source emerged and became a means for commercial success: Red Hat, MySQL and SpringSource are examples of commercially quite successful companies with an open source based business model. Thirdly, the more traditional vendors complement their business model with open source approaches - today, there is hardly any software vendor that does not have a defined approach towards open source. And the fourth generation is - in line with what Mike observes - the involvement of end-users. NYSE, Airbus, BMW and NASA, for example, are all quite actively engaged in open source projects that are specific to their industry vertical.

 

Dirk Riehle, Professor for Open Source Software at the University of Erlangen-Nürnberg, presented on one of his favorite topics: Inner Source. Inner Source is what he calls "open source best practices applied to firm-internal software development." He and his team have interviewed and worked with a number of corporations, both software vendors and end-user organizations, to see if the expected benefits of inner source (better code reuse, more knowledge sharing, improved resource allocation and higher job satisfaction) actually apply. While he generally confirms this, he distinguishes between two forms of inner source - volunteer (i.e., self-managed) and managed. Managed inner source requires a "defined and actively managed governance process." The right choice of model depends on the types of challenges an organization or a project is facing, which vary between developer skills, developer mindset and management mindset. More research from Dirk can be found on his blog.

 

Last but not least, Jono Bacon presented on communities and what makes them strong and vibrant. For him, a sense of belonging and a sense of purpose are the most important things individual community members should feel for the community to be a coherent one. Communities are obviously not restricted to software development (and certainly not to open source software), but many principles of successful communities can also be seen in software-related communities. For example, the importance of giving kudos to active members, particularly new ones who start contributing, cannot be overstated. A simple "thank you" for committing a patch that resolves an annoying bug is key to maintaining the contributor's sense of belonging and keeping the overall community active. Jono has written the well-known book "The Art of Community", published by O'Reilly, and continues the dialogue on the Art Of Community Online.

 

Next time, I'll finish this series and write about what SAP is - presumably - going to do next in the context of open source software.

 

In terms of the SAP Open Source Summit 2012 blog series, here is the overview again:

  1. Overall Impressions
  2. What we think - SAP Open Source Strategy
  3. Insights from keynotes (this post)
  4. What we do next

In this blog I'll share with you an alternative to SAPUI5 development on the NetWeaver platform. SAPUI5 is an interesting and innovative effort from SAP on the modern UI development front. But at the time of writing this post, it is still in an evaluation beta stage, and it seems there is still no decision about the licensing scheme SAP will adopt for this technology. Since NetWeaver has been able to take advantage of Java 5 and JEE 5, and especially since it has become very easy to deploy third-party JARs with a webapp on this platform, we have a multitude of free and open source alternatives available for developing a modern UI in the same way SAPUI5 has been designed. I will describe a way of using a very cool JavaScript framework, Dojo, together with my favorite MVC framework, Spring, to build a simple hello world webapp which demonstrates how these technologies can be used together on NetWeaver 7.3.

 

The sources for this webapp can be found in the Code Exchange project dojoui.

 

Note: I assume that you are already at least somewhat familiar with Spring MVC and Dojo frameworks.

 

Download a Dojo distribution from their site. I used the latest release, 1.8.1. You'll need to create a JAR containing the Dojo packages under the path /META-INF/resources/. The idea is that we will delegate to Spring the job of serving and caching these JavaScript files from this JAR for us, instead of bundling them as is in our WAR.
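
Assuming the standard Dojo source distribution (which ships the dojo, dijit and dojox packages), the layout inside the resulting JAR would look roughly like this:

dojo-1.8.1.jar
    META-INF/resources/dojo/dojo.js   (plus the rest of the dojo package)
    META-INF/resources/dijit/
    META-INF/resources/dojox/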

 

You also need to get all the JARs necessary for the Spring MVC setup. There are many ways to do this; I use Apache Ivy for this purpose. You can create a simple Java project in NWDS, import the single build.xml file from Ivy's site and create a simple target which will retrieve Spring's libraries and their dependencies from the Maven repository. Here is the target:

 

<target name="resolve-and-retrieve" depends="install-ivy" description="--> resolves dependencies decalred in ivy.xml file">
    <ivy:resolve file="${basedir}/ivy.xml" transitive="true"/>
    <ivy:retrieve/>
</target>

 

It will resolve the dependencies declared in the ivy.xml file, download them and put them in the lib folder. Here are the contents of the ivy.xml file:

 

<info organisation="your.org" module="anything" />
<configurations defaultconfmapping="default->default"></configurations>
<dependencies>
    <dependency org="org.springframework" name="spring-webmvc" rev="3.1.0.RELEASE" />
    <dependency org="cglib" name="cglib" rev="2.2" />
    <dependency org="org.codehaus.jackson" name="jackson-mapper-asl" rev="1.9.10" />
</dependencies>

 

You might need an ivysettings.xml file as well; here it is:

 

<ivysettings>
    <settings defaultResolver="chain" />
    <resolvers>
        <chain name="chain">
            <ibiblio name="central" m2compatible="true"></ibiblio>
            <!-- uncomment if you want to use Spring's milestone libraries -->
            <!-- 
            <ibiblio name="spring-milestone" m2compatible="true"
                root="http://repo.springsource.org/milestone"></ibiblio>
            -->
        </chain>
    </resolvers>
</ivysettings>

 

After retrieving the libraries you should have all the JARs (together with the dojo-1.8.1.jar created above) needed to create a simple webapp. You can see all the JARs needed in the image of the webapp structure at the end of this post.

 

Create a simple webapp project in NWDS (tmp~dojo~web) and assign it to an EAR (tmp~dojo~ear). Copy all the JARs into the /WEB-INF/lib directory of your web project; they will be deployed to the server. Now we need to set up Spring MVC for our webapp. This requires registering Spring's DispatcherServlet in our web.xml file. Here it is:

 

<servlet>
    <servlet-name>SpringMvcDispatcher</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextClass</param-name>
        <param-value>org.springframework.web.context.support.AnnotationConfigWebApplicationContext</param-value>
    </init-param>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>ch.unil.dojo.web.config.WebConfig</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>SpringMvcDispatcher</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>

 

Spring MVC can be configured using Java annotations; this is how I have done it:

 

/**
 * Web context configuration to be processed by {@code
 * AnnotationConfigWebApplicationContext} and specified as {@code
 * contextConfigLocation} parameter for {@code DispatcherServlet}, see {@code
 * web.xml} file.
 * <p>
 * {@code EnableWebMvc} annotation configures default MVC infrastructure
 * including an instance of {@code MappingJacksonHttpMessageConverter} Json to
 * Java converter for request handlers. {@code ComponentScan} annotation
 * specifies the base package which will be scanned for the {@code Controller}
 * annotated classes.
 * 
 * @see org.springframework.web.servlet.DispatcherServlet
 * @see org.springframework.web.context.support.AnnotationConfigWebApplicationContext
 * @see org.springframework.web.servlet.config.annotation.EnableWebMvc
 * @see org.springframework.http.converter.json.MappingJacksonHttpMessageConverter
 */
@EnableWebMvc
@Configuration
@ComponentScan(basePackages = "ch.unil.dojo.web")
public class WebConfig extends WebMvcConfigurerAdapter {
    // set up the default view resolver, mapping logical view name
    // to a JSP with the same file name
    @Bean
    public ViewResolver viewResolver() {
        InternalResourceViewResolver viewResolver = new InternalResourceViewResolver();
        viewResolver.setPrefix("/WEB-INF/pages/");
        viewResolver.setSuffix(".jsp");
        return viewResolver;
    }
    // add handlers for the static resources (js, css, images);
    // will look up and cache Dojo files from the JAR on the classpath
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/resources/**").addResourceLocations("classpath:/META-INF/resources/")
            .setCachePeriod(31556926);
    }
    // set up the redirection to the main view if the root URL is accessed
    @Override
    public void addViewControllers(ViewControllerRegistry registry) {
        registry.addViewController("/").setViewName("index");
    }
}

 

Notice how we set up a dedicated ResourceHandler for all the Dojo JavaScript files, which will be served from the JAR on the classpath and even cached for faster subsequent accesses.
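
For example, a browser request to {contextPath}/resources/dojo/dojo.js is resolved by this handler against classpath:/META-INF/resources/dojo/dojo.js, i.e. inside the dojo-1.8.1.jar we built earlier, and is served with a far-future cache header (the cache period of 31556926 seconds above is roughly one year).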

 

Now we can create our JavaScript front-end UI using Dojo. Create a JSP page called index.jsp in the /WEB-INF/pages/ folder. Spring will automatically use this view for all requests to the root of the web application. Here are the contents of this file:

 

<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Testing Dojo</title>
<link rel="stylesheet" type="text/css"
    href="<%=request.getContextPath()%>/resources/dijit/themes/claro/claro.css">
<script src="<%=request.getContextPath()%>/resources/dojo/dojo.js"
    data-dojo-config="async: true,
    packages: [{name: 'js', location: '<%=request.getContextPath()%>/resources/js'}],
    gfxRenderer: 'svg'"></script>
</head>
<body class="claro">
    <script type="dojo/require">at: "dojox/mvc/at"</script>
    <div data-dojo-type="dojox/mvc/Group" data-dojo-props="target: model">
        <div data-dojo-type="dijit/form/TextBox"
            data-dojo-props="value: at('rel:', 'name'), placeHolder: 'First Name',
                properCase: true, trim: true"></div>
        <div id="submitBtn" data-dojo-type="dijit/form/Button"
            data-dojo-props="label: 'Submit'"></div>
        <div id="surfaceDiv"></div>
    </div>
    <script>
    require(["js/utils", "dojo/_base/kernel", "dojo/when", "dojo/parser", "dojo/json", "dojo/Stateful", 
             "dojo/on", "dojo/mouse", "dijit/registry", "dojox/gfx", "dojox/gfx/fx", "dojo/colors"],
            function(utils, kernel, when, parser, json, Stateful,
                    on, mouse, registry, gfx, gfxAnim, Color){
        //create a model
        var model = kernel.global.model = new Stateful();
        //parse the document, then connect the submit button
        when(parser.parse(), function() {
            //create SVG surface
            var surface = gfx.createSurface("surfaceDiv", 400, 100);
            registry.byId("submitBtn").on("click", function(evt){
                //make Ajax request
                when(utils.ajaxRequest("<%=request.getContextPath()%>/greet",
                        json.stringify(model)), function(data) {
                     //clear previous shape
                    surface.clear();
                    //create text shape
                    var text = surface
                        .createText({x: 200, y: 50, text: data.greeting, align: "middle"})
                        .setFont({family: "Arial", size: "20pt", weight: "bold"})
                        .setFill(Color.named.skyblue);
                     new gfxAnim.animateTransform({
                         duration: 1500,
                         shape: text,
                         transform: [{
                             name: "rotategAt",
                             start: [0, 200, 50],
                             end: [360, 200, 50]
                         }]
                     }).play();
                });
            });
        });
    });
    </script>
</body>
</html>

 

This is a simple hello world webapp, but with a little Dojo twist. It collects the user's name and sends it to the Spring controller in the form of an Ajax request, with the name serialized as a Json string. On the controller's side, the Json string from the body of the request is automatically converted to a POJO of the corresponding structure by the MappingJacksonHttpMessageConverter instance registered with Spring MVC by default (it's part of the EnableWebMvc configuration). Once the greeting message is generated, it is stored as a value of the POJO's property, and the POJO is serialized to the body of the HTTP response as a Json string, again automatically by the Jackson mapper. The returned Json response is parsed by Dojo, which displays the greeting message as a rotating SVG text shape.
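
The Greeting class is not listed in this post (it is in the project sources); it is presumably just a plain form bean with the two properties used on either side of the round trip. A minimal sketch of its likely shape:

// Assumed shape of the form/response bean: Jackson binds the incoming
// {"name": "..."} Json to it, and the populated "greeting" property is
// serialized back into the response body.
public class Greeting {

    private String name;
    private String greeting;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getGreeting() { return greeting; }
    public void setGreeting(String greeting) { this.greeting = greeting; }

    @Override
    public String toString() {
        return "Greeting [name=" + name + ", greeting=" + greeting + "]";
    }
}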

 

As you can see, we don't actually use a lot of JSP functionality; it is mostly a plain HTML page with some JavaScript. However, we need a reference to the context path of the webapp, which we can obtain using the scriptlet <%=request.getContextPath()%>. We also use a custom module, "js/utils", which Dojo is instructed to look up under the /resources/js location. I have put this file on the classpath (src folder) under /META-INF/resources/js/utils.js so that Spring's resource handler can pick it up as well. Actually, this is the only way I found to reference my custom module on NetWeaver if I wanted it served by Spring's resources handler. Here are the contents of this file; it's just a simple utility for making an asynchronous Ajax request to the server using Dojo's handy "dojo/request" module.

 

define(["dojo/request", "dojo/Deferred"], function(request, Deferred){
    return {
        ajaxRequest: function(path, data, sync){
            var def = new Deferred();
            request.post(path, {
                headers: {"Content-Type": "application/json"},
                data: data,
                handleAs: "json",
                sync: (sync || false)
            }).then(function(data){
                def.resolve(data);
            },
            function(error){
                def.reject(error);
            });
            return def;
        }
    };
});

 

The controller class which is responsible for handling the Ajax request is given below:

 

/**
 * Controller which will handle all incoming HTTP requests. Registered in the
 * MVC infrastructure via {@code ComponentScan} annotation used in the web
 * context configuration class.
 */
@Controller
public class AjaxController {
    private static final Logger logger = Logger.getLogger(AjaxController.class);
    /**
     * Request handling method applicable to any POST request with a header
     * {@code Content-type: application/json} and the path {@code /greet}.
     * Spring automatically converts the Json string from the body of the
     * request via {@code MappingJacksonHttpMessageConverter} to an instance of
     * {@code Greeting} form and serializes it back to a Json string set to the
     * body of the response.
     * 
     * @param form person data
     * @return greeting data
     * 
     * @see org.springframework.http.converter.json.MappingJacksonHttpMessageConverter
     */
    @RequestMapping(value = "/greet", method = RequestMethod.POST, consumes = "application/json", produces = "application/json")
    public @ResponseBody Greeting greet(@RequestBody Greeting form) {
        logger.debug("Processing Ajax request for /greet with form: " + form);
        form.setGreeting("Hello, " + form.getName() + "!");
        return form;
    } 
}

 

As you can see, Spring makes it very easy for us to work with this kind of request, and it is thus a particularly good choice for server-side Ajax processing.

 

Here is the layout of the entire webapp project in NWDS for reference:

 

 

Let's recap the things we have used in this showcase and the advantages of this architecture.

 

1) Dojo front-end

 

Dojo is an excellent JavaScript framework: it is rich and constantly growing, uses AMD module loading, has a vibrant community, and it is open source. The learning curve for Dojo is probably not much steeper than the one for SAPUI5, considering that with SAPUI5 you would have to learn jQuery at some point. The advantage of using Dojo is that you don't need NetWeaver at all. I actually developed the entire webapp in STS with Tomcat 6. Of course, you will not get code assist for the JavaScript files as you do with the SAPUI5 plugin, but at least it is free.

 

2) Spring MVC

 

The Spring MVC framework makes it very easy to configure a clean separation of concerns and a RESTful architecture for a webapp, especially when it comes to conversion to and from Json requests. It all happens behind the scenes with the Jackson mapper, allowing one to implement the server-side logic with simple POJOs. If you like this setup, you can easily imagine using Spring Security to make your webapp secure, or using Spring's CCI support with an EJB session facade to connect to an ABAP backend.
