
Open Source



Before I answer the question, let me step back and set the stage for my arguments! I was invited by Leigh Jin, an associate professor at San Francisco State University, to give a guest lecture on OpenUI5 two weeks ago. I was obviously very excited and prepared some code examples and a presentation for the lecture. Here's what I did: I created all my code examples on jsbin.com, a tool/website for coding collaboratively in JavaScript. It's an amazing site, allowing you to develop a web page using HTML, JS and CSS. But here are the two best features from my point of view: you can see the outcome of your code and interact with it, and you can use libraries like jQuery, Twitter's Bootstrap, AngularJS and .. OpenUI5!

 

 

Let's jump right in to see what it looks like. Here's one of my code examples: http://jsbin.com/howoyeqoki/1/


 

During my presentation, I displayed just this website and did some live coding to demonstrate how fast you can get things done with OpenUI5. I also shared the URL and let the students live code by themselves. The outcome was fantastic! Leigh and the students liked it a lot! For me, it came as a bit of a surprise: I've been in the SAP ecosystem for less than two years, and I've heard a lot about how complex things can get. Leigh is experienced in conducting SAP student classes and she also knows many SAP solutions. She explained to me that this live coding approach is exactly what she was looking for! By the way, she had not even considered teaching OpenUI5 before I showed her my approach to teaching people about OpenUI5. She knows how much preparation SAP classes require: setting up servers, getting and uploading sample data, setting up each student's PC with the necessary development tools, .. the list goes on and on. But with this approach, you don't need any of that. You can throw away the overhead and jump into code directly - within the browser of your choice!

 

 

 

And finally, here's the list you were probably looking for after reading the title:

  • No need to download or install any sources, or to deal with folder structures, correct paths and so on
  • No need to download and set up any IDEs, plugins, SDKs or whatever else - just open your pre-installed browser
  • No need to start from scratch anymore. Simply set up a greenfield template like this one: OpenUI5 Mobile Greenfield Example
  • No need to create a system for students' submissions. Share your URL and let them code. At the end, let them send their final URL to you.
  • No need to set up any server-side components or APIs - just use existing public APIs for teaching purposes
  • No need to understand and deal with complex debugging modes of IDEs. You can use OpenUI5's "Diagnostics" popup (Ctrl + Alt + Shift + S) or simply print to the console & open the console view on jsbin
  • Simply use copy & paste to prototype your desired application using the code examples provided here: OpenUI5 SDK - Demo Kit

 

And now it's your turn: What do you think? Try out that approach and share your experience in the comments!

 

If you're looking for some code examples or would just like to see my slides, click here. And if you're interested in the SFSU class and our collaboration with them, please check out this site. Also, it's nice to see that, by adding this web development class with OpenUI5, SFSU is bringing SAP's Student Recognition Award to its MBA program! Congrats!

It's been a while since I blogged about our monthly Open Source meetups in the Bay Area! We just had our fourth meetup last week - with 40 participants (out of 74 RSVPs)! As usual, we had a theme wrapped around the whole meetup, which this time was ... (surprise): Christmas! I was very excited about it and we decorated our facilities with Christmassy things - see the pictures below. But that was not all! Our first speaker, Aaron Williams, talked about how to prototype with IoT, and he came up with software-controlled Christmas lights running on an Arduino, complete with sensors and a UI for maintenance purposes!

(Photos from the meetup)

 

In the meantime, our Open Source Bay Area community has grown to 280 members, with 162 participants overall across 4 meetups over the last 5 months. But that was not the only change! Based on the feedback we received from our previous meetups, our participants really enjoy Q&A sessions to get their more detailed questions answered. After all, participants join our meetups to gain new knowledge, and our survey results underline that content is king! Based on that, we changed the format: talks are now only 15 minutes long, followed by 30 minutes of Q&A. This leaves us more time to mingle and connect with like-minded people!

 

As for the last meetup, we had two speakers joining the speakers panel, plus one spontaneous lightning talk by Ralf Pieper.

 

Here are some pictures - just to let you know what you missed.

(Photos from the meetup)

I was really glad to see such amazing engagement within our community. Ralf came up with the idea for a lightning talk just before the meetup. We had already been thinking about such spontaneous talks a few times, and we figured it would be very nice to give our community members the opportunity to address the community and bring up issues or thoughts of their own!

 

We also introduced online surveys this time, so let me share some results with you:

  • 100% said attending the meetup was worth their valuable time!
  • Overall, the meetup was rated 5.7/7 stars - which is a great result
  • 64% said that the content was key for their decision to participate
  • Here's what people liked most: Talks & selection of speakers, Networking opportunities, Engagement & discussions, Gained knowledge, Organization
  • And here's what our community is most interested in right now: Big Data and cloud security, IoT everything (e.g. Connected Cars) - but live demos!, NoSQL databases and Big Data infrastructures, Crypto-currency in the cloud, OSS business models & success factors, Configuration Management


We received very good feedback and people would like us to keep it going - which we obviously will! So, if you are interested, join our community and RSVP for the next meetup using the link at the bottom of this blog! Also, if you know any cool speakers who would fit perfectly in one of our meetups, please let us know!

 

 

Thanks, everyone, for participating, and thanks also to Inga Bereza and Garick Chan for co-organizing all of our meetups!


If you are an Open Source enthusiast, please join our meet up group and RSVP for the next meet up on January 21st!

 

Open Source Bay Area Meet up Group

Learn and practice the following open source technologies over 4 days

Get $200 discount

 

Linux

  • Systemd, Btrfs, and kernel crash infrastructure
  • Samba and Btrfs - A Snapshot of Progress
  • Achieve best server/storage performance with NVMe devices
  • UEFI Secure Boot
  • Full-system Rollback - Myth and Truth
  • OS Lifecycle Management from the Datacenter to the Cloud
  • Hardening and tweaking your Linux

High Availability

  • Geo redundancy, including database replication, filesystem replication, and geo cluster overlay
  • Create a highly available 2 node virtual environment using DRBD and KVM
  • Choices in designing HA clusters from a reliability, scalability, and performance perspective (e.g., such as when to use network bonding, OCFS2 versus file-system fail-over, DRBD)

OpenStack, KVM and PaaS

  • OpenStack deployments and troubleshooting
  • KVM on a grid enables dynamic management and resource allocation of virtual machines in large scale high-performance environments
  • Build Platform as a Service (PaaS) with WSO2 Middleware and EC2

Big Data (Apache Hadoop)

  • Deploy an elastic auto-scalable cluster with OpenStack to consume and process business data on demand

Ceph Storage

  • Sizing and performance of Ceph storage
  • Ceph for Cloud and Virtualization use cases, including thin provisioning to make your storage cluster go further

SAP on Linux

  • How T-Systems leverages Linux and SAP LVM capabilities within their data center
  • Optimized Linux for SAP applications
  • Automate SAP HANA System Replication
  • Manage SAP HANA Scale-Out Linux systems

Register Today

All of the above open source technical sessions are available at the annual user conference SUSECon 2014 (Nov 17-21, 2014, Orlando). Interested in attending? Request your $200 discount off the current full conference pass and meet with SAP & Open Source architects (email).

I wrote a blog last month (in July) on just how much I have been enjoying Ubuntu on my desktop machine.

I can't see myself going back - I am a total convert.

So, just on the chance that I might win a few more converts to the cause, here are 7 more reasons why you might find Ubuntu to be your next OS choice.

 

7. Wobbly Windows

Oh, this is available for other operating systems too, but having a few great window effects makes development life much more fun. Given that I am running a pretty standard GNOME desktop, I use the Compiz plugin to get this working for me. Wobbly Windows adds some stretchy and snappy effects to desktop windows, but the best part is that Compiz comes with other effects to snap windows into different parts of your desktop. So with a couple of keystrokes I can set up a browser window on one half of the screen and an editor (like Sublime) on the right-hand side of my screen.

I can also quickly get several terminals up and snap them into the four quarters of the screen and ssh into a different server in each one. Although you can achieve the same thing with ...

 

6. Terminator

These things can start to be a little 'my-terminal-is-better-than-your-terminal' but after I was introduced to Terminator I rarely use the standard terminal.

The best feature of Terminator is that you can have many windows and tabs open within the app, replicating what I was doing with separate terminal sessions snapped to different corners of the screen. To take this to another level (because that is not the killer use case), you can link windows together and issue the identical command to all of them. This was invaluable to me on a recent project where I was managing a cluster of servers and wanted to issue the same SQL query to each of them simultaneously to determine if they were all in sync.

 

5. Cowsay

Again this is a pretty minor item, all things considered, but it does make logging messages that much more fun.

I have been doing a lot of work with Ansible, an open source provisioning tool. I will have more to say on this in a future blog, but for now, if you are not familiar with it, consider it a way to script your server deployments so that it is easy to deploy new servers with identical configurations.

Ansible uses cowsay to output many of its messages to the screen as the playbooks run. Given that some playbooks take time, it helps break up the monotony as the cows mooove across the screen.

Just to give you a feel for how cowsay can immediately improve your life:

(Screenshot of cowsay output)

 

4. Multiple virtual desktops

I don't know how I will be able to work on 1080x768 again. I have grown used to multiple screens - and not just multiple screens, but multiple virtual screens. Ubuntu and GNOME make this a no-brainer, and you can easily have 4 virtual screens with real estate of 3840x2160. By splitting this into virtual screens of 1920x1080, I can easily put different types of work on different virtual screens and focus on one particular type of work at a time. I like to have a little PHP going on one, a little SAP UI5 on another, and perhaps email and other messaging on a third. I can set up each screen with all the resources I need for that work and then leave it until I want to come back to it. This saves context-switching time, as it is no trouble for my machine to leave a project or two open with the virtual machines they need, so I can come back to them whenever I like.

I might restart my machine once a week if I need to, so to be able to set this up and then leave it all running is a great timesaver.

3. A better understanding of your computer

In my last blog I mentioned the command line as one of the great benefits of Ubuntu. Now, I love great user interfaces as much as the next person. In fact, I am passionate about creating great user experiences for my clients. One of the best ways to do this is to simplify, simplify, simplify and remove all the complexity that does not affect the transaction at hand.

As a developer and as a DevOps'er you need to be familiar with what is going on with your servers. There are many great graphical programs that enable this, but by using the command line I feel like I am operating at a much closer level to the computer, and after a while of doing this the muscle memory kicks in and it becomes second nature. Also, things like $ and ^ that are part of regular expressions have identical meanings in vi (yes, vi). Knowing basic vi is also handy for when you are ssh'd into your headless server and, guess what, Sublime isn't installed but vi is. Nano probably is too, but I'd rather not talk about that.

2. Alias you, Alias me.

Whilst Jason Bourne is flying around the world with half a dozen passports, an alias or two can be a great thing to stick in your back pocket or your .bash_alias file.

An alias can take a long command-line sequence and reduce it to a couple of easily typed letters. For example, I have a scripted Vagrant box that I used to have to change to the correct directory to start; with a simple, short script that I have aliased to two letters I can start that virtual machine, and with another alias I can ssh into the machine and I am away.

Also, the ~/.ssh/config file is a winner, as you can define easy-to-remember aliases for all those servers you are managing and specify which user you want to log in as. There is also a trick for using ssh config to differentiate your GitHub accounts if you have multiple accounts for multiple clients.

1. Configurability

Yes, I saved the best for last. The best part of Ubuntu, or any other variant of Linux, is its configurability. If you don't like the UI, or pretty much any other part of the OS, you can switch it out for another. This is one thing that MacOS and Windows don't offer to any large extent. While most of the tips and tools mentioned here can be applied on those OSs, you can't swap out your UI.

 

This sort of discussion about editors, terminals and OSs can get a little heated for no good reason, and I am not saying that what I run is best and there is no other. Work out what works for you. In fact, I run all of these OSs (Linux, MacOS and Windows) and they all have advantages, but for my main machine - Ubuntu is where I am staying.

 

So why do you run what you run?

Last week, the Developer Relations team organized an Open Source meetup around success stories, failures and best practices of Open Source initiatives in bigger organizations. This time - it was our second meetup - we had 75 people attending. Our community grew from 18 to 75 within a month - that is pretty impressive! I was obviously really excited to welcome all participants and our 3 speakers: Zach Chandler from Stanford University, SAP's Tools Team (Ben Boeser, Dominik Tornow, David Farr), and Mark Hinkle from Citrix! Based on the feedback from our participants, most people really enjoyed the variety of speakers and valued the different perspectives on Open Source initiatives in bigger organizations.


If you were not able to attend, please find the slides of the talks below:

 

I also wanted to thank Inga Bereza and Garick Chan for their support as Co-Organizers - they did a great job!

 

For this second meetup, we actually changed our format based on the feedback we received from our early community members. This time we had one more speaker, shorter speaking slots and more focused talks. For the next meetup the speaking slots will get shorter again - our members like to discuss things in more detail and exchange their experiences. They also want to have small "pitching slots" to talk about their projects within the community! Oh yeah - also, we will have more pizza - that was requested most!

 

Overall, this was an amazing experience and I was really glad to have so many people joining our community! Check out the pictures below (or check out the meetup description directly) to see what you missed

(Photos from the meetup)

 

Here are some impressions from our community members:

  • Rick A: "Great talk from a wide variety of speakers regarding use of open source at their workplace. Very informative, exactly what I was hoping to hear about. Pragmatic talks about open source in general, specific discussions on Drupal, Git, OpenSSL, and others."
  • Daniel K.: "It was great! Very reassuring that other big organizations are having similar experiences...and overcoming them."
  • Jack P.: "Very useful meeting, great hosts and talks."
  • Greg P.: "All the speakers were great. Big thanks to the organizers & SAP for hosting."

 

If you are an Open Source enthusiast, please join our meet up group and RSVP for the next meet up on September 24th!

 

Open Source Bay Area Meet up Group

Nigel James

Going a little Ubuntu

Posted by Nigel James Jul 14, 2014

Earlier this year I was about to take on a new client and it was very clear that I would need to upgrade my computer.

 

The fun part about this new client is that there was no SAP technology to be seen and it was a very open source house - open source in the sense that it used a lot of open source technologies and open source thinking.

 

The fun part for me at the start of this assignment was picking out a new beast on which to practise my craft.

 

After looking into the various options available I went for a Dell Latitude with stacks of RAM and an SSD drive, and chose Ubuntu for the OS.

 

WOW!  I can almost hear you drop off your chairs.

 

I have been a Windows guy for my whole career. Not that I have particularly enjoyed it. Windows can be a right pain in the neck at times, but at the end of the day it works most of the time and had everything I needed. I saw a lot of my developer colleagues heading down the shiny iMBP or iAir path, and while that looked very shiny and attractive, here are my reasons for going with Ubuntu, enjoying it and never going back to Windows again (unless I am forced to).

 

  1. Everything I need is available on Ubuntu.
    There is nothing that I need that is not on Ubuntu. Actually that is not quite strictly true in the most pedantic sense of the word but for everything I need to do there is an option on Ubuntu


  2. What's good for the server is good for the desktop.
    The great thing about working with Ubuntu on the desktop is muscle memory. The servers run Ubuntu - web servers, database servers, monitoring servers, email servers are all running Ubuntu. Not that all those processes are running on the desktop, but it does mean that when you are working on the production servers all the same commands work exactly the same way. If you need to work out whether your server is running out of disk space, using the same df or du commands makes it easy to remember.

  3. Embrace your inner command line.
    I loved Windows because I could avoid the command line. Even though Windows does now have PowerShell, and it is powerful, I used to avoid getting into the DOS command line because it was really a pain in the neck. With Ubuntu, and even with the MacOS systems in my life, I love the command line. A lot of the time it is easier to type a command than use a GUI equivalent. Also, because tools like grep become part of everyday work, regular expressions become (slightly) less daunting. They just become part of your muscle memory.


  4. Do I need to mention Windows 8?
    The short answer is no. I have used Windows 8 a little bit on some machines that I had to, and I can't say that it was a pleasant experience. It really is two user interface paradigms nailed together badly.

  5. Installing software is a snap
    I had the impression that installing software on Linux systems meant compile, make, etc., but because Ubuntu and similar Debian-based systems have critical mass, the software repositories are up to date and it is easy to sudo apt-get install <program>. Pretty much anything you need is an apt-get away.

  6. The performance is awesome
    This is perhaps down to Dell and the fact that I have all the memory and SSD that I do, but to be up and running from a cold start in 30 seconds is fantastic. My old, clunky, creaking Windows machine was literally a case of come back after you have made your second coffee. I know I am not comparing apples to apples here, but I haven't yet really made this machine creak.

  7. Virtual machines rock
    VirtualBox is the best. Teamed with Vagrant and Ansible, it makes a great combination for local servers that can be easily created, provisioned, deployed and destroyed. They make it easy to work on similar setups right across the software landscape.

 

Seven good reasons to leave the realm of Windows and not get dragged over to the expensive side of the force.

 

If you are looking to replace your machine soon take another look at Ubuntu. It is not as scary as you might think.

 

I was first introduced to Ubuntu by a basis consultant years ago. Now I look back and wonder why it took so long to get on board.

 

I would love to hear of your feedback and how SAP software can be made more Linux friendly.

FISL (International Free Software Forum) is one of the biggest events aimed at promoting and adopting free software. It takes place every year in Porto Alegre, the capital of Rio Grande do Sul, the southernmost state of Brazil and the state where SAP Labs Latin America is located.

 

The event is a good place to exchange ideas and knowledge and there you find students, researchers, social movements for freedom of information, entrepreneurs, Information Technology (IT) enterprises, governments, and other interested people. It gathers discussions, speeches, personalities and novelties both national and international in the free software world.

 

I have gone to FISL every year since 2009 (its 10th edition at that time), and in 2010 SAP formed its first partnership with the event. That is when I got to know SAP better and had an interview for a developer position during the event. Less than a month after that I was working at SAP.

 

This year SAP participated in the event again, and I was able to give something back by being at FISL representing SAP.

(Photo: SAP @ FISL15)

 

I was there talking about our Open Source contributions (OpenUI5, Eclipse, Apache projects, etc.) and sharing my experience as an SAP employee. The results of the event were great: many people came by our stand (not only for the gifts) and we had many good conversations. But in the end, I think the most important thing for me is that I may have inspired others the way I was inspired 4 years ago.


Besides me, many people made SAP's participation at FISL15 a success, among them: Allan Silva, Ana Pletsh, Andre Leitzke, Debora Alves, Douglas Maitelli, Edgar Prufer, Fabio Serrano, Jucieli Baschirotto, Lucas Escouto and Matias Schertel.

As a continuation of the blog SAP OData Library Contributed to Apache Olingo (Incubator) I wanted to share some further insights into the Apache Olingo Incubator project.

 

About two years ago SAP started to invest in a new OData Library (Java). The goals for this effort were to implement a library supporting the OData Specification Version 2 - with nearly the same feature set one can find in SAP NetWeaver Gateway - and to open source the library at Apache in order to build a developer community around OData.

 

In mid-2013, SAP made a software grant for the library and contributed the source code to the newly formed Apache Olingo Incubator project. Shortly after, the project released version 1.0.0 in October 2013 and version 1.1.0 in February 2014. The next version, 1.2.0, is already on its way and currently available as a snapshot on Apache Olingo Incubator. There you can also find the release notes. The releases cover the OData Specification Version 2. The committers of the project work constantly on the documentation for users of the open source library and are happy to answer questions via the dev mailing list or via Jira.

 

In the meantime, OData is evolving into an OASIS standard, so you can watch out for any news in the OASIS OData Technical Committee. The community work now focuses on implementing both client and server libraries for the OASIS OData Standard (Version 4). These efforts are supported by new contributions for Java (ODataClient) and JavaScript (datajs), both client libraries for consuming OData services.

 

Apache Olingo is evolving into a project hosting OData implementations in different languages and technologies, which is already a great success, but the community also has some more milestones to focus on:

 

  • Graduation, which means that the project leaves the incubator behind and becomes a top level project within the Apache Software Foundation
  • Agreement within the community for a common roadmap of V4 feature development
  • Merge the contributions into a common code base to go forward with the OData OASIS Standard (Version 4) feature development
  • Release a first version of an OData Java Library supporting V4
  • Release a first version of datajs supporting V4

 

Last but not least I also wanted to share some short facts around Apache Olingo (Incubator):

 

  • 2 releases, the third one is on its way
  • 19 initial committers
  • 7 new committers
  • 75 persons active on the mailing list
  • 1025 commits in the git repositories
  • more than 1500 mails via dev mailing list
  • more than 150 Jira Issues closed / resolved
  • about 20 tutorials available

 

With that I think there will be interesting times ahead of us in shaping the future of the Apache Olingo project.

 

We are interested in your thoughts, so please share your comments and feedback with us by commenting on this post; if you already have more detailed questions or feature requests, you may also use the dev mailing list for Apache Olingo directly. We, that is Christian Amend, Tamara Boehm, Michael Bolz, Jens Huesken, Stephan Klevenz, Sven Kobler-Morris and Chandan V.A. as the main initial committers, are happy to answer your questions.

Introduction

 

Source code for the application is available on GitHub.

 

An application using OpenUI5 at the front-end will sooner or later need to connect to the back-end services for some business logic processing. In this blog entry we'll show how we can use the popular Spring MVC framework to expose REST-like endpoints for such server-side processing. Spring MVC makes it very simple to setup and configure an interface which will handle requests with Json payload, converting all domain model objects from Json to Java and back for us.


  • Simple Maven project with embedded Tomcat for testing locally
  • Servlet 3.0, no-XML, set-up for the web application using Spring's annotation based configuration
  • JSR-303, Bean Validation through annotations, used on the model POJOs
  • Spring MVC set up with a web jar for OpenUI5 runtime and automatic serialization of the model to Json


Useful links


Here are some useful links.


 

Application

 

This is a very simple single-page application which has a table of fruit, each having a name (String) and a quantity (integer). One can add a new fruit, delete an existing entry from the table or update an existing fruit using an inline-edit.

 

(Screenshot of the fruit table application)

 

Taking just the "add" operation as an example, we can see that the home view, home.view.js, calls the controller with a JavaScript object constructed so as to represent a Fruit when it is serialized as part of the Ajax request by the controller.

 

 

// add button
        var oButton = new sap.ui.commons.Button({
            text: "Add",
            press: function () {
                // check if quantity is a number
                if (oInput2.getValueState() !== sap.ui.core.ValueState.Error) {
                    oController.add({
                            // id attribute can be ignored
                            name: oInput1.getValue(),
                            quantity: oInput2.getValue()
                        }
                    );
                }
            }
        });

 

The controller, home.controller.js, then simply sends the serialized Fruit object as the content of a POST request to the appropriate endpoint (/home/add) made available by the Spring MVC controller. Once the Ajax call returns the updated model data, it is simply rebound to the JSONModel associated with the view.

 

add: function (fruit) {
        this.doAjax("/home/add", fruit).done(this.updateModelData)
            .fail(this.handleAjaxError);
    },
updateModelData: function (modelData) {
        console.debug("Ajax response: ", modelData);
        var model = this.getView().getModel();
        if (model == null) {
            // create new JSON model
            this.getView().setModel(new sap.ui.model.json.JSONModel(modelData));
        }
        else {
            // update existing view model
            model.setData(modelData);
            model.refresh();
        }
    }

 

In what follows we'll look in detail at how to implement a REST-like endpoint handling Json payloads using the Spring MVC framework.

 

Spring MVC set-up

 

We are using the Servlet 3.0, no-web.xml approach based on Java annotations to set up a simple Spring MVC web application. For this we need an implementation of org.springframework.web.WebApplicationInitializer, where we specify the class which will be used when constructing an instance of org.springframework.web.context.support.AnnotationConfigWebApplicationContext and where we declare a dispatcher servlet. Here is our implementation, com.github.springui5.conf.WebAppInitializer.

 

 
public class WebAppInitializer implements WebApplicationInitializer {
    private static final Logger logger = LoggerFactory.getLogger(WebAppInitializer.class);
    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        logger.info("Initializing web application with context configuration class {}", WebAppConfigurer.class.getCanonicalName());
        // create annotation based web application context
        AnnotationConfigWebApplicationContext webAppContext = new AnnotationConfigWebApplicationContext();
        webAppContext.register(WebAppConfigurer.class);
        // create and register Spring MVC dispatcher servlet
        ServletRegistration.Dynamic dispatcher = servletContext.addServlet("dispatcher",
                new DispatcherServlet(webAppContext));
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }
}

The actual configuration is then given by the com.github.springui5.conf.WebAppConfigurer class.

 

@Configuration
@EnableWebMvc
@ComponentScan(basePackages = {"com.github.springui5.web"})
public class WebAppConfigurer extends WebMvcConfigurerAdapter {
    /**
     * Enable default view ("index.html") mapped under "/".
     */
    @Override
    public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }
    /**
     * Set up the cached resource handling for OpenUI5 runtime served from the webjar in {@code /WEB-INF/lib} directory
     * and local JavaScript files in {@code /resources} directory.
     */
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/resources/**").addResourceLocations("classpath:/resources/", "/resources/**")
                .setCachePeriod(31556926);
    }
    /**
     * Session-scoped view-model bean for {@code home.view.js} view persisting in between successive Ajax requests.
     */
    @Bean
    @Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
    public HomeViewModel homeModel() {
        return new HomeViewModel();
    }
}

We use the helpful EnableWebMvc annotation, which configures our application with some useful defaults. For example, Spring will automatically configure an instance of the org.springframework.http.converter.json.MappingJackson2HttpMessageConverter message converter, which uses Jackson to convert between Json and Java and to serialize the model returned by the Ajax-handling methods of the controller.

 

Another interesting thing to notice is that we are using Spring's resource servlet to serve the static JavaScript (OpenUI5 runtime) from the web JAR available on the classpath of the application. To create the web JAR, we can simply package the OpenUI5 runtime JavaScript, available for download, into a JAR and add it to the WEB-INF/lib directory of our project.

 

The session-scoped bean, com.github.springui5.model.HomeViewModel, is responsible for maintaining the reference to the model object corresponding to the client's view.

 

public class HomeViewModel {
    private HomeModel homeModel;
    /**
     * Initializes and returns a new model.
     */
    public HomeModel getNewHomeModel() {
        homeModel = new HomeModel();
        return homeModel;
    }
    /**
     * Returns the model for this view-model.
     */
    public HomeModel getHomeModel() {
        if (homeModel == null) {
            throw new RuntimeException("HomeModel has not been initialized yet.");
        }
        return homeModel;
    }
}

The ComponentScan annotation specifies where to look for the controllers of the application. The single controller for the home view is com.github.springui5.web.HomeController.

 

@Controller
@RequestMapping(value = "/home", method = RequestMethod.POST, consumes = "application/json", produces = "application/json")
public class HomeController {
    private static final Logger logger = LoggerFactory.getLogger(HomeController.class);
    /**
     * Session-scoped view-model bean.
     */
    @Autowired
    private HomeViewModel vm;
    /**
     * Initializes the model for the view.
     */
    @RequestMapping
    public
    @ResponseBody
    HomeModel handleInit() {
        return vm.getNewHomeModel().show();
    }
    /**
     * Adds the {@linkplain com.github.springui5.domain.Fruit} parsed from the request body to the list of fruit in the
     * model.
     */
    @RequestMapping("/add")
    public
    @ResponseBody
    HomeModel handleAdd(@Valid @RequestBody Fruit fruit, BindingResult errors) {
        if (errors.hasErrors()) {
            throw new FruitValidationException(errors);
        }
        return vm.getHomeModel().add(fruit).clearError().show();
    }
    /**
     * Deletes the {@linkplain com.github.springui5.domain.Fruit} with matching {@code id} from the list of fruit in
     * the model.
     */
    @RequestMapping("/delete/{id}")
    public
    @ResponseBody
    HomeModel handleDelete(@PathVariable long id) {
        return vm.getHomeModel().delete(id).clearError().show();
    }
    /**
     * Updates the {@linkplain com.github.springui5.domain.Fruit} with matching {@code id} from the list of fruit in
     * the model.
     */
    @RequestMapping("/update")
    public
    @ResponseBody
    HomeModel handleUpdate(@Valid @RequestBody Fruit fruit, BindingResult errors) {
        if (errors.hasErrors()) {
            throw new FruitValidationException(errors);
        }
        return vm.getHomeModel().update(fruit).clearError().show();
    }
    /**
     * Custom exception handler for {@linkplain FruitValidationException} exceptions which produces a response with the
     * status {@linkplain HttpStatus#BAD_REQUEST} and the body string which contains the reason for the first field
     * error.
     */
    @ExceptionHandler
    @ResponseStatus(HttpStatus.BAD_REQUEST)
    public
    @ResponseBody
    HomeModel handleException(FruitValidationException ex) {
        String error = String.format("%s %s", ex.getRejectedField(), ex.getRejectedMessage());
        logger.debug("Validation error: {}", error);
        return vm.getHomeModel().storeError(error);
    }
}

We are autowiring the view-model bean into the controller. It will be reinitialized by Spring automatically for each new client of the application (new browser, for example). Ajax request handling is configured on the class and method levels via RequestMapping annotations specifying the URL paths available in the form /home or /home/add. Some methods accept a model object (Fruit) deserialized or unmarshalled from the Json in the body of the POST request via RequestBody annotations.

 

Each controller method returns an instance of HomeModel which will be automatically serialized or marshalled to Json and later bound to the JSONModel on the client side.

 

Model and validation

 

The domain model used on the server is a couple of simple POJOs annotated with JSR-303 annotations (using Hibernate Validator implementation). Here is the class for com.github.springui5.model.HomeModel.

 

public class HomeModel implements Serializable {
    private static final Logger logger = LoggerFactory.getLogger(HomeModel.class);
    private List<Fruit> listOfFruit;
    private String error;
    public List<Fruit> getListOfFruit() {
        return listOfFruit;
    }
    public void setListOfFruit(List<Fruit> listOfFruit) {
        this.listOfFruit = listOfFruit;
    }
    public String getError() {
        return error;
    }
    public void setError(String error) {
        this.error = error;
    }
    public HomeModel() {
        listOfFruit = new ArrayList<>(Arrays.asList(new Fruit("apple", 1), new Fruit("orange", 2)));
    }
    public HomeModel add(Fruit fruit) {
        // set id, it is 0 after deserializing from Json
        fruit.setId(Fruit.newId());
        listOfFruit.add(fruit);
        return this;
    }
    public HomeModel delete(final long id) {
        CollectionUtils.filter(listOfFruit, new Predicate() {
            @Override
            public boolean evaluate(Object object) {
                return ((Fruit) object).getId() != id;
            }
        });
        return this;
    }
    public HomeModel update(final Fruit fruit) {
        // find the fruit with the same id
        Fruit oldFruit = (Fruit) CollectionUtils.find(listOfFruit, new Predicate() {
            @Override
            public boolean evaluate(Object object) {
                return ((Fruit) object).getId() == fruit.getId();
            }
        });
        // update the fruit
        oldFruit.setName(fruit.getName());
        oldFruit.setQuantity(fruit.getQuantity());
        return this;
    }
    public HomeModel storeError(String error) {
        this.error = error;
        return this;
    }
    public HomeModel clearError() {
        this.error = null;
        return this;
    }
    public HomeModel show() {
        logger.debug(Arrays.toString(listOfFruit.toArray()));
        return this;
    }
}

And here is the com.github.springui5.domain.Fruit class.

 

public class Fruit implements Serializable {
    private static long offset = 0L;
    private long id;
    @NotNull
    @NotBlank
    private String name;
    @NotNull
    @Min(1)
    private int quantity;
    /**
     * Returns a new value for {@code id} attribute. Uses timestamp adjusted with the static offset. Used only for
     * illustration.
     */
    public static long newId() {
        return System.currentTimeMillis() + offset++;
    }
    public Fruit() {
        // default constructor
    }
    public Fruit(String name, int quantity) {
        this.id = Fruit.newId();
        this.name = name;
        this.quantity = quantity;
    }
    public long getId() {
        return id;
    }
    public void setId(long id) {
        this.id = id;
    }
    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public int getQuantity() {
        return quantity;
    }
    public void setQuantity(int quantity) {
        this.quantity = quantity;
    }
    @Override
    public boolean equals(Object obj) {
        return obj instanceof Fruit && ((Fruit) obj).getId() == id;
    }
    @Override
    public String toString() {
        return "Fruit [id: " +
                id +
                ", name: " +
                name +
                ", quantity: " +
                quantity +
                "]";
    }
}

 

Upon the initial request for the model data (/home) this is what the controller returns. Notice how the list of Fruit domain objects was automatically serialized to Json for us.
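As an illustration (this snippet is not part of the project, and the class name SerializationDemo is made up), serializing a freshly constructed HomeModel with Jackson - which is essentially what MappingJackson2HttpMessageConverter does for the response - produces a Json object with the listOfFruit array and the error attribute:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.github.springui5.model.HomeModel;

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        // Serialize the initial model (apple and orange) the same way the
        // message converter does for the /home response.
        ObjectMapper mapper = new ObjectMapper();
        String json = mapper.writeValueAsString(new HomeModel());
        // Prints something like (the id values are timestamp-based):
        // {"listOfFruit":[{"id":...,"name":"apple","quantity":1},
        //                 {"id":...,"name":"orange","quantity":2}],"error":null}
        System.out.println(json);
    }
}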

 

(Screenshot: Json response for the initial /home request)

 

If an invalid value is submitted as part of the request body (for example, a quantity of 0 when adding a new fruit), it is automatically picked up by Spring and assigned to the org.springframework.validation.BindingResult parameter of the corresponding request-handling method. The application then exposes the validation error message as the value of the model's "error" attribute.
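The FruitValidationException thrown by the controller is not listed in this post; here is a minimal sketch of what such a class could look like, assuming it simply captures the first field error from the BindingResult (the actual class in the GitHub project may differ):

import org.springframework.validation.BindingResult;
import org.springframework.validation.FieldError;

public class FruitValidationException extends RuntimeException {

    private final String rejectedField;
    private final String rejectedMessage;

    public FruitValidationException(BindingResult errors) {
        // Capture the first field error, e.g. "quantity must be greater than or equal to 1"
        FieldError fieldError = errors.getFieldError();
        this.rejectedField = fieldError != null ? fieldError.getField() : "";
        this.rejectedMessage = fieldError != null ? fieldError.getDefaultMessage() : "validation failed";
    }

    public String getRejectedField() {
        return rejectedField;
    }

    public String getRejectedMessage() {
        return rejectedMessage;
    }
}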

 

Testing the application

 

This is a standard Maven application which needs some mandatory dependencies to compile and run.

 

<!-- all of the necessary Spring MVC libraries will be automatically included -->
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-webmvc</artifactId>
  <version>4.0.0.RELEASE</version>
</dependency>
<!-- need this for Jackson Json to Java conversion -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.3.0</version>
</dependency>
<!-- need this to use JSR 303 Bean validation -->
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-validator</artifactId>
  <version>5.0.2.Final</version>
</dependency>

It also uses the Tomcat Maven plugin for running the project in an embedded Tomcat 7, using mvn tomcat7:run from the command line.

 

Conclusion

 

Using Spring MVC with OpenUI5 the way we have described here has some advantages. We can easily set up a REST-like endpoint which will automatically convert Json payloads to Java domain objects, allowing us to concentrate on manipulating the model in Java without worrying about how the changes will be reflected in the JavaScript on the client side. We can also plug in domain object validation based on annotations (JSR 303), using Spring's validation mechanism. This allows us to process all business logic validation on the server side in a declarative and transparent manner, leaving only checks for formatting errors on the client side.

 

There are some disadvantages to this approach, however, the main one, of course, being that we return the entire model for each request, which results in unnecessarily large data transfers. This should not be a limitation for relatively simple views, but it can become a problem for complicated views with a lot of data.

Open source is changing the way software is developed and consumed. It is also SAP's intention to contribute to open source and integrate open source into the product line. With the same intention, the OData JPA Processor Library went the open source way a few months back and yes, we are now open source software, along with the OData Library (Java), at the Apache Software Foundation (ASF); see the Apache Olingo project for details.

 

 

The OData JPA Processor Library is a Java library for transforming Java Persistence API (JPA) models based on JPA specification into OData services. It is an extension of the OData Library (Java) to enable Java developers to convert JPA models into OData services. For more details check SAP OData Library Contributed to Apache Olingo (Incubator) which gives you an introduction on why OData and the features of the OData Library (Java).

 

The artifacts to get started with the OData JPA Processor Library, the documentation and the code are all available on Apache Olingo. The requirements for building an OData service based on a JPA model are quite low; for a quick start you can refer to the following tutorial. In short, you just have to create a web application project in Eclipse (both Kepler and Juno versions are supported), implement a factory to link to the JPA model and register the factory class within the web.xml file - it's that simple. The OData JPA Processor Library also supports more advanced features, like the ability to redefine the metadata of the OData services (for example renaming entity type names and their properties) and to add additional artifacts like function imports to the OData service. A sketch of such a factory is shown below.
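To give a feel for how little code this involves, here is a minimal sketch of such a factory. The package names, the getODataJPAContext()/setEntityManagerFactory() calls and the persistence unit name "salesorderprocessing" are assumptions based on the Olingo documentation and may differ between releases, so treat this as an illustration rather than the definitive implementation:

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.apache.olingo.odata2.jpa.processor.api.ODataJPAContext;
import org.apache.olingo.odata2.jpa.processor.api.ODataJPAServiceFactory;
import org.apache.olingo.odata2.jpa.processor.api.exception.ODataJPARuntimeException;

public class SalesOrderServiceFactory extends ODataJPAServiceFactory {

    // Hypothetical persistence unit name; use the one defined in your persistence.xml
    private static final String PUNIT_NAME = "salesorderprocessing";

    @Override
    public ODataJPAContext initializeODataJPAContext() throws ODataJPARuntimeException {
        ODataJPAContext context = getODataJPAContext();
        // Link the OData runtime to the JPA model via its EntityManagerFactory
        EntityManagerFactory emf = Persistence.createEntityManagerFactory(PUNIT_NAME);
        context.setEntityManagerFactory(emf);
        context.setPersistenceUnitName(PUNIT_NAME);
        return context;
    }
}

The factory class is then registered as a servlet init parameter in web.xml, as described in the tutorial.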

 

So if you are on the lookout for reliable and easy-to-use software for transforming JPA models into OData services, you now know where to go. The libraries are out there in the open; please do explore them, extend them and let us know the new faces you give them. Use the mailing list available here not just to let us know how you have used the libraries, but also to report bugs and ask questions - the team will be glad to hear from you and to answer your queries.

 

With that, we say we have arrived in the Open Source Software world! And this is just the beginning - there is more to come (or at least that is the intent), so keep an eye on the Apache Olingo project.

 

Related Information:

In today's mobile and agile business environment, it is important to unlock the enterprise data held by applications and other systems, and enable its consumption from anywhere. The Open Data Protocol (OData), on its way to being standardized by Microsoft, IBM, SAP and a lot of other companies within OASIS (an international standards body for advancing open standards for the information society), provides a solution to simplify data sharing across applications in enterprises, in the cloud, and on mobile devices. OData leverages proven Internet technologies such as REST, ATOM and JSON, and provides a uniform way to access data as well as data models.

 

SAP recently contributed the Java OData Library to Apache Olingo (Incubator). After just a few days in public we got a lot of interest from other companies. That makes us confident we can build up a community working on evolving this library to the latest version, which is the upcoming OData standard resulting from the standardization process at OASIS.

 

Talking about features, there is already a lot to be discovered in the library. Since the Entity Data Model, URI parsing (including all system query options) and (de)serialization for ATOM/XML and JSON are already supported, one can build OData services supporting advanced read/write scenarios. Features like $batch are currently being added; conditional handling, advanced client support and detailed documentation are on the roadmap for the upcoming months.

 

The guiding principles during the implementation of the OData Library were to be OData 2.0 specification compliant and have an architecture in place to enhance the library in a compatible manner as much as possible. The clear separation between Core and API and keeping dependencies down to a minimum is important. The community should have the option to build extensions for various data sources on top of the library. The JPA Processor as one additional module provided is an excellent example for such an extension.

 

Besides the Core and API packages, there is also an example provided in the ref and ref.web packages in order to show the features in an OData service implementation and to enable new features to be integrated into that service for full integration tests (fit).

 

We’ll keep you posted once the first release is available to digest. You can already dig into the coding, provide bug reports, feature requests and questions via Jira or by using the mailing list. All the information is available in the support section of the web site.

 

Further Information:

Hi

 

SAP's software is known for its role running many of the world's largest companies, but not necessarily for its user-friendliness. As part of an ongoing effort to change this perception, SAP unveiled Fiori, a set of 25 lightweight "consumer-friendly" applications that can run on desktops, tablets and mobile devices, on Wednesday at the Sapphire conference in Orlando.

Fiori applications are written in HTML5, which makes multiplatform deployments possible. They also target some of the most common business processes a user might perform, such as creating sales orders or getting their travel expenses approved, according to SAP's announcement.

SAP has grouped the initial Fiori applications into four separate employee types, including manager, sales representative, employee and purchasing agent. Fiori is priced per user and available now, but specific costs weren't disclosed Wednesday. It's possible to deploy Fiori as a single group of applications, as well as separate Web applications and within portals, according to a statement. Some 250 customers helped SAP develop Fiori and make the apps more user-friendly, SAP said.

SAP has basically been compelled to develop something like Fiori, according to one observer. "Customers want enterprise-class apps with consumer-grade experiences," said analyst Ray Wang, CEO of Constellation Research. "Fiori is one of the ways SAP customers can pull the data out of their existing systems, and democratize that information so that everyone can benefit from access to the SAP system." "For years, the issue was that SAP data was hidden or not easily accessed," Wang added. "This is one small step to make that change."

SAP's App Haus, a startup-like development group within the company, has been working to create more usable and appealing application interfaces. It wasn't immediately clear Wednesday whether the App Haus team is involved with Fiori.

The vendor has also launched a product called Screen Personas, which gives users the ability to rejigger SAP software screens to better fit their job role and personal preferences. There's plenty more to come, SAP co-CEO Jim Hagemann Snabe said during a keynote.

 

Thank You

As some of you might know, SAP is a contributor to the open source project Eclipse. As part of that engagement we also organize so-called "Eclipse DemoCamps" to show what one can do with this great development platform, which is used a lot in the IT industry and is also the IDE of choice for SAP HANA Cloud Platform.

 

This year's Eclipse DemoCamp will be held on the planned release date for Kepler, the release name of Eclipse V4.3.

 

In case you are interested in joining the event you can register for free or even propose a speaking slot at the Eclipse DemoCamp and join speakers like Mike Milinkovich, the Executive Director of the Eclipse Foundation.

You'll be able to listen to interesting talks, get free drinks & food, and have plenty of opportunities to connect with other developers during the event.

 

So register today and join us in Walldorf for the Eclipse DemoCamp.

 

Best,

Rui

Welcome to the last episode about the SAP Open Source Summit 2012. In the first three episodes, I shared my overall impressions, key parts of my presentation on the corporate open source strategy, and insights from guest keynotes, respectively. Now I want to finish with a few areas that we are focusing on next.

 

Given that we now have an open source strategy in place at SAP, the focus is much more on execution. Whenever you want to execute a strategy, you may run into a number of challenges - be they of a technical, organizational or procedural nature. Specifically with regard to the much stronger use of open source and contribution to open source projects, we have found four major types of challenges: reuse and versioning, alignment of release schedules, product security and long-term support, as shown in the slide below. I used this slide also in my keynote last week at ApacheCon Europe 2012 to illustrate key challenges for managing open source from an enterprise perspective.

 

(Slide: key challenges for managing open source from an enterprise perspective)

 

  • Reuse and Versioning
    If you focus on a given open source foundation like the Apache Software Foundation or the Eclipse Foundation, it is very likely that due to the nature of the foundation's development processes there is not much overlap between the various open source technologies. But open source projects today in many cases start somewhere else and don't necessarily follow a defined governance model. This is good since it is more flexible and allows the emergence of many different ideas. For example, GitHub today has almost 4.3 million projects. But the lack of coordination is also a challenge in that it becomes more difficult to find the right technologies and to understand their level of maturity and adoption. And it increases the likelihood of overlapping and similar technologies. This is not necessarily bad, but it can become a management challenge of open source adoption in the enterprise.

    One particular challenge in this regard is that different product teams might choose different technologies for the same purpose. The integration of, for example, four different open source XML parsers into our product line can result in maintenance overhead - four different technologies need to be maintained over the course of the products' support schedule. Another challenge is the selection of the right version of the open source technology. Even if all SAP product teams agree to select the same open source XML parser, they may have done so over the course of a few years and have chosen different versions of that XML parser. It is preferable to use only one version - and actually the most recent stable one - for all products because it also minimizes the risk to miss important security bug fixes in newer versions. But the upgrade might get complicated, for example, if the XML parser's APIs that were used to integrate it into the SAP product, have changed incompatibly.
  • Release Schedules
    The adoption of the most recent stable version of an open source technology is a reasonable goal to pursue on its own. But since the release schedules of the open source technology and the SAP product are not the same, this is not necessarily an easy task and some updates of the embedded open source technology might become necessary after the SAP product has already been shipped. There are some exceptions like the Eclipse Foundation that has decided to deliver one stable release of the Eclipse Platform once per year - this simplifies the planning exercise and allows us to, for example, develop a stable Eclipse release train SAP-internally on a yearly basis. The whole exercise can get more complicated when SAP is an active contributor to the open source project. By no means is there a guarantee that the extensions we developed SAP-internally will be adopted without any changes by the open source project lead and in time before the SAP product is being released to the market. This means that we sometimes need to live with a fork of the open source technology. The operational recommendation is to avoid such forks and to always seek a close alignment between the open source standard version and the version embedded in the SAP product.
  • Security
    Product security and security response management for SAP products clearly needs to include responsibility for fixing security vulnerabilities in embedded open source technologies. On the one side - since the source code of the open source technology is openly available and used by many other firms - there are more sources for finding potential security vulnerabilities. On the other side, this results in an obligation to respond even more quickly to such findings or adopt known resolutions. Enterprise customers need to be able to patch their systems with respective bug fixes before an issue is discussed in public. This may sound like a paradox, since further development of the open source technology happens "in the open." But there are resolutions - the Apache Software Foundation, for example, has a process by which security vulnerabilities can be reported on a mailing list that is only accessible to the leads of the respective open source project. Until a resolution is available, the conversation continues in a private environment. The vulnerability is finally reported publicly only once a bug fix has been made available. This significantly limits the risk that IT user organizations continue to run systems that are exploitable due to known vulnerabilities.
  • Long-Term Support
    SAP products are typically supported for at least seven years. Customers expect support for the complete solution, not just for the software components that were developed SAP-internally. From a support perspective, customers actually shouldn't need to know which open source technologies have been embedded. Consequently, we need to provide a means to support the respective open source technologies, including the application of bug fixes as necessary. Fortunately, the various quality checks that are applied when selecting open source technologies at SAP radically reduce the number of necessary bug fixes. But the need can never be completely avoided, and sometimes a fix is required for version 3.2 when the open source project has already released version 6.0 ...
    In principle, there are two main options: either to upgrade the embedded open source technology to version 6.0 or to apply a local bug fix to version 3.2. Both options can be rather complicated. An alternative is to join forces with other interested open source developers and users and to share the cost of maintaining older versions of the open source technology. SAP has long advocated such an approach for Eclipse projects. The recent establishment of the Long Term Support Industry Working Group (LTS IWG) is a good step in this direction and, fortunately, a number of other Eclipse members see the same need and have joined the LTS IWG to solve this important dilemma.

 

So I hope that this blog post series about the SAP Open Source Summit 2012 has been useful for you. From my perspective it was good to see a strong focus on operational excellence, knowledge sharing and applying open source development principles SAP-internally. Let me conclude with Mike Milinkovich's words: "Open source software is really really mainstream." And the journey will continue.

 

Here is the overview of the blog post series again:

  1. Overall Impressions
  2. What we think - SAP Open Source Strategy
  3. Insights from keynotes
  4. What we do next (this post)

In this blog post we'll look at how one can use the Spring Security framework together with the usual way of securing a JEE application on NetWeaver 7.3, and what this can bring us. Here are the useful links:

 

 

For reference, I've tested this setup with Spring Security 3.1.3.RELEASE (which in turn depends on Spring MVC 3.0.7.RELEASE).

 

Spring Security is a popular and very flexible framework that allows you to configure and manage all aspects of securing a web application: authentication, authorization, and access control to domain objects. One of the many useful things this framework provides is the spring-security-taglibs module, which allows one to protect various pieces of a JSP page using security tags tied to the user's role membership (authorization). For example, we can have a JSP like this:

 

<%@ taglib prefix="sec" uri="http://www.springframework.org/security/tags"%>
<p>Hello, this page is accessible to all users with ROLE_EVERYONE</p>
<sec:authorize access="hasRole('ROLE_SFLIGHT_USER')">
    <p>This text and link below should only be visible to users with
    ROLE_SFLIGHT_USER</p>
    <a href="<%=request.getContextPath() %>/secure">secure page</a>
</sec:authorize>

 

Notice the use of the sec:authorize tag, which protects access to part of the page depending on whether or not the current user has the role SFLIGHT_USER. We'll discuss below how we can configure Spring Security to work seamlessly with the security services provided by our JEE container. The idea behind this integration is quite simple: NetWeaver already handles the user authentication for us, and there is also a mechanism to map the UME roles of the portal user to the roles referenced in the web.xml of our application. All we have to do is find a way to make the Spring Security framework recognize these roles as the "granted authorities" associated with the authenticated user.

 

We start with a simple web application setup. Here are the interesting parts of our starting web.xml:

 

<login-config>
    <auth-method>TICKET</auth-method>
</login-config>
<security-role>
    <role-name>EVERYONE</role-name>
</security-role>
<security-role>
    <role-name>SFLIGHT_USER</role-name>
</security-role>
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Spring Security Integration Test Application</web-resource-name>
        <url-pattern>*</url-pattern>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>EVERYONE</role-name>
    </auth-constraint>
    <user-data-constraint>
        <transport-guarantee>NONE</transport-guarantee>
    </user-data-constraint>
</security-constraint>

 

We chose the "ticket" authentication method for our web application, meaning that the user will be considered authenticated if he or she previously logged in on the portal and the JSESSIONID and MYSAPSSO2 cookies are present in his or her browser session (the usual SAP SSO via SAP Logon ticket). We declare two security roles, "EVERYONE" and "SFLIGHT_USER", for our example. The first one is a general role assigned to every UME user. The other one is an example role which we can create and assign to some test user. We then protect any access to our web application by setting the url-pattern of the security constraint to "*". The idea is that we let the JEE container manage the authentication of the user, but delegate the finer-grained access protection within the application to Spring Security. For the mapping between the UME roles and the web application security roles, we also need this in the web-j2ee-engine.xml file of the web application module:

 

<security-role-map>
    <role-name>EVERYONE</role-name>
    <server-role-name>EVERYONE</server-role-name>
</security-role-map>
<security-role-map>
    <role-name>SFLIGHT_USER</role-name>
    <server-role-name>SFLIGHT_USER</server-role-name>
</security-role-map>

 

Now we need to set up and configure the filter chain used by Spring Security. This is done by specifying a Spring application context file containing all the security configuration in our web.xml file:

 

<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/context/security-config.xml</param-value>
</context-param>
<filter>
    <filter-name>springSecurityFilterChain</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

 

The security configuration file, security-config.xml, uses the Spring Security namespace for convenience. It uses the "pre-authentication" security scenario and relies on two classes provided by the framework: J2eePreAuthenticatedProcessingFilter, which integrates with the container's authentication process by extracting the user principal from the HttpServletRequest, and J2eeBasedPreAuthenticatedWebAuthenticationDetailsSource, which is responsible for mapping a configured set of the security roles declared in the web application descriptor to a set of GrantedAuthorities, provided the user's membership in these roles has been established. Here is the security-config.xml file:

 

<beans:beans xmlns:beans="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.springframework.org/schema/security"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
        http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.1.xsd">
    <http auto-config="true" use-expressions="true">
        <jee mappable-roles="EVERYONE,SFLIGHT_USER" />
        <intercept-url pattern="/**" access="hasRole('ROLE_EVERYONE')" />
        <intercept-url pattern="/secure/**" access="hasRole('ROLE_EVERYONE')" />
    </http>
    <authentication-manager>
        <authentication-provider ref="preAuthAuthenticationProvider"></authentication-provider>
    </authentication-manager>
    <beans:bean id="preAuthAuthenticationProvider"
        class="org.springframework.security.web.authentication.preauth.PreAuthenticatedAuthenticationProvider">
        <beans:property name="preAuthenticatedUserDetailsService">
            <beans:bean
                class="org.springframework.security.web.authentication.preauth.PreAuthenticatedGrantedAuthoritiesUserDetailsService"></beans:bean>
        </beans:property>
    </beans:bean>
</beans:beans>

 

Notice the use of the "jee" element in the "http" configuration; it is a shortcut for configuring an instance of the J2eePreAuthenticatedProcessingFilter and registering it with the default filter chain. Also, by default all the JEE security roles specified in the "mappable-roles" attribute will be mapped to GrantedAuthorities with names prefixed by "ROLE_".
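
 

To see what this mapping actually produces at runtime, here is a small illustrative servlet (it is not part of the example application; the class name and the output are purely hypothetical) that prints both views of the same information: the container's role check via request.isUserInRole(), driven by web.xml, web-j2ee-engine.xml and the UME role assignment, and the granted authorities that Spring Security exposes on the Authentication object once the pre-authentication filter has run:

 

// Illustrative servlet, not part of the project: it only demonstrates where the
// same role information becomes visible after SSO and pre-authentication.
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;

public class RoleBridgeDemoServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Container view: decided by web.xml, web-j2ee-engine.xml and the UME role assignment
        boolean isSflightUser = request.isUserInRole("SFLIGHT_USER");
        response.getWriter().println("Container says SFLIGHT_USER: " + isSflightUser);

        // Spring Security view: the same roles, mapped by the "jee" element to
        // authorities such as ROLE_EVERYONE and ROLE_SFLIGHT_USER
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        if (auth != null) {
            for (GrantedAuthority ga : auth.getAuthorities()) {
                response.getWriter().println("Spring authority: " + ga.getAuthority());
            }
        }
    }
}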

 

Once the pre-authentication mechanism is successfully configured, we can protect URL access using expressions of the kind "hasRole('ROLE_FROM_UME_HERE')" in the global "http" configuration element and in the security tags in our JSPs. For reference, this is the Spring MVC configuration file, mvc-config.xml, which I've used for the web application; it uses the "mvc" namespace for convenience:

 

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans 
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/mvc 
        http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">
    <mvc:annotation-driven />
    <bean id="viewResolver"
        class="org.springframework.web.servlet.view.UrlBasedViewResolver">
        <property name="viewClass"
            value="org.springframework.web.servlet.view.JstlView" />
        <property name="prefix" value="/WEB-INF/jsp/" />
        <property name="suffix" value=".jsp" />
    </bean>
    <mvc:view-controller path="/" view-name="index"/>
    <mvc:view-controller path="/secure" view-name="secure"/>
</beans>
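
 

The two <mvc:view-controller> entries simply map "/" and "/secure" to the corresponding JSP views. If you prefer annotated controllers, a sketch like the hypothetical one below would achieve the same thing, assuming a <context:component-scan> element is added to mvc-config.xml so that the bean gets picked up (the class name is mine, not part of the original example):

 

// Hypothetical replacement for the two <mvc:view-controller> entries above.
// Requires <context:component-scan base-package="..."/> in mvc-config.xml.
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class PageController {

    @RequestMapping("/")
    public String index() {
        return "index";   // resolved to /WEB-INF/jsp/index.jsp by the view resolver
    }

    @RequestMapping("/secure")
    public String secure() {
        return "secure";  // resolved to /WEB-INF/jsp/secure.jsp
    }
}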

 

The mvc-config.xml file can be referenced in the web.xml file in the standard way:

 

<servlet>
    <servlet-name>springMvcServlet</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/context/mvc-config.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>springMvcServlet</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>

 

Using Spring Security in a standard JEE web application can be very useful. Beyond the use of the security tags in JSPs, described in this post, one can think of securing method access in Java beans (a small sketch follows below) or using Access Control List (ACL) management, for example.
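
 

As an illustration of the method security idea (this is not part of the setup above, and the class and method names are hypothetical), one could enable the pre/post annotations by adding <global-method-security pre-post-annotations="enabled"/> to security-config.xml and then guard a service method like this:

 

// Hypothetical service bean; the @PreAuthorize annotation is only enforced once
// <global-method-security pre-post-annotations="enabled"/> is present in security-config.xml.
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

@Service
public class FlightService {

    // Callers without the mapped UME role get an AccessDeniedException.
    @PreAuthorize("hasRole('ROLE_SFLIGHT_USER')")
    public void bookFlight(String flightId) {
        // business logic would go here
    }
}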
