
Gregor Wolf and I are attending the SAP Developer Advisory Board meeting on Tue 22 Apr 2014.

 

We'd like to ask you, as members of the general SAP Developer Community, for your thoughts. There isn't a particular agenda or categorisation we'd like to impose here, we just want to make it open enough for you to write what you think.

 

Here's a form where you can add your thoughts

 

We can't of course guarantee that there will be enough time to air all the questions but we'll do our best!

 

Thank you.

DJ Adams

Reaching Out

Posted by DJ Adams Mar 2, 2014

As a technology company SAP is over four decades old. Over that time it's innovated at a tremendous pace, and along the way it has abstracted, invented and reinvented technologies like no other company I know. In parallel with this, there's been an incredible growth in community, both business and technical. In this post I want to focus on the technical.

 

The oft-unspoken status quo with the SAP technical community is that its members operate within a bubble. It's a very large and comfortable bubble that powers and is powered by the activity within; folks like you and me learning, arguing, corresponding and building within communities like this one - the SAP Community Network. We have SAP TechEd, which is now called d-code. We have SAP Inside Tracks. We have InnoJams, DemoJams and University Alliance events too. Every one of these events, and event types, is great and should continue. But there's a disconnect that I feel is moving closer to the surface, becoming more obvious. This disconnect is that this bubble, this membrane that sustains us, is in many areas non-permeable.

 

There are folks who operate on both sides of that bubble's surface. Folks that attend technology conferences that are not SAP related. Folks that are involved in developer communities that have their roots outside the SAP developer ecosphere. Folks that write on topics that are not directly related to SAP technologies (but with a short leap of imagination surely are). But these folks are the exception.

 

SAP's progress in innovation has been slowly turning the company's technology inside out. Moving from the proprietary to the de facto to the standard. Embracing what's out there, what's outside the bubble. HTTP. REST-informed approaches to integration. OData. JavaScript and browser-centric applications. Yes, in this last example I'm thinking of SAP's UI5. In particular I'm thinking about what SAP are doing with OpenUI5 - open sourcing the very toolkit that powers SAP's future direction of UI development. With that activity, SAP and the UI5 teams are reaching out to the wider developer ecospheres, to the developer communities beyond our bubble. If nothing else, we need these non-SAP developers to join with us to build out the next decade.

 

I try to play my part, and have done for a while. I've spoken at OSCON, JabberConf, FOSDEM and other conferences over the years, and attended others such as Strata and Google I/O too. I've been an active participant in various non-SAP tech communities in areas such as Perl, XMPP and Google technologies. This is not about me though, it's about us, the SAP developer community as a whole. What can we do to burst the bubble, to help our ecosphere and encourage SAP to continue its journey outwards? One example that's close to my heart is to encourage quality Q&A on the subject of UI5 on the Stack Overflow site. But that's just one example.

 

How can we reach out to the wider developer ecosphere? If we do it, and do it with the right intentions, everybody wins.

 

 

Update 04 Mar 2014

 

The massively popular code sharing and collaboration site jsbin.com now supports OpenUI5 bootstrapping. Read this post for more details. Step by step!

I'm sitting in TLV airport waiting for my flight back to MAN via FRA. I've just spent a whirlwind 36 hours, more or less, in an amazing developer engine also known as SAP Labs Israel. Before I start though, I want to extend my gratitude and thanks to all those who made me so welcome (which is basically *everyone*) and especially to Aviad, Rafi Bryl, Amir Blich, Gabi Koifman and Keren Golan. Of course, as you may know, Aviad, who organised the whole event, was announced as one of the new SAP Mentors today. Congratulations! I did have the "insider knowledge" yesterday, hence the double meaning of my "flying the @SAPMentors flag" tweet yesterday evening on the roof terrace of the SAP Labs building :-)

 

It was a mind-stretchingly great time, in the form of a Customer Corner Event which saw attendees from Danone, Coca Cola and Bluefin (me). Aviad had prepared a full agenda for Day 1, which included HANA-related customer stories, roundtable discussions on various deeper-dive topics such as UI Integration Services, Portal-esque features for a solid integrated UI/app strategy, HANA XS, River, OData and more. There was also a panel discussion that covered topics such as cloud, HANA adoption, performance tuning, and "Kindergarten code" (yes, that phrase will stick!).

 

The second day (a half-day) gave me a chance to get a deeper-dive look at some of the things we'd covered in Day 1. Specifically I'm thinking of SAP HANA UI Integration Services and River. There's too little time before my flight to cover these topics decently, so I'll keep it short for now and encourage you to take a look yourself, either in the existing docs or look to TechEd and beyond for the amazing River features. Not long to wait!

 

SAP HANA UI Integration Services

 

You've built a few SAPUI5-based apps. You're also looking at SAP Fiori. But have you thought about your overarching UI strategy? Moving from inside-out to outside-in based development isn't just about building great apps for multiple runtimes. It's about a consistent experience, role-based access to apps, common services (e.g. persistence), tools for non-developer/user roles such as designers and administrators, and the ability to give your users a unified entrypoint to all this.

 

So you should be looking at a "frontend server" that might consolidate Gateway foundation/core, customisation and the repository for SAPUI5 runtime artifacts. The SAP HANA UI Integration Services provides the API layer as the foundation for this "unified shell", and is a very viable option (running on HANA) for such a "frontend server". Plus you get the toe-in-the-water benefits of a HANA system ready for trialling and experimentation. Stick this in the cloud and you're onto a winner.

 

Oh, and want portal-style ability to define multiple apps in the same 'site', communicating with each other via publish/subscribe, using an open standard? Add OpenSocial to the mix and you've got it. And they have. Define a simple SAPUI5 component with an OData model connection to a live backend service, have the data presented to you in that widget in familiar "Excel" format (rows and columns), and then pipe that data into another graph widget via pubsub. Excellent.
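The publish/subscribe wiring between two such widgets can be sketched in a few lines of plain JavaScript. This is a generic sketch of the pattern only, not the actual OpenSocial pubsub API, and the topic name and data are made up:

```javascript
// A generic publish/subscribe sketch - not the OpenSocial API itself.
// Widgets register interest in a topic; publishing to that topic fans
// the data out to every subscriber.
const topics = {};

function subscribe(topic, fn) {
  (topics[topic] = topics[topic] || []).push(fn);
}

function publish(topic, data) {
  (topics[topic] || []).forEach(fn => fn(data));
}

// The "graph widget" subscribes to rows that the "table widget" publishes
const received = [];
subscribe("travelagency/rows", rows => received.push(...rows));
publish("travelagency/rows", [{ NAME: "Fly High Travel" }]); // hypothetical data
console.log(received.length); // 1
```

The decoupling is the point: neither widget needs a reference to the other, just an agreed topic name.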

 

River Language (aka RDL)

 

River as a concept and in various concept/demo forms has been around for a while, dating back at least to 2010 when I got to see a demo at the Innovation Weekend before the SAP TechEd 2010 event in Berlin. Jacob Klein wrote more recently about River, and was at the deep dive today at SAP Labs. 

 

What I saw today bowled me over. Imagine a JavaScript-like, part-declarative, part imperative language where in Eclipse, in one breath, if you want to, you can:

 

  • define your data (entities, properties, relationships, etc)
  • write procedural code in the form of functions to provide custom logic (when a simple entity read/write, for example, is not enough)
  • declare authorisations and roles

 

and then, seconds later, use a table/column style UI still within Eclipse to create random/test data for the entities you've just defined, navigate those entities and jump between them via the relationships you also defined, and oh by the way have all the runtime artifacts generated automatically for you in the HANA backend. Further, the whole thing is exposed as an OData service with the appropriate entities, entitysets, associations, enumerations and function imports (I think you can guess which of these relate to your data definitions in River).

 

Within the procedural code you can access any HANA-based data, or via adapters, reach out remotely to your (say) ABAP stack-based systems too. And yes, (I asked, and they showed me live) you can debug and single-step through this too. Debugging directly in Eclipse or triggering it via setting a header in the HTTP request from outside.

 

Rather impressive stuff.

 

So unfortunately I have to go and catch my flight (and find somewhere to sleep!). It was a pretty awesome (and packed) time and I was totally privileged to have been able to take part. Thank you all for having me!

DJ Adams

Engaging the Next Generation

Posted by DJ Adams Aug 17, 2013

As many of you might know (from my #YRS2013 tweets this month), I was involved again in Young Rewired State, an initiative that gathers kids all around the country, gives them a week-long opportunity to learn or improve in coding skills, embrace open data and understand the value of it, and work together to build hacks and apps using open source and that open data. I was centre lead for one of three Manchester-based centres this year, at MadLab, and the whole event, which culminated in the hundreds of kids and mentors from centres all round the country coming together for a weekend of show and tell (and prizes) in Birmingham, was a terrific success yet again.

 

From the show and tell and judging on the weekend, here's a quote from one of the kids during his team's presentation to the judges, to explain their use and choice of data sources and backend systems:

 

"<organisation> has an open API so we used that"

 

In my tweet I alluded to the fact that this was a sentiment echoed by all the participants at YRS - the kids building cool hacks on open data and sharing the source code are our future.

 

What are we doing to help form and guide this future? Well for a start, there are a great number of people who get involved with this sort of thing on a regular basis. John Astill for example took part in a "hyperlocal" instance of YRS - at YRS NYC, last month. And of course, for the second year running, SAP itself, through the guidance and steady hand of Thomas Grassl, is headline sponsor, helping make the whole thing happen (thank you SAP, I'm proud to have been able to connect you with YRS in the first place, last year!). Ian Thain was there too, and wrote up a piece on YRS this year: SAP and the young Developers of tomorrow

 

As well as YRS there are other initiatives, regular events in the UK that take place. I wrote about what I've been involved with in a post on the Bluefin Solutions blog:

 

Computational Thinking and Kids - A Year in Review

 

In SAP's continuous re-invention of itself, it is getting involved more and more in embracing a wider audience, engaging with those kids and students who are our future, and reaching out more broadly than ever. For this I applaud them. Yes, there are corporate goals and useful side-effects, such as bringing more developers closer to an SAP-flavoured platform, and increasing the chances of SAP software longevity, but those side-effects have very real benefits in helping teach computational thinking and prepare our youngsters for a data-driven future.

 

If you're interested in this and more besides (such as the InnoJam and University Alliance initiatives), then watch this space - there will be a public SAP Mentor Monday event in September to cover these subjects. Hope to see you there. In the meantime, please let us know in the comments what you think, and what it might take for you to get involved too. Believe me, it is hugely rewarding as well as great fun.

 


(I posted this on my own blog on my way back from SAP #DKOM and a good friend suggested I repost it here too. So here it is).

At SAP TechEd Madrid (November last year) I wrote about the Developer Renaissance, covering my interview with Aiaz Kazi from the Technology & Innovation Platform, and SAP's re-focus on developers.

This week I had the great honour of being invited to, and speaking at, SAP DKOM (Development Kick-Off Meeting) in Karlsruhe. It was a truly great event - thousands of SAP developers attending many tracks and sessions on everything from Analytics, through Database & Technology, to Cloud, and more besides. As I sit here in Frankfurt airport on my way home, I've been reflecting on perhaps the best single takeaway from this event. Yes, the content of the talks was great (and I enjoyed giving my session on SAP NetWeaver Gateway too). Yes, the venue and organisation were second to none. Yes, it was great to see the SAP Mentor wolfpack and our illustrious leader Mark Finnern.

But most of all, I saw, felt, and experienced something that I last remember from over 20 years ago in my SAP career: The Developer Connection.

Back in the day, when I was (more) innocent, certainly a lot younger, and waist-deep in IBM mainframe tech, I moved around implementing and supporting R/2 installations in the UK and Europe. Esso Petroleum in London, Deutsche Telekom in Euskirchen, and so on. In those days you could catch up with all the OSS notes on your favourite topics over a couple of coffees. Most importantly however, you had connections to the developers at SAP who were building and shipping the code that you were implementing. We knew each other's names, and in many cases, shared phone numbers or email addresses too. There was a strong bond between customers and developers - and we worked together to make the software better.

That connection lost its way over the next few years, when SAP (consciously or unconsciously) built barriers between us. It became almost impossible in some cases to even find out the name of the developer or team responsible, let alone contact them directly.

Well - that connection is back. And better than ever before. Both at SAP TechEd Madrid, and this week at DKOM, developers were coming and saying hello. Developers who are building the great stuff we're exploring and using, like SAPUI5 and NetWeaver Gateway. People like you and me. We are connecting again. I think there are a number of reasons for this.

First, there's the amazing community called the SAP Community Network (SCN - although for me it will always be the SAP Developer Network - SDN) that brings together developers from all sources. Then there's SAP's re-focus on developers, and the corresponding coupling of empowerment and responsibility that SAP is giving directly to those developers. Further, there's the inexorable turning inside out manoeuvre that SAP began a few years ago now, moving cautiously at first but now gathering pace as more and more technology directions that SAP are following are from outside the SAP universe, not inside. SAP developers naturally are connecting with the wider development community in general.

Whatever the reason, it's a great sign that the future looks exciting for SAP development as a whole. Connections, collaboration and cooperation are returning. The Developer Connection is here again.

So following a very interesting podcast from Rui Nogueira with SAP's Michael Falk and Tim Back on the HTML5-based UI Toolkit for SAP NetWeaver Gateway (aka "SAPUI5") earlier this month, a beta version of SAPUI5 was released to the world on SDN, specifically in the "Developer Center for UI Development Toolkit for HTML5" section. I downloaded it and unpacked the contents into a directory to have a look at the documentation and guidelines, and have an initial poke around.

Wow, those folks have certainly put together some nice documentation already! Try it for yourself - once downloaded, open the demokit directory and you should be presented with a nice (SAPUI5-powered) overview, developer guide, controls and API reference:

 

The framework is based upon jQuery / jQuery UI and contains a huge number of controls. It also supports data binding, and one thing that had intrigued me from the podcast was that data bindings were possible to arbitrary JSON and XML ... and OData resources.

"Gosh", I hear you say, "that reminds me of something!" Of course, SAP NetWeaver Gateway's REST-informed data-centric consumption model is based upon OData. So I was immediately curious to learn about SAPUI5 with an OData flavour. How could I try out one of the controls to surface information in OData resources exposed by NetWeaver Gateway?

What I ended up with is SAPUI5's DataTable control filled with travel agency information from my copy of the trial NetWeaver Gateway system, via an OData service all ready to use. You can see what I mean in this short screencast.

Here's what I did to get the pieces together. I'm assuming you've got the trial Gateway system installed and set up (you know, fully qualified hostname, ICM configured nicely, and so on), and that you're semi-familiar with the SFLIGHT dataset.

Step 1. The OData Service

Check with transaction /iwfnd/reg_service, for the LOCAL system alias, that the service RMTSAMPLEFLIGHT is available, as shown here.

Check you can see the service document by clicking the Call Browser button (you may need to provide a user and password for HTTP basic authentication). You can also check the data by manually navigating to the TravelagencyCollection by following the relative href attribute of the app:collection element as shown here:

In other words, navigate from something like this:

http://gateway.server:port/sap/opu/sdata/IWFND/RMTSAMPLEFLIGHT/?$format=xml

to this:

http://gateway.server:port/sap/opu/sdata/IWFND/RMTSAMPLEFLIGHT/TravelagencyCollection?$format=xml

(The $format=xml is to force the service to return a less exotic Content-Type of application/xml rather than an Atom-flavoured one, so that the browser is more likely to render the data in human-readable form.)

Following this href should show you some actual travel agency data in the form of entries in an Atom feed (yes, "everything is a collection/feed!"):
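To make that navigation concrete, here's a sketch in plain JavaScript of how a client might extract the collection hrefs from the service document and resolve them against the service root. The service document fragment is hypothetical and heavily abridged, and the host is the placeholder gateway.server:port from above:

```javascript
// Hypothetical sketch: resolve app:collection hrefs from an OData (Atom)
// service document against the service root URL. The fragment below is
// made up for illustration.
const serviceUrl = "http://gateway.server:port/sap/opu/sdata/IWFND/RMTSAMPLEFLIGHT/";

const serviceDoc = `
<app:service xmlns:app="http://www.w3.org/2007/app"
             xmlns:atom="http://www.w3.org/2005/Atom">
  <app:workspace>
    <app:collection href="TravelagencyCollection">
      <atom:title>TravelagencyCollection</atom:title>
    </app:collection>
  </app:workspace>
</app:service>`;

// Pull out each collection's relative href attribute (a real client would
// use a proper XML parser rather than a regex)
const hrefs = [...serviceDoc.matchAll(/<app:collection\s+href="([^"]+)"/g)]
  .map(m => m[1]);

// Resolve relative to the service root, forcing plain XML output
const urls = hrefs.map(h => serviceUrl + h + "?$format=xml");
console.log(urls[0]);
```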

Step 2. The SAPUI5 Framework

Make your SAPUI5 framework accessible. To avoid Same Origin Policy issues in your browser, get your Gateway's ICM to serve the files for you. Create a 'sapui5' directory in your Gateway's filesystem:

/usr/sap/NPL/DVEBMGS42/sapui5/

 

unpack the SAPUI5 framework into here, and add an instance profile configuration parameter to tell the ICM to serve files from this location:

icm/HTTP/file_access_5 = PREFIX=/sapui5/, DOCROOT=$(DIR_INSTANCE)/sapui5/, BROWSEDIR=2

 

(here I have 5 previous file_access_xx parameters, hence the '5' suffix in this example)

and when you restart the ICM it should start serving the framework to you:

Step 3. The HTML / Javascript Application Skeleton

Actually, calling it an application is far too grand. But you know what I mean. Now that we have the SAPUI5 framework being served, and the OData service available, it's time to put the two together.

Here's the general skeleton of the application - we pull in SAPUI5, and have an element in the body where the control will be placed:

    
<html>
  <head>
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <title>SAP OData in SAPUI5 Data Table Control</title>
    <!-- Load SAPUI5, select theme and control libraries -->
    <script id="sap-ui-bootstrap"
      type="text/javascript"
      src="http://gateway.server:port/sapui5/sapui5-static/resources/sap-ui-core.js"
      data-sap-ui-theme="sap_platinum"
      data-sap-ui-libs="sap.ui.commons,sap.ui.table">
    </script>

  <script>
    ...
  </script>

</head>
<body>
  <h1>SAP OData in SAPUI5 Data Table Control</h1>
  <div id="dataTable"></div>
</body>
</html>

 

In the final step we'll have a look at what goes in the "..." bit.

 

Step 4. Using the SAPUI5 Framework in the Application

So now it's time to flex our Javascript fingers, stand on the shoulders of giants, and write a few lines of code to invoke the SAPUI5 power and glory.

What we need to do is:

  • create a DataTable control
  • add columns to it to represent the fields in the OData entries
  • create an OData data model
  • link the DataTable to this data model
  • bind the rows to the TravelagencyCollection
  • place the control on the page

 

Simples!

Creating the DataTable control goes like this (but you must remember to add the control library to the data-sap-ui-libs attribute when loading SAPUI5 - see Step 3):

var oTable = new sap.ui.table.DataTable();

Each column is added and described like this:

oTable.addColumn(new sap.ui.table.Column({
  label: new sap.ui.commons.Label({text: "Agency Name"}),
  template: new sap.ui.commons.TextView().bindProperty("text", "NAME"),
  sortProperty: "NAME"
}));

 

There are three different templates in use, for different fields - the basic TextView, the TextField and the Link.

The OData data model is created like this, where the URL parameter points to the service document:

var oModel = new sap.ui.model.odata.ODataModel("http://gateway.server:port/sap/opu/sdata/iwfnd/RMTSAMPLEFLIGHT");

 

It's then linked to the control like this:

oTable.setModel(oModel);

 

The specific OData resource TravelagencyCollection is bound to the control's rows like this:

oTable.bindRows("TravelagencyCollection");

 

And then the control is placed on the page like this:

oTable.placeAt("dataTable");

 

I've put the complete code in a Github Gist for you to have a look at.

 

Result

What you end up with is live data from your SAP Gateway system that is presented to you like this:

 

Share and enjoy!

I'm totally enamoured by the power and potential of SAP's NetWeaver Gateway, and all it has to offer with its REST-informed data-centric consumption model. One of the tools I've been looking at in exploring the services is the Sesame Data Browser, a Silverlight-based application that runs inside the browser or on the desktop, and lets you explore OData resources.

 

One of the challenges in getting the Data Browser to consume OData resources exposed by NetWeaver Gateway (get a trial version, available from the Gateway home page on SDN) was serving a couple of XML-based domain access directive files as described in "Making a Service Available Across Domain Boundaries" - namely clientaccesspolicy.xml and crossdomain.xml, both needing to be served from the root of the domain/port based URL of the Gateway system. In other words, the NetWeaver stack needed to serve requests for these two resources:

 

http://hostname:port/clientaccesspolicy.xml

 

and

 

http://hostname:port/crossdomain.xml

 

Without these files, the Data Browser will show you this sort of error:

 

A SecurityException has been encountered while opening the connection.
Please try to open the connection with Sesame installed on the desktop.
If you are the owner of the OData feed, try to add a clientaccesspolicy.xml
file on the server.

 

So, how to make these two cross domain access files available, and specifically from the root? There have been some thoughts on this already, using a default service on the ICF's default host definition, or even dynamically loading the XML as a file into the server cache (see the ABAP program in this thread in the RIA dev forum).

 

But a conversation on Twitter last night about BSPs, raw ICF and even the ICM reminded me that the ICM is a powerful engine that is often overlooked and underloved. The ICM -- Internet Communication Manager -- is the collection of core HTTP/SMTP/plugin services that sits underneath the ICF and handles the actual HTTP traffic below the level of the ICF's ABAP layer. In the style of Apache handlers, the ICM has a series of handlers to deal with plenty of HTTP serving situations - Logging, Authentication, Server Cache, Administration, Modification, File Access, Redirect, as well as the "ABAP" handler we know as the ICF layer.

 

Could the humble ICM help with serving these two XML resources? Of course it could!

 

The File Access handler is what we recognise from the level 2 trace info in the dev_icm tracefile as HttpFileAccessHandler. You all read the verbose traces from the ICM with your morning coffee, right? Just kidding. Anyway, the File Access handler makes its features available to us in the form of the icm/HTTP/file_access_<xx> profile parameters. It allows us to point the ICM at a directory on the filesystem and have it serve files directly, if a URL is matched. Note that this File Access handler is invoked, and given a chance to respond, before we even get to the ABAP handler's ICF level.

 

With a couple of these file_access parameters, we can serve static clientaccesspolicy.xml and crossdomain.xml files straight from the filesystem, matched at root. Here's what I have in my /usr/sap/NPL/SYS/profile/NPL_DVEBMGS42_nplhost parameter file:

 

icm/HTTP/file_access_1 = PREFIX=/clientaccesspolicy.xml, DOCROOT=$(DIR_INSTANCE)/qmacro, DIRINDEX=clientaccesspolicy.xml
icm/HTTP/file_access_2 = PREFIX=/crossdomain.xml, DOCROOT=$(DIR_INSTANCE)/qmacro, DIRINDEX=crossdomain.xml

 

(I already have file_access_0 specifying something else not relevant here).

 

What are these parameters saying? Well the PREFIX specifies the relative URL to match, the DOCROOT specifies the directory that the ICM is to serve files from in response to requests matching the PREFIX, and DIRINDEX is a file to serve when the 'index' is requested. Usually the PREFIX is used to specify a directory, or a relative URL representing a 'container', so the DIRINDEX value is what's served when there's a request for exactly that container. The upshot is that the relevant file is served for the right relative resource. The files are in directory /usr/sap/NPL/DVEBMGS42/qmacro/.
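For reference, here's the sort of content those two files might contain. This is a deliberately wide-open example based on the standard Silverlight and Flash policy file formats (not taken from the original system) - fine for a trial box, but you'd want to lock the domains down on anything real:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- clientaccesspolicy.xml: Silverlight cross-domain access policy -->
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```

```xml
<?xml version="1.0"?>
<!-- crossdomain.xml: the Flash-style equivalent -->
<cross-domain-policy>
  <allow-access-from domain="*"/>
</cross-domain-policy>
```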

 

While we're at it, we might as well specify a similar File Access handler parameter to serve the favicon, not least because that will prevent those pesky warnings about not being able to serve requests for that resource, if you don't have one already:

 

icm/HTTP/file_access_3 = PREFIX=/favicon.ico, DOCROOT=$(DIR_INSTANCE)/qmacro, DIRINDEX=favicon.ico

 

The upshot of all this is that the static XML resources are served directly by the ICM, without the request even having to permeate up as far as the ABAP stack:

 

Handler 5: HttpFileAccessHandler matches url: /clientaccesspolicy.xml
HttpSubHandlerCall: Call Handler: HttpFileAccessHandler (1089830/1088cf0), task=TASK_REQUEST(1), header_len=407
HttpFileAccessHandler: access file/dir: /usr/sap/NPL/DVEBMGS42/qmacro
HttpFileAccessHandler: file /usr/sap/NPL/DVEBMGS42/qmacro/clientaccesspolicy.xml modified: -1/1326386676
HttpSubHandlerItDeactivate: handler 4: HttpFileAccessHandler
HttpSubHandlerClose: Call Handler: HttpFileAccessHandler (1089830/1088cf0), task=TASK_CLOSE(3)

 

and also that the browser-based Sesame Data Browser can access your Gateway OData resources successfully:

 

Data Browser

 

 

Success!

 

If you're interested in learning more about the Internet Communication Manager (ICM) and the Internet Communication Framework (ICF), you might be interested in my Omniversity of Manchester course:

 

Web Programming with SAP's Internet Communication Framework

 

It's currently running in March (3rd and 4th) and May (9th and 10th) in Manchester.

 

 

 

 

REST (which stands for REpresentational State Transfer) is an architectural style that is informed to a large extent by, but theoretically not limited to, the HTTP application protocol (yes, _application_ protocol, not transport protocol!). As an approach to application integration, REST has often been compared to its 'rival' Service Orientated Architecture (SOA), although a RESTful approach to integration architecture known as Resource Orientated Architecture (ROA) might be a better comparison fit.

Fans of REST and ROA (I'm one of them!) state many advantages over SOA, such as:

  • loose coupling vs tight coupling
  • flexible vs brittle interfacing
  • simple vs complex implementation
  • easier vs harder to debug

and subtly, but importantly, ROA is a lot more deserving of the word "Web" in the phrase "Web Services", as it works and flows _with_ Web concepts, rather than, as in the case of SOA, fighting *against* them. SOA, incidentally, has been referred to as "CORBA with angle brackets", which is as funny as it is true.

REST concepts and ideas have been around SAP for quite a while now; there is of course some coverage here on SDN, such as:

"Forget SOAP - build real web services with the ICF" (me, Jun 2004)

"Real Web Services with REST and ICF" (me, June 2004, again)

"REST Web Services in XI (Proof of Concept)" (Wiktor Nyckowski, Mar 2009)

"A new REST handler / dispatcher for the ICF" (me, Sep 2009)

"VCD #16 - The REST Bot: Behind the scenes" (Uwe Fetzer, Sep 2009)

"REST-orientation: Controlling access to resources" (me, Sep 2009, again)

and recently:

"Put SOAP to REST using CE" (Werner Steyn, Nov 2009)

 

What especially delighted me was the coverage that REST concepts and ideas got at SAP TechEd 2009 in Vienna. Lots of people were talking about it, and mentioning it in presentations. Over half the DemoJam contestants mentioned REST too. I personally had a fascinating and very rewarding chat with SAP guru Thomas Ritter during RIA Hacker Night, and have also corresponded with the very knowledgeable Juergen Schmerder. It seems that there is a lot of interest in REST at SAP.

But what about REST _in_ SAP? How might you use it, be guided by it and ultimately _build_ things with SAP NetWeaver technologies? 

If you're interested, you might want to attend our upcoming Mentor Monday session

"REpresentational State Transfer (REST) and SAP - An Overview", on Monday 25th Jan at 13:00-14:00 PST. 

You can get more information on the SAP Mentor Monday wiki page.

Hope to see you there!

Sitting in a traffic jam on the A34 this week, twice, I got the opportunity to catch up with the excellent Enterprise Geeks podcasts. In one particular TechEd Phoenix episode, "Tech Skills Chat with JonERP", something Ed said resonated particularly with me: that the way to be "the best you can", to become "that guru", is to READ.

I think this is a great piece of advice, and something that needs to be underlined. To this end, I'd like to tell you a bit of a story.

In the early 1990s, I was working at Deutsche Telekom, in their data centre in Euskirchen, near Bonn, in Germany. I was part of the IBM mainframe and SAP Basis team that ran a fantastically huge SAP installation - around 10 parallel SAP R/2 systems that coordinated and shared data through a central system. The systems ran on IBM mainframes, and were powered by IMS DB/DC (DB for the database management layer, and DC for the transaction processing layer, for you young ones :-). They were the best of times. We hacked 370 assembler (yes, including qmacros!) while drinking coffee so strong the spoon would stand up, and wrote Rexx scripts & ISPF panel-based applications to heavy-lift SAP R/2 installations like they were Lego constructions (and yes, Sergio, we had SBEZ! :-)

Being an IBM disciple at the time, I was aware how good the IBM documentation was. Seriously. I relished every opportunity to visit the documentation room, where I could diagnose any problem imaginable. Everything you ever wanted to know was there, if you knew where to look.

Anyway, there was a consultant, a veritable guru, Tomaschek I think his name was. He came and went at unearthly hours, drove a Mercedes with double glazing, and Knew Everything. Everything I could imagine knowing about running R/2 on IMS, with VSAM, and more. He knew. Of course, his experience counted for a lot, but I was eager to know how he had become so knowledgeable, and so respected. So I asked him.

And he replied: "I read".

Since then, I've made it my business to read as much as I can, about the things I'm interested in. Anything and everything. Source code. Dry documentation. Articles. Books. Magazines. Weblogs. I have a stack of "to read" papers, ready to pop and take with me on the train, to meetings (how many meetings that you are invited to actually start on time?), into the bath. During my time at Deutsche Telekom, I set aside 10 minutes each day to read all the new OSS notes on my favourite areas (it was possible then!)

I feel I've gained a tremendous amount from what I've read. Some stuff I've read and not completely understood. Other stuff I've read and given up, bored. And yes, there's a lot of SAP documentation that could be better.

But if I can give one piece of advice, it's the same advice that I received from Mr Tomaschek all those years ago.

Read.

And then read some more.

Background

Using the approach from my post "A new REST handler / dispatcher for the ICF", I can adopt a Resource Orientated Architecture (ROA) approach to integration. This gives me huge advantages, in that I can avoid complexity, and expose data and functions from SAP as resources - first class citizens on the web. From here, I can, amongst other things:

  • use off-the-shelf caching mechanisms to improve performance
  • easily debug and trace integration with standard logging and proxying tools
  • even make statements about the resources using RDF
Moreover, I can easily divide up the programming tasks and the logic into logical chunks, based upon resource, and HTTP method, and let the infrastructure handle what gets called, and when.

This is all because what we're dealing with in a REST-orientated approach is a set of resources -- the nouns -- which we manipulate with HTTP methods -- the verbs.

As an example, here are a few of the channel-related resources that are relevant in my Coffeeshop project; in particular, my implementation of Coffeeshop in SAP. The resource URLs are relative, and rooted in the /qmacro/coffeeshop node of the ICF tree.

Resource                         Description        Method  Action
/qmacro/coffeeshop/              Homepage           GET     Returns the Coffeeshop 'homepage'
/qmacro/coffeeshop/channel/      Channel container  GET     Return list of channels
                                                    POST    Create new channel
/qmacro/coffeeshop/channel/123/  Channel            GET     Return information about the channel
                                                    POST    Publish a message to the channel
                                                    DELETE  Remove the channel

(For more info on these and more resources, see Coffeeshop's ResourcePlan page).

Problem

This is all fine, but often a degree of access control is required. What if we want to allow certain groups access to certain resources, other groups access to another set of resources, but only allow a given group, say, to read channel information, and not create any new channels? In other words, how do we control access following a resource orientated approach -- access dependent upon the noun, and the verb?

Perhaps we would like group A to have GET access to all channel resources (read-only administration), group B to have GET and POST access to a particular channel (simple publisher access) and group C to have POST access to the channel container and DELETE access to individual channels (read/write administration)?

What does SAP standard offer?

Before looking at building something from scratch, what does standard SAP offer in the ICF area to support access control?

When you define a node in the ICF tree, you can specify access control relating to the userid in the Logon Data tab:

image

This is a great first step. It means that we can control, on a high level, who gets access generally, and who doesn't. Let's call this 'Level 1 access'.

You can also specify, in the Service Data tab, a value for the SAP Authorisation field ('SAP Authoriz.'):

image

The value specified here is checked against authorisation object S_ICF, in the ICF_VALUE field, along with 'SERVICE' in the ICF_FIELD field.

[O] S_ICF
|
+-- ICF_FIELD
+-- ICF_VALUE

This is clearly a 'service orientated' approach, and is at best a very blunt mechanism with which to control access.

As well as being blunt, it is also unfortunately violent. If the user that's been authenticated does have an authorisation with appropriate values for this authorisation object, then the authorisation check passes, and nothing more is said. But if the authenticated user doesn't have authorisation, the ICF returns HTTP status code '500', which implies an Internal Server Error. Extreme, and semantically incorrect -- there hasn't been an error, the user just doesn't have authorisation. So, violent, and rather brutal. Then again, service orientation was never about elegance :-).

What's our approach, then?

Clearly, what the SAP standard offers in the ICF is not appropriate for a REST approach to integration design. (To be fair, it was never designed with resource orientation in mind).

What we would like is a three-level approach to access control:

Level 1 - user authentication: Can the user be authenticated, generally? If not, the HTTP response should be status 401 - Unauthorised. This level is taken care of nicely by the ICF itself. Thanks, ICF!

Level 2 - general resource access: Does the user have access, generally, to the specific resource? If not, the HTTP response should be status 403 - Forbidden.

Level 3 - specific resource access: Is the user allowed to perform the HTTP method specified on that resource? If not, the HTTP response should be status 405 - Method Not Allowed. As well as this status code, the response must contain an Allow header, telling the caller what methods are allowed.

This will give us an ability to implement a fine-grained access control, allowing us to set up, say, group access, as described earlier.
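To illustrate the decision flow, here's a minimal Python sketch (the real implementation is ABAP; the boolean inputs are hypothetical stand-ins for the actual authentication and authorisation checks):

```python
# Illustrative sketch of the three-level access control decision.
# The boolean arguments stand in for the real authentication and
# authorisation lookups performed by the ICF and the handler.

def access_control(user_ok, resource_ok, method_ok, allowed_methods):
    """Return (status, headers) for the three-level check."""
    if not user_ok:                      # Level 1: authentication
        return 401, {}
    if not resource_ok:                  # Level 2: general resource access
        return 403, {}
    if not method_ok:                    # Level 3: method-specific access
        return 405, {"Allow": ", ".join(allowed_methods)}
    return 200, {}                       # request may proceed to the handler
```

Note that only the 405 case carries extra headers: HTTP requires the Allow header to accompany a 405 response.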

How do we get there?

Clearly, we're not going to achieve what we want with the SAP standard. We'll have to construct our own mechanism to give us Levels 2 and 3. But, SAP standard does offer us a couple of great building blocks that we'll use.

 

Building block: Authorisation Concept

Why re-invent an authorisation concept, when we have such a good one as standard? Exactly. So we'll use the standard SAP authorisation concept.

We'll create an authorisation object, YRESTAUTH, with two fields -- one for the method, and one for the (relative) resource. This is what it looks like:

[O] YRESTAUTH
|
+-- YMETHOD HTTP method
+-- YRESOURCE resource (relative URL)

We can then maintain as many combinations of verbs and nouns as we like, and manage & assign those combinations using standard SAP authorisation concept tools. Heck, we could even farm that work out to the appropriate security team! Then, when it comes to the crunch, and the ICF is handling an incoming HTTP request, our mechanism can perform authorisation checks on this new authorisation object for the authenticated user associated with the request.
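To make the verb/noun combinations concrete, here's a small Python sketch (not part of the ABAP implementation) of how the group assignments described earlier might look; the group names, resource patterns and the wildcard matching are all invented for illustration:

```python
from fnmatch import fnmatch

# Hypothetical YRESTAUTH-style assignments for the groups described
# earlier: each group holds a list of (method, resource pattern) pairs.
GROUP_AUTHS = {
    "A": [("GET",    "/qmacro/coffeeshop/channel/*")],    # read-only admin
    "B": [("GET",    "/qmacro/coffeeshop/channel/123/"),
          ("POST",   "/qmacro/coffeeshop/channel/123/")],  # simple publisher
    "C": [("POST",   "/qmacro/coffeeshop/channel/"),
          ("DELETE", "/qmacro/coffeeshop/channel/*")],     # read/write admin
}

def group_allows(group, method, resource):
    """Does the group have an entry matching this verb/noun combination?"""
    return any(m == method and fnmatch(resource, pattern)
               for m, pattern in GROUP_AUTHS[group])
```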

Building block: Stacked Handlers

One of the most fantastic things about the generally excellent ICF is the ability to have a whole stack of handlers, that are called in a controlled fashion by the ICF infrastructure, to respond to an incoming HTTP request. The model follows that of Apache and mod_perl, with flow control allowing any given handler to say whether, for example, it has responded completely and no further handlers should be called to satisfy the request, or that it has partially or not at all been able to respond, and that other handlers should be called.

So for any particular ICF node that we want to have this granular 3-level access control, what we need is a pluggable handler that we can insert in the first position of the handler stack, to deal with authorisation. Like this:

image

As you can see, we have the main coffeeshop handler, and before that in the stack, another handler, Y_AUTH, to provide the Levels 2 and 3 access control. So when an HTTP request comes in and the ICF determines that it's this node ([/default_host]/qmacro/coffeeshop) that should take care of the request, it calls Y_AUTH first.

Y_AUTH is a handler class just like any other HTTP handler class, and implements interface IF_HTTP_EXTENSION. It starts out with a few data definitions, and identifies the resource specified in the request:

method IF_HTTP_EXTENSION~HANDLE_REQUEST.
  data:
      l_method     type string
    , l_is_allowed type abap_bool
    , lt_allowed   type stringtab
    , l_resource   type string
    , l_resource_c type text255
    , l_allowed    type string
    .
* What's the resource?
  l_resource = server->request->get_header_field( '~request_uri' ).

* Need char version for authority check
  l_resource_c = l_resource.

Then it performs the Level 2 access check - is the user authorised generally for the resource?

* Level 2 check - general access to that resource?
  authority-check object 'YRESTAUTH'
    id 'YMETHOD'   dummy
    id 'YRESOURCE' field l_resource_c.

  if sy-subrc <> 0.
    server->response->set_status( code = '403' reason = 'FORBIDDEN - NO AUTH FOR RESOURCE' ).
    exit.
  endif.

If the authority check failed for that resource generally, we return a status 403 and that response is sent back to the client.

However, if the authority check succeeds, and we pass Level 2, it's time to check the specific combination of HTTP method and resource - the verb and the noun. We do this with a call to a simple method is_method_allowed() which takes the resource and method from the request, and returns a boolean, saying whether or not the method is allowed, plus a list of the methods that are actually allowed. Remember, in the HTTP response, we must return an Allow: header listing those methods if we're going to send a 405.

* Level 3 check - method-specific access to that resource?
  l_method =  server->request->get_header_field( '~request_method' ).
  translate l_method to upper case.

  call method is_method_allowed
    exporting
      i_resource   = l_resource
      i_method     = l_method
    importing
      e_is_allowed = l_is_allowed
      e_allowed    = lt_allowed.

* If not allowed, need to send back a response
  if l_is_allowed eq abap_false.

    concatenate lines of lt_allowed into l_allowed separated by ','.
    server->response->set_status( code = '405' reason = 'METHOD NOT ALLOWED FOR RESOURCE' ).
    server->response->set_header_field( name = 'Allow' value = l_allowed ).

So we send a 405 with an Allow: header if the user doesn't have authorisation for that specific combination of HTTP method and resource. (The is_method_allowed() method works by taking a given list of HTTP methods, and authority-checking each one in combination with the resource, noting which were allowed, and which weren't.)
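For illustration, here's a Python analogue of that logic; authority_check is a hypothetical stand-in for the ABAP AUTHORITY-CHECK on YRESTAUTH, and the candidate method list is an assumption:

```python
# Python analogue of is_method_allowed(): authority-check each candidate
# HTTP method in combination with the resource, collecting those that
# pass, and report whether the requested method is among them.

CANDIDATE_METHODS = ["GET", "POST", "PUT", "DELETE"]

def is_method_allowed(resource, method, authority_check):
    allowed = [m for m in CANDIDATE_METHODS
               if authority_check(method=m, resource=resource)]
    return method in allowed, allowed
```

The second return value is exactly what's needed to populate the Allow: header in a 405 response.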

Finally, if we've successfully passed the Levels 2 and 3 checks, we can let go and have the ICF invoke the main handler for this ICF node - Y_DISP_COFFEESHOP. In order to make sure this happens, we tell the ICF, through the flow control variable IF_HTTP_EXTENSION~FLOW_RC, that while our execution has been OK, we still need to have a further handler executed to satisfy the request completely:

* Otherwise, we're golden, but make sure another handler executes
  else.

    if_http_extension~flow_rc = if_http_extension~co_flow_ok_others_mand.

  endif.

endmethod.

And that's pretty much it!

To finish off, here are some examples of the results of this mechanism.

image

In the first call, the wrong password is specified in authentication, so the status in the HTTP response, directly from the ICF, is 401. This is Level 1.

In the second call, the user is authenticated ok, but doesn't have access generally to the /qmacro/coffeeshop/ resource, hence the 403 status. This is Level 2.

In the third call, we're trying to make a POST request to a specific channel resource. While we might have GET access to this resource, we don't specifically have POST access, so the status in the HTTP response is 405. In addition, a header like this: "Allow: GET" would have been returned in the response. This is Level 3.

I hope this shows that when implementing a REST approach to integration, you can control access to your resources in a very granular way, and respond in a semantically appropriate way, using HTTP as designed - as an application protocol.

One of the best underlying mechanisms to be introduced into the Basis / NetWeaver stack in the past few years is the Internet Communication Framework (ICF), which is a collection of configuration, interfaces, classes and a core set of processes that allow us to build HTTP applications directly inside SAP.

 

If you're not directly familiar with the ICF, allow me to paraphrase a part of Tim O'Reilly's Open Source Paradigm Shift, where he gets audiences to realise that they all use Linux, by asking them whether they've used Google, and so on. If you've used WebDynpro, BSPs, the embedded ITS, SOAP, Web Services, or any number of other similar services, you've used the ICF, the layer that sits underneath and powers these subsystems.

 

One of my passions is REpresentational State Transfer (REST), the architectural approach to the development of web services in the Resource Orientated Architecture (ROA) style, using HTTP for what it is - an application protocol. While the ICF lends itself very well to programming HTTP applications in general, I have found myself wanting to be able to develop web applications and services that not only follow the REST style, but also in a way that is more aligned with other web programming environments I work with.

 

An example of one of these environments is the one used in Google's App Engine. App Engine is a cloud-based service that offers the ability to build and host web applications on Google's infrastructure. In the Python flavour of Google's App Engine, the WebOb library, an interface for HTTP requests and responses, is used as part of App Engine's web application framework.

 

Generally (and in an oversimplified way!), in the WebOb-style programming paradigm, you define a set of patterns matching various URLs in your application's "url space" (usually the root), and for each of the patterns, specify a handler class that is to be invoked to handle a request for the URL matched. When a match is found, the handler method invoked corresponds to the HTTP method in the request, and any subpattern values captured in the match are passed in the invocation.

 

So for instance, if the incoming request were:

 

GET /channel/100234/subscriber/91/

 

and there was a pattern/handler class pair defined thus:

 

'^/channel/([^/]+)/subscriber/([^/]+)/$', ChannelSubscriber

 

then the URL would be matched, an object of class ChannelSubscriber instantiated, the method GET of that class invoked, and the values '100234' and '91' passed in the invocation. The GET method would read the HTTP request, prepare the HTTP response, and hand off when done.
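Here's a minimal Python sketch of that matching-and-dispatch cycle, with a hypothetical ChannelSubscriber handler class (a real framework like WebOb's does considerably more, of course):

```python
import re

# Minimal sketch of pattern-based dispatch in the WebOb style.
# ChannelSubscriber is a hypothetical handler class; a real framework
# would also route POST, PUT and DELETE to same-named methods.

class ChannelSubscriber:
    def GET(self, channel_id, subscriber_id):
        return "channel %s, subscriber %s" % (channel_id, subscriber_id)

ROUTES = [(r'^/channel/([^/]+)/subscriber/([^/]+)/$', ChannelSubscriber)]

def dispatch(method, path):
    for pattern, handler_class in ROUTES:
        match = re.match(pattern, path)
        if match:
            # Invoke the method named after the HTTP verb, passing
            # the captured subpattern values.
            handler = handler_class()
            return getattr(handler, method)(*match.groups())
    return None  # no route matched; a real framework would send a 404
```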

For a real-world example, see coffeeshop.py (part of my REST-orientated, HTTP-based publish/subscribe (pubsub) mechanism), in particular from line 524 onward. You can see how this model follows the paradigm described above.

 

def main():
  application = webapp.WSGIApplication([
    (r'/',                                MainPageHandler),
    (r'/channel/submissionform/?',        ChannelSubmissionformHandler),
    (r'/channel/(.+?)/subscriber/(.+?)/', ChannelSubscriberHandler),
    (r'/message/',                        MessageHandler),
    (r'/distributor/(.+?)',               DistributorWorker),
    [...]
  ], debug=True)
  wsgiref.handlers.CGIHandler().run(application)

 

This model is absolutely great in helping you think about your application in REST terms. What it does is help you focus on a couple of the core entities in any proper web application or service -- the nouns and the verbs. In other words, the URLs, and the HTTP methods. The framework allows you to control and handle incoming requests in a URL-and-method orientated fashion, and leaves you to concentrate on actually fulfilling the requests and forming the responses.

So where does this bring us? Well, while I'm a huge fan of the ICF, it does have a few shortcomings from a REST point of view, so I built a new generic handler / dispatcher class that I can use at any given node in the ICF tree, in the same style as WebOb. Put simply, it allows me to write an ICF node handler as simple as this:

 

method IF_HTTP_EXTENSION~HANDLE_REQUEST.
  handler( p = '^/$'                                          h = 'Y_COF_H_MAINPAGE' ).
  handler( p = '^/channel/submissionform$'                    h = 'Y_COF_H_CHANSUBMITFORM' ).
  handler( p = '^/channel/([^/]+)/subscriber/submissionform$' h = 'Y_COF_H_CHNSUBSUBMITFORM' ).
  handler( p = '^/channel/([^/]+)/subscriber/$'               h = 'Y_COF_H_CHNSUBCNT' ).
  handler( p = '^/channel/([^/]+)/subscriber/([^/]+)/$'       h = 'Y_COF_H_CHNSUB' ).
  dispatch( server ).
endmethod.

 

The handler / dispatcher consists of a generic class that implements interface IF_HTTP_EXTENSION (as all ICF handlers must), and provides a set of attributes and methods that allow you, in subclassing this generic class, to write handler code in the above style. Here's the method tab of Y_DISP_COFFEESHOP, to give you a feel for how it fits together:

 

y_disp_coffeeshop_methodtab.png

 

The classes that are invoked (Y_COF_H_* in this example) all inherit from a generic request handler class which provides a set of attributes and methods that allow you to get down to the business of simply providing GET, POST, PUT and other methods to handle the actual HTTP requests.

 

Here's an example of the method list of one of the request handler classes:

y_cof_h_chnsub.png

One interesting advantage, arguably a side-effect of this approach, is that you can use nodes in the ICF tree to 'root' your various web applications and services more cleanly, and avoid the difficulties of having different handlers defined at different levels in the child hierarchy just to service various parts of your application's particular url space.

 

I'd like to end this weblog post with a diagram that hopefully shows what I've been describing:

diagram.png


If you're interested in learning more, or sharing code, please let me know. I'm using this for real in one of my projects, but it's still early days.

 

Update 01/05/2012 I've re-added images to this post that were lost when SDN went through the migration to the new platform. This project is now called ADL - Alternative Dispatcher Layer and is on the SAP Code Exchange here: https://cw.sdn.sap.com/cw/groups/adl

It's been pretty much six years to the day since "Dashboard as extension to R/3 and SAPGUI client", Nat Friedman's project and implementation of a realtime contextual information system. So I thought it fitting to make a short demo showing integration between Google Wave and SAP, inspired by the cluepacket-driven style shown so nicely with Dashboard.

I got my Wave Sandbox account a week or so ago, and have had a bit of time to have a look at how robots and gadgets work -- the two main Wave extension mechanisms. To get my feet wet, I built a robot, which is hosted in the cloud using Google App Engine (another area of interest to me) and the subject of this weblog entry. I used Python, but there's also a Java client library available too. You can get more info in the API Overview.

What this robot does is listen to conversations in a Wave, automatically recognising SAP entities and augmenting the conversation by inserting extra contextual information directly into the flow. In this example, the robot can recognise transport requests, and will insert the request's description into the conversation, lending a bit more information to what's being discussed. 

The robot recognises transport requests by looking for a pattern:

trkorr_match = re.search(r' (SAPK\w{6}|[A-Z0-9]{3}K\d{6}) ', text)

In other words, it's looking for something starting SAPK followed by six further characters, or something starting with 3 characters, followed by a K and six digits (the more traditional customer-orientated request format). In either case, there must be a space before and a space following, to be more sure of it being a 'word'.
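For illustration, here's that pattern at work in a small snippet (the transport request names are invented):

```python
import re

# The transport request pattern: SAPK plus six word characters, or a
# three-character prefix, K, and six digits; surrounded by spaces so
# we're more sure of matching a whole 'word'.
TRKORR = re.compile(r' (SAPK\w{6}|[A-Z0-9]{3}K\d{6}) ')

def find_request(text):
    match = TRKORR.search(text)
    return match.group(1) if match else None
```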

How does it retrieve the description for a recognised transport request? Via a simple REST-orientated interface, of course :-) I use the excellent Internet Communication Framework (ICF) to build and host HTTP handlers so I can "Forget SOAP - build real web services with the ICF". Each piece of data worth talking about is a first class citizen on the web; that is, each piece of data is a resource, and has a URL.

So the robot simply fetches the default representation of the recognised request's 'description' resource. If the request was NSPK900115, the description resource's URL would be something like:

http://hostname:port/transport/request/NSPK900115/description

Once fetched, the description is inserted into the conversation flow.

I've recorded a short screencast of the robot in action.

I posted a review of Packt Publishing's "SAP Business ONE Implementation" by Wolfgang Niefert:

 http://www.pipetree.com/qmacro/blog/2009/08/book-review-sap-business-one-implementation/

The review was favourable - see the weblog post for more information. Nice work Wolfgang!

I'm sure you're all aware of the recent #blogtheft issue - where some rogue has been lifting content lock stock and barrel from here and reproducing it - sans author name - on their website www.sap-abap4.com. "Stop Thief - It's #blogtheft!" and "Stolen Content" have blogged about it here already.

In Craig's Friday Morning Report yesterday, I suggested:

13:32 qmacro: get SAP hosting to send alternative “STOLEN!” images to rogue referrer – Google “image theft apache” for examples

and straight after the conference call finished, I thought I'd demo how that could be done. I implemented such a mechanism for images on an SDN blog post of mine, images that just happened to be hosted on my machine. I wrote about how that was done in a weblog post:

Dealing with "#blogtheft" from SAP's Developer Network

This morning, @thorstenster alerted me to the fact that SAP have now implemented this for images hosted here on SDN:

image

Now that's a great reaction! Kudos to the SAP Community Network hackers who look after the servers here. To implement something like that in such a short space of time and on the production servers ... I take my hat off to you folks. Well done.

A few days ago, Mark Yolton pointed out to me that this Friday, 30th May will mark 6 years since my first SDN blog post "The SAP/MySQL Partnership", in SDN's first month.

My oh my, how things have changed and progressed! We've seen the rise and rise of Open Source, the rise and fall of SOA, and the incredible improvements in connectedness and social collaboration in SAP events such as Sapphire & TechEd. Excellent.

Some things haven't changed so much, though. I'm reading SDN in earnest again -- especially the weblog posts. And guess what? The use of frames in SAP portal technology is still hampering basic usability. A particular case in point is bookmarking; I can't usefully or easily bookmark a weblog post without some cut'n'paste gymnastics, because the page title is always the same: "SAP Network Blogs". It should be the entry-specific weblog post title, so you don't end up with 1001 bookmarks that you can't tell apart.

 

Before

 

Not to worry. A couple of Greasemonkey Javascript lines later, in the form of sdnpagetitle.user.js, (in the sdnpagetitle github repository) and things are fixed!

 

After

 

Funny, my last post before this one was "OssNoteFix script updated for Greasemonkey 0.6.4 and Firefox 1.5" too!

Share and enjoy, and here's to the next 6 years :-)
