Now that John Moy has done all the hard work of setting up an EC2 instance and installing NetWeaver Gateway from scratch, and has documented the process in his three blogs, I thought I'd add my two cents as a wannabe system administrator on some AWS features which make operating an EC2 system a little simpler.

 

First things first: networking. AWS EC2 instances get assigned a public IP address and matching DNS name upon start-up. However, this is a dynamic address which changes every time the instance is started. It's also not terribly nice to remember - right now, for example, the DNS name is ec2-54-251-14-47.ap-southeast-1.compute.amazonaws.com, but it would be different after the next restart.
 

Elastic IP Addresses

Amazon of course offers a solution for this - Elastic IP addresses. Essentially, these are static IP addresses on the public Internet which are "rented" to your account and can be assigned to any running EC2 instance via the management console. Since public IPv4 addresses are becoming scarce, they are not free. But they are cheap: $0.005 per hour while not assigned to a system (which works out to about $3.60 for a 30-day month), and free while assigned to a running EC2 instance!

 

So let's get one of those:

1. In the EC2 Management console, go to Elastic IPs under the Network & Security group. [Screenshot: image02.jpg]
2. Click Allocate New Address. [Screenshot: image06.jpg]
3. The default is fine - we want an address for EC2 - so just click Yes, Allocate. [Screenshot: zimage07.jpg]
4. And we're done! Our new, static IP address is shown. [Screenshot: zimage10.jpg]
5. We could now manually assign this Elastic IP address to a running EC2 instance by right-clicking the IP address and choosing Associate Address, but this association is lost every time the instance is shut down. So to avoid having to repeat this step manually, we're going to automate it! [Screenshot: image22.jpg]

The Amazon EC2 API Tools

Luckily for us, almost any functionality in AWS is available via rich APIs, and Amazon even supplies tools which interact with those APIs. This is convenient, because it lets us automate otherwise tedious manual tasks - such as assigning that Elastic IP address every time we start the system.

1. Log on to your EC2 instance using Remote Desktop. Using the Firefox browser you installed earlier, go to this URL: http://aws.amazon.com/developertools/351/ - then click the Download the Amazon EC2 API Tools link under the Download heading and save the ZIP file to the D: drive of your server. Open the ZIP file and copy the folder ec2-api-tools-1.5.6.0 to the clipboard using Ctrl+C, then open a Windows Explorer window, browse to C:\Program Files\Amazon and paste the folder there using Ctrl+V. [Screenshot: image03.png]
2. Before we can actually use the tools, we need to meet a few more prerequisites, such as a Java Runtime Environment. Using Firefox again, browse to http://www.java.com and follow the prompts to download the installer, then double-click it and run through its prompts.

3. Next we need to set some environment variables. We do this by going to the Control Panel in Windows via Start > Control Panel. [Screenshot: image05.png]
4. In the search box in the top right, type variable to quickly find the entry Edit the system environment variables. Click on it to open the dialog. [Screenshot: image15.png]
5. Click on the Environment Variables button. [Screenshot: image16.png]
6. In the bottom section, find the variable called Path and click the Edit button. Assuming you installed the EC2 API tools in the folder mentioned earlier, simply add the following text to the end of the string - and please make sure you include the leading semicolon!

   ;C:\Program Files\Amazon\ec2-api-tools-1.5.6.0\bin

   [Screenshot: image27.png]
7. While we're here, let's add a new variable called JAVA_HOME using the New button. If you installed the latest version of the Java Runtime Environment into the default location, then the variable value should be:

   C:\Program Files (x86)\Java\jre7

   Click OK to save this. [Screenshot: image19.png]
8. For the moment, we need one more variable, called EC2_HOME. Follow the same process as above and set its value to:

   C:\Program Files\Amazon\ec2-api-tools-1.5.6.0

   Leave this window open - we'll need to create more variables in a moment. [Screenshot: image01.png]
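
As an aside: if you prefer the command line to clicking through dialogs, the same three settings can be made with Windows' built-in setx command from an Administrator command prompt. This is just an alternative sketch assuming the default install locations used above; the GUI steps achieve exactly the same result:

    rem Append the EC2 API tools to the system Path, then set JAVA_HOME and EC2_HOME
    rem (/M writes system-wide variables; %Path% expands to the current combined
    rem user+system path, so check the result afterwards with: echo %Path%)
    setx Path "%Path%;C:\Program Files\Amazon\ec2-api-tools-1.5.6.0\bin" /M
    setx JAVA_HOME "C:\Program Files (x86)\Java\jre7" /M
    setx EC2_HOME "C:\Program Files\Amazon\ec2-api-tools-1.5.6.0" /M
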
9. Now that we have set up the prerequisites for running the EC2 API tools, let's configure them for our account. (The AWS documentation provides some more details on this.) First of all, we need to tell our tools which AWS region the server is in. Earlier during the install process, we chose South-East Asia (ap-southeast-1); if you chose a different region, then the URL here will be different for you. To find out, go to Start > Run and enter cmd followed by the Enter key. [Screenshot: image25.png]
10. Run the command ec2-describe-regions and note down the long string ending in "amazonaws.com" which matches the region our EC2 instance is running in. In our case, this is ec2.ap-southeast-1.amazonaws.com. [Screenshot: zimage18 - Version 2.jpg]
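
For reference, ec2-describe-regions prints a simple tab-separated list, one line per region, along these lines (abbreviated here, and the exact region list will vary):

    REGION  ap-southeast-1  ec2.ap-southeast-1.amazonaws.com
    REGION  us-east-1       ec2.us-east-1.amazonaws.com
    REGION  eu-west-1       ec2.eu-west-1.amazonaws.com
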
11. Going back to the Environment Variables screen, create a new System Variable called EC2_URL with a value of https:// followed by the string we just found on the command line. In our case, this is https://ec2.ap-southeast-1.amazonaws.com. [Screenshot: image26.png]
12. Now we need to authorise the client tools to access our AWS account and act on our behalf, in order to automate things. We do this by installing the private key and X.509 certificate associated with the AWS account. Using Firefox on the server again, go to your AWS Account page and log in with your AWS account here: https://portal.aws.amazon.com/gp/aws/manageYourAccount - once there, click on Security Credentials. [Screenshot: zimage28 - Version 2 (1).jpg]
13. Under the Access Credentials heading, click on the X.509 Certificates tab, then on the Create a new Certificate link. [Screenshot: image20 - Version 2 (1).jpg]
14. Download both the Private Key File and X.509 Certificate to the server. I would suggest creating a new folder called D:\aws and saving both files there. This is important: once you click "Close", the private key file will not be available for download again! [Screenshot: image09.png]
15. Once downloaded, D:\aws should look like this: [Screenshot: image21.png]
16. Now we need to tell the API tools where to find those certificates. You guessed it - more environment variables! Going back to the Environment Variables screen, create two new System Variables: one called EC2_CERT whose value is the complete file path to the cert-... file we just downloaded, and another called EC2_PRIVATE_KEY whose value is the complete path of the pk-... file. [Screenshot: image13.png]
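
Again, for command-line fans, a sketch of the setx equivalent - the certificate and key file names below are placeholders, so substitute the actual cert-... and pk-... file names you downloaded:

    rem Point the EC2 API tools at our region endpoint and credentials
    setx EC2_URL "https://ec2.ap-southeast-1.amazonaws.com" /M
    setx EC2_CERT "D:\aws\cert-XXXXXXXX.pem" /M
    setx EC2_PRIVATE_KEY "D:\aws\pk-XXXXXXXX.pem" /M
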
17. Now, it seems like a long time ago, but the whole purpose behind setting up these tools was to automate the process of assigning an Elastic IP address to this EC2 instance. So let's go: first, we need to find the ID of our EC2 instance. Conveniently, this is at the top of the information printed on the Desktop background of our system! Note this down for the next step. [Screenshot: image14.jpg]
18. Start Notepad by going to Start > Run and typing notepad followed by the Enter key. [Screenshot: image29.png]
19. Enter the following into Notepad, substituting the instance ID from the previous step and the Elastic IP address we created earlier:

   ec2-associate-address -i <instance ID> <Elastic IP address>

   Next, go to File > Save As and save the file in a convenient folder, such as the D:\aws folder we created earlier. Make sure you save it with a .bat file extension, which is possible once you select All Files from the second drop-down. And we have a script which assigns the Elastic IP address automatically to our server! [Screenshot: image11 - Version 2 (1).jpg]
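
For illustration, here is what the finished script might look like. The instance ID and IP address below are made-up placeholders - be sure to use your own values:

    rem assignElasticIP.bat - re-attach our Elastic IP to this instance at boot
    rem (i-12345678 and 54.251.100.200 are placeholder values)
    ec2-associate-address -i i-12345678 54.251.100.200
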
20. Now we just need to execute this script as part of the server's boot process. Go to Start > Run and launch GPEdit.msc (the Local Group Policy Editor). [Screenshot: image17.png]
21. In the Local Group Policy Editor, go into Computer Configuration > Windows Settings > Scripts (Startup/Shutdown), and double-click the Startup entry on the right. [Screenshot: image00 - Version 2.jpg]
22. Click the Add button and browse to the .bat script we created earlier. In our example, this is D:\aws\assignElasticIP.bat. [Screenshot: image12.png]
23. Click OK and you should see this: [Screenshot: image08.png]
24. Click OK again and you're done! Every time the system boots up now, it will run this script, which will assign the static, unchanging Elastic IP address to the instance. Your Gateway system now has an address which can be referenced from bookmarks, JavaScript code or anywhere else that a frequently-changing IP address or server name is not convenient.
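
As a quick sanity check after a reboot, the ec2-describe-addresses command from the same API tools should list your Elastic IP together with the instance ID it is now associated with:

    rem List all Elastic IPs in the account and their current associations
    ec2-describe-addresses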

 

Let's test it!

1. Log out of Remote Desktop, and log into your AWS Management Console from your local PC here: http://console.aws.amazon.com/ec2
2. Stop your instance by right-clicking it in the list and choosing Stop from the menu. Wait for the shutdown to finish, which could take a minute or so. When the instance has shut down, start it again by choosing Start from the same right-click menu. Wait for it to start up, which could take a minute or two. [Screenshot: image23 - Version 2 (1).jpg]
3. When the instance has finished booting, its status will change to green and the Elastic IP address we created and assigned via the script should now be displayed in the properties area. You may need to click the Refresh button once or twice for this to update. If this is the case, then our changes were successful and the server can now be accessed via this static IP address.
4. If you have control of a domain such as mydomain.com, you could now assign a DNS hostname such as gw.mydomain.com by creating an A record which maps gw.mydomain.com to your Elastic IP address. I won't go into the details here, as this will depend on how the DNS is set up for your domain, if you have one. [Screenshot: image24 - Version 2 (1).jpg]
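
For those who manage their own DNS zone files, such an A record is a one-liner. A hypothetical example (the IP address is a placeholder from the documentation range - use your actual Elastic IP):

    gw.mydomain.com.    3600    IN    A    203.0.113.10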

 

That's it for now! The EC2 API tools are really a treasure-trove of functionality, and there are many more ways of automating manual tasks with the EC2 infrastructure. Chris Paine alluded to some of this in his recent blog on managing his AWS systems, and I'm sure there are plenty more possibilities!

Now that a few months have passed since SAP released NetWeaver Gateway, and we're starting to see more implementations of it in the wild, I'm seeing more and more discussions in the community about the finer points of Gateway and REST in general.

One opinion I have come across is that NetWeaver Gateway should be merged with PI rather than remain a separate product with its own technology stack. That way, PI will remain SAP's "go to" integration hub, and customers can avoid standing up further systems and the maintenance overhead that entails.

As a relatively recent convert to the ‘religion’ of RESTafarianism, and having done a bit of PI work over the past few years, I of course have an opinion here: “Don’t do it!”.

Let me explain:

 

REST is not a Protocol

REST is an architectural style: a way of architecting system interfaces, rather than a formal protocol which one can implement. REST defines a set of principles and constraints; when we design interfaces within these boundaries, we have something which is RESTful. This can take many shapes, of course, as there is no checklist, W3C standards document or reference implementation to comply with.

SOA is another, different architectural style. It requires system interfaces to be self-contained services with defined inputs and outputs, which are separable, minimise side effects, and so on. It does not per se require the use of Enterprise Service Buses, UDDI directories, etc. - these things usually appear in specific implementations of an architecture following the SOA style.

There are other architectural styles pertaining to system integration of course. Message Oriented styles come to mind, and there are others which are (unfortunately?) still used such as database integration. In fact, Wikipedia has a list of some.

 

Of Adapters and Middlewares

SAP NetWeaver PI has been designed from the ground up as a message oriented middleware product with some ESB features. Over time, these ESB features have grown as SOA architectures became more common and desirable, and that's A Good Thing. A good middleware product speaks as many protocols as possible, because there are many different implementations of those message-oriented architectural principles. It's also likely that your middleware product will need to talk to another middleware product, and lots of protocol adapters help bridge the divides between two different implementations of the same architectural style.

But REST is not a protocol, and that is why the idea of a "REST adapter" doesn't make sense to me. How do you write an adapter into a different architecture? If all you need is to send one request to one RESTful endpoint, PI already has the HTTP adapter, which will be perfectly suitable for some simple scenarios. But beyond that, the fundamental differences between the SOA and MOM (Message-Oriented Middleware) architectural styles in PI's DNA, and those of a good REST API, are simply too great to bridge with an adapter.

Let me try to explain this gap by way of some examples:

  • REST talks about clients and servers, and doesn't talk about middleware. PI is middleware.
  • RESTful APIs present hyperlinks to clients, which follow the links to modify state (see the sketch after this list). How would PI do this?
  • REST mandates no content format. In fact, it allows any content format you can think of. PI really only handles XML.
  • REST requires clients to keep state in interactions with the server. PI is not good at that. (NW BPM will probably change this. NetWeaver BPM as a REST adapter? Now we might be onto something...)
  • PI works best with asynchronous processing. REST does away with abstraction layers providing asynchronicity and relies on natural request/response mechanisms.
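
To make the hypermedia point concrete, here is a sketch of the kind of response a RESTful API might return for an order resource. The URIs and the "_links" convention (borrowed from the HAL style) are purely illustrative; the point is that the client discovers what it can do next by following links, rather than by consulting a static interface definition:

    HTTP/1.1 200 OK
    Content-Type: application/json

    {
      "orderId": "4711",
      "status": "in-process",
      "_links": {
        "self":   { "href": "/orders/4711" },
        "cancel": { "href": "/orders/4711/cancellation" },
        "items":  { "href": "/orders/4711/items" }
      }
    }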

 

Now, I do think that PI should have the ability to interact with RESTful APIs; as middleware, its job is to talk to as many other things as possible. But the fundamental differences between these two worlds will preclude such an approach from being comprehensive or good for all use cases. It may work for really simple things, but it won’t be nearly enough to position PI as the “REST Adapter” into SAP systems. And that’s okay. We don’t need one integration hub to rule them all!

For my money, the NetWeaver Gateway team made the right architectural choices: hosted on an ABAP stack, with good tooling for accessing BAPIs, custom code, database tables and other APIs internal to an ABAP-based SAP system - rather than loosely coupled to the backend, hosted on a Java stack, or designed around the existing functionality of existing products.

That’s not to say that Gateway is perfect, and there are some features which I would dearly love to see in the product. But in my opinion, SAP got the basics right by avoiding the temptation to make PI do something which it wasn’t designed for.


Image by Derick Bailey. Thanks for the Creative Commons license mate!

It all started with a tweet by SAP Mentor John Appleby (@applebyj)...
@applebyj: Question SAPpers, should you install NW Gateway as standalone or integrated? What are the decision criteria?

Quite a few people responded in short order via twitter with their thoughts on the topic. Aside from John being well known (and followed on twitter), this is surely indicative of the level of interest in the technology. Kudos to SAP for getting the community excited with their products! On a complete tangent, I just love how twitter has this ability to stimulate such exchanges in 140 characters or less!

@vijayasankarv: @applebyj my guess is probably 80% peeps can use it on business suite server itself..choosing a stand alone for complex landscapes

@thomas_jung: @applebyj The developer in me likes it standalone so it can be upgraded independent of the NetWeaver layer under your ERP #shinyobjects

@qmacro: @applebyj @thomas_jung yes need to balance those factors also. Middleware is always good (dislike the term, but I have nothing better).

@jhmoy: @applebyj SAP says: For a prod env, we recommend that you install the SAP NW Gateway system separately from the application back-end sys.

 

So far so uncontroversial. Many of the conceptual architecture diagrams I have seen at TechEd and elsewhere follow this approach - like this one from MOB130 at TechEd 2011:

[Diagram: Vision - People-Centric Content from Multiple Sources]

 

However, my position on the matter was a little different:

@sufw: @applebyj Integrated IMHO. I see GW as being analogous to the SOAP runtime rather than Yet Another Middleware, plus REST is abt efficiency.

 

Go for Integrated Deployment

My response runs counter to the recommendation made in the admin guide quoted by John Moy, so let me try to explain why I feel an integrated (i.e. non-standalone) installation of NetWeaver Gateway is usually the most appropriate architectural choice.

 

NetWeaver Gateway is not that different

Like others, I generally see Gateway in much the same way as the SOAP Runtime or the Proxy framework which already exist in ABAP systems – an add-on to an existing system which provides interfaces by facading internal APIs like RFCs and BAPIs. Gateway provides another set of doors alongside the existing SOAP-based openings through which other systems can gain access to the data and functionality of the Business Suite. Those consuming systems need not worry about the proprietary protocols required for RFC communication but can choose to interact using open standards. Gateway simply provides another, different open standard in the form of OData to complement existing SOAP-based interfaces.
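
To illustrate what this looks like on the wire, here is a hypothetical OData request a consuming system might make. The service path and entity names are invented for this example (they are not Gateway's actual URL scheme), but the shape of the interaction is the point:

    GET /sap/gateway/SalesOrderService/SalesOrders('4711') HTTP/1.1
    Host: gw.mycompany.com
    Accept: application/atom+xml

The response is a plain Atom/XML (or JSON) representation of the sales order, which any HTTP-capable client can consume without knowing anything about RFCs or BAPIs.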

From what I saw during our hands-on work on the NW Gateway 1.0 “beta” program, and from keeping abreast of the evolution of this into v2.0, the product isn’t complex enough to warrant a separate installation in my opinion.

 

NetWeaver Gateway is not middleware

Gateway’s raison d’etre is to expose representations of internal resources (in the REST sense) from Business Suite systems. However, it doesn’t provide any tools for building OData representations by composing or re-composing existing data from multiple systems, or even from multiple different BOR objects, into a new aggregate representation. Essentially, anything which isn’t a subset of a single BAPI return requires custom ABAP code to be written.

In a central, stand-alone installation as shown on conceptual architecture diagrams like the one above, this ABAP code would have to get data from the various underlying sources from which the single exposed OData resource is composed. How would this work? Gateway doesn't (yet) provide any mechanisms for consuming SOAP enterprise services and relies solely on RFC calls. Not nice. In order to call SOAP web services, I would have to write custom code to aggregate the returns of multiple calls to ABAP consumer proxies into something the Gateway runtime can then transform into OData. So: lots of "glue code". Of course, anything is possible using custom code, but I would expect some actual tooling here if the vision of a central, stand-alone Gateway deployment is to be realised. Middleware systems like PI do provide such tooling and don't require developers to write lots of mundane glue code...

This leads me to conclude that a single, detached layer of Gateway composing OData resources from a variety of backends is essentially wishful future-state thinking rather than something which customers can adopt right now using Gateway 2.0 and without much fuss and coding. I do have some ideas on what I’d love to see in Gateway 3.0, but that’s the subject of a future blog...

 

Efficiency wins!

One of the appeals of designing APIs in accordance with REST principles is their greater simplicity compared to SOAP and the associated WS-* mess. To me, any kind of "application gateway"-style translation or bridging - such as building up a RESTful representation by making 3 SOAP/RFC calls to the same backend - negates a lot of the efficiency benefits of REST. One could even argue that it makes things worse by adding another layer of complexity and another potential point of failure.

 

When all else fails...

The security, rate-limiting and auditing benefits of a stand-alone instance are pretty weak arguments as well. Maybe I'm being cynical, but I only ever see these come up once all other arguments have been exhausted.

I struggle to see customers exposing any ABAP stack to the Internet without some kind of application gateway or reverse proxy in front of it. Once you have that layer, the security arguments espoused in the admin guide become moot. Those specialised tools are much better at security, auditing and rate limiting than Gateway or any other general-purpose system is ever going to be. It’s their bread and butter after all! Specialist vendors like apigee, Layer7 or Mashery offer API gateway products with very impressive features around load balancing, throttling, DoS protection, SLA enforcement and versioning of Internet-facing APIs. Companies are likely to want only one such API gateway because there are economies of scale here. And since they will likely want to provide APIs from non-SAP systems as well, using Gateway for this role does not seem attractive.

 

Go for the simple option!

So in my mind, installing Gateway on each Business Suite system is preferable as it is the simplest option and provides some advantages over a stand-alone deployment.

Companies with multiple systems such as CRM, ECC and SRM may end up with multiple Gateway installs. But because Gateway is pretty painless to install and configure and essentially runs itself, this should not cause any discomfort.

If there ever is a requirement for a stand-alone instance, and the product has some decent tooling which makes such a thing worthwhile, then this can always be retrofitted to result in a hybrid landscape. Nothing wrong with that. In my mind, that is no different to modern SOA deployment architectures, whereby most systems have SOAP stacks which may (or may not, depending on your beliefs) be supplemented with a central SOAP stack in the form of an ESB.

 

...but it depends...

One big hurdle right now is the fairly demanding set of version requirements. Unless you're running NetWeaver 7.02 SP7 or later in your Business Suite system, an integrated install is a non-starter. However, I am sure this will change for the better; it was mentioned in passing at TechEd that Gateway is planned to move onto its own release track in the future, which could improve the situation - or (fingers crossed!) even make it available for older releases.

If you must have NetWeaver Gateway right now and your Business Suite does not meet the version requirements, then a stand-alone install on a small ABAP stack is always an option, and it’s great that SAP have provided a choice here. However, I would only ever regard this as a tactical solution.

The above does not apply to the Sybase Unwired Platform, in my mind. SUP is a much more complex (read: fully-featured) product with its own architecture, persistence and management mechanisms, and it is not closely coupled to any backend implementation in the way NetWeaver Gateway is. In my opinion, SUP is well deserving of its own installation, and I can see a single instance serving all connected mobile devices and backends.

Thanks are also due to my colleagues John Moy (@jhmoy) and Jacek Klatt (@yatzeck) for being passionate and knowledgeable discussion partners in an email exchange of views on John Appleby’s tweet which ultimately led to this blog.
