
SAP Mobile Platform Developer Center


I was excited to hear the announcement about HANA Cloud Platform mobile services trial availability. This is a great opportunity for the developer community to try the solution for free. HCPms is the cloud version of SAP Mobile Platform. SAP previously offered another cloud version of SMP, called SAP Mobile Platform, enterprise edition, cloud version; it has been deprecated.


Even though there are a few differences between SMP on-premise (SMP 3) and HCPms, the mobile SDK is common to both, i.e. an app written for SMP 3 runs against HCPms too without any code change - that caught my attention. Here I am going to run one of my existing hybrid mobile apps, developed for SMP 3, against HCPms.


Activate HCPms

Prerequisites:

  • Apache Ant should be installed and added to the PATH

  • Android SDK (it also requires Java)

  • Cordova - version 3.6.3-0.2.13



Configure Application in HCPms

From the HCPms admin cockpit (https://hcpmsadmin-<your HANA account user name>.dispatcher.hanatrial.ondemand.com/sap/mobile/admin/ui/home/index.html) click on Applications, then click the Add icon on the bottom bar and provide the details below:

Application ID : com.kapsel.logon

Name: com.kapsel.logon

Type: Hybrid

Security Configuration : Basic

Optionally provide Description and Vendor.




Click on Save.

Next, click on Backend and provide the details below.

Backend URL : http://services.odata.org/Northwind/Northwind.svc

Authentication Type: No Authentication



Save the configuration.

Develop Hybrid Mobile App

Follow this blog to develop a Kapsel logon based hybrid app. Replace the project's index.html with the index.html provided there. The only change needed to run the app against HCPms is providing the HCPms host and port.

Run the Cordova command:

cordova prepare


Connect the phone to the PC using USB and execute the Cordova command below to run the app on the device:

cordova run android






Happy Coding !

Midhun VP


Hi there,


we have finally made the SAP HANA Cloud Platform mobile services trial available for all hanatrial accounts. Activating the mobile services trial is unfortunately not as easy as just clicking "enable". The following describes how to fully enable the mobile services trial.


Prerequisite: You are already subscribed to the HANA Cloud Platform trial.



  • Open your browser and navigate to https://hanatrial.ondemand.com/cockpit
  • Click on "Services" in the Content pane on the left.
  • You will see a list of services. Look for SAP HANA Cloud Platform Mobile Services and click on "enable".
  • After a couple of seconds it should look like this:


  • Now we need to subscribe the mobile services Admin Cockpit application to your trial account
    • Click on "Subscriptions" in the Content pane
    • Click "New Subscriptions"
    • Select Provider Account "sapmobile" and Application Name "hcpmsadmin"
    • Confirm by clicking "Create"
    • The screen should now look like this:


    • The important entry is the first row.
  • Now click on the link "hcpmsadmin" and select "Roles" in the Content pane
  • Click "New Role"
  • Type in the role name "HanaMobileAdmin" and confirm the dialog.
  • Click on "Assign..." in the lower part of the screen and assign your user to the freshly created role by providing your SCN ID in the dialog. Make sure the HanaMobileAdmin role is selected in the role list. If all is done correctly it should look like this:


  • In order to allow communication between the Admin Cockpit and the mobile services core you need to setup two destinations manually.
    • Navigate back to the start screen by clicking on your account name in the upper left corner (the link is labeled S00XXXXXXtrial).
    • Select "Destinations" in the Content pane
    • Create the following two destinations; both use:
      Proxy Type: Internet
      Cloud Connector Version: 2


  • It should look like this now:


  • The last thing we have to do is to assign another Administrator role to the service.
    • Click on "Services" in the Content pane
    • In the row of HANA Cloud Platform mobile services, click the icon on the right showing a little person. The tooltip says "Configure roles".
    • Now select the role "Administrator" row in the list of roles.
    • Click "Assign..." in the lower part of the screen. Provide your S-User ID in the dialog and confirm by clicking "Assign".
    • It should look like this:


  • Now navigate back to the "Services" view using the Content pane and click "Go to Service". You should be redirected to the HANA Cloud Platform mobile services Admin Cockpit.


You can now start playing around with SAP HANA Cloud Platform mobile services.



In order to connect your mobile Application you want to point it to:


once you have a valid app configuration.


In another blog I will explain how to configure your first Application. Stay tuned.



Have Fun,



A couple of weeks ago I was very happy to announce SAP HANA Cloud Platform mobile services in my blog post SAP HANA Cloud Platform mobile services released.

While these new services were only available to customers and partners entering the RampUp process (a kind of beta), individual developers like you and me didn't have the opportunity to get their hands dirty with the mobile services. Luckily this uncomfortable situation will end soon and I am - again - happy to announce that we are preparing a public trial of the SAP HANA Cloud Platform mobile services.

If all goes well you will have access to the mobile services within your HANA Cloud Platform trial account on hanatrial.ondemand.com beginning this week.


This is your mobile Christmas present - just for you.


But wait, what about the SMP Trial that is available on hanatrial.ondemand.com?


Well, your current SMP Cloud version trial subscription will remain available until the end of January. Please note that any configuration you have created will then be deleted, so make manual backups of your configuration data and log files if necessary. We will remove the subscriptions to SAP Mobile Platform, enterprise edition, cloud version permanently.


Have Fun,

Martin Grasshoff


1. Update:

Also watch my CodeTalk with Ian Thain about the announcement of the Trial: https://www.youtube.com/watch?v=DMCP0_h-55w


2. Update:

Trial is already available: How to enable HANA Cloud Platform Mobile Services Trial

As we know, during SMP3 installation we provide a keystore password to protect the SMP3 Keystore and Truststore locations. This keystore password should be the same as the private key passwords associated with all the aliases in the Keystore.


All the Keystore and Truststore related information is in a single file, i.e. smp_keystore.jks (E:\SAP\MobilePlatform3\Server\configuration).


Keystore: The location where encryption keys, digital certificates and other credentials are stored (either encrypted or unencrypted keystore file types) for SAP Mobile Platform runtime components.

Truststore: The location where Certificate Authority (CA) signing certificates are stored.


Pre-requisite: Make sure to back up this file first (E:\SAP\MobilePlatform3\Server\configuration\smp_keystore.jks).





1. First, change the Keystore password by running the following command:


E:\SAP\MobilePlatform3\Server\configuration>keytool -storepasswd -new s4pAdmin -keystore smp_keystore.jks

(where s4pAdmin is the new password)

  • At the prompt, enter the current password (for me, it's s3pAdmin).





2. To change the passwords for all private keys in the Keystore, we need to change them one by one. By default, there are two private key alias entries in the SMP Keystore file: smp_crt and tomcat.





2.1 To change the password for the alias entry smp_crt, run the following command:



E:\SAP\MobilePlatform3\Server\configuration>keytool -keypasswd -alias smp_crt -new s4pAdmin -keystore smp_keystore.jks


     Keystore password:                        s4pAdmin (new keystore password as per step #1)

     Enter key password for <smp_crt> : s3pAdmin (current password)





2.2 To change the password for the alias entry tomcat, run the following command:


     E:\SAP\MobilePlatform3\Server\configuration>keytool -keypasswd -alias tomcat -new s4pAdmin -keystore smp_keystore.jks


     Keystore password:                      s4pAdmin (new keystore password as per step #1)

     Enter key password for <tomcat> : s3pAdmin (current password)
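The keytool steps above can also be reproduced with the JDK's own java.security.KeyStore API. The sketch below is illustrative only: instead of touching smp_keystore.jks it builds a throwaway in-memory keystore (JCEKS rather than JKS, because JCEKS can hold a key entry without a certificate chain), then re-protects both the store and the smp_crt key entry with the new password, mirroring steps 1 and 2.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.Key;
import java.security.KeyStore;
import javax.crypto.KeyGenerator;

// Illustrative only: a demo keystore stands in for smp_keystore.jks.
public class KeystoreRepass {

    // Re-protect the store and the "smp_crt" key entry with a new password
    // (the programmatic equivalent of keytool -storepasswd and -keypasswd).
    static byte[] repass(byte[] store, char[] oldPass, char[] newPass) throws Exception {
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(new ByteArrayInputStream(store), oldPass); // fails if oldPass is wrong
        Key key = ks.getKey("smp_crt", oldPass);           // unlock with the old key password
        ks.setKeyEntry("smp_crt", key, newPass, null);     // re-store under the new one
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, newPass);                            // new store password
        return out.toByteArray();
    }

    // Build a demo keystore protected by the given password.
    static byte[] demoStore(char[] pass) throws Exception {
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, null);
        ks.setKeyEntry("smp_crt", KeyGenerator.getInstance("AES").generateKey(), pass, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, pass);
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        char[] oldPass = "s3pAdmin".toCharArray(), newPass = "s4pAdmin".toCharArray();
        byte[] updated = repass(demoStore(oldPass), oldPass, newPass);
        // Verify: the new password now opens both the store and the key entry
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(new ByteArrayInputStream(updated), newPass);
        System.out.println(ks.getKey("smp_crt", newPass) != null); // prints true
    }
}
```

Note that keeping the store password and all key passwords identical, as SMP requires, is exactly what the two setKeyEntry/store calls enforce here.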






3. Now we need to configure SMP to recognize the new password:


3.1 We have to encrypt the new password using the secret key from the -DsecretKey property (E:\SAP\MobilePlatform3\Server\props.ini).





3.2 Run the below command:


               java -jar tools\cipher\CLIEncrypter.jar <secretKey> <newPassword>


E:\SAP\MobilePlatform3\Server>java -jar tools\cipher\CLIEncrypter.jar Vv4bm3LniE s4pAdmin




3.3 Open the com.sap.mobile.platform.server.foundation.config.encryption.properties file available at E:\SAP\MobilePlatform3\Server\config_master\com.sap.mobile.platform.server.foundation.config.encryption


  • Here we need to update privateKeystorePass, replacing the existing password with the new encrypted password and keeping {enc} as the prefix.



  • Save the changes.
  • Restart the server for the changes to take effect.


To verify that the above changes have taken effect, you can use the keytool utility or KeyStore Explorer to open the Keystore file.


(A) To verify the Keystore password:



(B) To verify the passwords of the aliases smp_crt and tomcat:

  • Open KeyStore Explorer, right-click smp_crt > View Details > Private Key Details > enter the new password.


  • If the password is wrong, you will see an error message like the one below:


I hope it helps.




The SMP 3.0 OData SDK SP05 introduced the concept of a store, which is an abstraction for services that can be consumed via the OData protocol. There are two types of stores: online and offline. The methods for creating, updating, deleting and querying data are the same for both stores; however, there are some differences. Let's get you started with the offline store.



Your Android project must include the following libraries under the libs folder

  • AfariaSLL.jar
  • ClientHubSLL.jar
  • ClientLog.jar
  • Common.jar
  • Connectivity.jar
  • CoreServices.jar
  • DataVaultLib.jar
  • E2ETrace.jar
  • HttpConvAuthFlows.jar
  • HttpConversation.jar
  • maflogger.jar
  • maflogoncore.jar
  • maflogonui.jar
  • mafuicomponents.jar
  • mafsettingscreen.jar
  • MobilePlace.jar
  • ODataAPI.jar
  • odataoffline.jar
  • ODataOnline.jar
  • perflib.jar
  • Request.jar
  • sap-e2etrace.jar
  • SupportabilityFacade.jar
  • XscriptParser.jar


The following resources should be imported under the libs/armeabi folder

  • libmlcrsa16.so
  • libodataofflinejni.so


You can find the .jar and .so files in your OData SDK installation folder:

<Client SDK dir>\NativeSDK\ODataFramework\Android\libraries

<Client SDK dir>\NativeSDK\MAFReuse\Android\libraries

<Client SDK dir>\NativeSDK\ODataFramework\Android\libraries\armeabi



The offline store requires, among other information, the collections (also called defining requests) that will be accessible offline. When the client app requests the initialization of the offline store, this is what happens under the covers:

  1. The mobile services (either SMP 3.0 SP04 on premise or HCPms) send a GET request to the OData producer to get the metadata (OData model) and use the OData model to create the UltraLite database schema.
  2. For each defining request, the mobile services pull the data from the OData producer and populate the database. The mobile services check whether there is a delta token:
    1. If there is a delta token, they cache it and use it in the following refresh.
    2. If there is no delta token, they cache the keys populated in the database.
  3. The mobile services notify the client app that the database is ready.
  4. Using UltraLite functionality, the client app downloads the database. At this point the database can be used offline.



Code Snippet – How to open an offline store

//Initialize the native UltraLite libraries which are located in the
//libodataofflinejni.so file
ODataOfflineStore.globalInit(context);

//Get the application endpoint URL
LogonCoreContext lgCtx = LogonCore.getInstance().getLogonContext();
String endPointURL = lgCtx.getAppEndPointUrl();
URL url = new URL(endPointURL);

//Define the offline store options:
//connection parameters, credentials and
//the application connection id we got at registration
ODataOfflineStoreOptions options = new ODataOfflineStoreOptions();
options.serviceRoot = endPointURL;

//The logon configurator uses the information obtained at registration
//(i.e. endpoint URL, login, etc.) to configure the conversation manager.
//It assumes you used the MAF Logon component to on-board the user
IManagerConfigurator configurator = LogonUIFacade.getInstance().getLogonConfigurator(context);
HttpConversationManager manager = new HttpConversationManager(context);
configurator.configure(manager);
options.conversationManager = manager;

options.storeName = "flight";

//This defines the OData collections which will be stored in the offline store
options.definingRequests.put("defreq1", "TravelAgencies_DQ");

//Open the offline store synchronously
ODataOfflineStore offlineStore = new ODataOfflineStore(context);
offlineStore.openStoreSync(options);

//A way to verify that the store opened successfully
Log.d("OfflineStore", "openOfflineStore: library version " + ODataOfflineStore.libraryVersion());

Once the offline store is open, you can create, update, delete and query data offline. As mentioned before, the methods for creating, updating, deleting and querying data are the same for both stores. Note that all offline store requests are executed against the local database.


Code Snippet – How to query data with an offline store

//Define the resource path
String resourcePath = "TravelAgencies_DQ";

ODataRequestParamSingle request = new ODataRequestParamSingleDefaultImpl();
request.setMode(ODataRequestParamSingle.Mode.Read);
request.setResourcePath(resourcePath);

//Send a request to read the travel agencies from the local database
ODataResponseSingle response = (ODataResponseSingle) offlineStore.executeRequest(request);

//Check if the response is an error
if (response.getPayloadType() == ODataPayload.Type.Error) {
    ODataErrorDefaultImpl error = (ODataErrorDefaultImpl) response.getPayload();
    //TODO show the error

//Check if the response contains an EntitySet
} else if (response.getPayloadType() == ODataPayload.Type.EntitySet) {
    ODataEntitySet feed = (ODataEntitySet) response.getPayload();
    List<ODataEntity> entities = feed.getEntities();

    //Retrieve the data from the response
    ODataProperty property;
    ODataPropMap properties;
    String agencyID, agencyName;
    for (ODataEntity entity : entities) {
        properties = entity.getProperties();
        property = properties.get("agencynum");
        agencyID = (String) property.getValue();
        property = properties.get("NAME");
        agencyName = (String) property.getValue();
        . . .
    }
}





When connectivity is available, the client app must send all the local changes; this process is called Flush. When the client app requests a flush, this is what happens under the covers:

  1. The offline store communicates with the mobile services.
  2. For each request, the mobile services attempt to execute the request against the OData producer.
  3. The mobile services send the responses (errors and successes) to the client app, and the errors are stored in the ErrorArchive collection of the offline store.

Code Snippet - Flush
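The flush itself is a single call on the offline store, so there is little to show at the call site. As a self-contained illustration of the queue semantics (queued requests are replayed in the order they were issued locally, one backend request per queued operation, nothing coalesced), here is a minimal model; the class and queue entries are invented for the sketch and are not part of the SDK:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Illustrative only: models how the offline CUD queue is drained during a flush.
public class FlushQueueDemo {

    // Drain the queue; each entry becomes exactly one request to the backend.
    static List<String> flush(Queue<String> queue) {
        List<String> sent = new ArrayList<>();
        while (!queue.isEmpty()) {
            sent.add(queue.poll()); // FIFO: the local operation order is preserved
        }
        return sent;
    }

    public static void main(String[] args) {
        Queue<String> queue = new ArrayDeque<>();
        // A create followed by a delete of the same entity still goes up
        // as two separate requests - nothing is combined away.
        queue.add("POST TravelAgencies_DQ");
        queue.add("DELETE TravelAgencies_DQ('00345')");
        System.out.println(flush(queue));
        // prints [POST TravelAgencies_DQ, DELETE TravelAgencies_DQ('00345')]
    }
}
```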


After the flush, the client app must receive all the changes from the OData producer that have occurred since the last refresh. When the client app requests a refresh, this is what happens under the covers:

  1. If delta tokens are enabled, for each request the mobile services request data with the delta token.
  2. Otherwise, the mobile services retrieve all the data from the OData producer, retrieve the cached keys and compute the delta themselves, reducing the traffic from the mobile services to the client app.
  3. The mobile services transform all the changes into the relational MobiLink protocol and send them back to the client app.
  4. The client app's UltraLite database performs all the instructions.


Code Snippet - Refresh
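The delta computation in step 2 can be illustrated with a self-contained sketch. The class and keys below are invented for the example; it only shows the set difference the server component computes when the OData producer offers no delta token:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;

// Illustrative only: with no delta token, the mobile services compare the keys
// cached at the previous refresh with the keys in the current pull and send
// only the difference down to the client.
public class RefreshDeltaDemo {

    // Keys present in "to" but missing from "from", sorted for stable output.
    static List<String> diff(Set<String> from, Set<String> to) {
        List<String> out = new ArrayList<>();
        for (String k : to) if (!from.contains(k)) out.add(k);
        Collections.sort(out);
        return out;
    }

    public static void main(String[] args) {
        Set<String> cached = Set.of("AG1", "AG2", "AG3");  // keys sent at the last refresh
        Set<String> current = Set.of("AG2", "AG3", "AG4"); // keys in the backend now
        System.out.println("upserts: " + diff(cached, current)); // prints upserts: [AG4]
        System.out.println("deletes: " + diff(current, cached)); // prints deletes: [AG1]
        // Rows changed under an existing key are found by comparing row contents
        // (omitted here); with a delta token the backend computes all of this itself.
    }
}
```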



The code snippets shown in this blog use the synchronous methods for simplicity. Please note there are asynchronous methods available.

This blog assumes:

  • You have configured an application in the mobile services with the back-end connection

http://<sap gateway host>:<port>/sap/opu/odata/IWFND/RMTSAMPLEFLIGHT/

  • A user has been on-boarded with the mobile services using the MAF Logon component.


For more information on how to create an application configuration, visit Deploying Applications

If you prefer hands-on exercises, check these guides out

How To... Enable user On-boarding using MAF Logon with Template Project (Android)

How To...Consume OData Services in Offline Mode (Android)

How to... Handle Synchronization Errors (Android)


Hope you find this information useful,


I frequently program directly against a NW Gateway when I'm starting or prototyping, then add SMP to the landscape once the application is fleshed out and I want to add offline functionality. That has tended to be when I add MAF Logon to the app in the past.


With the simple bootstrapping with CocoaPods and the reusable STSOData framework, I get MAF Logon for free from the start. Great, I like it, and I especially like using the new Discovery Service on-boarding to auto-import my connection settings from the cloud. Using the STSOData framework's LogonHandler, I set the application's applicationID in the AppDelegate -applicationDidBecomeActive: method.


[[LogonHandler shared].logonManager setApplicationId:@"stan.flight.https"];

The applicationID should match the applicationID in the SMP Admin console.


But today, my SMP system is being re-installed by QA, and I can't afford the down-time.  How do I switch back to connecting directly-to-NW Gateway?


I don't want to change anything in the application.  I know that I won't be able to use the offline features without SMP, but I can toggle that off in the STSOData DataController.  What should I do?


The easiest way to switch back to directly-to-NW Gateway, after working with SMP, is by changing the applicationID value set above.


The Solution

MAF Logon constructs the connection URL from the protocol/host/port parameters, then appends the applicationID to the URL path. So, these settings:


MAF Logon settings:

host:                     smpqa12-01.sybase.com

port:                      443

protocol:              https


Programmed in app:

applicationID:      stan.flight.https


are concatenated into this URL: https://smpqa12-01.sybase.com:443/stan.flight.https/. This is the base URL that the SODataStores use for querying $metadata, Collections, FunctionImports, etc. when connecting via my SMP server.


My NW Gateway system has this URL: http://usxxxx21.xxx.sap.corp:8000/sap/opu/odata/IWFND/RMTSAMPLEFLIGHT/. I can use the same mechanism, pointing the MAF Logon settings at the Gateway protocol/host/port and swapping the OData application path components in for the SMP applicationID, to produce exactly that URL: http://usxxxx21.xxx.sap.corp:8000/sap/opu/odata/IWFND/RMTSAMPLEFLIGHT/.


I accomplish this by changing the value when I set the applicationID on the LogonHandler, as above:


//[[LogonHandler shared].logonManager setApplicationId:@"stan.flight.https"];

[[LogonHandler shared].logonManager setApplicationId:@"sap/opu/odata/IWFND/RMTSAMPLEFLIGHT"];

Do not append a forward slash "/" to the 'applicationID' value when substituting the OData application path components; the application already adds the slash for the regular SMP applicationID value.
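The concatenation rule can be captured in a few lines. This sketch is illustrative only (the helper class is not part of MAF Logon); it reproduces both base URLs above from the same inputs:

```java
// Illustrative only: how the MAF Logon base URL is assembled, and how swapping
// the applicationID for the Gateway service path redirects the app to NW Gateway.
public class BaseUrlDemo {

    // MAF Logon appends the applicationID (plus a trailing slash) to the URL path.
    static String baseUrl(String protocol, String host, int port, String applicationId) {
        return protocol + "://" + host + ":" + port + "/" + applicationId + "/";
    }

    public static void main(String[] args) {
        // Against SMP: the applicationID from the Admin console
        System.out.println(baseUrl("https", "smpqa12-01.sybase.com", 443, "stan.flight.https"));
        // prints https://smpqa12-01.sybase.com:443/stan.flight.https/

        // Directly against NW Gateway: the settings point at Gateway and the
        // "applicationID" is the OData service path (no trailing slash of our own)
        System.out.println(baseUrl("http", "usxxxx21.xxx.sap.corp", 8000,
                "sap/opu/odata/IWFND/RMTSAMPLEFLIGHT"));
        // prints http://usxxxx21.xxx.sap.corp:8000/sap/opu/odata/IWFND/RMTSAMPLEFLIGHT/
    }
}
```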

Hi everyone!


I have great news...

Loads of How-To Guides have been published. I hope these documents will help you learn what has been discussed in the previous blogs even more quickly and efficiently.

They cover essential topics for both online and offline stores. In addition, they cover the following topics:

- User Onboarding without MAF UI

- Push Notification

- Batch Request with Online Store

- Log & Trace

Together with the H2Gs, the associated Xcode projects are all ready for you. Each comes as a pair: an Exercise (the one you can complete by following the H2G step-by-step instructions) and a Solution (the completed one - it will run after adding the required SDK libs). In the GitHub UI, "master" contains the Exercise and "solution" the Solution Xcode projects.


Perhaps I should share a few tips, which are demonstrated in the example Xcode projects.

Reloading TableView & Showing Alert in the Main Thread

After you fetch the data, the next thing you would do is render the data via a table view or popup. Have you encountered a strange situation where the data fetch works pretty fast, but after the data retrieval it takes a while for the table view to render?

You have to make sure you're calling it on the main thread. You can google the detailed general discussion, but in a nutshell, here's a tableView example:

[tableView reloadData]; // normal way
// calling it in the main thread
[tableView performSelectorOnMainThread:@selector(reloadData)
                            withObject:nil
                         waitUntilDone:NO];
// alternative way to call it in the main thread
dispatch_async(dispatch_get_main_queue(), ^{
  [tableView reloadData];
});

By calling it on the main thread, you will see the UI render much faster. The same story goes for other UI elements such as alerts.

[alert performSelectorOnMainThread:@selector(show)
                        withObject:nil
                     waitUntilDone:NO];

Why do we have to do this? This is not really an OData SDK remark but a general iOS tip. Apple's API reference says:

"Note: For the most part, use UIKit classes only from your app’s main thread. This is particularly true for classes derived from UIResponder or that involve manipulating your app’s user interface in any way."

The OData SDK's HttpConversationManager does not call back on the main thread, so the SODataStore also calls back on a background thread. It's the task of the app developer to take care of proper UI calls in the way explained above. (NSURLSession also calls back on a background thread.)

Conclusion - Always call UI in the main thread! ;-)

OData Format in either JSON or XML


By default the online store handles OData in XML. Here's how you switch it to JSON format. As JSON is far more lightweight than XML, you will likely want to go with JSON - but you might want to use XML during development, as it is easier to debug if something goes wrong.

// Use options to configure the store to send the request payload in JSON format
SODataOnlineStoreOptions *storeOptions = [[SODataOnlineStoreOptions alloc] init];
storeOptions.requestFormat = SODataDataFormatJSON;
onlineStore = [[SODataOnlineStore alloc] initWithURL:[NSURL URLWithString:endpointUrl]
                             httpConversationManager:httpConvManager
                                             options:storeOptions];

The offline store only sends modification requests in JSON format. The server component can perform refreshes in either XML or JSON, but the default is JSON - just about every OData producer supports JSON nowadays.


MAF UI Redirects to Afaria Client App

If you deploy the app on your iOS device, you will notice that the MAF UI redirects to the Afaria client app every time you onboard. The context switch happens because the MAF UI checks whether Afaria provides configuration for the particular application. This can be annoying if you haven't configured Afaria - here's how to turn off the default context-switch behavior.

1. Find the "MAFLogonManagerOptions.plist" in the bundles folder of OData SDK libs in the Xcode project.


2. Switch the "keyMAFUseAfaria" value to NO.


3. Make sure "MAFLogonManagerNG.bundle" is listed under Copy Bundle Resources in the Build Phases tab of the Xcode project. If not, add it.


That's all, happy learning with H2G :-)

See you in the next blog,


List of blogs

In some cases, extending the Agentry product JARs (like SAPWM-x.x.x.x.jar) in an object-oriented way is not an easy task, and it can be tiresome if you want to add some generic functionality like additional (trace) logging, error handling, or monitoring to "every" StepHandler/BAPI/etc. class. If you do not want to touch the actual SAPWM-x.x.x.x.jar, you might want to consider using AspectJ load-time weaving. I will not explain the basic concepts of AspectJ here, as there are plenty of tutorials and examples on the net. If you are new to aspect-oriented coding, I strongly recommend you get your feet wet with some standalone Java application first. In the following, I just want to explain how to set up AspectJ for the Agentry Java backend of SMP 3.0.


First, you need some tools and libraries:

  • AspectJ Development Tools for Eclipse
  • From the aspectj-x.x.x.jar:
    • lib/aspectjrt.jar
    • lib/aspectjweaver.jar


Basic configuration of the SMP for AspectJ:

  • Put the aspectjrt.jar into the Agentry Application Java folder (where the Agentry-v5.jar is located)
    • You need to modify the META-INF/MANIFEST.MF in the aspectjrt.jar:
      • Add the following line to make the JAR OSGi compatible

                        Export-Package: org.aspectj.lang,org.aspectj.runtime

  • Add ;.\Java\aspectjrt.jar to your Agentry.ini classpath property.
  • Put the aspectjweaver.jar in the SMP's Server folder (not in the Server\lib folder)
  • Add the following lines to the SMP's Server/props.ini file in the jvm section (the -D options can be removed / set to false once you are confident you have everything set up properly)
    • -javaagent:.\aspectjweaver.jar
    • -Dorg.aspectj.weaver.showWeaveInfo=true
    • -Daj.weaving.verbose=true


Write your AspectJ code (or use the attached code), compile and JAR it (e.g. as aopdemo.jar) and put it into the Agentry Application Java folder (where the Agentry-v5.jar is located). Don't forget to add your JAR to the Agentry.ini classpath property (e.g. ;./Java/aopdemo.jar).
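The weaver discovers aspects through a META-INF/aop.xml packaged in your JAR (the startup log below references it). A minimal example of that file, assuming your aspect class is aopdemo.MyAspect as in the attached code; the include pattern is just an illustration restricting weaving to the Agentry product classes, so adjust it to your pointcuts:

```xml
<!DOCTYPE aspectj PUBLIC "-//AspectJ//DTD//EN" "http://www.eclipse.org/aspectj/dtd/aspectj.dtd">
<aspectj>
  <aspects>
    <!-- the aspect class contained in aopdemo.jar -->
    <aspect name="aopdemo.MyAspect"/>
  </aspects>
  <weaver options="-verbose">
    <!-- weave only the Agentry product classes -->
    <include within="com.syclo.sap..*"/>
  </weaver>
</aspectj>
```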

  • Now, upon SMP startup, you should be able to see some AspectJ initialization logging in the <SERVER>-smp-server.log (assuming you have aj.weaving.verbose set to true). Look for the following lines:


AppClassLoader@142e6767 info AspectJ Weaver Version 1.8.2 built on Thursday Aug 14, 2014 at 21:45:02 GMT

AppClassLoader@142e6767 info register classloader sun.misc.Launcher$AppClassLoader@142e6767

AppClassLoader@142e6767 info using configuration file:/.../aopdemo.jar!/META-INF/aop.xml     

AppClassLoader@142e6767 info register aspect aopdemo.MyAspect

  • As soon as some aspect code is woven in, you should be able to see something like this (you might have to synchronize or perform the proper actions on the Agentry client, depending on your pointcut definitions; for the aopdemo.MyAspect this is not required):


AgentryApplicationClassLoader@7cc49e01 weaveinfo Join point 'method-execution(java.util.ArrayList com.syclo.sap.bapi.GetUserProfileDataBAPI.processResults())' in Type 'com.syclo.sap.bapi.GetUserProfileDataBAPI' (GetUserProfileDataBAPI.java:68) advised by around advice from 'aopdemo.MyAspect' (MyAspect.aj:42)

  • For the aopdemo.MyAspect, there should be a lot of console output like this:


[AOP] (BAPIFactory.java:34)                    boolean com.syclo.sap.BAPIFactory.validateClass(String, String) returns java.lang.Boolean: true
[AOP] (BAPIFactory.java:34)                    boolean com.syclo.sap.BAPIFactory.validateClass(String, String)
[AOP] (BAPIFactory.java:34)                      arg java.lang.String: WorkorderTransferBAPI
[AOP] (BAPIFactory.java:34)                      arg java.lang.String: com.syclo.sap.component.workorder.bapi.WorkorderTransferBAPI
[AOP] (BAPIFactory.java:82)                      void com.syclo.sap.BAPIFactory.register(String, String)
[AOP] (BAPIFactory.java:82)                        arg java.lang.String: WorkorderTransferBAPI
[AOP] (BAPIFactory.java:82)                        arg java.lang.String: com.syclo.sap.component.workorder.bapi.WorkorderTransferBAPI
[AOP] (BAPIFactory.java:82)                      void com.syclo.sap.BAPIFactory.register(String, String) returns <null>
[AOP] (BAPIFactory.java:34)                    boolean com.syclo.sap.BAPIFactory.validateClass(String, String) returns java.lang.Boolean: true


For the aopdemo.MyAspect, the client sync will be extremely slow due to the amount of logging data. Your next step should be to reduce the number of join points by adjusting the pointcut definition in the AspectJ code. This should lead to less logging and better performance.


If you have been able to reproduce the above steps for your SMP installation, you have done it. From here on, it's up to you to identify those extensions that are a pain with the object-oriented approach and can be done nicely using aspect orientation.

I would be interested to hear about your ideas on where AspectJ can be beneficial. Feel free to post them here...


Hi everyone!

Another blog has arrived :-) The topic this time is not really critical, but rather nice-to-know tips around (mostly) offline CRUD.

First off, let's make a few things clear:

  • The CUD queue in the offline store keeps the same sequence in which you issue CUD operations locally; after the flush, the OData service receives the queue and executes the requests one by one.
  • The offline store queue doesn't do any combining of operations, because combining could alter the side effects each operation has in the OData service. (E.g. if you create and delete an entity, they will always go up as two separate requests.)

And let's get into further CRUD geek knowledge...

Read with Delta Queries


In the blog #7, the term "Delta Query" was presented.


For those who're not yet familiar with this concept, it is nicely summed up as: "give me all data that were created/changed/deleted since I last asked".

From the client programmer perspective, when we fetch the OData collection that supports Delta Query, the OData payload should contain this sort of href link:

<link rel="delta" href="TravelAgencies_DQ?!deltatoken='00237DD11C661ED49AFE5715E7776E7C_20141114113818'"/>

And the next time you fetch the same collection, you use this URL instead of the plain collection name - and you'll get the delta portion of the data since the last time you fetched it. This delta link string can be picked up via the deltaPath property.

id<SODataEntitySet> myEntityset = (id<SODataEntitySet>)responseSingle.payload;
NSString* deltaPath = myEntityset.deltaPath;

You might be tempted to use this value - but actually you don't need to use it explicitly; it is handled automatically by SODataStore. Here are a few notes:


  1. SDK SP6 will introduce a new Online Cache API. It makes use of the deltaPath value explained above, but as an internal feature. There is no need to be aware of the deltaPath.
  2. Don't use the deltaPath value explicitly with the offline store. The delta links are handled behind the scenes by the defining requests.

So the conclusion is: the new Cache API of the online store will make use of Delta Queries, and the offline store handles them too - both are smart enough to deal with them automatically for you (most likely, in the online case, you don't want to use the value without the Cache API).

Create with resourcePath

In blogs #7 and #8, we learned how to work with the CUD queue. Here is an interesting tip for the Create operation.


Let's assume you had created one entity. It is still in the local queue.


Right after you create an entity (that is, invoke the scheduleCreateEntity method), the requestServerResponse callback will be triggered. There you can obtain the entity which was created in the local queue.

01  if (request.mode == SODataRequestModeCreate) {
02    id<SODataResponseSingle> responseSingle = (id<SODataResponseSingle> )requestExecution.response;
03    if ([responseSingle.payload conformsToProtocol:@protocol(SODataEntity)]) {            
04      id<SODataEntity> myEntity = (id<SODataEntity>)responseSingle.payload;
05      NSString* path = myEntity.resourcePath;
06      ...

Please pay attention to line #05. It returns the local resourcePath string - it will have a funny value like "(lodata_sys_eid = X'3B3F7EB049DB43FE9B016ACDB2B4CC2D00000000')". While the entity is local you have to use this lodata_sys_eid key; once you flush & refresh, you can obtain the real ID (= the entity resource path in the OData service) by calling scheduleReadEntityWithResourcePath with the local resource path.


You can also keep using the lodata_sys_eid key even after the flush/refresh (it will keep working until the next time you close & open the offline store after the flush/refresh).
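As a minimal sketch of this re-read step: the scheduleReadEntityWithResourcePath method name comes from the text above, but the parameters beyond the resource path string (delegate, options) are assumptions here, so check your SDK headers before copying:

```objectivec
// Hedged sketch: after flush & refresh, re-read the created entity via its
// local resource path to obtain its real OData resource path.
// "self.store" and the delegate/options parameters are assumptions.
[self.store scheduleReadEntityWithResourcePath:path
                                      delegate:self
                                       options:nil];
```

In the subsequent requestServerResponse callback you would then read myEntity.resourcePath again, which now carries the server-side ID.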


Update & Delete with ETags


Have you ever heard of "ETag"?

ETag is something that needs to be implemented in the OData service. NetWeaver Gateway has supported it for a while, and it is not hard to implement. What has been happening, though, is that nobody knew how to use it :-) - until we got the offline store.

I hear some discussion about whether ETags can help with concurrent access to an entity. But ETags are not really meant to optimize concurrent access; they are instead a safety mechanism to make sure no data gets lost if operations conflict.

What does that mean? Let's have an example. If the backend OData Producer supports ETags, then the offline store will do its best to reflect that behavior on the client as well.


Consider the scenario:


  1. The application reads entity1 out of the Offline Store with ETag: etagA
  2. In the OData service side, the entity1 has been changed. - The data in client and server are different now!
  3. The application does a refresh that downloads a change to entity1 and the ETag has now changed to etagB.
  4. The application then attempts to modify entity1 using the version it read in step 1.
  5. The request will fail at step 4 on the scheduleUpdate/Delete operation and the operation will not be added to the request queue at all.


So the offline store will do its own check with the ETag values. Makes sense?

Note: You can spot the ETag value in a property on either the SODataEntity or the SODataRequestParamSingle object. When you read an entity out of the store, it already has its ETag property pre-filled with the latest known value. So as long as you use an entity object that you have read from the store for subsequent updates, you do not need to do anything.

What if we don't refresh in the step 3?

The request would succeed at step 4 (because the ETag would match the local version "etagA") and be added to the request queue. However, it would fail during the flush (because the ETag wouldn't match the server version), and you would get an error in the offlineStoreRequestFailed method.


So the ETag is not meant for optimizing concurrent access. If you expect the app to be offline without refreshing, design your backend so that Update & Delete operations never conflict in the first place, or have the backend perform its own conflict resolution instead of failing the requests.
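To make the "you do not need to do anything" case concrete, here is a minimal sketch; scheduleUpdateEntity and its parameter list are assumptions by analogy with the scheduleCreateEntity call mentioned earlier in this series, so treat this as illustrative only:

```objectivec
// Hedged sketch: update an entity previously read from the offline store.
// Its ETag property is already pre-filled (see the note above), so the
// store can perform its conflict check without any explicit ETag handling
// in application code. "self.store" and the parameters are assumptions.
id<SODataEntity> myEntity = (id<SODataEntity>)responseSingle.payload;

// ...change a property value here, then schedule the update with the SAME object:
[self.store scheduleUpdateEntity:myEntity delegate:self options:nil];
```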

That's all for the CRUD trivia :-) thanks for reading.

See you in the next blog,



List of blogs

In Agentry 6.x (prior to SMP 3.0) one would add the following line to the [Java-1] section of the Agentry.ini file of the Agentry Server:


     nonStandardJavaOptions=-Xdebug -Xrunjdwp:transport=dt_socket,address=7090,server=y,suspend=n


In this case the debugging port would be 7090, but this could be freely selected (as long as a free port was specified).


On SMP 3.0 the Agentry Server uses the JVM of the SMP itself, and adding the above line to Agentry.ini has no effect.


Now the debugging port must be specified for the entire SMP server. This can be done in (at least) one of three ways.


     1. As described by Jason Latko in this blog: Debug SMP 3.0 Agentry Java in Eclipse - SAP Mobility - SCN Wiki

     This method is very easy to use and requires no permanent changes to your SMP installation. The only disadvantage is that you cannot use it when your server is running as a service.


     2. As described by Robert Turner in a comment to this wiki thread: http://wiki.scn.sap.com/wiki/display/SAPMOB/Debug+SMP+3.0+Agentry+Java+in+Eclipse

     This method is not verified (by me) and I strongly dislike it, as you need to make changes to the Registry of your host.


     3. In the <SMP_HOME>\props.ini file, add the following 2 lines to the #jvm section (the same debug options used for Agentry 6.x above):

     -Xdebug
     -Xrunjdwp:transport=dt_socket,address=7090,server=y,suspend=n

     Now the SMP will be listening on the debug port specified (7090 in this case). This will work both when executing in the foreground (Go.bat) as well as when running as a service. This is however a permanent change and is not suited for a production environment. But for a development environment it is very transparent and easy to modify.


Please note that when enabling debugging, you can no longer control which Agentry Server you are debugging if more than one Agentry instance is deployed to your SMP 3.0 (multiple instances are supported from SP04). You will potentially be debugging all Agentry Server instances on your SMP server.


Søren Hansen.

Sample code Fridays?  Sounds good.


The last several posts have been about the HttpConversationManager and SODataStore components.  This ties in nicely with one of the other components which got a major facelift in the SDK 3.0 SP05 release:  Supportability.  Also, I'll touch on the Usage library, which isn't part of Supportability, but has a similar behavior for the purpose of this discussion.


Supportability Components

Our Supportability component refers to a set of libraries for:

  1. Client logging
  2. E2E Trace (SAP Passport/BTX XML)


When using these libraries, you typically use the SupportabilityFacade singleton to get the respective manager, which holds the context of the respective log content throughout the application session.


Client Logging:


id<SAPClientLogManager> logManager = [[SAPSupportabilityFacade sharedManager] getClientLogManager];


E2E Trace:


id<SAPE2ETraceManager> e2eTraceManager = [[SAPSupportabilityFacade sharedManager] getE2ETraceManager];


Uploading Logs

When you're ready to upload the contents of the logs or BTX document to the Mobile Platform server, you call an upload method on the manager, which takes a SupportabilityUploader as the parameter.  As you can see, the SupportabilityUploader takes both a HttpConversationManager and NSURLRequest as parameters.  We've already discussed how to construct a HttpConversationManager, so it's easy to reuse the regular manager that's used for all other data requests.  But, you need to know the URL of the endpoint where the log (or trace) should be POSTed.


SupportabilityUploader *uploader = [[SupportabilityUploader alloc] initWithHttpConversationManager:self.httpConvManager urlRequest:request];


[e2eTraceManager uploadBTX:uploader completion:^(NSError* error) {

    if (error == nil) {

        NSLog(@"upload succeeded");
    }
    else {

        NSLog(@"upload failed: %@", [error description]);
    }
}];



For the Usage library, the developer does not need to construct an uploader, since the Usage library uses Reachability settings (wifi vs. cellular data) to determine when to upload the Usage records. But it also requires a URL and a HttpConversationManager, passed in its initialization method:


[Usage initUsageWithURL:[self.baseURL clientUsageURL] httpConversationManager:self.httpConvManager];


URL Schemes Reference

For reference, here are the main URL's required for Supportability, and also Usage, and Data requests:


  • Data:        <protocol>://<host>:<port>/<appId>/
  • Client Logs: <protocol>://<host>:<port>/clientlogs
  • E2E Trace:   <protocol>://<host>:<port>/btx
  • Usage:       <protocol>://<host>:<port>/clientusage

These URL paths are as documented as of SDK 3.0 SP05/06. Please consult the documentation for your particular version if you encounter any issues.

Note that only the Data requests include the applicationId as a URL path component.  The other URL endpoints are constant for all applications on a Mobile Platform server.  The server uses the X-SMP-APPCID or X-HM-APPCID cookies passed with the request to map the uploaded documents to the correct application.


Sample Code

I've mocked up a helper category on NSURL that generates these URL paths.  It is posted (along with the code snippets shared above) in the open-source STSOData project here:  NSURL+MobilePlatform.


@interface NSURL (MobilePlatform)

@property (nonatomic, copy) NSString *appId;

// helper constructor for handling the output of MAFLogonRegistrationContext
-(NSURL *)initWithHost:(NSString *)host port:(int)port protocol:(BOOL)isSecure appId:(NSString *)appId;

// base constructor... could be changed slightly, depending on the interface of how configurations are acquired
// BaseURL string should be:  <protocol>://<host>:<port>
-(NSURL *)initWithBaseURLString:(NSString *)urlString appId:(NSString *)appId;

-(NSURL *)applicationURL;

-(NSURL *)clientLogsURL;

-(NSURL *)btxURL;

-(NSURL *)clientUsageURL;

@end



The concept is simple:  when you know the application connection settings (either from MAFLogon, in the -logonFinishedWithError: callback, or from your own code), construct a NSURL containing host, port, protocol, and applicationId.  Then, store it somewhere easily accessible:  on the AppDelegate, or a LogonHandler or DataController singleton class.


When you're setting up the SupportabilityUploader, the Usage library, or SODataStores, you can just access the getter on this base URL to construct the correct path.


Here is an example configuration of the SupportabilityUploader, implemented on a category of the class containing my shared base URL:


@implementation LogonHandler (E2ETrace)

-(SupportabilityUploader *)configuredUploader {

    // BTX upload endpoint is constant for all applications on a host:port
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[self.baseURL btxURL]];

    // Set the HttpConversationManager and request on the SupportabilityUploader
    SupportabilityUploader *uploader = [[SupportabilityUploader alloc] initWithHttpConversationManager:self.httpConvManager
                                                                                            urlRequest:request];

    return uploader;
}

@end



//Happy Weekend!

The HttpConversationManager filters


When we were starting the design for our next-generation network component, the HttpConversationManager, we looked extensively at simply reusing and extending the two common iOS networking components:  namely, NSURLSession and AFNetworking.  We're very conscious of the work in the AFNetworking community: building on NSURLConnection and NSOperation, opening side-modules for OAuth and web-sockets, implementing pinning, etc.  Also, the modern NSURLSession APIs that came with iOS 7 (the "data task" concept, file transfer, background activity, block interfaces, integrated Kerberos) gave us confidence that most enterprise-grade connectivity cases could be implemented with high-level native APIs.  This was really our goal:  if the iOS, Android, and Windows OSes provide enterprise-grade APIs, reuse them!


The limitation we hit on both of these options was native support for SAML2, which is critical in many enterprise landscapes, and for SAP's HANA Cloud Platform identity provider.  Also, we commonly have unique requirements around custom SSO tokens, etc., which need to be ordered in the HTTP request procedure.  So, taking some ideas from the open-source community, we implemented a very thin wrapper on NSURLSession with NSURLSessionDataTask, which is now the HttpConversationManager.


I've written here previously about some of the concepts in the HttpConversationManager.  What I'd like to share in this post is an explanation of the three filter types that can be added to the HttpConversationManager to do pre- and post-request processing, and to handle authentication challenges.


For most cases, no pre/post-processing is required, so you can support Basic and Client Certificate authentication by adding a single filter (Challenge).  SAML2 authentication is a post-processing operation and requires a single Response filter.  OAuth2 requires both a Request and a Response filter.  SAP has already implemented these: as a developer, you just need to set them on your HttpConversationManager.  For custom SSO procedures, you can order your custom filter(s) around the regular authentication types as necessary.


Filters on a HttpConversationManager

In short, there are three filter types:  the RequestFilter, ResponseFilter, and ChallengeFilter.


  • The RequestFilter is processed before the request is sent over the network, and can modify the request.  Also, the original request can be paused while a 2nd operation is completed (like obtaining a token from a 3rd-party service), then re-started when the RequestFilter is complete.
  • The ChallengeFilter is processed next in the HTTP request flow, if the server responds with an HTTP authentication challenge.  When a ChallengeFilter is executed, it should return a credential in response to the challenge.  This is identical to supplying a credential in the NSURLSession -didReceiveChallenge: delegate method.
  • The ResponseFilter is processed last in the HTTP request flow, after the response is complete.  Like the RequestFilter, it allows you to inspect the response payload and complete additional operations before passing the finished response to the app.  Unlike the RequestFilter, you cannot modify the payload of the response.


Request flow, from Application to Service Provider

Screen Shot 2014-11-13 at 4.22.41 PM.png


The RequestFilter



/**
 Delegate method called before executing the request.
 @param mutableRequest: the request instance which will be executed
 @param conversationManager: copy of HttpConversationManager instance, can be used for starting additional requests
 @param completionBlock: call the block when the filter finished the modification of <i>mutableRequest</i> object
 */
-(void) prepareRequest:(NSMutableURLRequest*)mutableRequest conversationManager:(HttpConversationManager*)conversationManager completionBlock:(void (^)())completionBlock;

The interface for creating a new request on the HttpConversationManager,

-(void) executeRequest:(NSMutableURLRequest*)urlRequest completionHandler:(void (^)(NSData* data, NSURLResponse* response, NSError* error))completionHandler

takes a NSMutableURLRequest as an input parameter.  The RequestFilter may touch that mutableRequest instance to add headers, etc.


The RequestFilter's -prepareRequest: method also passes a copy of the HttpConversationManager, which "can be used for starting additional requests".  This is important if, for instance, you are using OAuth2 and know that each request must include a Bearer auth token.  If the auth token isn't available, you need to call an endpoint on the IdP to get it, before sending the original request to its destination; otherwise, the original request will fail with a 403.  So, you could use the conversationManager instance to execute this secondary request.


Using a copy of the original HttpConversationManager for these types of secondary requests is convenient, but not required.  The convenience factor comes from the fact that your remaining Request, Challenge, and Response filters are still attached to the conversationManager instance, so you don't need to create a new conversation, configure it, etc.  However, it is important to know that the conversationManager instance variable is a copy of the original HttpConversationManager, and that the completed and in-progress RequestFilters have been removed from the instance's requestFilter array.  So, you don't need to worry about the infinite loop problem, but you should think for a moment to check that your secondary request on this conversationManager instance doesn't expect any discarded RequestFilters.

In general, this should never be a problem.  If a complex case existed which required all the original filters for a secondary request, you could simply initialize a new HttpConversationManager, configured specifically for this request.


Once the NSMutableURLRequest is completely prepared by the RequestFilter, you should invoke the void completionBlock().  The next RequestFilter in the HttpConversationManager will then be executed, according to the order of the HttpConversationManager's allRequestFilters array.
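To make that concrete, here is a minimal custom RequestFilter sketch, conforming to the -prepareRequest: signature quoted above.  The RequestFilter protocol name, the class name, and the header/token values are assumptions for illustration:

```objectivec
// Hedged sketch: a RequestFilter that injects a custom SSO token header.
@interface CustomTokenRequestFilter : NSObject <RequestFilter>
@end

@implementation CustomTokenRequestFilter

-(void) prepareRequest:(NSMutableURLRequest*)mutableRequest conversationManager:(HttpConversationManager*)conversationManager completionBlock:(void (^)())completionBlock
{
    // Modify the request before it goes over the network
    [mutableRequest setValue:@"my-token" forHTTPHeaderField:@"X-Custom-SSO-Token"];

    // Invoke the completion block so the next RequestFilter can execute
    completionBlock();
}

@end
```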


The ChallengeFilter



/**
 Delegate method called when an authentication challenge occurs.
 @param challenge: NSURLAuthenticationChallenge
 @param conversationManager: copy of HttpConversationManager instance, can be used for starting additional requests
 @param completionBlock: call the block when the filter finished its job. Return YES and a NSURLCredential object if the challenge is handled, or return NO and nil if the challenge is not handled. If YES is returned and the NSURLCredential object is nil, it will be handled as "no credential provided".
 */
-(void) handleChallenge:(NSURLAuthenticationChallenge*)challenge conversationManager:(HttpConversationManager*)conversationManager completionBlock:(void (^)(BOOL useCredential, NSURLCredential* credential))completionBlock;

The ChallengeFilter is *only* executed when the NSURL loading system invokes the handler of NSURLAuthenticationChallenge.  Inside the HttpConversationManager, after the RequestFilters are all completed, a NSURLSessionDataTask is created from the NSMutableURLRequest, and sent to the server.  When the NSURLSessionDelegate method


- (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task didReceiveChallenge:(NSURLAuthenticationChallenge *)challenge completionHandler:(void (^)(NSURLSessionAuthChallengeDisposition disposition, NSURLCredential *credential))completionHandler


is invoked, the array of `challengeFilters` is enumerated.


The SAP SDK ships with standard ChallengeFilters for UsernamePassword (Basic) and ClientCert.  When they are invoked, they check the protectionSpace.authenticationMethod value of the NSURLAuthenticationChallenge, to see if they should try to supply a credential.  For example, if the authenticationMethod == NSURLAuthenticationMethodClientCertificate, the ClientCertChallengeFilter will try to supply a credential.
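The same protectionSpace check applies if you write your own ChallengeFilter.  As a sketch of the pattern (the -handleChallenge: signature is the one quoted earlier; the hard-coded credential is a placeholder):

```objectivec
// Hedged sketch: a ChallengeFilter that answers HTTP Basic challenges with a
// hard-coded credential, and passes on every other challenge type.
-(void) handleChallenge:(NSURLAuthenticationChallenge*)challenge conversationManager:(HttpConversationManager*)conversationManager completionBlock:(void (^)(BOOL useCredential, NSURLCredential* credential))completionBlock
{
    NSString *method = challenge.protectionSpace.authenticationMethod;

    if ([method isEqualToString:NSURLAuthenticationMethodHTTPBasic]) {
        NSURLCredential *credential = [NSURLCredential credentialWithUser:@"myName"
                                                                 password:@"myPassword"
                                                              persistence:NSURLCredentialPersistenceForSession];
        // YES + credential: this filter handled the challenge
        completionBlock(YES, credential);
    } else {
        // NO + nil: not our challenge type; let the next ChallengeFilter try
        completionBlock(NO, nil);
    }
}
```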


Adding CredentialProviders

The way that the ChallengeFilters find the credentials to supply is through an array of attached Providers.  A Provider is a protocol which a developer can implement, with their own customizations to match their particular landscape, MDM/MAP provider, SCEP provider, etc.  These can be as simple as just returning a hard-coded username/password, showing a secure-style UIAlertView, or more complex:  calling a 3rd-party API to get a client certificate, showing a custom UI, etc.


The simplest version is a UsernamePasswordProviderProtocol, which can be as easy as just returning a NSURLCredential with username & password from code.


- (void)provideUsernamePasswordForAuthChallenge:(NSURLAuthenticationChallenge *)authChallenge completionBlock:(void (^)(NSURLCredential *, NSError *))completionBlock
{
    NSURLCredential *credential = [[NSURLCredential alloc] initWithUser:@"myName"
                                                               password:@"myPassword"
                                                            persistence:NSURLCredentialPersistenceForSession];

    completionBlock(credential, nil);
}


The ClientCertProviderProtocol has a similar signature:  after obtaining the client certificate from the keychain, DataVault, 3rd-party API, etc., pass it into the completion block as a NSURLCredential:


- (void) provideClientCertForAuthChallenge:(NSURLAuthenticationChallenge*)authChallenge completionBlock:(void (^)(NSURLCredential*, NSError*))completionBlock
{
    // obtain the certificate(s) and construct SecIdentityRef

    NSURLCredential *credential = [[NSURLCredential alloc] initWithIdentity:mySecIdentityRef
                                                               certificates:nil
                                                                persistence:NSURLCredentialPersistenceForSession];

    completionBlock(credential, nil);
}



For the standard Basic auth & client certificate auth cases, I'd recommend just using the ChallengeFilters out-of-the box from the SDK, implementing the Provider to supply the user credentials.


If you're implementing your own ChallengeFilter (this is ok, even for the standard types, so long as you substitute your filters for the standard ones on the HttpConversationManager, or order them in-front of the standard ones) then you might not bother with using the Provider protocol--you might just supply the credential directly from within the filter.  In this case, make sure that the NSURLCredential is passed into the -handleChallenge: completionBlock.



    // filter code, and obtain the NSURLCredential

    completionBlock(YES, credential);



The ResponseFilter

The response filter behaves exactly like the request filter, except it is executed on a successful NSURL response, and the contents of the response are not mutable.


The developer has access to the NSURLResponse and NSData payload, and can execute custom read procedures before calling the completion block. The primary use case for the response filter is trapping response payloads related to authentication operations. To this end, the method signature contains a parameter named shouldRestartRequest, which should be set to YES in the event that the response contains an authentication token, or an authentication redirect (as in the SAML2 case), and the original requests can now be successfully authenticated.


In general, the response filter should not be used for handling the response payload, as the developer expects to get the successful payload in the completion block of the request. But the response filters are executed before payloads are passed to the parser under the SODataStore, so they can be an effective way to trap error responses, especially HTML content, from a server before it ends up in the OData parser.



In summary, the HttpConversationManager's Request- and ResponseFilters give you the ability to touch requests from the application before they go to the server, and to handle the responses from the server before they're returned to the application. The ChallengeFilters give you the ability to respond to an NSURLAuthenticationChallenge, in the same way you normally would within the NSURLSessionDelegate.  To implement your own filters, conform to the protocol and add them to the respective array on the conversation manager.

The MAF Logon component is one of the most common reusable components of the Mobile Application Framework (MAF) and it provides easy integration for applications that use logon UI behavior. It is a key component, because before any communication can take place with a backend OData producer, the app needs to on-board users onto the SAP Mobile Platform.


To use the MAF Logon Component you must import a number of libraries and resources into Eclipse. You can find these libraries and resources in the folder you specified when you executed the SMP Client SDK installer. For a step-by-step guide on how to integrate the MAF Logon component into your Android project, you can check this guide: How To... Enable User On-boarding using MAF Logon.


Although the process to integrate MAF Logon into your android project hasn’t changed in SP05, there are several additions that are worth mentioning:


What’s New in SP05?


First, all classes that implement LogonListener must now implement a new method, onRefreshCertificate. This method is called after a certificate refresh is triggered at the provider and completes, either successfully or not. If you are not using certificates, you can leave this method empty as shown below.


New method in LogonListener interface


public void onRefreshCertificate(boolean success, String errorMessage) {
      // TODO Auto-generated method stub
}

Second, even though SMP 3.0 SP05 on-premise does not support SAML, MAF Logon supports SAML authentication for SAP HANA Cloud Platform mobile services. Developers must define the SAML activity in the Android manifest file as shown in the following code snippet.


New activity in android manifest file
<activity android:name="com.sap.smp.client.httpc.authflows.SAML2AuthActivity"></activity>


Finally, the LogonUIFacade contains a new method, getLogonConfigurator, that returns a manager configurator for the HttpConversation library. The configure method assigns the conversation manager all the information and filters needed to respond to authentication challenges when requesting information from the backend. Below you will find a code snippet that gets the manager configurator and configures a conversation manager.


New method in LogonUIFacade class

IManagerConfigurator configurator = LogonUIFacade.getInstance().getLogonConfigurator(context);

HttpConversationManager manager = new HttpConversationManager(context);

configurator.configure(manager);

For more information about the MAF Logon component, please visit help.sap.com

Today, I found a use case for testing out the $filter operation on the SODataOfflineStore, and was really pleasantly surprised by the results.  My problem was that the back end has some garbage data, so I wanted to filter out all Contacts which do not have a value for "function" or "company".

Unfortunately, "company" and "function" are not filterable on the back end.  So, if I want to clean up the data, I'm going to have to do it client-side. 

This is a bit of a lab scenario, since in production, I'd expect my data to be clean.  But, it will show how I can use OData $filter method on entries in a client side database with the SODataOfflineStore APIs, and I can share a simple benchmarking procedure to test the performance in your code.


The SODataOnlineStore approach


The dirtiest option here is to query everything, then filter the result set on the device with an enumeration, on every request.  If I'm using the "SODataOnlineStore", this is basically what I'll be stuck with.  My -fetchContacts: code would look something like this:


-(void)fetchContactsWithCompletion:(void(^)(NSArray *entities))completion {

    NSString *resourcePath = @"ContactCollection";

    [self scheduleRequestForResource:resourcePath withMode:SODataRequestModeRead withEntity:nil withCompletion:^(NSArray *entities, id<SODataRequestExecution> requestExecution, NSError *error) {

        if (entities) {

            NSMutableArray *completeEntities = [[NSMutableArray alloc] init];

            [entities enumerateObjectsUsingBlock:^(id<SODataEntity> obj, NSUInteger idx, BOOL *stop) {

                NSDictionary *properties = obj.properties;

                NSString *company = (NSString *)[(id<SODataProperty>)properties[@"company"] value];
                NSString *function = (NSString *)[(id<SODataProperty>)properties[@"function"] value];

                if (company.length > 0 && function.length > 0) {
                    [completeEntities addObject:obj];
                }
            }];

            completion(completeEntities);

        } else {

            NSLog(@"did not get any entities, with error: %@", error);
            completion(nil);
        }
    }];
}

Here, I query for the entire ContactCollection, get back an NSArray of entities, enumerate the entities to keep only those with values for "company" and "function", then call the completion block to pass along the entities that pass the test.


Running this on every single request is really inefficient.  To improve the speed of loading UI views in the application, I would probably create a singleton Model class, with a property to store only those entities which pass the test, so that I don't end up running the enumeration block every time I refresh a UI.


But, there's a better way, if I'm using the SODataOfflineStore:


The SODataOfflineStore solution

The concept behind the SODataOfflineStore is that when you initialize your application for the first time, you supply a list of "defining requests", which are analyzed to construct the local Ultralite database schema, and then executed on the back end, to populate the database.  (See some additional details about the behavior of these defining requests here.)  The SODataOfflineStore APIs read and write to the local database; CUD entries executed locally are "flushed" to the server, and changes on the server are "refreshed" to the database.


One major benefit of using "defining requests" to populate the database is that the bulk download and insertion into the local database is optimized (using the MobiLink protocol), so it is very fast to download very large sets of records; once the records are in the local database, it is much faster to read/write locally than to communicate over the network.


So, let's return to this problem of filtering out the garbage data:  I can't use OData to $filter on the attributes directly, so I'm going to end up downloading all the records to the device anyway.   In that case, let's specify "ContactCollection" as a defining request, so that I can use the optimized Mobilink protocol to download the Contact records in bulk, and then I have them in the local database, where it's much faster to read.


I configure my SODataOfflineStore with a set of SODataOfflineStoreOptions, which is where I set my defining requests:


- (SODataOfflineStoreOptions *)options {

    SODataOfflineStoreOptions *options = [[SODataOfflineStoreOptions alloc] init];

    options.enableHttps = self.data.isHttps;
    options.host = self.data.serverHost;
    options.port = self.data.serverPort;
    options.serviceRoot = [NSString stringWithFormat:@"/%@", self.data.applicationId];
    options.definingRequests[@"req1"] = @"ContactCollection";
    options.enableRepeatableRequests = NO;

    options.conversationManager = self.httpConvManager;

    return options;
}

Defining requests are stored in a dictionary, where keys should be named as:  [NSString stringWithFormat:@"req%i", index + 1];

i.e.:  [req1, req2, req3, ...]
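If you have several defining requests, the key naming above can be generated in a small loop.  A sketch (the collection names are illustrative placeholders):

```objectivec
// Hedged sketch: build definingRequests keys as req1, req2, ... following
// the naming rule above. Collection names are placeholders.
NSArray *collections = @[@"ContactCollection", @"AccountCollection"];
NSMutableDictionary *definingRequests = [NSMutableDictionary dictionary];

[collections enumerateObjectsUsingBlock:^(NSString *resourcePath, NSUInteger idx, BOOL *stop) {
    NSString *key = [NSString stringWithFormat:@"req%lu", (unsigned long)(idx + 1)];
    definingRequests[key] = resourcePath;
}];
// definingRequests now maps req1 -> ContactCollection, req2 -> AccountCollection
```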


Once my SODataOfflineStore is opened, I'll have access to the complete Contact collection.  Now, I can write my -fetchContacts: method to use standard OData v2 $filter semantics, saving me the cost of enumerating the returned id<SODataEntity> entities (gist link).


-(void)fetchContactsWithCompletion:(void(^)(NSArray *entities))completion {

    NSString *resourcePath = @"ContactCollection?$filter=length(company) gt 0 and length(function) gt 0";

    [self scheduleRequestForResource:resourcePath withMode:SODataRequestModeRead withEntity:nil withCompletion:^(NSArray *entities, id<SODataRequestExecution> requestExecution, NSError *error) {

        if (entities) {

            completion(entities);

        } else {

            NSLog(@"did not get any entities, with error: %@", error);
            completion(nil);
        }
    }];
}

What good is this comparison without some real numbers?  The SAP Mobile SDK actually has a new feature we snuck into SP05 named "Usage".  The majority of the Usage features are currently dependent on the upcoming HANA Cloud Platform Mobile Services release, so I won't talk about it in detail, but there's a very useful little helper class named "Timer" that we can use to benchmark the performance of filtering through enumeration (option 1) versus $filter in the database (option 2).


The Timer object is generated with a factory method on Usage, where I pass a name.  It has a simple method "stopTimer" that I can invoke directly; then, I can read out the duration in milliseconds.


-(void)fetchContactsWithCompletion:(void(^)(NSArray *entities))completion {

    NSString *resourcePath = @"ContactCollection?$filter=length(company) gt 0 and length(function) gt 0";

    Timer *t = [Usage makeTimer:@"ContactFilter"];

    [self scheduleRequestForResource:resourcePath withMode:SODataRequestModeRead withEntity:nil withCompletion:^(NSArray *entities, id<SODataRequestExecution> requestExecution, NSError *error) {

        if (entities) {

            [t stopTimer];
            NSLog(@"t = %@", [t duration]);

            completion(entities);

        } else {

            NSLog(@"did not get any entities, with error: %@", error);
            completion(nil);
        }
    }];
}


     2014-11-05 14:40:45.163 SAPCRMOData[11255:3299863] t = 5.281984806060791

In normal practice, the Timer object is stopped by invoking [Usage stopTimer:t], which results in the record being written to the Usage database.  Calling stopTimer directly on the Timer object will not result in the record being saved.
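The distinction between the two ways of stopping a timer can be shown in a short sketch (class and method names are taken from the description above; treat this as illustrative, not a verified header listing):

```objectivec
// Create a named timer via the factory method on Usage
Timer *t = [Usage makeTimer:@"ContactFilter"];

// ... work to be measured ...

// Option A: stop the timer directly and read the duration yourself;
// no record is written to the Usage database
[t stopTimer];
NSLog(@"duration = %@", [t duration]);

// Option B (normal practice): stop via the Usage class instead, which
// also persists the record to the Usage database
// [Usage stopTimer:t];
```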


My database has 200 Contact records, of which 26 entities meet the criteria of length(company) > 0 and length(function) > 0.



  • Executing the filter as an enumeration on results from local database averaged between 18.5 and 19 milliseconds.
  • Executing the $filter in the database averaged between 5.2 and 5.6 milliseconds.


Executing the filter as an enumeration on results from the network averaged ~2.3 seconds round-trip.

Error Code 1202 is always a bummer:  your server certificate is untrusted.  Untrusted server certificate errors can be a pain on iOS simulators; the certificate-install capabilities have improved incrementally over iOS 6/7/8, but seem to be inconsistent from one Xcode release to the next.  The standard community approach continues to be to disable TLS security features in your code, which eliminates the server trust guarantees.  But this opens security holes which you must close before shipping or distributing test clients to your end users.


Apple published a technical note on HTTPS Server Trust Evaluation, in which it references these common community recommendations and disabuses them with stern warnings.  Its official recommendation for an invalid certificate is to fix the problem on the server; for an unknown certificate authority, include a copy of the certificate from the custom certificate authority in your application, create a certificate object, then set the certificate as the trusted anchor for the trust object.


If you plan to package the server certificate in your application in production, then this approach is of course fine.  But if your certificate will already be installed on-device, through an MDM profile or similar, then a shortcut for development in the simulator is to install the certificate in your simulator's 'device' trust store.  This neither compromises the integrity of the TLS verification framework nor requires additional 'simulator-only' code.  It takes about 3 minutes.

Note:  this methodology is not supported by Apple or SAP, and could be broken in the future by incompatible changes to Xcode and the iOS Simulator.  It has been confirmed to work on iOS 5, 6, and 8.0/8.1.


1.  Get a copy of the certificate from the server.

  • Navigate to a page on the server in Firefox
  • Click on the 'lock' icon in the navigation bar, and click 'More Information' to see the 'Security' tab
  • Click on 'View Certificate' link, then switch from the 'General' view to the 'Details' view
  • In the 'Certificate Hierarchy' window, select <the cert name>, then click 'Export...'
  • Save As: CertName.pem (add the .pem suffix to the file name, as well as setting the format to PEM)
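If your browser hands you a binary DER file rather than PEM, openssl can convert it.  A minimal sketch of the round-trip (all file names and the CN here are placeholders, and a throwaway self-signed certificate stands in for the one exported from your real server):

```shell
# Generate a throwaway self-signed cert to stand in for the exported one
# (demo.example.com and the /tmp paths are placeholders)
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo_key.pem -out /tmp/demo_cert.pem \
  -days 1 -nodes -subj "/CN=demo.example.com" 2>/dev/null

# Simulate a browser's binary DER export
openssl x509 -in /tmp/demo_cert.pem -outform DER -out /tmp/demo_cert.der

# Convert DER back to the PEM format the trust-store tooling expects
openssl x509 -inform DER -in /tmp/demo_cert.der -out /tmp/CertName.pem

# A PEM certificate begins with the BEGIN CERTIFICATE banner
head -1 /tmp/CertName.pem
```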


For some sites, you may be able to drag & drop the .pem file onto the screen of your iOS simulator, click "Install" when prompted, and operate as desired.  It's worth a try.


If this does not resolve the issue, continue as follows:

2.  Download an open source python script by the team at ADVTOOLS.  Extract the iosCertTrustManager.py file

3.  Run the iosCertTrustManager.py file, giving it the location of the certificate to be installed on your simulator.


i826181$ python iosCertTrustManager.py -a ~/Documents/SAPNET_CA.pem


subject= C = DE, O = SAP-AG, OU = SAP-AG, CN = *.wdf.sap.corp


Import certificate to Resizable iPad 8.1 [y/N] y

Importing to /Users/i826181/Library/Developer/CoreSimulator/Devices/20BECA4E-76F5-4660-A190-C0F3BF021EF9/data/Library/Keychains/TrustStore.sqlite3

  Certificate added

Import certificate to iPad Air 8.1 [y/N] y

Importing to /Users/i826181/Library/Developer/CoreSimulator/Devices/977AB354-C155-42C6-99BC-F52C330D1D48/data/Library/Keychains/TrustStore.sqlite3

  Existing certificate replaced

Import certificate to iPhone 6 Plus 8.1 [y/N] y

Importing to /Users/i826181/Library/Developer/CoreSimulator/Devices/A8361243-2B2A-487E-81EE-F86C9EE9A920/data/Library/Keychains/TrustStore.sqlite3

  Certificate added

Import certificate to iPhone 6 8.1 [y/N] y

Importing to /Users/i826181/Library/Developer/CoreSimulator/Devices/B67C61CA-4CFA-4D19-AB84-431150BAF59F/data/Library/Keychains/TrustStore.sqlite3

  Existing certificate replaced


You will be prompted to install the certificate on each simulator you've run.


At this point, try hitting your server from the application:  you should be able to complete the SSL handshake.


Note:  ADVTOOLS is not (known to be) an affiliate of SAP. 

