SAP Notes are great sources of information on issues occurring in the code or documentation of a product. Use them to find out what issues impact a particular release and how you can work around them.


When can SAP Notes help you?


  • You’re considering whether to upgrade your system to a newer version and you want to know what issues affect that version. Look up the SAP Notes for the product so you can plan and work around them quickly and efficiently.
  • You recently upgraded your system and you suddenly see an error message you’ve never seen before. Search for SAP Notes with that error message text to find out how to get around the issue.
  • You’re stuck on a particular task and you’re not sure what to do next. Look at the SAP Notes for the product – it’s possible that one of the notes details something missing from the documentation that can help you work around the issue.
  • You’re looking for sizing information - dynamic tiering, for example, has an SAP Note that summarizes current sizing recommendations.


Where can you find SAP Notes?

  1. Go to the Support Portal.
  2. Select Knowledge Base & Incidents in the top menu, then SAP Note & Knowledge Base Article Search.
  3. Click the Launch the SAP Note and KBA search button.
  4. Enter the product name in the Search Term field to find all notes for that product.




OR, if you know the number of the note you’re looking for, simply type it in the Number field in the top right corner.


TIP: Dynamic tiering has a central SAP Note for each release that lists all notes associated with that product version. For example, to find the central note for the SPS 11 release, search for "dynamic tiering" and look for the note titled "SAP HANA Dynamic Tiering SPS 11 Release Note."




You can also find SAP Notes on the Help Portal


The Help Portal has a direct link to the current release’s central SAP Note. To find the central SAP Note for dynamic tiering:

  1. Go to the SAP HANA Dynamic Tiering documentation on the Help Portal.
  2. You can find a link to the central SAP Note for the release under the What’s New – Release Notes section. This note has direct links to all issues actively impacting this product version.

For example, for SAP HANA dynamic tiering SPS 11, the central note is


Did you know that…?


Once an issue is resolved in a particular version, its SAP Note is archived and is no longer publicly available through that version's central SAP Note.


What has your experience been with SAP Notes?


Have you had issues finding them? Do you find the content helpful or do you think it could be improved in some way? Feel free to post any comments or feedback regarding smart data streaming SAP Notes.


[Writer's Note: Thanks to Claudia Gosav, who wrote the original version of the post for Smart Data Streaming and granted permission for me to adapt and rebrand it.]

Note: all the steps below are performed with the HANA SYSTEM user. If you use another HANA user, grant it the required roles and privileges as needed.


  • Enable Data Provisioning Server


  • Import Data Provisioning Delivery Unit

Download SAP HANA SDI 1.0 and import the Data Provisioning delivery unit. You can later use the UI delivered with it to monitor replication task status.

  • Grant Data Provisioning Monitor privilege to SYSTEM

          Log on to HANA as SYSTEM and execute:
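The statement itself is only shown as a screenshot above; with the standard SDI delivery unit it would typically be a grant along these lines (the exact role name depends on the delivery unit version, so treat it as an assumption):

```sql
-- Grant the monitoring role delivered with the Data Provisioning
-- delivery unit to SYSTEM (role name assumed from the standard DU):
CALL GRANT_ACTIVATED_ROLE('sap.hana.im.dp.monitor.roles::Monitoring', 'SYSTEM');
```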



  • Test DP monitor cockpit

         You should now be able to access the UI of the Data Provisioning delivery unit:




  • Install Data Provisioning Agent in your Hadoop server

Note: you may install the agent in a separate Linux server.

Log on to the Linux server as root and create a new OS user named dpagent.


Extract the installation file in /home/dpagent/installation:

./SAPCAR_617-80000935.EXE -xvf ./IMDB_DPAGENT100_01_0.sar


Install Data Provisioning agent:

cd /home/dpagent/installation/HANA_DP_AGENT_20_LIN_X86_64
./hdbinst --silent --batch --path="/usr/sap/dpagent" --user_id=dpagent

                --path: the directory where the agent is installed.

                --user_id: the OS user that runs the dpagent service.



  • Start and Connect DPAgent to SAP HANA

Use root user:

cd /usr/sap/dpagent
vi ./dpagentconfig.ini

Modify the following configurations:

Start the agent:

nohup ./dpagent &


  • Register DPAgent

Start Xming on your terminal server.

Log in as root to the Red Hat server running the Data Provisioning Agent:

cd /usr/sap/dpagent/configTool

Launch the configuration tool (e.g. ./dpagentconfigtool.sh; the script name may vary by agent version). You should then see the dialog below to configure the Data Provisioning Agent:


Click ‘Connect to HANA’.

HANA database user: SYSTEM

HANA database user password: password_of_SYSTEM

Then you should connect to HANA:


Click Register Agent, and input:

Agent Name: mydpagent   (you can use your own name instead of this)

Agent Hostname: vcRHL660 (replace this with your own server name)


Click Register, then your agent should be registered successfully:


Now we need to register the HiveAdapter distributed by SAP.

Select HiveAdapter and click ‘Register Adapter’:


The ‘Registered with HANA’ should be ‘Yes’:


  • Copy Hive JDBC to /usr/sap/dpagent/lib/

Now let’s configure the HiveAdapter.

Download Hive 0.13.1 from the below link:

               Note: please download version 0.12.0, 0.13.0, or 0.13.1.

Log on as root to the Linux server running the Data Provisioning Agent and copy the jar files into /usr/sap/dpagent/lib/.




  • Create Remote Source using ‘SYSTEM’

Log on to HANA in HANA Studio as SYSTEM and create a new remote source that points to Hive through your Data Provisioning Agent:


Please enter the information according to your own environment:

Source Name:       HIVE_EIM (you can enter your own source name)
Adapter Name:      HiveAdapter
Source Location:   mydpagent (the agent configured above)
Hive Version:      0.13.1
Host:              your own Hive server name
Port:              10000 (the Hive port)
Database Name:     test (your own Hive database name)
Use SSL:           false
Credentials Mode:  Technical User
Username:          hive
Password:          hive


  Then save and activate it. You should now see HIVE_EIM under Remote Sources:


You can create a virtual table based on your remote table:
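In SQL, the equivalent statement might look like the following (the schema, database, and table names are placeholders for illustration):

```sql
-- Create a virtual table over a remote Hive table exposed by the
-- HIVE_EIM remote source; adjust the remote path to your environment
-- (all object names here are hypothetical):
CREATE VIRTUAL TABLE "SYSTEM"."V_HIVE_ORDERS"
  AT "HIVE_EIM"."HIVE"."test"."orders";
```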


Data Preview from the virtual table:


  • Grant Role and Privilege to ‘SYSTEM’

If you would like to create a Replication Task in HANA, you should grant the role and privilege below:



Grant 'sap.hana.xs.ide.roles::EditorDeveloper' to SYSTEM user:

CALL GRANT_ACTIVATED_ROLE('sap.hana.xs.ide.roles::EditorDeveloper','SYSTEM');




  • Create Replication Task



Log in as SYSTEM and create a Replication Task:


Enter your Replication Task name, such as 'HIVE2HANA', then fill in the information as below:





When it has been saved and activated successfully, log on to HANA in HANA Studio as the SYSTEM user. You should find the objects below, created by HANA automatically:


HANA creates a virtual table that points to your remote Hive table; you can review the data through this virtual table. A destination table is also created, which is empty at first. Execute the procedure created by your replication task to load the remote Hive data into your local HANA table. Double-click the procedure to review its code:


Then open a SQL console and execute the procedure:



  After it executes successfully, you can preview the data in your local table.


  • Monitoring the replication tasks

You can also monitor the Data Provisioning Agent and replication tasks in the UI of the Data Provisioning delivery unit you imported earlier.

Please open this link to monitor the Data Provisioning Agent status:




Open this link to monitor the replication tasks status:







I have a HANA XS application that can be deployed to separate tenant databases when my HANA system is set up with Multitenant Database Containers (MDC).  Naturally, I require HTTP access to those tenant databases, and I ran into problems configuring this for both my SystemDB and my tenant databases.


I found the excellent post by Wenjun Zhou Exposing HTTP access to multitenant database containers in SAP HANA SPS09 and although the procedure is different for HANA SPS11, it pointed me in the right direction.  The documentation also has a section on this topic (Configure HTTP(S) Access to Multitenant Database Containers - SAP HANA Administration Guide - SAP Library) that was useful, but there were a few steps that needed extra attention to complete the process, which I'll describe in this blog post.


Note: I'll only discuss HTTP access here, but the procedure is similar to configure HTTPS access.




  • The HANA databases in your system should already be set up for access via DSN alias hostnames.
  • The Web Dispatcher is set for automatic configuration.
  • You have already created one or more tenant databases in your HANA MDC system.


There are a few other prerequisites and the list is found in the documentation link above.


Accessing the XS Engine


By default, you can access the XS engine for the SystemDB by opening the URL http://<fqdn_hostname>:80<instance>, where "fqdn_hostname" is the host's fully qualified domain name (FQDN).  For example, if my HANA system is "" and the instance number is 00, then the XS engine URL for the SystemDB is


Now, if you try to access the XS engine for one of your tenant databases (assuming you've set up its DSN alias hostname), you'll immediately get the following error:

503 Service not available

For example, if you created a tenant database called "TN1" with alias "mdc-tn1", then the XS engine for that database is - however you'll get that 503 error when you try to open this URL.

Configure HTTP Access for the Tenant Database

What you must do now is specify the public URLs for all your tenant databases.  Here's how you do this in HANA Studio (example with SQL is below as well):

  1. Connect to the SYSTEMDB (not the tenant database) as SYSTEM and open Administration.
  2. Switch to the Configuration tab and expand xsengine.ini | public_urls.  Double-click on "public_urls" to invoke the Change Configuration Values dialog.


  3. Click the Databases drop-down box, select the tenant database(s) you want to configure and click OK.


  4. Provide the new URL using the alias hostname and click Save.


You can now access the XS engine for the tenant database using its alias URL without getting the 503 error.

If you want to use SQL instead of HANA Studio, you can execute the following statement in a SQL Console.  Don't forget to connect to the SYSTEMDB (not the tenant database!) as the SYSTEM user.

ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'database', 'TN1')
SET ('public_urls', 'http_url') = ''
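You can verify that the parameter was written correctly by querying the M_INIFILE_CONTENTS monitoring view from the SystemDB:

```sql
-- Show the effective public_urls settings from xsengine.ini,
-- including any database-specific layers:
SELECT * FROM SYS.M_INIFILE_CONTENTS
 WHERE FILE_NAME = 'xsengine.ini'
   AND SECTION   = 'public_urls';
```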

Re-Configure HTTP Access for the SystemDB

If you now try to access the SystemDB XS engine using the same URL as before, you'll find that you now get (oh no!):

503 Service not available

The documentation clearly states that the URL for the SystemDB changes once you've configured the URLs for the tenant databases.  Instead of using the FQDN, the SystemDB XS engine is available under http://<localhost>:80<instance>.  In our example, this means that the XS engine would now be found at http://mdc:8000.

What you need to do now is execute the following SQL statement to re-configure the SystemDB URL to use the alias host name:

ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'system')
SET('public_urls', 'http_url') = ''

Note that if you try to make this change in HANA Studio by editing the xsengine.ini configuration file, the change will not take effect.

Voila!  You now have access to the XS engines for the SystemDB and tenant databases using the alias hostnames.

Final Thoughts


You can spend a lot of time trying to figure out how to enable HTTP access to your SystemDB and tenant databases.  Usually once you configure the tenant databases properly, you scratch your head determining why the SystemDB URL no longer works.  The procedure to fix this is quite straightforward, once you know where to look.  At a minimum, following these steps will allow you to open each tenant database's HANA Cockpit, as that's also a HANA XS application.

1. Windows users: remember to use the "Node.js command prompt" and NOT "Node.js". When you install Node.js on Windows, the installation also adds icons for quick launch. If you use the "Node.js" launcher, it opens what looks like a Node.js console, but when you try to run any command there, it ends in a strange error.








Complete list of installers is available here.



2. Install packages; Node.js does not contain everything out of the box. Depending on your logic, your Node.js code may use various libraries. Don't assume the initial setup makes everything available to you: you may need to add packages explicitly to your environment. npm (the Node.js package manager) is a console command that installs those packages.


Packages like express, jQuery, bower, underscore, debug, request, etc. need to be installed explicitly. The complete list of npm packages is available here.



The syntax is like below


npm install <packagename>



npm i <packagename>  (in case you'd rather not type out "install")

3. Want to test Node.js online? Try Windows Azure. Windows Azure provides a one-month free trial for hosting web applications online. Azure comes with many useful features for managing web content, and it also supports integration with GitHub for source code versioning.


Important: if you want to deploy a Node.js web application on Azure, you have to use the port Azure assigns (available as process.env.PORT); a hard-coded port number may not work on Azure.

Smart Data Access (SDA) is a slightly older feature in HANA. I recently got a chance to set it up for SQL Server 2012. I read lots of documents, blogs, and YouTube videos but still ran into issues and couldn't implement it on the first try; it took me two or three iterations to get it installed correctly. So I decided to write down the exact steps to follow to implement SDA for SQL Server. With these steps, you should be able to set up SDA for SQL Server on the first attempt without wasting much time.

SDA Definition:


SAP HANA smart data access enables remote data to be accessed as if it were stored in local SAP HANA tables, without copying the data into SAP HANA. Specifically, in SAP HANA you create virtual tables that point to remote tables in other data sources, and you can then write queries that combine HANA native tables and virtual tables.
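For example, once a virtual table exists, a federated query is just normal SQL (all object names below are made up for illustration):

```sql
-- Join a native HANA table with a virtual table pointing to a remote
-- source; HANA federates the remote part of the query transparently:
SELECT c."CUSTOMER_NAME", SUM(o."AMOUNT") AS "TOTAL"
  FROM "MYSCHEMA"."CUSTOMERS"       c   -- native HANA table
  JOIN "MYSCHEMA"."V_REMOTE_ORDERS" o   -- virtual table (remote data)
    ON c."CUSTOMER_ID" = o."CUSTOMER_ID"
 GROUP BY c."CUSTOMER_NAME";
```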

SDA Architecture:


We are configuring HANA server SH1 and connecting it to SQL Server and another HANA database.

Steps to Configure SDA :


a.      Linux Users

b.      Download / INSTALL unixODBC Driver Manager

c.      Download / INSTALL SQL Server Drivers

d.      SETUP .odbc.ini file

e.      Testing

f.      Setup SDA for SQL server in HANA STUDIO



a. Linux Users

I am sure everyone knows about users, but just to reiterate: when you install HANA as the root user, it automatically creates a <SID> user. If you don't know the <SID> user, you can find it by running the following command:

>> cat /etc/passwd


Root User:


As the root user you will set up the complete configuration of the HANA system, including unixODBC and the other ODBC drivers.




The <SID> user may not have permission to set up configuration files unless given special permissions; generally, all HANA configurations are done as the root user.

HANA Studio is owned by the <SID> user, so it cannot see some of the configuration files you created as the root user.



b. Download / install the unixODBC 2.3.0 Driver Manager


Download unixODBC 2.3.0 (unixODBC-2.3.0.tar.gz).




Once you have downloaded the driver (unixODBC-2.3.0.tar.gz), move it to the HANA server (you can use FileZilla or other file transfer software).


Log in as root and go to the directory where you copied the driver.

>>  gunzip unixODBC-2.3.0.tar.gz

>>  tar xvf  unixODBC-2.3.0.tar

>>  ls 


You will see the unixODBC-2.3.0 folder. Run the following commands to build and install the unixODBC drivers:


>> cd unixODBC-2.3.0

>> ./configure

>> make

>> make install


Once the build is done, check whether the drivers installed properly:

>> isql --version

The output should be unixODBC 2.3.0.

c. Download / install SQL Server drivers:


Download the MS SQL Server driver for SUSE Linux:

(search for SQL Server drivers for other flavors of Linux)


Once downloaded, copy the SQL Server drivers to the HANA server (for example, to the same location where you placed the unixODBC file, using FileZilla or other file transfer software).


Install the MS SQL Server drivers. First, unpack the file:


Login as root user

>> gunzip msodbcsql-11.0.2260.0.tar.gz

>> tar -xvf msodbcsql-11.0.2260.0.tar


This extracts a msodbcsql-11.0.2260.0 folder (in this case under /mnt/drivers).

To check whether the SQL Server drivers unpacked properly, change into that directory:


>> cd /mnt/drivers/msodbcsql-11.0.2260.0


Verify the version and install:


>> ./install.sh verify


>> ./install.sh install


Check again to see if it’s installed.


>> ./install.sh verify


d. SETUP .odbc.ini file


Check the odbc.ini file by using the command


Login as root user


>> odbcinst -j



The odbc.ini file is visible at two locations: one in the root home directory and one at /etc/unixODBC/odbc.ini.


Note: if you want all users to have access to the .odbc.ini file, add the configuration as the root user by putting the entries in the "/root/.odbc.ini" file.



>> vi /root/.odbc.ini       -- open the file, enter the following, and save it


----------------------------  MS SQL SERVER DRIVERS --------------------------
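The actual entries are shown as screenshots in the original post; a minimal sketch of the MSSQL DSN entry might look like this (the driver path and server host are assumptions for your environment):

```ini
[MSSQL]
Driver = /opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2260.0
Server = <your_sqlserver_host>,1433
```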










Log in to the HANA server as root (in PuTTY) and test connectivity to SQL Server:


Login As root user

>> isql -v MSSQL sa Welcome1


This works for the root user.
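If you prefer SQL over the Studio UI, the remote source can also be created with a statement along these lines (the adapter name "mssql" and the source name are assumptions that may vary by HANA revision):

```sql
-- Create an SDA remote source that uses the MSSQL DSN from .odbc.ini:
CREATE REMOTE SOURCE "MSSQL_SRC" ADAPTER "mssql"
  CONFIGURATION 'DSN=MSSQL'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=sa;password=Welcome1';
```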

f.   Add data source from HANA studio



  1. Error: SAP DBTech JDBC (403): internal error: Cannot get remote source objects: [unixODBC][Driver Manager] Data source name not found and no default driver specified.


If you see the above error, it suggests a problem with the unixODBC drivers, but the error message is misleading. It actually means that HANA cannot see some of the configuration files in the HANA home directory. Remember: HANA Studio is owned by the <SID> user.


Do the following


Log in HANA Server as root user in putty


>> cp /root/.odbc.ini  /usr/sap/<SID>/home    (ex:  cp /root/.odbc.ini  /usr/sap/SHI/home)


Log out of the HANA server, go back to HANA Studio, and try to connect to SQL Server again. This is the process that worked for me.



Configuring another HANA server (server SID: FDC)


Login to HANA Server (SH1 ) as root user


>> vi /root/.odbc.ini

----------------------------  MS SQL SERVER DRIVERS --------------------------





--------------------------------- HANA SYSTEM (FDC) ---------------------------------------

Server=FDC:30015
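For reference, a complete HANA DSN entry typically points at the HANA ODBC driver shipped with the HANA client; a sketch (the driver path and key names are assumptions to verify against your client installation):

```ini
[FDC]
Driver     = /usr/sap/hdbclient/libodbcHDB.so
ServerNode = FDC:30015
```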



>> cp /root/.odbc.ini  /usr/sap/SHI/home ( For HANA Studio to see this file )




Connecting other databases: I will update this section when I configure them.






Now we have connected to SQL Server and to a different HANA server. This worked for me, and I hope it works for you on your first try as well. Whenever I add other databases as sources, I will update this post with my findings. Thanks for reading.

DB2 to SAP HANA Migration: the DB2 TRANSLATE() function in SAP HANA

DB2 Query  :



select (TRANSLATE(PHONE_NUMBER,'','1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz()-/')) as PHNE

from  "Phone_test"

In the query above, wherever TRANSLATE finds a character from the string '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz()-/' in the phone number, it replaces it with a space (' ').

Result (output):

Db2 Image.PNG

SAP HANA has no TRANSLATE function or direct equivalent, but there is a workaround using regular expressions for this scenario.

SAP HANA query with output:

HANA Image.png
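The HANA query in the screenshot can be sketched with the built-in REPLACE_REGEXPR function (available as of recent HANA releases); the table and column names are taken from the DB2 query above, but verify the character class against your data:

```sql
-- Replace every digit, letter, parenthesis, hyphen, or slash in
-- PHONE_NUMBER with a space, mimicking the DB2 TRANSLATE call:
SELECT REPLACE_REGEXPR('[0-9A-Za-z()\-/]' IN PHONE_NUMBER
                       WITH ' ' OCCURRENCE ALL) AS PHNE
  FROM "Phone_test";
```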




In this blog post, we will explain how to set up Jupyter as a browser-based frontend to easily query and visualize your data.


Jupyter is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text, see Project Jupyter.


This tutorial consists of two parts.

You are currently reading part one, which explains the basic steps to set up and configure Jupyter.

It is essential to complete part one before continuing with part two!


Part two demonstrates how to run queries in Python and how to visualize data using matplotlib.






Before starting this tutorial, please make sure your cluster is up and running.

You should have at least once started the spark shell and run some queries to test its functionality.


To complete part 2 of this tutorial, you need sample data, which can be downloaded here:

Dropbox -


This file contains TPC-H sample data at scale factor 0.001.

Please download the file and extract its content to your HDFS.


Alternatively, you may generate the sample data on your own by downloading and compiling DBGEN:


Please do not use the Ambari web interface for uploading files, because it may corrupt them.





To get started, we need to install several packages that should come bundled with your Linux distribution.

Please run the following commands on a RedHat-based machine:

sudo yum install python-pip
sudo yum install python-matplotlib
sudo yum install gcc-c++
sudo pip install --upgrade pip
sudo pip install jupyter

You may install Jupyter on a jumpbox outside the cluster, for example on an Ubuntu-based system.
In that case, the package-manager commands are slightly different:

sudo apt-get install python-pip
sudo apt-get install python-matplotlib
sudo apt-get install g++
sudo pip install --upgrade pip
sudo pip install jupyter





Next, we need to set some environment variables to inform Jupyter about our Spark and Python settings.

Please adjust the paths and version number below according to your local environment, then either run these commands on the shell as the "vora" user, or put them in your ".profile", to have them loaded every time you log in:

export PYTHONPATH=/home/vora/vora/python:$SPARK_HOME/python:$SPARK_HOME/python/lib/
export ADD_JARS=/home/vora/vora/lib/spark-sap-datasources-<version>-assembly.jar
export PYSPARK_SUBMIT_ARGS="--master yarn-client --jars $ADD_JARS pyspark-shell"



Configure Jupyter


Please run this command as the user "vora" to generate the initial configuration for Jupyter:

jupyter notebook --generate-config

Now, open an editor and edit the file "~/.jupyter/"

Since we are running on a remote machine with no window manager, we configure Jupyter not to open a web browser on startup.

Please uncomment the line

# c.NotebookApp.open_browser = False

Uncomment means removing the pound sign at the beginning of the line.


To be able to access Jupyter remotely, we need to uncomment the following line as well:

# c.NotebookApp.ip = '*'

Notice: this will give everyone access to the Jupyter web interface.

In a production environment, you might want to set up access control.

Please refer to this guide on how to secure your Jupyter installation:

Securing a notebook server


After applying the above changes to the config file, please save your changes and close the editor.



Usually, cloud providers and IT departments are very restrictive and may block access to Jupyter's TCP port (default: 8888).

Please make sure to include a rule in the firewall configuration allowing access to the port on the machine running Jupyter.

Consult the provider's documentation or your IT department for details.

Running Jupyter


To run Jupyter, first, create an empty folder where you want to store your notebooks, and go into that folder.

Then run the following command as the user "vora", e.g.:

mkdir notebooks
cd notebooks
jupyter notebook

This will start a Jupyter notebook server, listening on port 8888 for connections.

The console output will be similar to this:

[I 09:39:29.176 NotebookApp] Writing notebook server cookie secret to /run/user/1000/jupyter/notebook_cookie_secret
[W 09:39:29.200 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
[W 09:39:29.200 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using authentication. This is highly insecure and not recommended.
[I 09:39:29.204 NotebookApp] Serving notebooks from local directory: /home/d062985/notebooks
[I 09:39:29.204 NotebookApp] 0 active kernels
[I 09:39:29.204 NotebookApp] The IPython Notebook is running at: http://[all ip addresses on your system]:8888/
[I 09:39:29.204 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).

Now we can fire up a web browser on another machine and navigate to the URL of the host running Jupyter, e.g. http://jumpbox.yourcluster.internal:8888/

You should see a website like this:



By clicking New, you can start a new notebook that waits for your input:


After clicking, the empty notebook will open up:



Now, we can start submitting queries by entering the query into a paragraph and hitting the play button on top.
This will then execute the snippet in the background and return results to the webpage.



Submitting queries and plotting data


The final part of this tutorial will take place in Jupyter.

Please download the attached Jupyter Notebook "", unzip it, and copy it to the notebook folder on your machine running jupyter.

Then, open the file in the Jupyter webinterface in your webbrowser.

Today, we celebrate an important day in US history: Martin Luther King Jr.'s birthday (MLK day), which officially falls on January 15th but is celebrated in the US on the third Monday in January. MLK was an important figure in our history for many reasons, among them the famous "I Have a Dream" speech he delivered in August 1963.


Similarly to MLK's dream, many IT professionals have had a dream: that one day, data from different sources, colors, structures, and backgrounds can all be gathered together, analyzed, and served from one central repository. That central repository, which now houses data together (persisted or virtual), is called SAP HANA.


SAP, the HANA team and other SAP technologies have made this “dream” a reality for the IT world and the enterprises.


1) SAP Landscape Transformation Replication Server (SLT) is a tool provided by SAP that facilitates near-real-time data replication from sources such as ECC, MSSQL, Oracle, and others into HANA. Via SLT we can also create virtual models and utilize those models in HANA.


2) Smart Data Access – SDA makes it possible to access remote data, without having to replicate the data to the SAP HANA database beforehand.


3) Smart Data Integration: SAP HANA smart data access (SDA) functionality is further enhanced in SAP HANA smart data integration to provide virtual access for non-critical data.



4) Data Services is an ETL tool that provides a solution for data integration, data quality, data profiling, and text data processing, allowing you to integrate, transform, improve, and deliver trusted data to critical business processes.


5) HANA Extended Application Services, aka HANA XS. With HANA XS, we can serve data via HTTP(S) to other consumers. Moreover, XS is the engine that allows web applications to be built, and with the help of SAPUI5 (a JS library) we can build modern, responsive applications. The consumption of external data is also possible via outbound connectivity.


Please feel free to ask questions about any of the HANA topics above.

Thank you for reading this short blog and happy MLK day!

The basic idea was to share my exploration of SAP HANA Text Analysis through this blog. In the process, I used custom configurations and a custom dictionary to build my full-text analysis index tables, and a dynamic query in a procedure to fetch a variable number of columns from a table.


If there are no entries in the text analysis table, the query returns the fields maintained in the custom configuration table; otherwise, it adds to the query the new column that was requested and stored in the text table. The thought was that the same procedure should return data with X or Y columns without changing the procedure code.



STEP 1: Creating a Custom Configuration .hdbtextconfig:

A custom dictionary is used to define our own tokens and variants. Variants are categorized by TA_TYPE.

In this case, I have built a custom dictionary which is based on database table and fields.

  1. In order to do this, I created a new file, customconfig.hdbtextconfig
  2. I copied into it the content of EXTRACTION_CORE_VOICEOFCUSTOMER.hdbtextconfig
  3. I added the line below to my custom configuration file, customconfig.hdbtextconfig




Path for TA:

Path for TA.jpg




Code in Custom Configuration:

Code - VoC.jpg




STEP 2: Creating a Custom Dictionary .hdbtextdict:

I added the lines below to create a custom dictionary. When entries are populated in the table that has a text analysis index created with customconfig.hdbtextconfig, this dictionary is read to identify the TA_TOKEN and TA_TYPE. I maintained the table name "" and the field name "status" in the custom dictionary under different categories.


<?xml version="1.0" encoding="utf-8" ?>
<dictionary xmlns="">
    <entity_category name="querytable">
        <entity_name standard_form="">
            <variant name="serial numbers"/>
        </entity_name>
    </entity_category>
    <entity_category name="queryfields">
        <entity_name standard_form="queryfields">
            <variant name="status"/>
        </entity_name>
    </entity_category>
</dictionary>
STEP 3: Tables & Text Analysis full text index:

  1. Table for default selection "".


        The table was created to maintain the default query. It has three fields: schema name, table name, and the default fields. This query gets executed when there are no entries in the text analysis table.



    2. Table to fetch the data. Maintain some sample records. In this case ""


    3. Table to store the requests for the selection of the fields from the above table. In this case "WRK_SCH".""


    4. Create a full text index on the table "WRK_SCH"."". This table will hold the newly requested fields. Refer to the custom configuration file when creating the index.


CONFIGURATION 'amohas97.session.textproj::customconfig'
TOKEN SEPARATORS '\/;,.:-_()[]<>!?*@+{}="&'
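Put together, the full statement might look like the following (the table and column names are placeholders, since the original names are not shown in the post):

```sql
-- Full text index using the custom configuration, with text analysis
-- enabled (object names are hypothetical):
CREATE FULLTEXT INDEX "TXTIDX2" ON "WRK_SCH"."REQUEST_TEXT" ("REQ_TEXT")
  CONFIGURATION 'amohas97.session.textproj::customconfig'
  TOKEN SEPARATORS '\/;,.:-_()[]<>!?*@+{}="&'
  TEXT ANALYSIS ON;
```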

STEP 4: Procedure:

  1. Get the default query maintained in the "" to fetch the data from ""
  2. Check whether there are entries in the text analysis table
  3. If entries exist, get the values of TA_TOKEN & TA_NORMALIZED. These entries will be the values of the table and field.
  4. Build a new query using the step 3 information
  5. Execute the query which was built in step 4

                LANGUAGE SQLSCRIPT as                  
                --SQL SECURITY INVOKER
                --DEFAULT SCHEMA <default_schema_name>
                --READS SQL DATA AS
declare lv_query nvarchar(100) := null;
declare lv_new_q nvarchar(100) := null;
declare lv_old_q nvarchar(100) := null;
declare lv_q_fin nvarchar(1000) := null;
declare lv_table nvarchar(100) := null;
declare lv_fields nvarchar(100) := null;
declare lv_count nvarchar(3) := null;
-- Write your procedure logic
-- Configurable table where the fields are maintained for default selection
-- (select the maintained default query into lv_old_q)
WHERE schema_name = 'WRK_SCH';
-- Index table
select count(*) into lv_count from "WRK_SCH"."$TA_TXTIDX2" where ta_type = 'querytable';
if lv_count > 0 then
    -- To get the table name: QUERYTABLE is the category name; the text "serial numbers" is a token identifying the table
    select top 1 ta_normalized into lv_table from "WRK_SCH"."$TA_TXTIDX2" where ta_type = 'querytable' order by time desc;
    -- To get the field name: QUERYFIELDS is the category name; the text "status" is identified as a field
    select top 1 ta_token into lv_fields from "WRK_SCH"."$TA_TXTIDX2" where ta_type = 'queryfields' order by time desc;
    -- To get the COLUMN name, i.e. to check whether the column exists
    select column_name into lv_new_q from TABLE_COLUMNS where schema_name = 'WRK_SCH' and table_name = lv_table and upper(comments) = upper(lv_fields);
    -- Concatenate the new field name with the existing query
    lv_query := lv_old_q || ',' || lv_new_q;
    -- Prepare the final query
    lv_q_fin := ' select ' || lv_query || ' from "WRK_SCH"."" ';
else
    -- Fall back to the existing default query maintained in the table
    lv_q_fin := ' select ' || lv_old_q || ' from "WRK_SCH"."" ';
end if;
-- Execute the QUERY
exec lv_q_fin;

STEP 5: Execution:


Run the procedure in SQL console : CALL "amohas97.session.ta::dquery"();

When we first execute the procedure, the query returns the data with three columns from the table "", as the default selection of fields is maintained in the "" table.

In this table I have maintained only three columns, so the query returned only three columns.


1.      Now, in the SQL console, I will insert a new entry with a statement in English into the "" table. I have created a full text index on this table, so whenever an entry is created in it, my text analysis table gets an entry with categorized TA_TOKEN and TA_NORMALIZED values.

insert into "WRK_SCH"."" values('1','shahid','give me the status of serial numbers');

This is how the text analysis table looks:


2.      Now run the procedure again; this time the procedure will return the data with 4 columns





Thank you for reading.

This post is part of an entire series

Hana Smart Data Integration - Overview



The SDI product consists of three main building blocks: The Hana Index Server, the Data Provisioning Server and the Data Provisioning Agent.

While the first two are processes of the Hana instance itself, the agent is an external process and as such can be installed anywhere.


SDI architecture.png


SDA use case


As the architecture involves quite a few components, it is best to start with an example and follow its way through all the steps.

Assume the user executed a SQL statement like "select columnA, substring(columnB, 1, 1) from virtual_table where key = 1".

SDI architecture - SDA.png

  1. This SQL statement is part of a user session in Hana and enters the SQL Parser. The first thing the parser needs is the required metadata - does a virtual table of that name even exist, what are its columns, etc. All of this is stored in data dictionary tables in Hana.
  2. The SQL Optimizer tries to push down as much of the logic as possible. In order to make adapter development simpler, it cannot simply push down the full SQL, but rather looks at the metadata to see what kinds of statements the adapter declared it supports. In this example the adapter shall be a very simple one: all it supports are select statements reading all columns and no functions, though simple equality where clauses it does support. Hence the optimizer will rewrite the statement into something like "select columnA, substring(columnB, 1, 1) from (select * from virtual_table where key = 1)". This statement will return the same as the original one, but now it becomes obvious which parts are done inside the adapter and which have to be done in Hana. The inner "select * ... where ..." is sent to the adapter, and the adapter will return the row with key = 1 but all columns. Hana will then take that row, read columnA and columnB, and return columnA's value plus the result of the substring function to the user.
  3. The SQL Executor is responsible for getting the actual data, so it will tell the Federation Framework to retrieve the data of the optimized SQL.
  4. The Federation Framework is responsible for combining the data coming from Hana and the remote system. In this simple example the SQL select reads remote data only, so it will call the appropriate methods in the adapter: an open call to establish the connection to the source (if not done already for that session), providing the inner SQL to the adapter by calling the executeStatement method, and then calling the fetch method until there is no more data.
  5. Since the Federation Framework cannot talk via the network to the remote adapter directly, it has a Delegator. This component calls the equivalent methods in the Adapter Framework of the Data Provisioning Server.
  6. The Adapter Framework itself sends the command via the network to the agent, where the corresponding methods in the Adapter are called. The responsibility of this component is to route the commands to the correct adapter and deal with error situations, e.g. the agent cannot be reached.
  7. The adapter acts as a bridge. It gets an open call with all the parameters provided by the remote source object and should open a connection to the source system. It receives the SQL command in the executeStatement method and translates it into the equivalent call for the given source. When its fetch method is called, the adapter should return the next batch of data, reading the source data and translating the values into the Hana datatype values.
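
The open/executeStatement/fetch call sequence described above can be sketched in a few lines. This is a purely illustrative Python toy (the real Adapter SDK is Java, and everything here except the three method names from the text is made up): a minimal adapter that only understands the pushed-down inner query, with the outer projection and substring applied afterwards, as Hana would.

```python
# Illustrative sketch only: mimics the open/executeStatement/fetch call
# sequence described above. Names and data are hypothetical; the real SDK is Java.

class ToyAdapter:
    """A minimal adapter: supports only 'select *' with an equality where clause."""
    def __init__(self, source):
        self.source = source          # the "remote system": a list of dicts
        self.result = None

    def open(self):                   # establish the source connection
        pass

    def execute_statement(self, key): # the pushed-down inner query
        self.result = [row for row in self.source if row["key"] == key]

    def fetch(self, batch_size=1):    # return the next batch, empty when done
        batch, self.result = self.result[:batch_size], self.result[batch_size:]
        return batch

# The "Federation Framework" side: call the adapter, then apply in Hana
# the parts the adapter cannot push down (projection and substring).
source = [{"key": 1, "columnA": "abc", "columnB": "xyz"}]
adapter = ToyAdapter(source)
adapter.open()
adapter.execute_statement(key=1)      # inner: select * from vt where key = 1
rows = adapter.fetch()
result = [(r["columnA"], r["columnB"][0:1]) for r in rows]  # outer projection
print(result)                         # [('abc', 'x')]
```

The split mirrors the optimizer rewrite: the adapter returns whole rows for key = 1, and the projection plus substring happen on the Hana side.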


Realtime push


The definition of which remote tables the user wants to subscribe to follows the same path as SDA from step 1 to 5. A SQL statement is executed - create remote subscription on ... - and all the validations are performed: does the table exist, does the adapter support the required capabilities, etc.

SDI architecture - Realtime.png

  1. The interesting part starts when the subscription is made active. With the "alter remote subscription ... queue" command the Realtime Provisioning Client is told to start the subscription. There the first checks are performed, e.g. preventing an already started subscription from being started again.
  2. The Realtime Provisioning Manager inside the DP Server decides what needs to be done in order to get the change data. Basically this means one of two things: either telling the adapter to start sending the changes or, if the adapter is already sending the changes for another subscription and its data can be reused, simply consuming that stream as well.
  3. If the adapter has to be notified about the request for change data, the Adapter Framework forwards that request to the agent and from there to the adapter.
  4. And the adapter does whatever needs to be done in order to capture the requested changes. This is really source specific: for databases it might mean reading the database transaction log; for other sources it could be implemented as a listener, with the source pushing changes; and in the worst case the adapter has to frequently check the source for changes. From then on the adapter keeps sending all change information for the requested data back to the Adapter Framework and from there into the Change Data Receiver.
  5. The Change Data Receiver has to deal with various situations. If the subscription is still in queue state, the initial load of the table is in progress and hence no change information should be loaded into the target yet, so it has to remember these rows somewhere: the Change Data Backlog Store. In case the source adapter does not support re-reading already sent data, all received data is put into that Backlog Store as well, to allow the source to send more data even if the data has not been committed in Hana yet. In other cases the receiver provides the data to the Applier for processing.
  6. The Change Data Applier loads all data into Hana in the proper order, using the same transactional scope in which the data was changed in the source. It is the Applier that deals with the case that one change record is used for multiple subscriptions, loading the data into all targets. In case the target is a table, it interprets the received opcode (insert, update, delete, ...) and performs the proper action on the target table. And in case of a Task, it makes the incoming change set unique per primary key before sending the data to the Task object (insert + update + update = an insert row with the values of the last update statement).
  7. The Applier currently creates regular SQL statements, like selecting from an ITAB or start task using parameters ...;, and these statements are executed by the Index Server like any other SQL statement.
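
The "unique per primary key" compaction for Task targets can be sketched as follows. This is a hypothetical Python illustration of the rule, not SDI code; the source only spells out the insert + update + update case, so the delete handling here (a delete cancelling a pending insert) is my assumption.

```python
# Illustrative sketch of compacting a change set per primary key, as described
# for Task targets: insert + update + update collapses into a single insert
# carrying the values of the last update. Hypothetical code, not SDI's.

def compact_change_set(changes):
    """changes: list of (opcode, key, values) in arrival order."""
    final = {}                                 # key -> (opcode, values)
    for opcode, key, values in changes:
        if opcode == "insert":
            final[key] = ("insert", values)
        elif opcode == "update":
            # an update on a not-yet-applied insert stays an insert,
            # but with the newest values
            prev_op = final.get(key, ("update", None))[0]
            final[key] = (prev_op if prev_op == "insert" else "update", values)
        elif opcode == "delete":
            if final.get(key, ("",))[0] == "insert":
                final.pop(key)        # assumption: delete cancels a pending insert
            else:
                final[key] = ("delete", None)
    return [(op, key, vals) for key, (op, vals) in final.items()]

changes = [("insert", 1, {"v": "a"}),
           ("update", 1, {"v": "b"}),
           ("update", 1, {"v": "c"})]
print(compact_change_set(changes))   # [('insert', 1, {'v': 'c'})]
```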


Data Dictionary Metadata


One important aspect of the architecture is that all, really all, information is stored in Hana itself and nothing in the adapter. The reason is simple: there might be multiple agents for the same source system for failover, the adapter/agent might stop working and upon restart needs to know where to pick up the work, and the adapter can be reinstalled.

All this data is stored in Hana tables and can be quite interesting for debugging a problem. Usually these tables are not queried directly; rather, a public synonym pointing to a view, which has the row level security implemented inside, is used. Here is a list of such objects:

  • AGENTS: Returns the list of all known Data Provisioning Agents and how to reach them.
  • ADAPTERS: Returns the list of adapters known by the Hana database. New entries are added whenever an agent deploys a previously unknown adapter. When one of these adapters has the flag IS_SYSTEM_ADAPTER = true, it is an adapter based on ODBC and executed by the Index Server. All other adapters are SDI adapters.
  • ADAPTER_LOCATIONS: As one adapter can be hosted on one agent but not another, or on multiple agents, this table tells the relationship.
  • REMOTE_SOURCES: For each created remote source one line is returned.
  • VIRTUAL_TABLES: All created virtual tables can be found in there.
  • VIRTUAL_TABLE_PROPERTIES: Additional metadata the adapter requires at runtime is stored in Hana and can be seen via this view.
  • VIRTUAL_COLUMNS: The columns of each virtual table.
  • VIRTUAL_COLUMN_PROPERTIES: Additional metadata can be added to columns as well.
  • REMOTE_SUBSCRIPTIONS: The list of all remote subscriptions and their state.
  • REMOTE_SUBSCRIPTION_EXCEPTIONS: In case a subscription has an error, the exception number and the reason can be found here, and using the exception id a recovery can be triggered manually.


The SQL statements used can be found here: Hana Adapter SDK - Interaction via SQL




Using the above metadata tables, all monitoring can be done with pure SQL commands. But there is also a Hana Cockpit based set of screens, shipped in a separate distribution unit found on Service Marketplace. It is very handy to install these; in addition they show the status of the Task framework, the calculation engine based data transformation framework that is part of the SDI solution.


Agent Configuration Tool


In order to help with setting up the agent and the adapters, the Agent Configuration Tool, part of any agent installation, can be used. It executes the SQL commands for adding agents, adapters, etc. and edits the local configuration files.


SAPDB Adapters


So far the only adapters explained were the SDI adapters. If the ODBC based adapters of the Index Server are used, no Data Provisioning Server, agent, etc. is required. Instead the Federation Framework accesses these ODBC adapters via the SAPDB Adapters component. Although more and more adapters will be moved to SDI, this path will continue to exist, but for SAP owned databases only - Sybase ASE and IQ, for example. For all other databases, going through multiple hops and an SDK introduces limitations of some kind.


C++ Adapters


Speaking of multiple hops and the SDK, one question might be why the agent is required even if the source is local. For this case there is a C++ version of the SDK available as well, and such an adapter can be accessed by the Data Provisioning Server directly. The only adapter currently implemented this way is the OData adapter. And as this SDK is the regular Adapter SDK, such an adapter could be deployed on the agent as well. Then a local source would be accessed directly via the DP Server, a remote source via the agent.

The normally used Adapter SDK is the Java version, however, and as Hana is written in C++ it cannot run the Java code directly; there has to be some inter-process communication, which the agent handles. Hence for a scenario where the source is local, installing the agent on the Hana box could be an option, but the agent is still needed.

Hi all,


Some months back, I started a discussion on OData read modification exits, and had a bit of a moan on SCN about the fact that they didn't exist. I was building a UI5 app and needed to be able to filter out values in an OData entity on the server. I also wanted to make sure that

  1. the records returned for any given entity belonged to the user, and
  2. one user couldn't retrieve the records of another.

Relying on OData filters was not a suitable solution. It seemed, however, that a read modification exit in the OData service definition would suit this need perfectly! So why wasn't there such a mechanism?


Now, to begin the story...


I have a table MYTABLE, with the columns ID, VALUE, USER. This table records sensitive values in the VALUE column, and each row belongs to a user, denoted by the USER column.


Originally, I built my OData service definition like this:


service {

    "SCHEMA"."MYTABLE" as "SecureValues"("ID");



Too easy. Expose my table via an entity, and read only the records I need by filtering on USER. Unfortunately, this meant that anyone with the service URL (i.e. http://server/path/to/service.xsodata) could merely replay a query from my UI5 app to get access to all of the values in this table/entity. Even if I was filtering by USER in the app, this could easily be subverted to obtain all values in a raw OData response. So that was a BIG problem.


Then came my moaning. Mostly about the fact that OData entity definitions support update, create, and delete modification exits, but not read. I was like, "WHY?!?!?!?!?!" In my head it seemed that would be the easiest thing ever! So I continued hunting around, mostly in vain, to find a solution. Finally, I settled on using an intermediary server (Node.js) as a reverse-proxy to filter records in requests. In effect, this intermediary performed some work on the URL to apply the username to the query string, and forwarded this on to HANA. It was inelegant and didn't really solve the security problem, but did mean I no longer had to hard code the username query string parameter into URLs. Small win. Or was it? I had just introduced another server into the chain. With more code. And more bugs... That didn't really simplify anything, especially where this was to be deployed within a corporate network. In fact, it made it less appealing to interested parties. Back to the drawing board.


So I then pored through the documentation for Views in HANA, and analytic view privileges surfaced as a possible solution. But these brought with them a host of other issues (not to do with the technology, but to do with my usage of them). All I was trying to do was filter out values on the server. By using an analytic view, I was also introducing sums/counts/aggregations into the process, and I didn't need any of it.




Analytic views also didn't allow for unions. For a union, I'd need an Attribute view. So where I needed a union between two (or more) tables, I'd have the tables unioned in an Attribute view. Overhead. The Attribute view (read: union) would then be embedded in an Analytic view with Analytic privileges. Overhead++. The Analytic privilege used a stored procedure to identify the current user. Overhead++.


Basically, that didn't work. And waaaaaaay too complicated for what I needed. "There has to be an easier way".


While hunting around SCN for solutions to another problem, I arrived at it. It was like a bolt of lightning struck me square in the forehead. Or, for those Despicable Me fans, a light bulb moment.


despicable me.gif



The other problem - Configuring an App Site

I was trying to configure a new App Site for our HANA Fiori Launchpad. Not a difficult procedure, but there are a few steps. You can read up on that if you're keen...


Two articles from Wenjun Zhou

Creating news tiles with SAP HANA UI Integration Services (UIS)

Creating custom tiles with SAP HANA UI Integration Services (UIS)


And another useful article from Ian Henry

Exposing HANA Calc Views via OData to Fiori Tiles


Lastly, some other threads helped me get that all sorted, one of which I contributed to with a vital piece of information that I had continually ignored: Create new Fiori launchpad in CAL-HANA | SCN


Anyway, back to the story. So in troubleshooting HANA Fiori Launchpads, I browsed through a lot of SAP-delivered code on HANA. And I discovered a beautiful thing. I discovered the HANA Table Function, or Table UDF for short (something Rich Heilman covered way back in 2012! - Table User Defined Functions( Table UDF ) in HANA). Goodness I felt dumb.


And when used in combination with the very simple HDBView artifact (.hdbview), you end up with a view dependent on a user-scripted SQL procedure. In other words, you now get to write the code to read the table data! I also knew, from hunting through SAP code, that an .hdbview could be used in an XSOData entity definition. By this stage, I was well excited. "To HANA Studio!" I declared in the office, probably a bit loudly. I did get looks.


To recap, the mission is to filter out values that do not belong to the currently authenticated user. The app resides on HANA only, and uses HANA native form authentication to log in. Which means we can access the current, and session, user from HANA SQL Script.


So our table function (this is not compiled code, so bear this in mind when using it yourself - you will need to adjust this if you're using CDS, for example):












That's get_values_by_user.hdbtablefunction.

Quite clearly, this function reads from MYTABLE, but crucially, only the values that belong to the current user. It returns a table type of MYTABLE which you now know will only contain rows matching the currently logged in user. So that's the first part.


Now we wrap the .hdbtablefunction artifact up into an .hdbview. This is really simple.


schema = "SCHEMA";

query = "select * from \"GET_VALUES_BY_USER\"()";



And that's get_values_by_user_view.hdbview. Note, the view must have a different name to the table function. That's why I append _view to the name.

Activate those two. Now we have a view that only returns records belonging to a user! Ha! Awesome.


Last step - throw it into your .xsodata service definition. You will need to specify the key now with the keyword key, because the view doesn't have one of its own (perhaps you can add this somehow, I don't know).


service {

    "SCHEMA"."get_values_by_user_view" as "SecureValues" key ("ID");


Now, when you call this .xsodata service entity, SecureValues, you will only get the rows belonging to the authenticated user. Bingo! What's more, all the OData filter parameters still work, but they will only be performed on the records returned by your view. So all of your UI5 model filters, and so on, will not need to change (other than to stop worrying about filtering on USER). One more thing - because your OData entity is no longer bound to a physical table (it's now bound to a view), you will need to implement create, update and delete modification exits for all OData entities for which you do this. No biggie. Some simple INSERT, UPDATE/UPSERT and DELETE (or delimit) SQL Script will sort that out - or XSJS if that's your thang.

I hope, hope, hope that others hunting around for solutions to server-side record filtering in OData will stumble across this post. I've attempted to wrap all of the articles/threads that helped me get to a workable solution into the one posting for this purpose.

Best o' luck,


The SDI File Adapter is one of the adapters preinstalled with every SDI Agent. Its purpose is to make any file, fixed width or delimited, in any character encoding and format, available to Hana as a virtual table that can be selected from as easily as any other Hana table.




Once the File Adapter is made visible in Hana it provides multiple tables out of the box, e.g. FILEDIRECTORY to get a list of all available files or FILECONTENT with the file content as one large BLOB. As this might allow any Hana user with the create-remote-source privilege to read files, some security needs to be built around it.

When deploying the adapter in the agent, a root directory has to be set. The adapter never allows reaching outside of this root directory, so sensitive files like /etc/passwd or C:\Windows cannot be read. Second, there is a password (or AccessToken, as it is labeled) to make sure the person creating the remote source in Hana is allowed to use this adapter.

The administrator installing the agent decides on these settings by deploying the FileAdapter and changing the preferences of this adapter in the AgentConfig tool.



As one virtual table specifies the file format definition and multiple files can have the matching structure, there is no 1:1 relationship between files and virtual tables. So what should the adapter return when browsing the list of available virtual tables? To provide this browsing option, the adapter scans a File Format Root Directory for files named *.cfg, each of which contains the format information. This is the second directory to configure.


With this information the remote source and the virtual tables can be created and the file content selected from.


Remote Source


When creating a remote source, the root and file format directories default to the directories of the adapter and can be further restricted here. The idea is that maybe one remote source points to the plan data subdirectory, another to the employee data (both subdirectories of the adapter root directory), and users are allowed to use one or the other remote source - just in case more fine grained security is needed.

The AccessToken has to match the token entered in the adapter preferences, as a security measure.



Alternatively the remote source can be created via SQL as well



CONFIGURATION '<?xml version="1.0" encoding="UTF-8"?>

<ConnectionProperties name="ConnectionInfo">

<PropertyEntry name="rootdir">/usr/sap/FileAdapter/FileServer</PropertyEntry>

<PropertyEntry name="fileformatdir">/usr/sap/FileAdapter/FileformatDefinitions</PropertyEntry>



  '<CredentialEntry name="AccessTokenEntry">




The built-in tables


The following tables are always provided by the adapter. They are supposed to help with building applications using the File Adapter.



  • FILEDIRECTORY: Lists all files of the remote source, including the sub directories. Supports pushdown of LIKE and = on the PATH and NAME columns.
  • FILECONTENT: Allows reading a single file as is. It requires the PATH and NAME columns to be restricted to a single file using a where clause and returns the entire file content as one BLOB.
  • FILECONTENTTEXT: As above, but this time the file content is returned as NCLOB. That means the file is read with a specific character encoding, which can either be the default - based on the Byte Order Mark of the data file or the OS default - or provided as a dataprovisioning parameter. In this example an ASCII file is read with the UTF-32 codepage, hence producing wrong characters.
  • FILECONTENTROWS: Again returns a BLOB, but this time one row per line. A line is supposed to end with a \n (ASCII newline) character/byte.
  • CODEPAGES: Returns the list of all supported codepage values by querying the adapter's JVM installation.
  • LOCALE: Returns the locales the JVM supports.
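
The effect described for FILECONTENTTEXT - a file read with the wrong codepage producing wrong characters - is easy to reproduce outside the adapter. A plain Python illustration (the adapter itself does the decoding inside its JVM):

```python
# Reading bytes with the wrong character encoding garbles the content, which
# is what happens when e.g. an ASCII file is decoded as UTF-32.

data = "2000;IT Costs;435.55".encode("ascii")       # an "ASCII file"

correct = data.decode("ascii")                      # right codepage: readable
wrong = data.decode("utf-32-le", errors="replace")  # wrong codepage: garbage

print(correct)            # 2000;IT Costs;435.55
print(wrong == correct)   # False
```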


Defining file formats


One option for creating a virtual table for a given file format is executing the create virtual table statement manually and providing the format information as additional metadata. Below is an example of the syntax, and it also lists all possible parameters. Only some of them are mandatory; most can be omitted to leave them at a useful default.


create virtual table "v_plan" at "FileSource"."<NULL>"."<NULL>"."v_plan"

REMOTE PROPERTY 'dataprovisioning_parameters'=

'<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Parameters>
<Parameter name="FORCE_FILENAME_PATTERN">plan%.txt</Parameter>
<Parameter name="FORCE_DIRECTORY_PATTERN"></Parameter>
<Parameter name="FORMAT">CSV</Parameter>
<Parameter name="CODEPAGE">UTF-8</Parameter>
<Parameter name="ROW_DELIMITER">\r\n</Parameter>
<Parameter name="COLUMN_DELIMITER">;</Parameter>
<Parameter name="TEXT_QUOTES"></Parameter>
<Parameter name="SKIP_HEADER_LINES">0</Parameter>
<Parameter name="QUOTED_TEXT_CONTAIN_ROW_DELIMITER">false</Parameter>
<Parameter name="ESCAPE_CHAR"></Parameter>
<Parameter name="TEXT_QUOTES_ESCAPE_CHAR"></Parameter>
<Parameter name="ERROR_ON_COLUMNCOUNT"></Parameter>
<Parameter name="LOCALE">de_DE</Parameter>
<Parameter name="DATEFORMAT">dd. MMMM yyyy</Parameter>
<Parameter name="TIMEFORMAT">HH:mm:ss</Parameter>
<Parameter name="SECONDDATEFORMAT">yyyy.MM.dd HH:mm:ss</Parameter>
<Parameter name="TIMESTAMPFORMAT">yyyy.MM.dd HH:mm:ss</Parameter>
<Parameter name="COLUMN">YEAR;Integer</Parameter>
<Parameter name="COLUMN">COST_CENTER;varchar(8)</Parameter>
<Parameter name="COLUMN">PLAN_AMOUNT;decimal (15, 2)</Parameter>
<Parameter name="COLUMN">CHANGE_SECONDDATE;seconddate</Parameter>
</Parameters>';



The other option is to create a .cfg file inside the directory of file format definitions or a sub directory thereof. The content of the file is logically the same: the file has to have a first line with a description and then lists all parameters as key value pairs. The above example as a cfg file would look like:


Fileformat to read a CSV file with plan data




















COLUMN=PLAN_AMOUNT;decimal (15, 2)



In the end both versions have the same result: a virtual table with all these parameters as virtual table attributes. The only difference is that with cfg files browsing shows something and then the virtual table is created, whereas with the first option the table is created directly.


Note: Changing the cfg file after creating the virtual table has no effect. These parameters are really copied into the Hana data dictionary.
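
Parsing such a cfg file - first line a description, every further line a key=value pair, with COLUMN allowed to repeat - could look like the following. This is a hypothetical Python sketch of the described file shape, not the adapter's actual parser:

```python
# Hypothetical sketch of parsing a *.cfg file format definition: the first
# line is a free-text description, every further non-empty line is key=value,
# and COLUMN may appear multiple times (one entry per column, in order).

def parse_cfg(text):
    lines = text.splitlines()
    fmt = {"DESCRIPTION": lines[0], "COLUMN": []}
    for line in lines[1:]:
        if not line.strip():
            continue
        key, _, value = line.partition("=")   # split on the first '=' only
        if key == "COLUMN":
            fmt["COLUMN"].append(value)       # keep the column order
        else:
            fmt[key] = value
    return fmt

cfg = """Fileformat to read a CSV file with plan data
FORMAT=CSV
COLUMN_DELIMITER=;
COLUMN=YEAR;Integer
COLUMN=PLAN_AMOUNT;decimal (15, 2)"""

fmt = parse_cfg(cfg)
print(fmt["FORMAT"])      # CSV
print(fmt["COLUMN"])      # ['YEAR;Integer', 'PLAN_AMOUNT;decimal (15, 2)']
```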




In case the file is a fixed width file, i.e. the column start and end positions are always the same, the position information is needed in addition.


Fixed width demo









COLUMN=PLAN_AMOUNT;decimal (15, 2)



Parameters in detail



  • FORCE_FILENAME_PATTERN, FORCE_DIRECTORY_PATTERN: A user might execute a simple "select * from virtualtable" without any where clause on directory and name of file. In that case every single file in the root directory and subdirectories will be read and parsed according to this virtual table's format definition. That might take a while and produce many errors. But if the virtual table does match files in a certain directory, directory tree, or certain file names only, it makes sense to hard wire this information into the virtual table directly. Examples:
    • FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plandata: Reads all files in that directory only
    • FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plandata/%: Reads all files in that directory and subdirectories
    • FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plan%: Reads all files in directories that start with "plan"
    • FORCE_FILENAME_PATTERN=plan%.txt: Reads files like plan20120101.txt
    • FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plandata  FORCE_FILENAME_PATTERN=plan%.txt: Files inside the directory and matching the provided name pattern
  • FORMAT: Is either CSV (default) or FIXED
  • CODEPAGE: The character encoding the file is read with. By default the operating system default is used. In case the file has a Byte Order Mark, that codepage is always used. Valid values of the Java installation can be found by creating a virtual table for CODEPAGES and querying its contents.
  • ROW_DELIMITER: A character sequence indicating the end of a row in the file. In case these are non-printable characters they can be provided encoded as \d65536 or \xFFFF or in Unicode notation \u+FFFF. Alternatively the typical \r and \n are supported as well. Examples:
    • \n Unix standard
    • \r\n Windows standard
    • \d13\d10 Windows standard but characters provided as decimal number
    • \x0D\x0A Windows standard but characters provided as hex number
  • SKIP_HEADER_LINES: In case the file has a header, the number of lines to be skipped is entered here.
  • ERROR_ON_COLUMNCOUNT: By default a row with fewer columns than defined is considered okay. By setting this parameter to true, all rows of the file are expected to have as many columns as defined.
  • LOCALE: The decimal and date conversion from the strings in the file into native numbers or dates might be locale specific. For example, the text "3,1415" is a perfectly valid value for pi in the German notation, where the comma is the decimal separator; with an English locale it would be the large integer number 31415. Same thing with dates in case the month or day name is used: "14. Oktober 2000" is fine with the German locale, but not for other languages. Again, valid values for the locale can be found by querying a virtual table based on the LOCALE table of the adapter.
  • DATEFORMAT, TIMEFORMAT, SECONDDATEFORMAT, TIMESTAMPFORMAT: The file format can use these datatypes for the date/time related values. Each can have a different format string. The syntax of the format string is the Java SimpleDateFormat syntax.
  • COLUMN: multiple entries each consist of the columnname;datatype where the datatype is any normal Hana datatype.
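
The \d, \x and \u+ notations for non-printable delimiter characters can be decoded as in this sketch. The semantics (decimal code point, hex code point, Unicode code point) are inferred from the examples above; this is an illustration, not the adapter's code:

```python
import re

# Sketch of decoding the delimiter notations described above: \dNNN for a
# decimal code point, \xHH.. for hex, \u+HHHH for Unicode, plus the literal
# \r and \n shortcuts. Assumed semantics, based on the examples given.

def decode_delimiter(spec):
    spec = spec.replace("\\r", "\r").replace("\\n", "\n")
    spec = re.sub(r"\\d(\d+)", lambda m: chr(int(m.group(1))), spec)
    spec = re.sub(r"\\x([0-9A-Fa-f]+)", lambda m: chr(int(m.group(1), 16)), spec)
    spec = re.sub(r"\\u\+([0-9A-Fa-f]+)", lambda m: chr(int(m.group(1), 16)), spec)
    return spec

print(decode_delimiter(r"\d13\d10") == "\r\n")   # True  (Windows standard, decimal)
print(decode_delimiter(r"\x0D\x0A") == "\r\n")   # True  (Windows standard, hex)
print(decode_delimiter(r"\r\n") == "\r\n")       # True
```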


CSV only

  • COLUMN_DELIMITER: The character sequence indicating the next column. If non-printable characters are used then either one of the encodings will work \d65536, \xFFFF or \u+FFFF
    • ; Meaning the ; is the column separator, so a line looks like 2000;IT Costs;435.55
    • | using the pipe character as delimiter
    • \d09 using an ASCII tab character as delimiter
  • TEXT_QUOTES: Sometimes text data is enclosed in quotes so a column delimiter inside the text does not break the format. The line 2000;IT Costs; software related only;435.55 would appear as 4 columns as the text contains a semicolon as well. If the file was created with quotes like 2000;"IT Costs; software related only";435.55 then there is no such issue but the file parser needs to act more carefully and not just search for the next column delimiter. It needs to check if the text is inside the text quote character or outside.
  • ESCAPE_CHAR: Another way to deal with inline special characters is to escape those, like in 2000;IT Costs\; software related only;435.55. Here the \ char is an escape char and indicates that the subsequent character is to be taken literally, not as e.g. column delimiter.
  • TEXT_QUOTES_ESCAPE_CHAR: That leaves the question of how to make quotes appear inside the text, like in IT Costs; "software related" only. One option the file creator might have used is to simply use the global escape character: 2000;"IT Costs; \"software related\" only";435.55. Another popular method is to have the quotes escaped by another quote, like in 2000;"IT Costs; ""software related"" only";435.55. In that case both TEXT_QUOTES=" and TEXT_QUOTES_ESCAPE_CHAR=" are set to the " character.
  • QUOTED_TEXT_CONTAIN_ROW_DELIMITER: The default value is false and tells the parser that, regardless of any quotes or escape characters, the text inside a row never contains the row delimiter character sequence. In this case the parser can break the file into rows much faster, as it only needs to search for the delimiter character sequence, and only the column parsing has to consider the escape and quote characters. If set to true, parsing will be slower.
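
Both escaping styles can be checked with Python's csv module, whose dialect options map directly onto these parameters (doublequote for the "" style, escapechar for the \" style). Illustration only; the adapter has its own parser:

```python
import csv

# TEXT_QUOTES / TEXT_QUOTES_ESCAPE_CHAR behavior, demonstrated with Python's
# csv dialect options. Illustration only; not the adapter's implementation.

# Style 1: quotes doubled inside a quoted field ("")
line_doubled = '2000;"IT Costs; ""software related"" only";435.55'
row = next(csv.reader([line_doubled], delimiter=";", quotechar='"',
                      doublequote=True))
print(row)  # ['2000', 'IT Costs; "software related" only', '435.55']

# Style 2: quotes escaped with a global escape character (\")
line_escaped = '2000;"IT Costs; \\"software related\\" only";435.55'
row2 = next(csv.reader([line_escaped], delimiter=";", quotechar='"',
                       doublequote=False, escapechar="\\"))
print(row2 == row)  # both styles yield the same three columns
```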


Fixed Width only

  • COLUMNSSTARTENDPOSITION: In a fixed width file the column positions need to be specified for each column. Example:
    • 0-3;4-11;12-37;38-53 defines that the first 4 characters are the first column, the next 8 contain the data for the second column etc. Columns need to be in the proper order.
    • 0;4;12;38 is equivalent to the example above; the last column ends at the line end
    • 0;4;12;38-53 can be used as well. In fact every single column can either specify the start and end position or just the start.
  • ROWLENGTH: Fixed-width files do not need a row delimiter. Often the file has one anyway, and then it needs to be stripped away. The examples assume the last data character is at index 53, as specified above:
    • ROWLENGTH=56 with an empty ROW_DELIMITER works for a file that has a row delimiter: the payload text ranges from index 0 to 53 and is hence 54 characters long, plus two characters for \r\n. The last column does not contain the \r\n because it is told to end at index 53.
    • ROWLENGTH=54 ROW_DELIMITER=\r\n is equivalent to the above: each row is expected to be 54 characters plus the 2 delimiter characters. The main advantage of this notation is that COLUMNSSTARTENDPOSITION=0;4;12;38 works as well, because the trailing \r\n is stripped away. In the previous example the last column would start at 38 but end at index 55 due to ROWLENGTH=56, and hence would contain the \r\n characters.
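The positional rules above can be sketched in a few lines of plain JavaScript (illustration only, assuming the inclusive start-end semantics described):

```javascript
// Sketch of COLUMNSSTARTENDPOSITION handling: each entry is either
// "start-end" (both inclusive) or just "start" (the column then runs to the
// next column's start, or to the row end for the last column).
function splitFixedWidth(row, spec) {
  var parts = spec.split(';');
  var cols = [];
  for (var i = 0; i < parts.length; i++) {
    var se = parts[i].split('-');
    var start = parseInt(se[0], 10);
    var end;
    if (se.length === 2) { end = parseInt(se[1], 10) + 1; }              // inclusive end
    else if (i + 1 < parts.length) { end = parseInt(parts[i + 1], 10); } // next column's start
    else { end = row.length; }                                           // last column
    cols.push(row.substring(start, end));
  }
  return cols;
}

// A 54-character row: widths 4, 8, 26 and 16 (made-up sample data).
var row = "2000" + "IT Costs" + "X".repeat(26) + "435.55" + "          ";
console.log(splitFixedWidth(row, "0-3;4-11;12-37;38-53"));
console.log(splitFixedWidth(row, "0;4;12;38")); // same result
```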




Using the virtual table


When the virtual table is created it has 5 additional columns:

  • PATH: The path name of the file being read
  • NAME: The file name this row belongs to
  • ROWNUMBER: The line number inside the file
  • PARTITION: All rows get evenly distributed into 10 partitions (ROWNUMBER modulo 10)
  • ERROR: A string with all column parsing errors




Executing a select * from virtualtable returns all rows of all files the virtual table is allowed to read, that is, all files or the matching ones based on the FORCE_DIRECTORY_PATTERN and FORCE_FILENAME_PATTERN settings. Each row tells which file it belongs to in the PATH and NAME columns. As this text can be quite lengthy, it might be a good idea not to select those columns unless the information is actually needed.

This is an important capability of the adapter: it supports projections, so only the columns actually needed are transferred, parsed and converted into the proper datatype. The "top" clause is pushed down to the adapter as well; it reads the first lines and then finishes.

The adapter also supports filters on PATH and NAME, hence all where clauses with equals, in-lists or like are pushed down to the adapter, and it can navigate to the matching files directly.
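For example, a query like the following (a sketch using the v_us_county_census virtual table that appears later in this post; the file name pattern is made up) is pushed down so the adapter opens only the matching files and parses only the requested columns:

```sql
select rownumber, error
  from v_us_county_census
 where name like 'us_county%.csv';
```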


When a parsing error occurs, reading continues, but the information about all errors in that row is put into the ERROR column, for example "Cannot convert to seconddate using the provided format and locale (abcd)". Reading the file without checking the ERROR column is therefore dangerous.


In order to simplify finding issues in the file, the line number is returned in the ROWNUMBER column. In the example above the file starts with row number 2, because the first line is the header line.


In case the file is very large and has lots of expensive data conversions, the virtual table provides a PARTITION column so that only a subset of the file is read. This can be used, for example, to conveniently execute two statements in parallel, each processing half the file.

insert into target select * from v_us_county_census where partition in (0,1,2,3,4);

insert into target select * from v_us_county_census where partition not in (0,1,2,3,4);
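The arithmetic behind these two statements can be sketched in plain JavaScript (illustration only; the adapter itself derives PARTITION as ROWNUMBER modulo 10):

```javascript
// PARTITION = ROWNUMBER modulo 10, so the two INSERT statements above
// split the rows into two roughly equal halves.
function partitionOf(rownumber) {
  return rownumber % 10;
}

var rows = [];
for (var n = 1; n <= 20; n++) {
  rows.push({ rownumber: n, partition: partitionOf(n) });
}

var firstHalf = rows.filter(function (r) { return [0, 1, 2, 3, 4].indexOf(r.partition) !== -1; });
var secondHalf = rows.filter(function (r) { return [0, 1, 2, 3, 4].indexOf(r.partition) === -1; });
console.log(firstHalf.length, secondHalf.length); // → 10 10
```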

Since the flowgraph supports partitioned reading as well, the file adapter works well together with that feature.

I've been working to build a deliverable package in HANA (UI5 application and supporting back-end) to send out to customers. And part of the development process, as many readers will know, is debugging stored procedures. In some cases, you're awesome, and you don't need to do this - "I know it works, I wrote it". But sometimes, just sometimes, it's helpful to turn to the debugger to see why a particular piece of functionality is not working as expected. But what to do when you don't have the auths to debug your own stored procedure?! Eeeeep!


To set the scene, I'm writing a UI5 app, with an OData HANA-only (i.e. no Gateway here) back-end, and have in place a number of OData modification exits. The modification exits are mostly stored procedures (.hdbprocedure files). Read up on these over here: Custom Exits for OData Write Requests - SAP HANA Developer Guide for SAP HANA Studio - SAP Library


I've got an update modification exit on one of my OData entities, Forecasts.


service {

"fa.apps.hpc.xs.views.forecasts::forecasts_by_user_view" as "Forecasts" key ("id")


  "ForecastFields" as "Fields",

  "ForecastDataSet" as "DataSet",

  "ForecastRuns" as "Runs"


  create using "fa.apps.hpc.xs.procedures.forecast::createForecast"

  delete using "fa.apps.hpc.xs.procedures.forecast::delimitForecast"

  update using "fa.apps.hpc.xs.procedures.forecast::updateForecast";
}



That'd be this guy, in bold. If you're savvy, you'll know that an update modification exit stored procedure needs a particular signature.


PROCEDURE "FA_APPS"."fa.apps.hpc.xs.procedures.forecast::updateForecast" (
    -- parameters elided
)
BEGIN
    -- Weave your update magic here
END;




I wanted to debug it, just for fun (actually, for a very real reason). The trouble I found however, is that because this stored procedure exists in another schema to my user's one, debugging is no longer automatically available. In Eclipse (I use Eclipse with HANA Add-ons, not HANA Studio, although effectively the same thing) you'll still be able to set the breakpoint, as per below. Nice blue dot. Perfect! Break point set.


But when you start debugging said procedure, you'll notice a little cross over your breakpoint.





That didn't happen before when the stored proc belonged to my schema. But of course, that was my schema. My user owns it and everything in it, including debug privileges for all stored procedures. So I messed around a bit, not quite grasping what was required here. I also followed the inimitable Rich Heilman's instructions, which you can peruse over here: Unable to Debug a activated Procedure in HANA XS environment. But they didn't quite do what I needed. Presumably I already had sufficient auths in HANA to debug a procedure, but I didn't have auths to debug this procedure. So I persevered. As you do. And keep doing. And keep on doing... until eventually it clicked! Hunting through the Object Privileges I could apply to my user, lo and behold, I see my stored procedure in the list:




"Hmmmmm" I think to myself. "I wonder what will happen if I..."



And there you have it. Once you add the stored proc to your user under Object Privileges, you then determine what specific privileges you should have (note: I'm logged in here as a user with user admin auths, so I can grant privileges to my user, but you could just as well use SYSTEM or whatever if this is not an important HANA instance).


So that's lovely to know. I can add stored proc auths directly to my user. But this is going to be a deliverable package/app, so it follows that I should define such auths in an HDB Role artifact - heck, I'm doing that already for all other components of this app, so best keep on point. For the purposes of this post, I'll demonstrate adding this to my existing HDB Role artifact. Were you to do this productively, you might consider creating a separate .hdbrole file for debug/administration of the app; I'll keep everything together for now. Note that I am not saving my user with the privileges above - I don't want to have to apply individual privs to each and every user, and I already have an .hdbrole file, so I'll use that.

Please also note that, shortly after writing this post, I found Rich's post on pretty much exactly this topic: New SQLScript Features in SAP HANA 1.0 SPS7 - specifically, the last section, Debugging Authorisation Concept. I've basically just expounded on what's covered in that post, and the one I linked to earlier.


Right then, to my HDB Role! Now, this role is already applied to my user. So any changes to it should automatically apply to my user too.


role fa.apps.hpc.xs.roles::USER_ROLE {


// This is a very broad smattering of privileges for a schema, and will be pared back once the supporting

// stored procedures are in place.

schema fa.apps.resources:FA_APPS.hdbschema: SELECT, INSERT, UPDATE, DELETE, EXECUTE, DEBUG;


// Lots of guff here


// Here's the stored procedure declaration with the required privileges - EXECUTE and DEBUG

catalog sql object "FA_APPS"."fa.apps.hpc.xs.procedures.forecast::updateForecast" : EXECUTE, DEBUG;

// Lots of guff here
}




Add our procedure privileges as above (including DEBUG on the schema), activate the role, restart debugging of the procedure and...


Et voila. Our red cross has happily transformed into a green tick and we're off debugging! I don't know about you, but there's a special, warm place in my heart reserved for green ticks.




In the debugger...


We are breaking exactly at our breakpoint now. Excellent.


This is just a smattering of information about the gauntlet that HANA auths can be. The thing is, they're not really complicated. But it can be hard sometimes to figure out what's required. As this post ironically proves.


For completeness:


My instance is HANA Developer Edition, hosted on AWS via CAL, version 1.00.091 (SPS 9)

Eclipse is Mars, running the latest HANA and UI5 add-ons


Credit for both references to Rich Heilman

Dear Friends,


We have seen many queries in SDN about accessing HANA using Microsoft Excel. Many of us have faced a lot of issues and errors while installing the ODBC driver, registering the MDX library, and connecting to HANA from Excel.


The scope of this blog is simply to help you set up the HANA connection through Excel.


Here is my configuration, which most of you will have and with which you may face many issues in creating the ODBC connection.


OS: Windows 7 64 bit

Office Version: Office 2013 32 bit

HANA Studio: Version: 2.1.15


Below are the steps (it's very simple).


Step1: Download the SAP HANA Client from the Service Marketplace. (HANA Studio does not provide any libraries to connect to HANA from other clients such as Excel or a Java UI; it is just a user interface for accessing HANA artifacts.) Since Office here is 32-bit, don't download the 64-bit HANA Client.


Installations and Upgrades - H --> SAP In-Memory (SAP HANA) --> HANA Platform Edition --> SAP HANA PLATFORM EDITION --> SAP HANA PLATFORM EDIT. 1.0 (Support Packages and Patches) --> Entry by Component --> HANA client


Download the latest version for your OS and install it. This also works on 32-bit Windows.


Step2: Make sure your HANA server is up and running. Enter your server URL in the browser (in this example the instance number is 10).


Step3: After installing the HANA Client, the ODBC driver for HANA is installed automatically. You can see this during the installation.

            To make sure, please follow these steps:

         --Go to Control Panel --> Administrative Tools --> Data Sources (ODBC). This is the ODBC Administration Console.


        HANA ODBC 1.png

Step4: Create an ODBC DSN for HANA as below.

           --Click Finish after step 3, enter the connection name, host and port details, and test the connection.

           Hint: To get your hostname and port, go to the Systems view -> right-click -> Properties -> Database User Logon -> Additional Properties


        HANA ODBC2.png     HANA ODBC 3.png


Step 5: After step 4, you will see a new entry in the Data Source list of the ODBC Administration Console. We are halfway there now. Close all the windows.


Step 6: Map your system files/libraries to your local machine (in layman's terms).

             Go to Start, enter regedit, and change the directory path as shown below.

              Value: %SystemRoot%\system32

          HANA ODBC 4.png


Step 7: Now find the SAP HANA ODBC MDX library, which is the file "SAPNewDBMDXProvider.dll". We just need to register this library so that Office can access the HANA DSN through it.

             Go to: C:\Program Files (x86)\SAP\SAP HANA Database Client for MS Excel\SAPNewDBMDXProvider.dll


If you do not find the above file, reinstall the HANA Client and look again.


Step 8: You can register the MDX library manually, without using the "HANA Client for Excel" software.

            Go to Start, enter CMD, then right-click and run as administrator.

             HANA ODBC 5.png

          To register the HANA MDX library, execute regsvr32 with the file name as below.


           C:\Windows\system32>regsvr32.exe "C:\Program Files (x86)\SAP\SAP HANA Database Client for MS Excel\SAPNewDBMDXProvider.dll"

       HANA ODBC 6.png



Step 9: Open Excel, go to the DATA ribbon and click From Other Sources -> From Data Connection Wizard -> Other/Advanced ->


             HANA ODBC 7.png


Step 10: Connect to HANA: enter all the system details and login credentials and access your artifacts.


              HANA ODBC 8.png


I hope you will now be able to access HANA from Excel.


Thanks for visiting this blog. I look forward to your suggestions and comments.




Hello Everyone,


This blog explains how to create a native SAP HANA application with CRUD (Create, Read, Update and Delete) operations using SAPUI5 and XSJS services. We will also see how to consume XSJS.



As we all know, data can be transferred over HTTP using either the Atom (XML) or the JSON (JavaScript) format. Here I am using JSON to pass the data.
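As a small illustration of that JSON exchange (plain JavaScript; the field names match the XSJS examples below, while the sample values are made up):

```javascript
// Client side: the UI builds a data object and stringifies it into the
// 'dataobject' request parameter.
var data1 = { ID: 1, FirstName: "Kapil", LastName: "Jain", DoB: "1990-01-01", Phone: "12345" };
var datavalue = JSON.stringify(data1); // sent as ?dataobject=<this string>

// Server side: the XSJS service does the equivalent of this with
// $.request.parameters.get('dataobject'):
var obj = JSON.parse(datavalue);
console.log(obj.FirstName); // → Kapil
```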

Creating the User Interface

  1. Create a project
  2. Set up the SAPUI5 bootstrap and required libraries
  3. Create view components
  4. Implement XSJS services
     4.1. Read
     4.2. Create
     4.3. Update
     4.4. Delete


  • You have to install the SAPUI5 Application Development Tools in your Eclipse
  • UI Development Toolkit for HTML5 (SAPUI5), version 1.28

1. Create the XS project.

The structure of your project will look like this:



2. Create a simple HTML page, say index.html:



<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
  <meta http-equiv="X-UA-Compatible" content="IE=edge"/>
  <title>CRUD Operation with XSJS</title>
  <script src="js/jquery.min.js"></script>
  <script src="" id="sap-ui-bootstrap" data-sap-ui-libs="sap.ui.commons"></script>
  <script>
    var view = sap.ui.view({id:"id1", viewName:"views.sample", type:sap.ui.core.mvc.ViewType.JS});
    view.placeAt("content");
  </script>
</head>
<body class="sapUiBody">
  <div id="content" style="height: 400px; width: 400px; border: 1px solid #AAA"></div>
</body>
</html>


3. Create one view file; I named it sample.view.js:


sap.ui.jsview("views.sample", {
    /** Specifies the Controller belonging to this View.
     * In the case that it is not implemented, or that "null" is returned, this View does not have a Controller.
     * @memberOf views.sample
     */
    getControllerName: function() {
        return null;
    },
    /** Is initially called once after the Controller has been instantiated. It is the place where the UI is constructed.
     * Since the Controller is given to this method, its event handlers can be attached right away.
     * @memberOf views.sample
     */
    createContent: function(oController) {
        // create a simple matrix layout
        var oLayout = new sap.ui.commons.layout.MatrixLayout({
            id : "matrix1",
            columns : 5,
            width : "100%",
            widths : [ "15%", "30%", "10%", "30%", "15%" ],
            layoutFixed : false
        });
        var fname="",lname="",id,dob,ph_no;
        var oLabel1 = new sap.ui.commons.Label("l1");
  oLabel1.setText("Employee ID");
  var oLabel2 = new sap.ui.commons.Label("l2");
  oLabel2.setText("First Name");
  var oLabel3 = new sap.ui.commons.Label("l3");
  oLabel3.setText("Last Name");
  var oLabel4 = new sap.ui.commons.Label("l4");
  var oLabel5 = new sap.ui.commons.Label("l5");
  oLabel5.setText("Phone Number");
  var oInput1= new sap.ui.commons.TextField('input1');
       // oInput1.setEditable(false);
            id= oInput1.getValue();
  var oInput2 = new sap.ui.commons.TextField('input2');
        oInput2.setTooltip("please enter first name");
            fname= oInput2.getValue();
    var oInput3 = new sap.ui.commons.TextField('input3');
        oInput3.setTooltip("please enter last name");
            lname= oInput3.getValue();
        var oInput4 = new sap.ui.commons.TextField('input4');
        oInput4.setTooltip("please enter Dob");
            dob= oInput4.getValue();
    var oInput5 = new sap.ui.commons.TextField('input5');
        oInput5.setTooltip("please enter phone number");
            ph_no= oInput5.getValue();
        var oButton1 = new sap.ui.commons.Button({
            text : "Read",
            tooltip : "This is a Read operation button",
            press : function() {
                var jurl = "/services/read.xsjs";
                $.ajax({
                    url: jurl,
                    async: false,
                    method: 'GET',
                    dataType: 'json',
                    success: function(data) {
                        // handle the returned rows here
                    }
                });
            }
        });
        var oButton2 = new sap.ui.commons.Button({
            text : "Insert",
            tooltip : "This is an Insert operation button",
            press : function() {
                var data1 = {
                    ID: id
                    // remaining fields elided
                };
                var datavalue = JSON.stringify(data1);
                var jurl = "/services/insert.xsjs";
                $.ajax({
                    url: jurl,
                    async: false,
                    method: 'GET',
                    data: { dataobject: datavalue },
                    dataType: 'text',
                    success: function(data) {
                        // show the service response here
                    }
                });
            }
        });
        var oButton3 = new sap.ui.commons.Button({
            text : "Update",
            tooltip : "This is an Update operation button",
            press : function() {
                var data1 = {
                    ID: id
                    // remaining fields elided
                };
                var datavalue = JSON.stringify(data1);
                var jurl = "/services/update.xsjs";
                $.ajax({
                    url: jurl,
                    async: false,
                    method: 'GET',
                    data: { dataobject: datavalue },
                    dataType: 'text',
                    success: function(data) {
                        // show the service response here
                    }
                });
            }
        });
        var oButton4 = new sap.ui.commons.Button({
            text : "Delete",
            tooltip : "This is a Delete operation button",
            press : function() {
                var jurl = "/services/delete.xsjs";
                $.ajax({
                    url: jurl,
                    async: false,
                    method: 'GET',
                    data: { ID: id },
                    dataType: 'text',
                    success: function(data) {
                        // show the service response here
                    }
                });
            }
        });
        oLayout.createRow("", oLabel1, "", oInput1, "");
        oLayout.createRow("", oLabel2, "", oInput2, "");
        oLayout.createRow("", oLabel3, "", oInput3, "");
        oLayout.createRow("", oLabel4, "", oInput4, "");
        oLayout.createRow("", oLabel5, "", oInput5, "");
        oLayout.createRow("", oButton1, oButton2, oButton3, oButton4);
        return oLayout;
    }
});
// attach it to some element in the page

4. Create the XSJS services:

  • Insert.xsjs


$.response.contentType = "application/text";
var body = '';
var aCmd = $.request.parameters.get('dataobject');
var obj = JSON.parse(aCmd);
var id = obj.ID;
var firstname = obj.FirstName;
var lastname = obj.LastName;
var dob = obj.DoB;
var phone = obj.Phone;
var tx_data_query = "";

function getTxtData() {
    var connection = $.db.getConnection();
    var statement = null;
    var rowCount = 0;
    tx_data_query = 'INSERT INTO EMP_DETAILS (ID, FIRST_NAME, LAST_NAME, DOB, PHONE_NUMBER) VALUES (' + id + ',\'' + firstname + '\',\'' + lastname + '\',\'' + dob + '\',\'' + phone + '\')';
    try {
        statement = connection.prepareStatement(tx_data_query);
        rowCount = statement.executeUpdate(); // an INSERT is an update, not a query
        connection.commit();
    } finally {
        if (statement) { statement.close(); }
        connection.close();
    }
    return rowCount + " row(s) inserted";
}

function doGet() {
    try {
        $.response.contentType = "text/plain";
        $.response.setBody(getTxtData());
    } catch (err) {
        $.response.contentType = "text/plain";
        $.response.setBody("Error while executing query: [" + err.message + "]");
        $.response.returnCode = 200;
    }
}

doGet();
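A word of caution: concatenating user input into the SQL string, as above, is vulnerable to SQL injection, and any value containing a quote breaks the statement. The sketch below (plain JavaScript, with a hypothetical buildInsert helper mirroring the concatenation above) shows the problem; in XSJS, prepared-statement parameter markers are the safer approach:

```javascript
// Hypothetical helper mirroring the concatenated INSERT built above
// (illustration only - not part of the actual service).
function buildInsert(id, firstname) {
  return "INSERT INTO EMP_DETAILS (ID, FIRST_NAME) VALUES (" + id + ",'" + firstname + "')";
}

// A harmless value produces the intended statement...
console.log(buildInsert(1, "Alice"));
// ...but a quote in the input yields a syntactically different (broken) one.
console.log(buildInsert(1, "O'Brien"));

// In XSJS, prefer parameter markers instead (sketch):
//   statement = connection.prepareStatement(
//       "INSERT INTO EMP_DETAILS (ID, FIRST_NAME) VALUES (?, ?)");
//   statement.setInteger(1, id);
//   statement.setString(2, firstname);
```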

Before Insertion:






     After Insertion:



  • Update.xsjs


$.response.contentType = "application/text";
var body = '';
var aCmd = $.request.parameters.get('dataobject');
var obj = JSON.parse(aCmd);
var id = obj.ID;
var firstname = obj.FirstName;
var lastname = obj.LastName;
var dob = obj.DoB;
var phone = obj.Phone;
var tx_data_query = "";

function getTxtData() {
    var connection = $.db.getConnection();
    var statement = null;
    var rowCount = 0;
    tx_data_query = 'UPDATE "NEO_4M2TWKN5K877VBPYTUG3UE323"."s0014075794trial.hanaxs.CRUD::emp_table" set FIRST_NAME=\'' + firstname + '\', LAST_NAME=\'' + lastname + '\', DOB=\'' + dob + '\', PHONE_NUMBER=\'' + phone + '\' where ID=\'' + id + '\'';
    try {
        statement = connection.prepareStatement(tx_data_query);
        rowCount = statement.executeUpdate();
        connection.commit();
    } finally {
        if (statement) { statement.close(); }
        connection.close();
    }
    return rowCount + " row(s) updated";
}

function doGet() {
    try {
        $.response.contentType = "text/plain";
        $.response.setBody(getTxtData());
    } catch (err) {
        $.response.contentType = "text/plain";
        $.response.setBody("Error while executing query: [" + tx_data_query + err.message + "]");
        $.response.returnCode = 200;
    }
}

doGet();

After Update:



  • Delete.xsjs


$.response.contentType = "application/text";
var body = '';
var id = $.request.parameters.get('ID');
var tx_data_query = "";

function getTxtData() {
    var connection = $.db.getConnection();
    var statement = null;
    var rowCount = 0;
    tx_data_query = 'DELETE from "_150949"."EMP_DETAILS" where ID=' + id;
    try {
        statement = connection.prepareStatement(tx_data_query);
        rowCount = statement.executeUpdate();
        connection.commit();
    } finally {
        if (statement) { statement.close(); }
        connection.close();
    }
    return rowCount + " row(s) deleted";
}

function doGet() {
    try {
        $.response.contentType = "text/plain";
        $.response.setBody(getTxtData());
    } catch (err) {
        $.response.contentType = "text/plain";
        $.response.setBody("Error while executing query: [" + err.message + "]");
        $.response.returnCode = 200;
    }
}

doGet();


  After Delete:


  • Read.xsjs


$.response.contentType = "application/text";
var body = '';
var aCmd = $.request.parameters.get('ID');
var tx_data_query = 'select * from "_150949"."EMP_DETAILS" where ID=' + aCmd;

function close(closables) {
          var closable;
          var i;
          for (i = 0; i < closables.length; i++) {
                    closable = closables[i];
                    if (closable) {
                              closable.close();
                    }
          }
}

function getTxData() {
          var txlist = [];
          var connection = $.db.getConnection();
          var statement = null;
          var resultSet = null;
          try {
                    statement = connection.prepareStatement(tx_data_query);
                    resultSet = statement.executeQuery();
                    var txitem;
                    while (resultSet.next()) {
                              txitem = {};
                              txitem.ID = resultSet.getString(1);
                              txitem.FirstName = resultSet.getString(2);
                              txitem.LastName = resultSet.getString(3);
                              txitem.DoB = resultSet.getString(4);
                              txitem.phone_no = resultSet.getString(5);
                              txlist.push(txitem);
                    }
          } finally {
                    close([resultSet, statement, connection]);
          }
          return txlist;
}

function doGet() {
          try {
                    $.response.contentType = "text/plain";
                    $.response.setBody(JSON.stringify(getTxData()));
          } catch (err) {
                    $.response.contentType = "text/plain";
                    $.response.setBody("Error while executing query: [" + err.message + "]");
                    $.response.returnCode = 200;
          }
}

doGet();


After Reading:





I hope you find this blog useful.



Happy Coding!



Kapil Jain

