Robert_Russell



**WARNING: this blog is from 2013 and is based on the Lumira Personal Edition, which ended at version 1.31. It appears that version 2.0 onwards does not support the JDBC-based tunnel method. I still use this method, as I still have version 1.31 (and do not have access to version 2), and I used this exact method to create all the charts for my blog on this site, "(Unofficial) Analysis Of The Answers.sap.com Question Forums".

**HTTP ACCESS, version 2 onwards: thanks to Stephane for commenting on my blog, I did successfully connect a trial version of Lumira 2.1 Discovery to my trial HANA cloud account. See the comment section below for how I did this: link to comment


Background


After I followed stoyan.manchev’s excellent blog, 8 Easy Steps to Develop an XS application on the SAP HANA Cloud Platform, I was really impressed and curious whether Lumira could be used with the HANA calculation view created in the blog. Stoyan’s blog covers, from start to finish, the process of creating an XS application, and I did complete this; however, with Lumira (the free version) I was interested only in creating and visualising HANA views. I will say up front that during the process I did experience a couple of failures with features of Lumira that I wanted to use. Still, as a way to use HANA for free it was quite inspiring, and I found myself addicted to the platform. Stoyan’s blog does come with a disclaimer that the features are in beta for the trial accounts; I interpreted this “beta” statement as applying to the overall intention of his blog, though things may be changing on the platform soon.

The HANA Cloud Platform (HCP) offers a free developer account here. Note that I have not used HANA or the HCP in my day job, so I would advise double-checking anything against the documentation/SCN, but I will be happy to comment on any aspect of my blog. My intention is to walk through the steps I took to use Lumira with HANA on the HCP, but I start with information on the dataset I used in my tests.

Crime dataset


The dataset I chose is about crime and policing in England, Wales and Northern Ireland, downloaded from http://data.police.uk/data/ . The data is made available under the UK Open Government Licence. In the end I loaded over 13 million records covering August 2011 to October 2013. As I say, I was not entirely successful in using all the parts of Lumira that I wanted, but if I was going to fail, then why not fail with a large dataset and at the speed of HANA.



Screenshot above of the Crime Count measure showing over 13 million records.

Connecting Lumira to HCP


At this point I will assume that the calculation view from Stoyan’s blog has been created. It also appears that the SAP HCP @opensap course has used the same or a similar XS application. Connecting Lumira to the HCP can be achieved by using the tunnel that comes as part of the HANA Client download. In the screenshot of my connection I will leave the connection details in plain view, as the password gets reset at every logon.

Start the HCP tunnel

***edit 15.6.2015

Prompted by Venkadesh's comment below, I realised that Stoyan's blog has been updated to connect via the cloud connection method, which means the HCP tunnel is no longer required for his blog. For this blog's method of connecting Lumira to the SAP HCP, the tunnel is still required.


So I would suggest either the help page:


Setting Up the Console Client


or the SAP HANA Academy document http://scn.sap.com/docs/DOC-62450


Both cover the SAP HCP tunnel requirements.


Also, in step 2 of Stoyan's blog I would still add a "system" and not a "cloud system", as I use an open tunnel's details for the hostname, instance number, user and password of the connection. These details from a connected HCP tunnel are the ones I use for Lumira as well.


****end of edit 15.6.2015




Start Lumira and select Acquire new dataset



Select “Connect to SAP HANA One” – we won’t really be connecting to HANA One, but to HANA through the tunnel on our local laptop/computer :wink:

Hit Next and complete the logon details



The connection details come from the tunnel output (see the screenshot in the Start the HCP tunnel section above); that tunnel will be used to connect to the HCP.

The password is reset every time the tunnel is opened, so it will have to be cut and pasted each time the tunnel is started. It is therefore essential to start the tunnel before connecting via Lumira.

Next, locate your trial user account in the “Select a SAP HANA View” screen. I have scrolled down so my trial ID is at the top, and as you can see there are a number of other users.



In this example I will select SO_CV, which I had created by following Stoyan’s blog.



Hit the create button.

In the screenshot below I have selected measures and attributes to create a chart.


First Issue


The rank option in Lumira did not work





Selecting this option breaks Lumira (my version is 1.13), and thereafter none of the charts/features work, so I always had to close the session. After simply acquiring the dataset again I was able to continue. A quick search found a similar hit in note http://service.sap.com/sap/support/notes/1927144 . It is not the exact same error, so probably not the same solution; however, I moved on to try my own data. If the data selected was under Lumira's standard 10,000-row limit, I could achieve the same ranking by manually sorting and then filtering the data.
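If the dataset is larger than that, the same top-N ranking can instead be pushed down to HANA itself from the SQL Console. A minimal sketch, using the crime table and schema created later in this blog (the table and column names are mine; adjust them to your own model):

     -- top 10 crime types by record count, ranked by HANA instead of Lumira
     SELECT "Crime_type", COUNT(*) AS "Crime_count"
       FROM "NEO_7YZMZC83MT9WX3OCFHA5HDEQX"."crime"
      GROUP BY "Crime_type"
      ORDER BY "Crime_count" DESC
      LIMIT 10;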

Loading my own dataset into an HCP table


After downloading the crime data mentioned previously, I used some unix commands (I run Cygwin on my Windows 7 laptop) to collate all the police force files into one bigger file for each month. I also ran a script to check how wide the columns needed to be, as this can cause some headaches when uploading files. As these scripts are specific to the data I will not list them here, but I am happy to share them if required.

Preference settings


There are two settings worth changing or experimenting with in the Modeler view of the preferences. If “Batch Size” is set to 0 it will try to auto-calculate the value; I have set it to 10000. Probably 0 would be fine, but I have left it at 10000 and will cover data load times in the next section. The “Decision Maker Count” relates to how HANA Studio calculates data types and column widths for data loads. I set this to the maximum because I had, let’s say, a few load failures due to data not fitting into the column width. I also created a script to calculate this setting for each file.


Load data from a local file


From the HANA Studio menu: File -> Import



I created a crime table to load the data into.
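I created the table through the import wizard, but for reference an equivalent definition in the SQL Console would look roughly like the sketch below. This is illustrative only: the real police.uk files carry more columns, and the lengths are the kind of guesses I later had to adjust.

     -- illustrative subset of the crime table; NVARCHAR lengths are guesses, not final values
     CREATE COLUMN TABLE "NEO_7YZMZC83MT9WX3OCFHA5HDEQX"."crime" (
          "Month"      NVARCHAR(7),     -- files are per month, e.g. 2013-10
          "Crime_type" NVARCHAR(50),
          "Longitude"  DECIMAL(9,6),
          "Latitude"   DECIMAL(8,6),
          "Location"   NVARCHAR(58)
     );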



I found it always helps to load files with a header row, as the mapping of columns is then a lot easier. I always loaded the data into my NEO_ schema.

The next screen is where the “Decision Maker Count” comes into play, as it shows how HANA Studio maps the data to data types.

Here is an original screenshot of one file.



On loading subsequent files, I chose an existing table.



Use the drop-down: if the file has matching headers it is a simple case of “Map By Name”; otherwise it is a manual process to link the columns.



A screenshot of my final table settings.



It is maybe not easy to tell, but there are some big and not-so-big changes to the lengths.

I did find I was able to adjust the data type settings from the SQL Console if required. E.g.

     alter table "NEO_7YZMZC83MT9WX3OCFHA5HDEQX"."crime" alter ("Location" NVARCHAR(58));

468,140 records uploaded in 818 seconds




 

I did find that loading times stretched if I tried to load too many records at once.

Final table stats


The current runtime object of my table is over 640 MB; I loaded the files over a few weeks in my spare time.
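The same numbers can be read from SQL if you prefer; a minimal sketch, assuming the trial user is allowed to query the standard M_CS_TABLES monitoring view:

     -- row count and in-memory size (in bytes) of the column table
     SELECT "TABLE_NAME", "RECORD_COUNT", "MEMORY_SIZE_IN_TOTAL"
       FROM "M_CS_TABLES"
      WHERE "SCHEMA_NAME" = 'NEO_7YZMZC83MT9WX3OCFHA5HDEQX'
        AND "TABLE_NAME"  = 'crime';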


Create an analytic view in HANA Studio


At this point I will state again that the information here is based on self-learning as regards HANA view creation; it is always better to double-check the information provided.

From the HANA Systems view I opened the hihanaxs package (created from Stoyan’s blog).

I created an analytic view especially for this blog…



Enter a name for the View



Open the Catalog and the NEO_xxx schema to locate the crime table, then drag the table to the “Data Foundation”.



Then I selected the columns I wanted for the view.



I selected Crime_type twice so eventually I could set up one as a measure and the other as an attribute.

After selecting Semantics, I de-selected “Enable Analytic Privilege”.



Next, auto-assign the allocation of measures and attributes.



I did change Longitude and Latitude to attributes.

Also, one of the Crime_type columns was renamed to crime_count and changed to a measure for Lumira to use.



I then hit the green arrow to save and activate my view. (You may need to refresh the view in HANA Studio to pick this up.)

Even though the view is active, Lumira will not be able to pick up the new view; some authorisation settings still need to be made.


Authorisations for the view


At this point I will state again that the information here is based on self-learning as regards HANA authorisations; it is always better to double-check the information provided.

Alter model access file


Using information from Stoyan’s blog, I added my new views to model_access.hdbrole as per the following. I did originally make some errors following the blog and placed some views in the base package hihanaxs, so I altered the statements as below.

role p1248461150trial.hihanaxs.hihanaxs.roles::model_access
{
     sql object p1248461150trial.hihanaxs:CANA1.analyticview : SELECT;
     sql object p1248461150trial.hihanaxs:TEST.analyticview : SELECT;
     sql object p1248461150trial.hihanaxs:MULT1.analyticview : SELECT;
     sql object p1248461150trial.hihanaxs:CRIMEBLOG.analyticview : SELECT;
     sql object p1248461150trial.hihanaxs:SO_CV.calculationview : SELECT;
     sql object p1248461150trial.hihanaxs:CVIEW.calculationview : SELECT;
     sql object p1248461150trial.hihanaxs.hihanaxs:CVIEW.attributeview : SELECT;
     sql object p1248461150trial.hihanaxs.hihanaxs:JOIN.attributeview : SELECT;
}

Then I started the SQL Console and ran the following SQL to activate the new authorisations:

     call "HCP"."HCP_GRANT_ROLE_TO_USER"('p1248461150trial.hihanaxs.hihanaxs.roles::model_access','p1248461150');
     CALL "HCP"."HCP_GRANT_SELECT_ON_ACTIVATED_OBJECTS";

I could then connect successfully.
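If a connection still fails, it can be worth confirming that the role grant actually landed before blaming Lumira. A minimal check, assuming the trial user can read the standard GRANTED_ROLES system view (HANA stores user names upper-cased):

     -- list the roles granted to the trial user; model_access should appear if the call above worked
     SELECT "ROLE_NAME", "GRANTOR"
       FROM "GRANTED_ROLES"
      WHERE "GRANTEE" = 'P1248461150';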

 


Second Issue


The main feature I wanted to use in Lumira with this dataset was the geo charts. I was unable to proceed, as the location details were unknown to Lumira and, for some reason, Latitude and Longitude remained greyed out. No matter what setting I tried, measure or attribute (or even changing decimal places), I could not get this feature to work. Maybe it is not clear in the screenshot below, but in “Create a geographic hierarchy” the “By Names” selection is the only one available.


Simple analysis of the dataset


The main intention of my blog is to suggest there is a way to use the free copy of Lumira with HANA via the free trial account of the HCP. However, as “Anti-social behaviour” is the top crime, I thought I would drill down into it.

Seasonal Anti-Social behaviour


There do appear to be peaks and troughs through the seasons.



I was intending to name the place with the highest anti-social behaviour figures in the country over the time period in my dataset; however, as I type, I do not particularly want to do that here on my blog. I do now have a dataset that I am interested in and can use to explore more of Lumira and the SAP HCP, a dataset that I will use for future adventures into the HCP & Lumira world.
