
Introduction


In the first part of this blog, Handling of Applications with the BRF+ API (Part 1), we took the first steps with the BRF+ API. These consisted of:

  • Creating an application of storage type "master data" with a function and an expression. The function acts in functional mode and contains a decision table as its top-level expression. For the context and the result data objects we used existing data objects of a reusable application of storage type "S".

  • Learning how to get the function and the decision table using the ID of the application.

  • Finding out how to read data from and write data into a decision table.


 

So what are the topics of this second part? In the following sections we will take a look at how the BRF+ API can support us to

  • Check a decision table for completeness

  • Check for overlaps in a decision table

  • Export the data of a decision table to Excel and import it back

  • Process the function

  • Enforce code generation

  • Delete the application and its sub-objects


 

So let us do some coding again!

 

Checks on Decision Tables


One very useful feature of decision tables is the check functionality you can execute on them, namely the check for completeness and the check for overlaps. Thinking about our scenario, where the decision table is integrated into a completely different application, we certainly do not want to withhold this functionality from the end user, as the checks add a lot of value. So we add two new methods to the handler class ZCL_BRFPLUSAPI_FLEET that look like this:
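As a minimal sketch, the two method declarations added to the class definition could look like the following; the method names, parameter names and returned types are my assumptions and have to be adapted to your own handler class:

" Overlap check: returns the messages produced by the check
METHODS check_dt_overlaps
  IMPORTING iv_application_id TYPE if_fdt_types=>id
  RETURNING VALUE(rt_message) TYPE if_fdt_types=>t_message.

" Completeness check: returns the proposed gap-filling rows in textual form
METHODS check_dt_completeness
  IMPORTING iv_application_id TYPE if_fdt_types=>id
  RETURNING VALUE(rt_text)    TYPE string_table.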



Within these methods we call the corresponding decision table functionality of the BRF+ API. In both cases this functionality is provided by methods of the interface IF_FDT_DECISION_TABLE_SERVICE.

 

First let us take a look at the check for overlaps and what has to be done in order to trigger it:



It is quite easy to execute the check, as only two steps are needed (a sketch follows after the list):

  1. First we get an instance of the decision table object we created (how this can be done via the application ID is described in part 1 of the blog) and do some casting.

  2. Then we call the CHECK_OVERLAP method of the service interface that does the job for us and returns the corresponding messages in case overlapping conditions exist.
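A minimal sketch of these two steps, assuming the ID of the decision table (LV_DECISION_TABLE_ID) has already been determined as described in part 1; the variable names and the exporting parameter ET_MESSAGE are assumptions:

DATA lo_dt_service TYPE REF TO if_fdt_decision_table_service.
DATA lt_message    TYPE if_fdt_types=>t_message.

" 1. Get the decision table expression via the BRF+ factory and cast it
"    to the service interface.
DATA(lo_factory) = cl_fdt_factory=>if_fdt_factory~get_instance( ).
lo_dt_service ?= lo_factory->get_expression( lv_decision_table_id ).

" 2. Trigger the overlap check; messages are returned if overlapping
"    conditions exist (parameter name assumed).
lo_dt_service->check_overlap( IMPORTING et_message = lt_message ).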


 

The check for completeness is nearly as straightforward, but here we have the option to do it in two ways:

  • We check for completeness and only return the information whether gaps exist, which can be achieved with the method IF_FDT_DECISION_TABLE_SERVICE~CHECK_GAP.

  • We check for completeness and return the information on what has to be done in order to fill the gaps in the table.


 

As the second option is more convenient for the user, we use that one in our implementation:



So the following steps have to be taken (sketched after the list):

  1. As in the other method we first fetch the decision table object via the BRF+ factory and do some casting.

  2. Then we call the method FILL_GAP_ROWS of the interface IF_FDT_DECISION_TABLE_SERVICE to get a proposal on how to fill the gaps. This method internally calls the CHECK_GAP method.

  3. Finally we fetch the result of the gap filling in textual form by calling the method GET_TABLE_DATA_TEXTS.
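A sketch of these steps, again assuming the decision table ID is already known; the parameters of FILL_GAP_ROWS and GET_TABLE_DATA_TEXTS are assumptions, so check the signatures of IF_FDT_DECISION_TABLE_SERVICE in your system:

DATA lo_dt_service TYPE REF TO if_fdt_decision_table_service.
DATA lt_texts      TYPE string_table.

" 1. Fetch the decision table object via the BRF+ factory and cast it.
DATA(lo_factory) = cl_fdt_factory=>if_fdt_factory~get_instance( ).
lo_dt_service ?= lo_factory->get_expression( lv_decision_table_id ).

" 2. Let the service propose the rows that close the gaps
"    (FILL_GAP_ROWS internally calls CHECK_GAP; parameters omitted here).
lo_dt_service->fill_gap_rows( ).

" 3. Retrieve the result of the gap filling in textual form
"    (parameter name assumed).
lo_dt_service->get_table_data_texts( IMPORTING et_text = lt_texts ).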


 

To conclude: we are now able to run these checks on a decision table without using the BRF+ workbench and can therefore integrate them into other UIs.

 

Export/Import Functionality for Decision Tables


Another feature that is very handy when it comes to decision tables is the possibility to maintain them in Excel. As the concrete Excel call depends on the UI technology into which you want to embed the coding, no step-by-step example is given here, but you will see the core methods that transfer the decision table data in a compliant format. The core pieces are the following (a sketch follows after the list):



  1. The central class for the Excel transformation is CL_FDT_DT_EXCEL, so you first have to create an object of it. Within that class you basically have two methods to handle the Excel tasks:

  2. The method CREATE_EXCEL_FROM_DECTAB creates an XSTRING that contains the decision table data.

  3. The method MODIFY_DECTAB_FROM_EXCEL is the counterpart to the prior one and modifies the decision table using the input from the Excel file, i.e. the XSTRING.
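A sketch of how the two methods could be used; the constructor call and the parameter names (IV_ID, EV_EXCEL, IV_EXCEL) are assumptions and have to be checked against the class CL_FDT_DT_EXCEL in your system:

DATA lv_excel TYPE xstring.

" Create the transformation object for the decision table.
DATA(lo_excel) = NEW cl_fdt_dt_excel( ).

" Export: build an XSTRING with the decision table content, e.g. to offer
" it to the user as a download.
lo_excel->create_excel_from_dectab(
  EXPORTING iv_id    = lv_decision_table_id
  IMPORTING ev_excel = lv_excel ).

" Import: write the (modified) Excel content back into the decision table.
lo_excel->modify_dectab_from_excel(
  EXPORTING iv_id    = lv_decision_table_id
            iv_excel = lv_excel ).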


 

If you want to see how the Excel call is integrated in Web Dynpro ABAP, take a look at the component FDT_WD_DECISION_TABLE in the package SFDT_WD_EXPRESSIONS, where the methods shown above are used for the Excel integration of the BRF+ workbench.

 

Process the Function


The coding for the processing of the function can be retrieved as usual via the code template functionality of BRF+. So you should create one “dummy” function for the scenario that we used here and then create the template using the BRF+ workbench or, alternatively, start the report FDT_TEMPLATE_FUNCTION_PROCESS. After retrieving the coding you can delete the dummy function again and implement the call in the backend based on the sample coding. The first thing that has to be considered is that the ID of the function is not fixed (as in the coding template) but has to be handed into the method of the handler as an importing parameter, as shown in the following screenshot:
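A condensed sketch of what the handler method could look like, following the pattern of the generated template; the context name 'CARTYPE' and the importing/exporting parameters of the handler method are purely illustrative assumptions:

METHOD process_function.
  " Assumed signature: IMPORTING iv_function_id TYPE if_fdt_types=>id,
  "                              iv_cartype (context value)
  "                    EXPORTING ev_result (result value)
  DATA: lo_factory  TYPE REF TO if_fdt_factory,
        lo_function TYPE REF TO if_fdt_function,
        lo_context  TYPE REF TO if_fdt_context,
        lo_result   TYPE REF TO if_fdt_result.

  " Get the function via the factory using the ID handed in as parameter.
  lo_factory  = cl_fdt_factory=>if_fdt_factory~get_instance( ).
  lo_function = lo_factory->get_function( iv_function_id ).

  " Fill the context with the input values of the caller.
  lo_context = lo_function->get_process_context( ).
  lo_context->set_value( iv_name = 'CARTYPE' ia_value = iv_cartype ).

  " Process the function and read the result.
  lo_function->process( EXPORTING io_context = lo_context
                        IMPORTING eo_result  = lo_result ).
  lo_result->get_value( IMPORTING ea_value = ev_result ).
ENDMETHOD.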



 

The second point that has to be thought about is the handling of traces. Whether you need or want traces in your scenario depends on your specific requirements, so no sample solution can be given. Whatever you decide, the code template can be generated in a manner that supports traces (more information on traces can be found e.g. in the document Tracing in SAP Decision Service Management).

 

No further tasks are necessary to expose that functionality, unless you want to rewrite the proposed code template in ABAP 7.40 style:
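For illustration, a sketch of the same core calls condensed with ABAP 7.40 inline declarations (again using the illustrative context name 'CARTYPE' and illustrative parameters):

" Get the function, fill the context, process and read the result.
DATA(lo_function) = cl_fdt_factory=>if_fdt_factory~get_instance( )->get_function( iv_function_id ).
DATA(lo_context)  = lo_function->get_process_context( ).
lo_context->set_value( iv_name = 'CARTYPE' ia_value = iv_cartype ).
lo_function->process( EXPORTING io_context = lo_context
                      IMPORTING eo_result  = DATA(lo_result) ).
lo_result->get_value( IMPORTING ea_value = ev_result ).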



But this is indeed an optional task :cool:  

 

Enforce Code Generation


As additional functionality we want to implement a method that enforces the code generation of the BRF+ function. Now you might ask why one should do that, as code generation is triggered automatically when the function is called for the first time (the very first execution of the function runs in interpretation mode). This can cause issues when the BRF+ function is called for the first time from parallel processes, which might be the case in our scenario. So to be on the safe side, and to get the optimal performance of the function call right from the start, we add the following method to the BRF+ API handler class:



 

The method contains a call of the function module FDT_CC_GENERATE_FUNCTION, which triggers the code generation:
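A sketch of such a method body; the parameter name ITS_FUNCTION_ID and the ID table type are assumptions, so check the signature of FDT_CC_GENERATE_FUNCTION in your system:

" Collect the ID(s) of the function(s) to be generated.
DATA lts_function_id TYPE if_fdt_types=>ts_object_id.
INSERT iv_function_id INTO TABLE lts_function_id.

" Trigger the code generation for the function(s).
CALL FUNCTION 'FDT_CC_GENERATE_FUNCTION'
  EXPORTING
    its_function_id = lts_function_id.

" Explicit commit - see the remark below about moving it when integrating
" this into an application with its own commit handling.
COMMIT WORK.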



 

Please be aware that the method here contains an explicit COMMIT WORK that might have to be moved to another place when integrating it into the master policy application, in order not to interfere with the commit handling implemented there.

 

Deletion of Applications


Up to now we have implemented the creation of the BRF+ objects, the update of the data in the BRF+ objects and some read functionality (plus service functionality like the check for overlaps). So to complete the CRUD paradigm we add a delete method to our BRF+ API handler class:



 

The deletion functionality is already encapsulated in the class CL_FDT_DELETE_HANDLING. In analogy to the process in the BRF+ workbench, the deletion of the objects via the BRF+ API consists of three steps (a sketch of the calls follows after the list):

  1. First we mark the complete application with its sub-objects for deletion by calling the method MARK_FOR_DELETE_VIA_JOB as shown in the green colored area. A prerequisite is to fill the importing parameter for the selection appropriately, as shown in the red colored area:

  2. Then we logically delete the jobs using the method DELETE_LOGICAL_VIA_JOB:

  3. The last step is the physical deletion of the objects that we trigger via the method DELETE_PHYSICAL_VIA_JOB:
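A structural sketch of the three calls, assuming the methods can be called statically; all parameter names used here are assumptions and have to be checked against the actual signatures of CL_FDT_DELETE_HANDLING in your system, and error handling is omitted:

" ID of the application to be deleted (its sub-objects are included).
DATA lts_object_id TYPE if_fdt_types=>ts_object_id.
INSERT iv_application_id INTO TABLE lts_object_id.

DATA lt_message TYPE if_fdt_types=>t_message.

" Selection restricting the deletion to master data objects; fill the
" structure according to the parameter LS_OBJECT_CATEGORY_SEL shown above.
DATA ls_object_category_sel TYPE if_fdt_query=>s_object_category_sel.

" 1. Mark the application and its sub-objects for deletion.
cl_fdt_delete_handling=>mark_for_delete_via_job(
  EXPORTING its_object_id          = lts_object_id
            is_object_category_sel = ls_object_category_sel
  IMPORTING et_message             = lt_message ).

" 2. Logically delete the marked objects.
cl_fdt_delete_handling=>delete_logical_via_job(
  EXPORTING its_object_id = lts_object_id
  IMPORTING et_message    = lt_message ).

" 3. Physically delete the objects.
cl_fdt_delete_handling=>delete_physical_via_job(
  EXPORTING its_object_id = lts_object_id
  IMPORTING et_message    = lt_message ).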


As usual, the error handling has to be adapted to your specific needs.

 

The parameters handed over to the three calls of the class CL_FDT_DELETE_HANDLING are restricted to a master data application (visible in the very first screenshot when taking a look at the parameter LS_OBJECT_CATEGORY_SEL). So in case you use the API for the other storage types, the concrete values have to be adapted.

 

One further hint for when you play around with this functionality, as I came across it several times, especially when testing (it was some kind of Homer Simpson like experience … d'oh): the methods above check whether there is a lock on the objects that are to be deleted. So if you execute the coding while you have the object open in the workbench in edit mode (which is the default if you personalized your workbench as expert), the methods will return an error stating that the objects are locked. Do not ask how often that happened to me when I played around with the parameters of the methods because I wanted to check whether the objects really got deleted :razz:

 

Additional Information


As already mentioned, this blog is intended to introduce you to the possibilities of the BRF+ API and is far from describing all the options the BRF+ API offers. So if you want to dive deeper into the topic, a very good description is given in the book BRFplus – The Book by carsten.ziegler and thomas.albrecht.

 

There is also plenty of information directly accessible in your system:

  • Many demo reports that show the usage of the API in different scenarios are located in the package SFDT_DEMO_OBJECTS:

  • Another source of knowledge is available via the reports that are grouped in the transaction FDT_HELPERS. Here you will certainly find useful coding snippets for your work with the BRF+ API:



 

Conclusion


Concluding this two-part blog, I hope the information described in it gave you some insight into the BRF+ API and supports you in having a smooth start with the generation of BRF+ objects.

 

As soon as you start to work with the API you will certainly notice that its design is very well elaborated: after getting used to it you will find the methods you need where you would expect them, i.e. everything is very intuitive, which is unfortunately not the case for all APIs out there. Many thanks to the BRF+ team for that good job!

 

P.S. You certainly have become aware that the coding was done in Eclipse using the ABAP Development Tools. Having worked with these tools for some time now, I highly recommend using them as they will speed up your work. So I would like to additionally state (although this is not part of the ADT area in SCN) that the team around thomasfiedler did a great job on those tools (and will hopefully continue to do so, but I have no doubt about that).

 

So the BRF+ API together with ABAP in Eclipse is indeed developing like never before :grin:
