henry.nordstrm


h2. What is DI Construction Kit?

 

DI Construction Kit is a versatile, lightweight open source toolkit that aims to make integration and migration tasks easier in a SAP B1 environment. In addition to integration and data migration/update scenarios, it can even be used as a building block in custom application development.

This is the first in a series of blogs explaining the purpose and use of DI Construction Kit. This blog serves as an introduction to the tool, and the following blogs will dive more deeply into the details of its use.

 

You can download DI Construction Kit from here (http://tinyurl.com/4pcsccr).

 

 

A related article about configuring DI Construction Kit can be found in the SDN B1 Articles section.

h2. Why yet another integration tool?

In my opinion, the B1 market has been lacking an integration tool that has the following properties:

    • Fits to the budget of SMBs (no expensive license costs, simple scenarios must be fast to implement)

    • Enables doing small changes in the configuration quickly and easily

    • Is extensible so that the feature set of the toolkit does not become a bottleneck

    • Can execute jobs both from the command line (enabling scheduled and event-driven operation) and from a “traditional” visual UI.

    • Supports code reuse and transfer of knowledge in the form of “packaged” sample solutions

    • Comes with source code so that the tool can be extended, debugged and self-supported if needed.

 

The above is what DI Construction Kit is all about. There is some overlap with Data Transfer Workbench and Copy Express, as well as with the Electronic File Manager that is to be delivered with B1 8.8.1. There is also some overlap with the B1i integration toolkit. However, DI Construction Kit takes a very different approach to integration. There are lots of cases where the mentioned tools either cannot be used at all or have serious limitations.

 

There are also commercial 3rd party integration solutions for B1 (such as iBOLT), but at least the ones I know are simply too costly for most B1 customers. 

h3. Where does DI Construction Kit come from?

The history of DI Construction Kit dates back to 2003, when I started working in SAP B1 integration and implementation projects. Back then, I realized that most of the code we wrote was “plumbing” code. A lot of it was almost identical from project to project, as a result of heavy copy-paste operations. I had the wish that we could separate the most repeated parts of those one-off solutions and distill them into a common platform. This would enable us to concentrate on the actual “business logic” part of the implementation in each project. I also wanted to be able to do simple but significant changes via configuration, without having to write any code at all. These configurable elements would include such things as connection parameters, path information of files to read/write, character set information, SQL queries used for selection etc.


h3. The dream

The “execution engine” would only need to be installed once on the customer site. Any changes could either be done directly in the customer’s system, or prepared and tested offsite and then simply copied to the customer site. Anyone who’s ever done a few rounds of the develop-build-test-deploy cycle with “normal” development tools can imagine what this would do to your productivity.

Having the business logic defined separately would result in the essence of each project being squeezed into a very small amount of compact code that is easy to read. This would not only make the solution easier to maintain and reuse, but it would also help to see the purpose and logic of the solution directly from the code. Another benefit of separating the plumbing code from the business logic would be that even when there’s a new B1 version (with a new DI API), you would only need to recompile the execution engine and deploy it, instead of having to recompile each of the projects separately. There would be no need to do anything to the configuration files that define the business logic.

h3. First steps towards the goal

So much for wishful thinking. In 2004, electronic invoicing was a hot topic in Finland. The local banks had recently agreed on a common invoicing message specification and were pushing this standard as a replacement for the vendor-specific, proprietary “standards”. Being in need of a fun and meaningful project for my Master’s thesis in Computer Science, I saw my chance to kill two birds with one stone: get my thesis done and put the fantasy of a common integration platform into practice. During the summer, I was in contact with SAP Finland and we were able to strike a deal with them: SAP would pay us for implementing a Finvoice export interface that would be made available to all the B1 customers in Finland.

After some hectic hacking, the first official version of DocEngine saw the light of day on December 15, 2004. Compared with the DI Construction Kit of today, the first-generation DocEngine was very limited. Still, the most essential parts were already there. There was the export feature based on the NVelocity template engine, which enabled you to separate the static and dynamic parts of a document. There was even a limited import feature that used an XML-based instruction syntax to express the import commands (see the sample below), and a “conversion” feature that allowed you to call predefined SQL queries for value translation.
<pre>
<Import>
  <header>
    <Instruction>
      <memberName>CardCode</memberName>
      <memberDataType>string</memberDataType>
      <xPath>SellerPartyDetails/SellerPartyIdentifier</xPath>
      <conversion>Supplier_CardCode_from_TaxID</conversion>
      <mandatory>true</mandatory>
    </Instruction>
    <Instruction>
      <memberName>DocDueDate</memberName>
      <memberDataType>date</memberDataType>
      <xPath>InvoiceDetails/PaymentTermsDetails/InvoiceDueDate</xPath>
      <mandatory>true</mandatory>
    </Instruction>
    <Instruction>
      <memberName>Comments</memberName>
      <memberDataType>string</memberDataType>
      <xPath>SellerInformationDetails/SellerWebaddressIdentifier</xPath>
      <mandatory>false</mandatory>
    </Instruction>
  </header>
  <rows>
    <rowPath>InvoiceRow</rowPath>
    <Instruction>
      <memberName>ItemCode</memberName>
      <memberDataType>string</memberDataType>
      <xPath>BuyerArticleIdentifier</xPath>
      <mandatory>true</mandatory>
    </Instruction>
    <Instruction>
      <memberName>Price</memberName>
      <memberDataType>double</memberDataType>
      <xPath>UnitPriceAmount</xPath>
      <mandatory>true</mandatory>
    </Instruction>
    <Instruction>
      <memberName>Quantity</memberName>
      <memberDataType>double</memberDataType>
      <xPath>DeliveredQuantity</xPath>
      <mandatory>true</mandatory>
    </Instruction>
  </rows>
</Import>
</pre>

The connection parameters, path information, selection queries and SQL transformation functions were stored in a configuration file, so that you could change the behavior of the application without modifying application code.
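To illustrate the idea behind the instruction syntax, here is a minimal Python sketch of a hypothetical mini-interpreter (not part of DocEngine itself) that applies the header instructions to a Finvoice-style XML document. The invoice content and the simplified instruction file are invented for the example, and the `<conversion>` lookup step is omitted:

```python
import xml.etree.ElementTree as ET

# Hypothetical interpreter for the <Import> instruction syntax shown above.
# It reads each <Instruction>, evaluates its xPath against the source document
# and collects the values into a plain dict (instead of a DI API object).
CASTS = {"string": str, "double": float, "date": str}  # dates kept as text here

def apply_instructions(instructions_xml: str, source_xml: str) -> dict:
    instructions = ET.fromstring(instructions_xml)
    source = ET.fromstring(source_xml)
    header = {}
    for instr in instructions.find("header").findall("Instruction"):
        member = instr.findtext("memberName")
        xpath = instr.findtext("xPath")
        cast = CASTS[instr.findtext("memberDataType")]
        node = source.find(xpath)
        if node is None:
            if instr.findtext("mandatory") == "true":
                raise ValueError(f"Mandatory field {member} not found at {xpath}")
            continue
        header[member] = cast(node.text)
    return header

invoice = """<Finvoice>
  <SellerPartyDetails><SellerPartyIdentifier>FI12345678</SellerPartyIdentifier></SellerPartyDetails>
  <InvoiceDetails><PaymentTermsDetails><InvoiceDueDate>2011-01-31</InvoiceDueDate></PaymentTermsDetails></InvoiceDetails>
</Finvoice>"""

instructions = """<Import><header>
  <Instruction><memberName>CardCode</memberName><memberDataType>string</memberDataType>
    <xPath>SellerPartyDetails/SellerPartyIdentifier</xPath><mandatory>true</mandatory></Instruction>
  <Instruction><memberName>DocDueDate</memberName><memberDataType>date</memberDataType>
    <xPath>InvoiceDetails/PaymentTermsDetails/InvoiceDueDate</xPath><mandatory>true</mandatory></Instruction>
</header></Import>"""

print(apply_instructions(instructions, invoice))
```

The real DocEngine fed the collected values into DI API objects rather than a dict, but the declarative member/xPath/datatype mapping worked along these lines.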
At that time, the tool could only export and import DI API objects of type “SAPbobsCOM.Documents” – in other words, sales and purchase documents. Hence the name “DocEngine” seemed appropriate.

Compared with today, perhaps the most important missing piece in DocEngine was that there was no real support for scripting. You had to get by with the limited – although Turing-complete – programming language features of NVelocity templates for export. The import feature was even more limited, with nothing but the XML-based “language” for imports. The XML instruction code was very painful to use, as you had to write a whole lot of “code” to do simple things. In this sense, it was much like working with XSLT transformations. An additional burden was that there was no support for proper looping, branching or conditional operations.

Later on, I also experimented with using the NVelocity template language for imports, ending up with syntax such as:

<pre>
$doc.set("header","NumAtCard",$imp,"MsgHeader/MsgNumber","string")
#set($header=$imp.SelectSingleNode("MsgHeader"))
#foreach($date in $header.selectNodes("MsgDate"))
#set($attr=$util.readAttribute($date,".","DateQualifier"))
#if($attr == "4")
$doc.set("header","DocDate",$date,".","date")
#elseif($attr == "2")
$doc.set("header","DocDueDate",$date,".","date")
#end
#end
#set($freetext = $header.selectNodes("Freetext"))
</pre>

While this was a big step ahead from the XML-based “language”, I was still not happy with the power of expression that it provided.

h3. Checking out the alternatives

Somewhere around 2005, I needed to implement a rather demanding XML-oriented EDI solution for SAP B1. As part of the preparations for this project, I attended the course TB1400 (SAP B1 Integration Toolkit, aka ITK) at SAP headquarters in Walldorf, Germany. I wanted to know what integration solutions SAP provided for B1 and whether they would be of any benefit in my project: I did not want to reinvent the wheel.

Being totally R/3-centered – not to mention outrageously confusing – the Integration Toolkit did not prove to be of any help in the kinds of scenarios I was dealing with. On the other hand, I was still happy to have attended the workshop. It helped me understand that I should not stand around waiting for a decent integration solution from SAP. Instead, I should go ahead and build it myself. Had I decided to wait, I’m afraid I would still be waiting.

h3. Scripting done right

I kept being haunted by the idea of a scripting solution that would let me combine and harness the full power of the .NET Framework and the DI API. Luckily, it turned out that there was a runtime compilation feature in .NET – just what I needed. Once the runtime compilation feature was embedded in DocEngine, I was finally able to do what I had intended in the first place: express business logic with compact scripts that run on a platform under my own control.

h3. Maturing up

The EDI project proved to be a fantastic test bench for the DocEngine concept. Needless to say, a lot of new features were added during that project. Overall, I had the feeling that DocEngine was making my work easier and much more fun. Whenever I faced a need for a new feature, I could simply add it myself. I have always hated those black-box solutions that are hard to debug and impossible to fix. I believe most developers would agree.

Since then, the changes in DocEngine have become somewhat less dramatic. More and more projects were implemented without touching the source code of DocEngine, which proved that the concept was working. However, I kept refactoring and adding new features every time I realized I needed one, simultaneously trying to make these new features as flexible and open-ended as possible. In addition to incremental changes, I also made a couple of more dramatic rounds of refactoring: I basically broke it all to pieces and put it back together in a more meaningful and better-organized constellation.

h3. Going public

In 2007, the full source code of the toolkit was handed out to all the Finnish B1 resellers. Very little came out of this publication, apart from a copycat Finvoice implementation (which I appreciate a lot, as imitation is the sincerest form of flattery). Back then, I was already planning to release the tool on the SDN, but SAP had some legal worries about publishing 3rd party binaries and the whole thing lost momentum (the funny thing is that they did not have any such worries about DI Commander, which is still available for download from the SDN). Since then, I always felt that before going public, I needed to add this feature, fix that part of the code, write some more documentation, prepare more sample solutions - the list goes on and on.

In late 2010, I attended another SAP workshop, this time about the B1i integration framework. I had very little previous knowledge of B1i before the course and I wanted to see what it was all about. Live and learn: B1i was essentially a polished-up version of the “good old” Integration Toolkit. This revelation gave me the final boost I needed to go and publish DI Construction Kit.

During the workshop, I was kindly given the opportunity to demonstrate DI Construction Kit to the other participants, and their feedback was very positive. I also noticed that some of the participants had come to the same conclusion as I had before: they had started building their own little “DocEngines”.

I came to the conclusion that even with all the flaws and missing pieces in DI Construction Kit, it is still a nice little toolkit that deserves to be published. With all the effort I put into it, it would have been a shame not to share it.

h3. Was it all worth it?


 

 

I haven’t really kept track of the hours I’ve spent developing and using DI Construction Kit, but I’m sure it’s a four-digit figure.

The aim of the tool was to make my own job easier and more fun, which it did indeed. Additionally, it has also made it possible for me to implement demanding projects within a very short timeframe. How about implementing a fully working group consolidation scenario in half a day? An import interface from a salary system in a couple of hours? I’ve been able to offer my customers working solutions with a very attractive price tag and delivery time. Many of these solutions would’ve never seen daylight if I hadn’t had this tool in my back pocket. Perhaps one of the main strengths of DI Construction Kit is that it’s been built and enhanced by someone who’s actually been using it in real-life scenarios. Eat your own medicine!

My sincere hope is that you will download the tool, experiment and tinker with it, and hopefully find it useful. Whichever way it turns out, I would like to hear what you think about it.

h2. How can DI Construction Kit be used?

There are three distinctive ways in which you can use DI Construction Kit:

1. As an interactive Windows Forms application
2. As a commandline tool that can be executed in a scheduled or event-driven manner
3. As a DLL library that you can embed in your own applications

h3. Interactive tool

The most typical way to use DI Construction Kit is via the Windows client. You can use the client to execute predefined jobs that either import or export data to/from SAP B1. Additionally, the Windows client includes a tool called Python Console, which essentially replaces the DI Commander tool that was released in 2007.

!https://weblogs.sdn.sap.com/weblogs/images/32520/login_winclient.jpg|height=450|alt=DI Construction Kit Winclient|width=595!

h3. Commandline tool


 

 

 

Using the commandline tool, you can execute the same jobs as with the Windows client. However, the commandline tool makes it easy to define scheduled or event-based batch jobs that run in the background.

!https://weblogs.sdn.sap.com/weblogs/images/32520/commandline.jpg|height=168|alt=image|width=557!

Introduction: The reporting tools available for SAP Business One

Reporting is a fundamental element of any ERP system. Regarding SAP Business One, there are a number of different reporting alternatives available, such as:

  • The standard built-in reports inside the SAP Business One client
  • The SQL query tools inside the SAP Business One client
  • XL Reporter
  • A large number of 3rd party solutions provided by SAP partners

Lately, there's been a lot of discussion about SAP buying Business Objects and the potential that Crystal Reports would bring to SAP Business One customers. With all these tools, one might wonder if they all are really needed and/or if it makes any sense to keep adding new solutions to the list.

Built-in Standard reports

Surely, it makes sense to have certain essential reports (such as the profit-and-loss statement and balance reports) built into the system so that they can be used even in an out-of-the-box installation. The nice thing about these reports is that most of them support selection criteria such as cost center, project and item/business partner properties. Also, the layouts of these reports can be modified, at least to a certain extent.

The trouble with the built-in standard reports is that they cannot really be extended. Quite often you are faced with a situation where the standard report provides 90% of what you need, but there's no way to get the missing 10% included in the report. As a workaround, you can of course export the standard report to Excel and then use your favorite hack to add the missing parts there. If you don't like workarounds and manually executed tricks, though, you need to roll your own reports.

User queries

User queries are great. Not only do they allow you to create your own parameterized reports, but they also give you access to many of the features provided by the SQL Server database engine (such as database views, user functions, temporary tables, T-SQL queries, queries to non-B1 databases and tables, etc.). You even have the nice orange drill-in arrows that let you open masterdata objects or documents directly from the query results with a single click. It's also great to be able to export the results to Excel for further processing, or to build a layout for your query with the Print Layout Designer or Advanced Layout Designer.

Then again, there are also some problems with user queries. The menus through which you need to navigate to the reports are not very user-friendly to start with (they could've at least added a possibility to assign a user report directly to a function key). Over time, as the number of queries in the system grows, you will notice that it becomes harder and harder to find the query you're looking for. The fact that sometimes you need to define multiple queries that do almost the same thing (instead of being able to make one flexible query that the user could control via parameters) only makes this phenomenon worse.
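The point about one flexible query versus many near-identical ones can be sketched with plain SQL via Python's sqlite3 module. This is an illustration of the general idea rather than B1 query syntax, and the table and column names are invented:

```python
import sqlite3

# One parameterized query replaces a family of hard-coded variants.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Orders (CardCode TEXT, DocTotal REAL, DocDate TEXT)")
con.executemany("INSERT INTO Orders VALUES (?, ?, ?)", [
    ("C0001", 100.0, "2011-01-10"),
    ("C0001", 250.0, "2011-02-05"),
    ("C0002",  80.0, "2011-01-20"),
])

def orders_for(card_code, date_from, date_to):
    # A single query serves every customer and date range; without
    # parameters you would need a separate query per combination.
    return con.execute(
        "SELECT CardCode, SUM(DocTotal) FROM Orders "
        "WHERE CardCode = ? AND DocDate BETWEEN ? AND ? GROUP BY CardCode",
        (card_code, date_from, date_to),
    ).fetchall()

print(orders_for("C0001", "2011-01-01", "2011-12-31"))  # [('C0001', 350.0)]
```

In B1 user queries the equivalent mechanism is the `[%0]`, `[%1]` parameter placeholders; the weakness discussed above is that not every selection need can be expressed through them.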

There are also some technical issues. Sometimes certain queries might stop working after a version update because they're rendered differently. Also - especially when supplying datetime values as query parameters - you might find that the results of a query contain records that shouldn't be there, or vice versa. Last but not least, you need a SAP B1 client to use the reports. If a user only needs the reports, B1 is a pretty expensive report viewer.

XL Reporter

Well, then there's the beloved XL Reporter (XLR), invented by those wild Norsemen at iLytix. XLR looks really good - both on paper and in PowerPoint. The pre-defined report set even looks pretty good in a working demo system (if you manage to get it up and running, that is). Because of its good looks, XLR is an ultimate promiseware application. If you're a slick salesman and your bonus is only dependent on the closing of the contract, you might get away with promising "sure, we can do that with XLR". However, even for the demo, it's best to stick to PowerPoint slides.

Especially the earlier versions of XL Reporter had severe problems related to the installation and to the stability of the whole application (certainly, this is at least partly due to the UI API). It took a lot of effort to get it installed and then to keep it up and running. This is not exactly what you would expect from a world-class reporting solution. However, these were just minor nuisances compared with the fact that the whole concept of XL Reporter is fundamentally flawed. It is a textbook example of how not to build an effective reporting solution (see the section on performance below).

XLR looks deceivingly simple and powerful. Because it is based on MS Excel, it has instant appeal for people with accounting/financial/managerial background (if you're educated to use a hammer, everything starts to look like a nail). In reality, XLR is neither simple nor powerful. Whatever your reporting needs are, there are better alternatives available. You have been warned.

Crystal Reports

While it is rather obvious that SAP didn't buy Business Objects in order to bundle Crystal Reports with B1, it seems that things are really starting to happen on this front as well. Crystal Reports is a pretty decent reporting tool/layout editor that many 3rd party add-ons are already using as their reporting component. Personally, I prefer Perpetuumsoft's Report SharpShooter for doing layouts (especially because their solution allows the end users to modify report layouts without purchasing a license, unlike Crystal Reports), but that's a matter of taste. I see the main potential of Crystal Reports in the SAP B1 context as a replacement for the horrible Print Layout Designer. Surely, it can also be used to produce really nice reports. If the alternative were XLR, I would choose Crystal Reports without a second thought.

What about performance?

The most important real-life lesson I've ever learned about reporting is that the number-crunching must be done as close to the data as possible. In order to produce a meaningful report, you normally need to join records from several tables. The end result might be just 50 records, but the records involved in the retrieval process with complex joins in a production database can easily add up to hundreds of thousands or millions of records. It's not so hard to imagine what the bottlenecks might be if you loaded all that data on the client. Together with badly designed queries, you will not be saved from awful performance no matter how much you invest in hardware. Obviously, you would wish to do the joining and summarizing on the server.

Among the tools discussed above, XL Reporter is clearly the worst and most obvious example of poor-performance-by-design. In fact, all the tools mentioned above are based on a traditional client-server (fat client) model. In each of these reporting solutions, at least the report rendering, but also a lot of the actual data processing, is done by the client. This really doesn't matter if you're just playing around with a demo database that has a few thousand records in it. Once you're working with a real live database that's been accumulating data for a while, it makes a huge difference.
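The difference between crunching the numbers near the data and in a fat client can be sketched with Python's sqlite3 module. The table and column names are invented for the illustration; the point is how many rows have to travel to the client for the same answer:

```python
import sqlite3

# Contrast server-side aggregation with fat-client processing.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE InvoiceLines (ItemGroup TEXT, LineTotal REAL)")
con.executemany("INSERT INTO InvoiceLines VALUES (?, ?)",
                [("Screws", 1.0)] * 100000 + [("Nails", 2.0)] * 100000)

# Server-side: the database engine summarizes and returns just 2 rows.
summary = con.execute(
    "SELECT ItemGroup, SUM(LineTotal) FROM InvoiceLines GROUP BY ItemGroup"
).fetchall()

# Fat-client style: fetch all 200,000 rows and aggregate them locally.
rows = con.execute("SELECT ItemGroup, LineTotal FROM InvoiceLines").fetchall()
client_summary = {}
for group, total in rows:
    client_summary[group] = client_summary.get(group, 0.0) + total

print(len(summary))  # 2 rows travelled to the client
print(len(rows))     # 200000 rows travelled for the same answer
```

With real production joins across several tables, the fat-client variant additionally pays for shipping and joining the intermediate rows, which is exactly where the tools above fall down.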

What's wrong with Excel as a reporting tool?

Above, I referred to Excel in a somewhat negative sense. This doesn't mean that there's anything wrong with the product per se. Quite the opposite - for many purposes it is a fantastic tool. However, it is perhaps the most abused piece of software. As with any tool, MS Excel is no silver bullet or panacea that can be successfully applied to any problem. I see Excel as "very smart paper". When it comes to calculations, it's certainly light years ahead of traditional paper. With a bit of knowledge of the formula language, you can perform small wonders. However, there are some tasks I would never do with Excel. One of them is word processing and the other is database-like transaction processing. Lately, I participated in a one-day "advanced" Excel course directed at financial managers. As a "real-life" example, the trainer showed us how to build a simple invoicing system based on Excel, with invoice layouts and everything ;-) I was not impressed.

Excel is great in the final stages of the reporting cycle, especially for distributing pre-prepared reports and doing simple one-time analysis tasks on the prepared data. It is also a worthwhile user interface for SQL Server Analysis Services. However, it doesn't score quite so well at rendering reports from raw data (as done in XL Reporter). If you try to use Excel for collecting the data used for serious reporting, you're even worse off. If you're serious about being a professional (whether you work in the IT or the business domain), for these scenarios you simply need a database. Period.

Summary

There are already many reporting tools available for SAP Business One, and some of them are rather good. However, even with all these tools, there is still an obvious demand for a tool with the following properties:

  • It must be easy to develop new reports and modify existing reports
  • The tool must not be limited by the object model of DI API (this is a major handicap of PLD and ALD).
  • The tool must be robust and reliable
  • The tool should be compatible with the technical infrastructure of SAP B1 (Windows operating system, MS SQL Server etc.)
  • The license should be relatively inexpensive
  • It should be possible to view the reports without the B1 client, preferably via a web browser.
  • It should be possible to easily export the reports into Excel.
  • The installation and configuration must be simple (something that can be done in a day).
  • We should be able to trust that the tool will be supported also in the foreseeable future.
  • Last but not least, the system should not have architectural issues that spoil the performance.

These are the elements that I would require from an excellent reporting solution. This certainly isn't an easy wishlist to fulfil. XLR surely wasn't the answer, and I'm afraid that neither is Crystal Reports. Doesn't it make you wonder if there even is such a product that would fulfil all of these requirements...?

Enter the SQL Server 2005 Business Intelligence stack

Along with MS SQL Server 2005, Microsoft published the Business Intelligence (BI) stack, which consists of three main components: Reporting Services (SSRS), Integration Services (SSIS) and Analysis Services (SSAS). SSRS and SSAS were already released as free components in SQL Server 2000. However, they were totally revamped in MSSQL 2005. SSIS is also a complete rewrite of an ancient ETL tool that was known as the Data Transformation Service (DTS) in MSSQL 2000. Only this time it's much more flexible and powerful.

It seems that the norm with Microsoft products is that the first release is a bit clumsy and limited, but then they quickly improve from version to version. The MSSQL 2005 BI stack is already a very impressive set of tools. Considering that SQL Server remains the de facto database platform for SAP Business One, it's truly amazing how silent the SAP Business One community has been about SSRS. I initially got interested in SSRS when a couple of my most enlightened clients first evaluated XLR (unsurprisingly, they were totally disappointed with it) and then soon opted for SSRS instead. After getting to know SSRS, I started to realize what a hidden gem it is.

While the other elements of the BI Stack are equally interesting, in this blog I will concentrate on SSRS.

The benefits of SSRS in a B1 context

Flexible delivery of reports

SSRS integrates with Internet Information Services (IIS), and thus the main user interface for SSRS is the web browser. In addition to viewing the reports with a web browser, you can also export them to a multitude of formats: MS Excel (SSRS renders drill-in reports beautifully into pivot tables in Excel), PDF, CSV, XML, etc. Additionally, you may opt to have the reports sent to selected persons or stored in a shared folder - all of this can of course be scheduled and automated.

Great performance

All the SSRS reports are rendered on the server. Compared with any of the B1 reporting tools mentioned above, the rendering process of SSRS is lightning-fast. If you have lots of users or are running very heavy reports, you can also use the excellent caching and scheduling features of SSRS to make it even faster.

Independent of B1

For some people, it might seem a handicap that the reports are not loaded from inside SAP B1. Personally, I see this as a virtue. The concept of embedded add-ons sounds good, but in practice the UI API add-ons have proven unreliable and clumsy. Even though the deployment procedure of add-ons to clients is sort of semi-automated, there is nevertheless a deployment needed. In proportion to the security level of your IT infrastructure, this might be a somewhat small problem or a major headache. Also, there are lots of users who don't really need the B1 client but just need to access the reports. It does not make much sense to buy an expensive B1 license for such users just to see the reports. Perhaps the only thing I'm missing in SSRS from inside the B1 client are the orange drill-in arrows.

Appealing and easy-to-navigate user interface

SSRS allows you to organize your reports in hierarchical folders. You can give a meaningful name to each folder and report, and additionally specify a descriptive text for each. SSRS also contains a search function that helps you find the report you are looking for (it searches both the names and descriptions of the reports).

Integrated authentication and flexible user rights management

As SSRS runs on IIS, it can be configured so that the user is automatically authenticated with her Windows credentials. The user rights can be set separately for each folder and report. Each user will only see those folders and reports that she is allowed to access.

Inexpensive

If you have a license for Standard or Enterprise Edition of SQL Server 2005, you are automatically entitled to use all the components of the BI Stack. Unfortunately, if you bought the SQL license from SAP along with your B1 licenses, that does not entitle you to use SSRS. In that case, you would have to buy additional licenses from Microsoft. Even then, the price of the SQL Server license is pretty good value for money compared with many reporting solutions that offer similar features.

Simple and powerful development environment

The SSRS reports are developed in BI Development Studio, which is actually a stripped-down version of Visual Studio 2005. Visual Studio is a familiar environment to most people who have done any development with the B1 SDK. It is also immediately recognizable to anyone familiar with MS Access. My first impression of the BI Development Studio was "they've taken MS Access and put it in a web browser". The development process of a report is very straightforward and intuitive:

  1. Define a dataset, which basically means one or several SQL queries (to be precise, the queries could also be multidimensional MDX queries, but that's another story).
  2. Define a layout, which can either be a table/matrix (in SSRS 2008, they actually have a hybrid called tablix ;-) and/or one or several charts. While developing the report, you can preview it whenever you like.
  3. Once you're satisfied with the report, you simply deploy it to the server with a single click. Then, you can access the server via a browser and either view reports or modify their properties, such as user rights assignments. Redeployment of a modified report is just as easy - just two mouse clicks away.

Interactivity

As the main delivery channel of SSRS is IIS, there's a hidden bonus: you can attach dynamic URLs to any cell in the report. This means that you could, for instance, take the docentry of any single record in your result set and generate a URL pointing to a completely different web page (such as an HTML preview page of an invoice or order).
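The shape of such a drill-through URL is easy to sketch. The server name and preview page below are invented; in the report itself you would attach the equivalent expression to the cell's navigation action rather than write Python:

```python
# Hypothetical drill-through URL built from a record's docentry.
# BASE is an invented address standing in for a real preview page.
BASE = "http://myserver/InvoicePreview.aspx"

def drill_url(docentry: int) -> str:
    return f"{BASE}?docentry={docentry}"

print(drill_url(1042))  # http://myserver/InvoicePreview.aspx?docentry=1042
```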

In fact, my plan is to combine certain SSRS reports with an ASP.NET application that uses DI Server for modifying the data inside SAP B1. As a large part of what users actually do in SAP B1 is browsing and searching through documents and masterdata, SSRS would basically provide us with half of the functionality needed in a completely web-enabled SAP B1 solution. The other half - adding and updating data - would be handled via DI Server.

Summary

In my opinion - based on the findings above - SSRS is the ideal reporting platform for SAP Business One (or any other ERP running on MSSQL). However, I don't want you to take my word for it. I want you to try it out yourselves.

A sample solution with SSRS: Sales matrix by customer group/customer and item group/item

Start new project

In this sample scenario, we build a simple matrix report that shows the turnover for a given time period by item group and by customer group. Additionally, we are going to use the drill-in feature to view the values at item and customer level as well.

Step 1: Start a new Report Server Project

In Business Intelligence Development Studio, start a new project. Choose the type "Report Server project" as shown below.

New Project

Step 2: Define data source

After initializing the new project, add a new shared data source by right-clicking the "Shared Data Sources" folder as shown in the image below.

Add New data source

 

In the Shared Data Source form, click the Edit button and specify the connection properties for your database environment. Then click "Test Connection" to see that the connection works.

Define data source

 Step 3: Start creating a new report

In the Solution explorer, right click "Reports" and choose the function "Add new report".

Add new report

 

Step 4: Select data source

Choose the data source you wish to use in your report. If this is your first report, you will only see the data source you just created in step 2.

Select data source

 

Step 5: Define the query

There is a nice visual query designer in BI Development Studio, but you can also simply type in your SQL query (often a good way to start working with the Query Builder is to copy your favorite user queries from SAP B1 and see how they can be enhanced in SSRS).

Define query

Here's the complete query:

SELECT 
OCRD.CardCode,
OCRD.CardName,
SUM(INV1.Price * INV1.Quantity) AS turnover,
OITB.ItmsGrpNam,
OCRG.GroupName,
OITM.ItemCode, OITM.ItemName
FROM  INV1 
INNER JOIN OITM ON INV1.ItemCode = OITM.ItemCode
INNER JOIN OITB ON OITB.ItmsGrpCod = OITM.ItmsGrpCod
INNER JOIN OINV ON OINV.DocEntry = INV1.DocEntry
INNER JOIN OCRD ON OINV.CardCode = OCRD.CardCode
INNER JOIN OCRG ON OCRD.GroupCode = OCRG.GroupCode
GROUP BY 
OCRD.CardCode,
OCRD.CardName,
OITB.ItmsGrpNam,
OCRG.GroupName,
OITM.ItemCode,
OITM.ItemName
 
Step 6: Select the report type

There are two report types in SSRS 2005: tabular and matrix. The tabular report is the more 'traditional' report type, which nevertheless provides such nice features as drilldown. The matrix report type, on the other hand, provides a way to have drill-in functionality both on row and header level. In this specific case, we will select the matrix report type.

Select Report Type

 

Step 7: Specify the contents of the matrix

Here, you may choose which database columns will be assigned to row/column level and which database columns will provide the actual detail data. PLEASE NOTE: do not forget to check the "Enable drilldown" checkbox in the bottom left corner.

Set the contents of the matrix

Step 8: Choose style

In this step, we can choose from a selection of pre-specified report styles.

Choose matrix style

 

Step 9: Complete the wizard

Complete wizard

 

Step 10: First preview

As we can see, the initial version of the report is already pretty good looking. There are some issues such as the decimal count that call for further adjustment.

First preview

Step 11: Adjust layout formatting

One of the nice features of the layout designer is that you can use standard .NET-type formatting strings for any field. For instance, to drop the unwanted decimals (leaving only two), I can use the formatting string #0.00 (see the picture below).
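As a rough analogy (this is plain Python, not something you would type into SSRS), the effect of the #0.00 format string is the same as a two-decimal fixed-point format:

```python
# The .NET custom format string "#0.00" rounds the value to exactly
# two decimals and always shows both of them.
value = 1234.5678
formatted = "{:.2f}".format(value)  # Python analogue of "#0.00"
print(formatted)  # 1234.57

# A whole number is padded out to two decimals as well:
print("{:.2f}".format(5.0))  # 5.00
```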

Format number field

Step 12: Second preview

Now that we have adjusted the formatting of the detail cell and also changed column widths a bit, the report is already looking very good.

Second preview

Step 13: Add parameters to the query

In this report, we will use two datetime parameters: StartDate and EndDate. You can add new parameters while the Data or Layout tab is active (not while you're in the Preview tab, however) by choosing Report > Add Parameter from the top menu. See how the parameters are referred to in the query.

Add parameters

Here's the modified query:

SELECT 
OCRD.CardCode,
OCRD.CardName,
SUM(INV1.Price * INV1.Quantity) AS turnover,
OITB.ItmsGrpNam,
OCRG.GroupName,
OITM.ItemCode,
OITM.ItemName
FROM        
INV1
INNER JOIN OITM ON INV1.ItemCode = OITM.ItemCode
INNER JOIN OITB ON OITB.ItmsGrpCod = OITM.ItmsGrpCod
INNER JOIN OINV ON OINV.DocEntry = INV1.DocEntry
INNER JOIN OCRD ON OINV.CardCode = OCRD.CardCode
INNER JOIN OCRG ON OCRD.GroupCode = OCRG.GroupCode
WHERE (OINV.DocDate BETWEEN @StartDate AND @EndDate)
GROUP BY
OCRD.CardCode,
OCRD.CardName,
OITB.ItmsGrpNam,
OCRG.GroupName,
OITM.ItemCode,
OITM.ItemName

In fact, you can specify parameters simply by writing them into the query. However, by default these parameters will be of type String, and this is not always what you want. The parameter configuration screen not only lets you decide the type of the parameters, but also lets you assign available values, default values etc.

Step 14: Third preview

Now we are happy with the previewed results. It's time to deploy the report to production use.

Third preview

Step 15: Adjust the deployment settings

For each new project (however, not for each report inside a project), you need to specify the TargetServerURL and TargetReportFolder parameters before doing the deployment.

Deployment settings

 

Step 16: Deploy

Right click the name of the project (or individual report, if you only wish to deploy that specific report) and choose Deploy.

Deploy

 

Deployment successful

Step 17: See the report in SSRS

Here's a standard screen that you should expect to see once SSRS is properly installed and configured. Notice the search field, which allows you to look for reports by name and description.

Reportserver main screen

Step 18: The report seen through a web browser

Here's a glimpse of the kind of data you should see on the report after deployment.

Report via IE

 

Step 19: Export to Excel

Simply choose Excel from the target listbox and then click "Export" to initiate the export.

Select export format

 

Open in Excel

Step 20: The exported results in Excel

As you can see, the results look almost the same in MS Excel as in the web browser.

Results in Excel

In this scenario, we have barely scratched the surface of SSRS. We could have decorated the report with subtotals, additional charts etc. There are so many great features in SSRS that it is impossible to demonstrate them all in a single blog. Anyway, hopefully you've gotten a reasonably clear picture of the things you can accomplish with SSRS.

Conclusions

The B1 community has been almost dead silent about SQL Server Reporting Services. This is a pity, because in addition to being instantly available to the majority of B1 customers, it is also one of the best reporting solutions relevant to this market. It's no wonder that Microsoft has defined MSSQL and SSRS as the key integration point between their various Dynamics ERP products. If you are looking for a reporting solution, you should definitely take a look at SSRS.

Introduction

There are lots of great features in SAP Business One (B1). Then again, there are also some features that are obviously based on great ideas, but implemented only halfway through. In this blog, I will discuss features that are related to modifying the user interface in B1 client and propose certain improvements that would help B1 live up to its full potential.

Modifying the fields and form layout

One of the most-used modification features in B1 is the user-defined field (UDF). UDFs are great especially on row level, where they function pretty much in the same way as the standard system fields. On header level, there is unfortunately much less room to maneuver. All the UDFs are put into a "sidebox", which is then positioned beside the main form. We can arrange these fields into groups and then show them one group at a time, but that's about it. The system fields are equally limited on header level: they cannot be hidden or disabled, nor can they be relocated any more than the UDFs.

Another issue with both system fields and UDFs is that we have very limited control over what the user enters into those fields. Sure, the system does type checking, so that for instance datetime fields only contain datetime values. With UDFs, you can also declare a field as mandatory, specify the allowed values and set a default value. However, you can't make a field conditionally mandatory, so that it would only be required when some other field has a specific value (or so that the allowed values depend on the selection in another field). I'm sure this is one of those requirements that all seasoned B1 consultants have faced in several projects.

With some knowledge of T-SQL and B1 transaction notifications, you might be able to implement a kludge that acts sort of like conditionally mandatory fields. However, this is a feature that I would very much like to see in the core product.
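To sketch what such a kludge might look like (the UDF name U_Reason and the 10000 threshold are made up for this example), a check could be placed in the SBO_SP_TransactionNotification stored procedure, which B1 calls on every add/update:

```sql
-- Hypothetical check inside SBO_SP_TransactionNotification:
-- reject adding or updating an A/R Invoice (object type 13)
-- when the (made-up) UDF U_Reason is empty and DocTotal > 10000.
IF @object_type = N'13' AND @transaction_type IN (N'A', N'U')
BEGIN
    IF EXISTS (SELECT 1 FROM OINV
               WHERE DocEntry = CAST(@list_of_cols_val_tab_del AS INT)
                 AND DocTotal > 10000
                 AND ISNULL(U_Reason, N'') = N'')
    BEGIN
        SET @error = 10
        SET @error_message = N'U_Reason is mandatory for invoices over 10000'
    END
END
```

Returning a non-zero @error makes B1 reject the document, which is as close to "conditionally mandatory" as the current product gets.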

Calculating field values by Formatted Search

I have discussed Formatted Search in My Christmas wishlist for Santa Claus, and I will try not to repeat myself too much here. Formatted search allows us to automatically calculate a value in a field based on values in other fields. It goes quite a long way as it is, but with a little improvement it would be so much better. Perhaps the biggest shortcoming of formatted search is that you can only attach one trigger to a field. Sometimes this is enough, but quite often I've been left wishing for more flexibility. For instance, you might need to calculate a value in a certain field based on values in several other fields/columns. If you always proceed through all these fields in a left-to-right, top-down order, you could perhaps attach the trigger to the last field on which the query depends. However, what happens if the user proceeds through these fields in a random order or only modifies some of them? Sooner or later you will end up in a situation where the formatted search is not triggered, and thus the value in the calculated field is not what is expected.

Another problem of formatted search queries is that they are not triggered when documents are added programmatically. In some cases they are also not triggered when values are copied from a base document. What we need is something consistent and something you can count on.

Without resorting to external programming tools, there are basically no other ways to respond to user events than formatted search. There are even fewer tools for modifying the user interface, for instance to move a system field to another position or to hide it. Well, for programmers there's always the notorious User Interface API.

UI API - close, but no cigar

On paper, UI API looks really fantastic. It allows you to do all those things I was asking for above. You can modify existing forms (move, hide and add fields) and you can create new forms as well. You can add a listener to almost any form event both before and after the standard B1 business logic has been executed (bubble events). Sounds too good to be true? Unfortunately it is. 

Problem 1: Clumsy development 

Developing a UI API add-on is a rather complicated task. Just take a look at the SDK Forum. With the help of B1DE it perhaps becomes a bit easier, but if you're familiar with any modern integrated development environment, working with UI API will make you feel like the guy in the TV series Life on Mars. For a 21st century ERP system, it is simply unacceptable that you need to resort to programming and go through the whole code-build-test-deploy cycle just to do such trivial stuff as hiding or moving a field.

Problem 2: Stability and Reliability

If clumsy development were the only problem, I would still be happy with UI API. Unfortunately there's more. In fact, the biggest problem with UI API is stability and reliability. Each UI API add-on runs as a separate process and gobbles up about 80 MB of RAM. After starting up, the add-on process will attempt to establish a connection with the B1 client process. If it succeeds, lucky you. If it doesn't, well, that's just too bad. Even when your add-on succeeds in establishing the connection, it is still at constant risk of crashing. Quite often this happens when you're switching between two databases.

Problem 3: Performance

The process of connecting a single add-on to the B1 client process takes a significant amount of time. If the add-on modifies menus or forms in the initialization phase, this will also take time. Thus, the user needs to wait a while after logging in to B1 to make sure that the add-on has been initialized. If she is too quick, the form or menu will open in "standard" mode, so that none of the modifications made by the add-on will be visible. With one add-on, this is perhaps not a major problem, but with more add-ons the system becomes gradually slower and shakier. Back in 2003, I was enthusiastically building our first B1 demo environment. I decided to install all the seven or eight add-ons found on the installation CD. Needless to say, getting the demo client started up and then keeping it up and running was quite a challenge. It taught me an important lesson: the fewer UI API add-ons you have in your system, the better. I usually aim for zero, although I often need to settle for one (Payment Engine).

Learn from the Enemy - case Dynamics NAV

After all this nagging about the shortcomings of B1, I feel I'm obliged to make a proposal for changes that would fix the discussed problems.

What I would really want to see is a tool that could be used both to modify the layout and content of system forms as well as to create completely new forms. If we want to see an example of such a tool, we only need to look at some of the fiercest competitors. Here is a screenshot of the form designer in Microsoft Dynamics NAV 5.0:

Dynamics NAV form editor

With the form designer, you can easily edit any existing field, but also add completely new fields and position them anywhere on the form. You can also create completely new forms. All of this is done in a WYSIWYG fashion (not unlike MS Access and other tools with form editors) inside the NAV client. You don't need to write any code to make these changes.

Just think about having this feature while you're trying to convince a potential customer of both the flexibility of the product and your capability to deliver a working solution. If you're able to produce even a mockup form for a feature the customer is asking for, in a few minutes on the spot while your sales clown distracts them with some mumbojumbo and chitchat, your chances of winning that case just shot through the roof.

As for responding to events, Dynamics NAV also has a rather neat solution. There is a big bunch of event handlers on which you can hook up your scripts. The editing can be done directly in the NAV client, and the code will be executed when the event is triggered. Unlike the UI API (remember, 80 MB per add-on!), these scripts are executed inside the same process as the client. If you wish to see more details on this feature, take a look at the Microsoft Dynamics NAV 5.00 Application Designer's Guide:
http://dynamicsuser.net/files/storage/extra/Doc/w1w1adg.pdf

The language used in NAV is a pretty old-fashioned procedural language called C/SIDE. However, it's not so important what the language is; it's much more important that it is executed inside the same process and that it is stable. Naturally, it would be nice to see some modern language such as Java or Python in the code editor of B1 (if we ever get one).

Summary 

My suggestion to the SAP B1 development team is that the following features should be added to SAP Business One:

  • Possibility to modify existing forms and create new forms directly in the B1 client, without any programming
  • Possibility to make fields conditionally mandatory (only mandatory under certain predefined circumstances).
  • Possibility to add multiple triggers to a single formatted search
  • Possibility to hook up scripts to relevant form events. These scripts should execute inside the same process as the B1 client and they should have access to all the header/row level data of the business object currently being modified, even if it is a new object that's not been added yet.

With these improvements, B1 would be much better equipped to face the competition. Certainly, selling B1 would be a much easier and more rewarding task for the resellers. Needless to say, it would also provide more value for money to the customer.

PLEASE NOTE!

DI Commander has been embedded into DI Construction Kit, which also supports MS SQL 2008 and SAP B1 8.8. You can download DI Construction Kit with documentation and full source code from these sites:

The initial version of DIC (previously known as the B1 Turbo Command Host) was released in July 2007. Our web server logs indicate that since then, DIC release 1.0 has been downloaded about 400 times and DIC release 1.3 about 600 times. Considering the (small) size of the global B1 developer community, I'm very happy with these figures. Based on the feedback and questions received, it seems that at least some of the people who downloaded DIC have found it useful.

There were lots of good development requests/ideas in the feedback to the previous releases. I've tried to respond to as many of the development requests as possible.

Changes in the user interface

There are not so many changes in the user interface of DIC 2.0 - it's almost the same as before. Perhaps the most visible change is the new cheat sheet tab that hopefully will make life a bit easier for occasional DIC users. The cheat sheet allows you to quickly look up the object shortcuts provided by DIC.

Cheat Sheet

Comments on the login process

The active sessions list now also contains the B1 company name, so that it is easier to keep track of which session is connected to which database (although it is always a good idea to name the session handle so that you recognize it).

Based on some of the questions I got, it seems I had not sufficiently explained the purpose of the "DB Direct" checkbox. If it is checked, you can directly type the "physical" name of the SQL Server database. For instance, the physical name of the US demo database is usually SBODemo_US.
The DB Direct feature often allows logging in even when DI API fails to retrieve the list of databases from the server. I suppose it has something to do with Windows-level authentication, but I don't know the details. Perhaps someone at SAP could explain this better.

New object manipulation functions

I initially thought I had all relevant document/object manipulation methods covered with add(), update(), and remove(). However, as I realized from the feedback, I had totally forgotten Cancel and Close. Well, the new release has these covered:

cancel(objectref)
close(objectref)

All the functions related to manipulating business objects (add(), update(), remove(), cancel(), close()) used to return a numeric flag: 0 for success, other values for failure. All of these now return a boolean value (true or false). This makes the scripts a bit more elegant, compact and certainly more readable.
For instance, if you wrote something like this in version 1.3:

if(add(myitem)==0):
  print "OK"

...it should now be written as:

if(add(myitem)):
  print "OK"

...or if you want to be verbose:

if(add(myitem)==True):
  print "OK"

Fault code handling

As the function calls no longer return a numeric error code, I also added helper functions for retrieving the last error code and error description.

geterror() - returns a string that contains both the error code and description
geterrorcode() - returns the error code
geterrordescription() - returns the error description

Transaction handling

Transaction handling was available in the previous versions of DIC, but it had to be done through rather clumsy DI API function calls.

Now, a new transaction can be initiated with a single function call:
starttransaction()

Ending a transaction can also be done with a single function call:
commit()  - commits the transaction
rollback() - rolls back the transaction

Enumeration shortcuts

DIC now has shortcut handles to all of the enumeration values defined in DI API.
For instance, SAPbobsCOM.YesNoEnum.tYES can now be referred to as tYES.
Likewise, SAPbobsCOM.BoAccountTypes.at_Expenses can be referred to as at_Expenses.

The shortcut handles have all the same quirks and typos as DI API does. For instance, a BoCardTypes object that points to Leads is called cLid.

These shortcuts are also available in lowercase and uppercase versions.
Thus, cLid, clid and CLID are all valid handles to the enum value SAPbobsCOM.BoCardTypes.cLid.

There is of course a slight risk that if you name your own variable with one of these enumeration names, the handle to the enumeration value will be overwritten.

For instance:

>> clid=2
...is a perfectly valid variable assignment in DIC, but then you can no longer use the "clid" handle to the cLid CardType object.

If you are familiar with DI API, you can check the documentation for all the enumeration types and their uses. You can also get a complete list of the available values directly in DIC by typing:

>>list(enumvalues)


Releasing COM objects

When calling any of the following functions: add(),close(), remove(), update(), cancel(), the call will also release the COM object. Additionally, you can call release(object) any time to release a COM object.

If you wish not to release the COM object, you can use the standard function calls, for instance i.Add() instead of add(i).

This is important, as .NET automatic garbage collection does not handle COM objects.

Enhanced browsing

The real workhorse of DIC is the browse() function, as it takes away a lot of the pain related to iteration with DI API. In DIC 2.0, the browsing feature has been further enhanced and some problems present in the earlier releases have been fixed.

Merging sql resultsets with DI API business objects

While enhancing the browse functions, I also ended up writing another function that might make life even easier in certain kinds of tasks. The merge() function can be used to match recordset rows with DI API business objects. In order for the trick to work, the column names in the recordset must match existing fields in the object being merged.

For demonstration's sake, I will present a script that will work in any B1 database. The script takes the 10 alphabetically first items in the database and creates a duplicate of each. The duplicate has the same item code except for an 'X' at the end. The item name will be the same as in the original item. No other data is copied.

q=query("select top 10 itemcode+'X' as ItemCode, itemname as ItemName from oitm")
for row in browse(q):
  i=get(ITEM)
  merge(i,row)
  if(add(i)):
    print "Item added successfully"
  else:
    print "Item add failed"

The column name checking process is case insensitive, so in this case the ItemCode field of Items object will match any of these column names in the recordset: "ItemCode", "Itemcode", "itemcode", "iTEMcODE" etc.
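The case-insensitive matching idea can be illustrated with a small standalone Python sketch (this is only an illustration of the principle, not the actual DIC implementation):

```python
def match_columns(row_columns, object_fields):
    # Map each recordset column to a business object field,
    # ignoring case; columns with no matching field are skipped.
    fields_by_name = {f.lower(): f for f in object_fields}
    return {fields_by_name[c.lower()]: c
            for c in row_columns
            if c.lower() in fields_by_name}

# Any casing of "itemcode" matches the ItemCode field;
# "DocEntry" has no counterpart and is ignored.
print(match_columns(["iTEMcODE", "itemname", "DocEntry"],
                    ["ItemCode", "ItemName"]))
# {'ItemCode': 'iTEMcODE', 'ItemName': 'itemname'}
```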

The merge functionality is actually quite close to what DTW does with a query-based import.

Variants of merge

INSERT

Attempts to add objects of the specified type to the database. The assumption is that they don't exist already. If they do, the function will simply return False.

q="select 'myItem3' as ItemCode, 'MyItemName3' as ItemName union select 'myItem4' as ItemCode, 'myItemName4' as ItemName"
for row in browse(query(q)):
  insert(ITEM,row)

UPSERT

Upsert looks up an existing object based on the resultset. If the object exists, it is updated. If not, new objects will be added to the database.

For demonstration purposes, the query in the following script just generates a simple two-record resultset. In a real situation, you would retrieve the resultset from a table.

q="select 'myITEM' as ItemCode, 'myItemName' as ItemName union select 'myItem2' as ItemCode, 'myItemName2' as ItemName"
for row in browse(query(q)):
  upsert(ITEM,row)
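The decision logic behind upsert() can be sketched in plain Python (a simplified illustration using a dictionary in place of the B1 database; not the actual DIC code):

```python
def upsert(store, key, fields):
    # Look up an existing object by key: update it if found,
    # otherwise add a new one.
    if key in store:
        store[key].update(fields)
        return "updated"
    store[key] = dict(fields)
    return "added"

db = {"myITEM": {"ItemName": "old name"}}
print(upsert(db, "myITEM", {"ItemName": "myItemName"}))    # updated
print(upsert(db, "myItem2", {"ItemName": "myItemName2"}))  # added
```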


Other sample scripts

This script simply runs the SQL query and displays the value of the single column:
q="select slpname from oslp"
for sp in browse(query(q)):
  sp

This one returns two columns:
q="select slpcode, slpname from oslp"
for sp in browse(query(q)):
  sp.Item("slpcode").Value
  sp.Item("slpname").Value
Or:
q="select slpcode, slpname from oslp"
for sp in browse(query(q)):
  for i in sp:
    i.Value


This one will browse items from oitm table:

q="select top 100 itemcode, itemname from oitm"
for i in browse(ITEM, query(q)):
   i.ItemName


This one will also show the warehouses linked to each item:


q="select top 100 itemcode, itemname from oitm"
for i in browse(ITEM, query(q)):
   i.ItemName
   for a in browse(i.WhsInfo):
     a.WarehouseCode

...and this one will show the items' prices from each pricelist

q="select top 100 itemcode, itemname from oitm"
for i in browse(ITEM, query(q)):
   i.ItemName
   for pl in browse(i.PriceList):
     pl.PriceListName
     pl.Price

Retrieving data from another database server (or server instance)

The following script would execute the query against another database server and, based on the resultset, create new payment terms objects in the target B1 database.


q="select description from OPENDATASOURCE('SQLOLEDB','Data Source=MyServerSQL2005;User id=sa;Password=;').SAMPLEDB.dbo.PayTerms "
for row in browse(query(q)):
  s=get(PAYMENTTERMSTYPE)
  s.PaymentTermsGroupName=row.Item(0).Value
  if add(s):
    print "#"   
  else:
    geterror()

This one would create new shipping type objects in the target B1 database.

q="select code, description from OPENDATASOURCE('SQLOLEDB','Data Source=MyServerSQL2005;User id=sa;Password=;').SAMPLEDB.dbo.ShipTypes"
for row in browse(query(q)):
  s=get(SHIPPINGTYPE)
  s.Name=row.Item(0).Value
  s.Website=row.Item(1).Value
  if add(s):
    print "#"   
  else:
    geterror()

This one would create only those item groups that do not already exist.

q="select left(description,20) from OPENDATASOURCE('SQLOLEDB','Data Source=MyServerSQL2005;User id=sa;Password=;').SAMPLEDB.dbo.ItemGroups where description collate database_default not in (select itmsgrpnam collate database_default as description from oitb) order by code"
for row in browse(query(q)):
  s=get(ITEMGROUP)
  s.GroupName=row.Item(0).Value
  if add(s):
    print "#"   
  else:
    geterror()


Further research

This will probably be the last major version of DIC that I'll be releasing. I will concentrate my research efforts on enhancing myBOLT and possibly on exploring the scripting possibilities that Windows PowerShell offers.

I'm therefore submitting DIC to SDN with full source code attached. Feel free to use it in any way you see fit.

Introduction: a look in the past

All the way up to SAP Business One Release 6.5, the license information was stored inside the SBO-Common database. This caused an interesting side-effect: by copying the SBO-COMMON database, you also got a copy of all the licenses in that database. It was no wonder that SAP soon fixed this loophole. Starting with B1 Release 2004, there was a new service called the License Manager. The License Manager brought us a whole new smorgasbord of pain points. Before Release 2004, if you had a working database connection, you were most likely also able to log into SAP B1.

The License Manager is a bit of a different animal, as it is tightly interwoven with Windows authentication. One of the first issues I remember resolving in the ramp-up for 2004 was connecting to the server from a B1 client on a workstation that didn't belong to the same Windows domain as the server. It turned out that either the client and the server need to be in the same Windows domain (or Active Directory), or the user ID used to log in to the workstation also needs to exist on the server as a local account, with exactly the same password on both accounts. Thus, instead of two-level authentication (SQL Server and B1), there are now three levels: Windows network authentication, SQL Server authentication and B1 authentication.

Another issue came up with the security fixes in Windows Server 2003 service packs. Those service packs were so effective in closing security holes that the clients once again could not log in to the server. The reason was that the License Service depends on DCOM, and the security fixes made the DCOM settings too strict. There are excellent SAP Notes (833798 and 824976) on this issue, so I will let the readers who need this information look them up in the Service Marketplace.

In the rest of this blog, I will present some of the issues that I've witnessed during my four-year journey with the License Manager. I am assuming that everyone's already read the SAP Business One License Guide that comes with the B1 installation media. I will try not to repeat the issues documented there. Some of the issues discussed here are discussed in SAP Notes or in the Forums, but anyway I thought it would be useful to have a single document putting all these issues together.

License Manager FAQ  

Q: When exactly do we need the License Manager ?

A: The B1 clients (as well as the DI Server and all the DI API-based client applications) only need to connect to the License server when logging in. After the users have been successfully logged in, you could even restart or completely stop the License server without the logged-in users ever noticing it.

Q: How does the License Manager store information about which licenses have been allocated to which user ?

A: The following image is taken directly from the License Guide:

 License Manager architecture

The License Guide explains that "the license mechanism in SAP Business One is implemented using a License Service, a License File, and an external API". The license file is used to import new licenses into the License Service, but SAP does not disclose where these imported licenses are actually located. Not that we need to know, though. In fact, it's more useful to know the file that stores information about how those licenses are allocated to users. That file exists in the program folder of the License Manager. For B1 2005 SP1, the default installation folder is C:\Program Files\SAP\SAP Business One ServerTools\License. If the License Manager has been in use, you should be able to locate a file called B1Upf.xml. I'll return to this file regarding the issue of reclaiming 'lost' licenses.

Q: It takes several minutes to log into SAP B1, even from a fast local network. How can this be fixed?

A: This issue is often the result of having had two active network interface cards in the server at the time of installing the License Service. This is a known bug addressed in some SAP Notes. SAP has promised to fix it in a future release. Meanwhile, you can fix the issue by uninstalling the License Service, making sure you only have one active NIC on the server, and then reinstalling the License Service. I've seen cases where the login time dropped from 2-3 minutes to about 5 seconds after this fix.

Q: I have trouble importing the new license key. What to do?

A: There are a number of reasons that might cause this. One particular thing that will always cause a license key import to fail is having the DI Server service started (with active sessions) while you're doing the import. It is always safest to stop all the other SAP B1 services (except the License Service) when doing the license key import, and also to restart the License Service just prior to the import.

As a precaution, it might be a good idea to have all the users who need the system on that day already logged into B1 when doing the import. If anything goes wrong with the import, those users can continue using B1 without ever noticing the problem. That'll buy you some precious time.

Quite often, just trying the import once again will get the desired result. However, sometimes it seems that the keys sent by email are somehow corrupted. You might want to go to the Service Marketplace and download the key directly. This has fixed the problem for me several times.

Q: How to recover when all licenses are used up after a B1 client crashes?

A: If a B1 client crashes, the "slot" it reserved on the license server during login remains allocated. As the License Service limits the maximum number of simultaneous sessions for a single named user to two, you can still log in to B1 once after recovering from the crash. However, another crash will leave both of your sessions reserved. What to do next? You could do as the License Guide instructs: wait for 3 minutes and then try connecting again, so that the reserved connection has timed out. If you're in a hurry, you could also restart the License server, which actually clears all the information about the users who have logged in. Restarting the License Manager might take a minute or two, but it is still faster than waiting for the timeout. Users who are already logged in are not disturbed. Remember - the License Service is only needed at login.

Q: My userid requires two licenses when using two databases. What is the problem ?

A: The funny thing about the License Service is that while the B1 (or rather SQL Server) database is case insensitive regarding userids (you cannot, for instance, create users 'User1' and 'user1' in the same B1 database, as the database treats them as identical), the License Service isn't.

If you have several databases (I suppose everyone has at least a test database), you might easily end up in a situation where you have the same username in different databases but with different case (for instance, 'user1' versus 'User1'). If you wish to use both user accounts, they will gobble up two user licenses. From the B1 user interface, these usernames cannot be changed. Also, it is prohibited to manipulate the database directly.

Luckily, the DI API is permissive enough so that we *can* alter the userid as long as we are only modifying the case. For convenience, here's a simple DI Commander (version 2.0) snippet that shows how it can be done.

u=get(USER,1)
u.UserCode="ManageR"
if update(u):
  print "OK"
else:
  geterror()

Please note that the second parameter of get should in this case be the value that is found in the INTERNAL_K column of the OUSR table.
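If you need to look up that internal key first, one way to do it (a sketch I'm adding here, not from the original DI Commander session) is to query the OUSR table through a DI API Recordset. The column names INTERNAL_K and USER_CODE come from the standard B1 2005 schema, but verify them against your own database:

```csharp
// Sketch: look up the INTERNAL_K value for a given user code via the DI API.
// Assumes an already-connected SAPbobsCOM.Company instance called "session".
Recordset rs = (Recordset)session.GetBusinessObject(BoObjectTypes.BoRecordset);
rs.DoQuery("SELECT INTERNAL_K, USER_CODE FROM OUSR WHERE USER_CODE = 'manager'");
if (!rs.EoF)
{
    int internalKey = (int)rs.Fields.Item("INTERNAL_K").Value;
    // internalKey is the value to pass as the second parameter of get()
}
```

Since this is a plain read-only query, it doesn't conflict with the rule against manipulating the database directly.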

Q: I'm having problems with login - the wrong license server address keeps coming up. How to fix this ?

A: There is a short reference to the SLIC table in the License Guide: "The SAP Business One workstations read the name of the license service to which they connect from the SLIC table in the SBO-Common database."

The problem is that this doesn't say anything about the practical implications of this table.

Try running this query from B1 Query Builder or from MS SQL Query Analyzer to get an idea about how it works:

select lsrv from [SBO-COMMON].dbo.slic

It is usually a good idea not to provide the license server address and port when logging in (this applies to B1 clients, the DI API and the DI Server) unless you absolutely have to. The SLIC table is where the B1 server stores the default address of the license server. Guess how it gets updated? Whenever you specify the License Manager address at login. Well, what's so wrong about that?
Let's say that we have one user who specifies the address as "192.168.220.20". Another user then logs in with "sap-server1", which happens to be the valid hostname for the server at IP address 192.168.220.20. Both are correct, of course. Except that some of the workstations may only be able to resolve the IP address but not the hostname (for a variety of reasons that I'm not diving into here). Once the SLIC table contains an address that the B1 client cannot resolve, the user can no longer log in.

The good thing is that once the user types in the correct address for the license server, she can log in. The bad thing is that the UI API add-ons don't follow: they will stubbornly refuse to connect until you can once again log in to B1 without manually giving the License Manager address. This is just one of the many reasons why I hate UI API add-ons so much.

Lessons from the field

For a long time, one of my customers had the license address problem recurring each time they rebooted the server. They were able to get rid of the problem by restarting the License Manager and then logging into B1 directly from the server (this procedure was always repeated after a reboot - luckily the reboot interval was pretty long). Eventually I found out what was causing the problem: a DI Server application installed on their server had the License Manager address specified in a properties file. Guess what address had been specified? How about 127.0.0.1:30000 ?-) Needless to say, the DI Server application was working just fine. It was only the B1 clients installed on machines other than the server that had problems, as for them the License Service was not located at the localhost address :-)

Another similar situation occurred in our own production environment. As I had a License Manager running locally on my laptop, I figured it would be nice to use this local License Manager instance instead of the one located on the server, also when logging in to the production server. It worked very nicely, and I didn't have to care about all that Domain authentication nonsense. But guess what happened to all those users of the production environment who didn't have a local License Manager ?

Q: I renamed the server that's running the license service. Now the license manager says that I don't have any licenses. What to do ?

A: The Windows hostname of the server is used as one of the ingredients of the hardware key. If you rename the server, the hardware key will change as well. If you can't change the name back, you need to apply for a new license key with the changed hardware key.

Q: I don't see as many licenses in the license management screen as I've bought. What is the problem ?

A: Here's a not so uncommon situation that I've run into in some B1 production environments: a B1 license is assigned to a user that does not exist in any of the current B1 databases. However, this does not automatically liberate the license in the License Service, if it was once assigned there.

Typically this happens when a temporary user is created in a test database and given a license, and the whole database is then deleted before unassigning the license. As the username is no longer present in B1, you can't use the license management dialog to remove the license from the user.

How to reclaim this 'lost' license to active use? There are several possible approaches:

Method 1:"Raise the dead"

If you know the username that the license has been assigned to, you could of course create it in some test database and then use the license management dialog. Don't do it in a production database, though, as you would end up stuck with a user record that you can't completely delete from the database. 

Method 2:"Genocide"

To liberate all the licenses currently assigned in the License Server, you could simply delete the whole B1Upf.xml file and then restart the License Server. However, after this you would need to assign all the licenses again. This is perhaps not a problem if you have just a couple of users, but with, say, 50 users and five add-ons, it would be quite a nuisance.

Method 3:"Seek and destroy"

As B1Upf.xml is an XML file, you can stab it with any text editor such as Notepad. While that's not so difficult if you're at least a bit familiar with XML, it's still a bit awkward. Luckily, XML Notepad 2007, which is available free from Microsoft, renders the license assignment file pretty nicely and allows WYSIWYG-type editing of the user tree. This is my favourite method, as it is powerful but still pretty safe.
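If you prefer to script the seek-and-destroy, the same edit can be done programmatically. Note that SAP does not document the B1Upf.xml schema, so the element and attribute names in this sketch ("User", "Name") are pure assumptions - inspect your own file first and adjust accordingly:

```csharp
// Hypothetical sketch: remove all license assignments for a given user
// from a copy of B1Upf.xml using LINQ to XML (.NET 3.5).
using System.Linq;
using System.Xml.Linq;

XDocument upf = XDocument.Load(
    @"C:\Program Files\SAP\SAP Business One ServerTools\License\B1Upf.xml");

// Find every assumed <User Name="TempUser1"> node and remove it.
var deadUsers = upf.Descendants("User")
                   .Where(u => (string)u.Attribute("Name") == "TempUser1")
                   .ToList();
deadUsers.ForEach(u => u.Remove());

// Save to a new file; stop the License Service before replacing the original.
upf.Save(@"C:\Temp\B1Upf.xml.new");
```

As with the manual edit, take a backup of the original file and restart the License Service afterwards.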

The mission: implement decent collection handling for DI API objects

The Data Interface API (DI API) offers a safe and powerful way to retrieve and manipulate data in a SAP Business One database. Currently, DI API is still implemented as a COM object. I'm quite certain that most people doing development with DI API are using some version of Visual Studio.NET and thus the .NET framework as their development platform. There might be a handful of VB6 developers out there, but they are certainly a shrinking minority. Therefore I've been hoping that the Dev Team would provide us with a native .NET implementation of DI API, but it seems we will have to get by with the COM implementation.

There's one quirk in DI API that especially calls for improvement: the lack of support for collection handling in the way it is normally done in .NET development. For instance, in order to iterate through the lines of an invoice, I need to check the number of lines in the Document_Lines instance and then increment the line index in a loop with SetCurrentLine until all lines have been processed:

Documents doc = (Documents)session.GetBusinessObject(BoObjectTypes.oInvoices);
doc.GetByKey(myInvoiceDocEntry);
for (int index = 0; index < doc.Lines.Count; index++)
{
    doc.Lines.SetCurrentLine(index);
    // Do something with each row
}

Instead, I would like to be able to write something like this:

Documents doc = (Documents)session.GetBusinessObject(BoObjectTypes.oInvoices);
doc.GetByKey(myInvoiceDocEntry);
foreach (Document_Lines line in doc.Lines.Browse())
{
    // Do something with each row
}
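For reference, here's roughly what such a Browse method looks like when written as a .NET 3.5 extension method. This is a hand-written sketch for Document_Lines only; DICE generates the equivalent for every eligible class, and the generated code may differ in details:

```csharp
using System.Collections.Generic;
using SAPbobsCOM;

public static class DocumentLinesExtensions
{
    // Enumerates a DI API line collection by advancing SetCurrentLine
    // and yielding the collection object at each position.
    public static IEnumerable<Document_Lines> Browse(this Document_Lines lines)
    {
        for (int index = 0; index < lines.Count; index++)
        {
            lines.SetCurrentLine(index);
            // Note: this yields the same underlying COM object every time;
            // read the values you need before advancing to the next line.
            yield return lines;
        }
    }
}
```

The yield-based iterator is what makes foreach (and LINQ) work without materializing the lines into a separate list.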

It's not a huge difference in LOC (in this particular case, 6 against 7). However, mentally the difference is much bigger as you currently have to constantly switch your mindset on collection handling between .NET and DI API. Also, please keep in mind that the foreach loop is just one of the powerful operations that can be done on native .NET collections.

Similarly, I would like to be able to iterate through a set of business objects that have been selected with a query, such as:

Items items = (Items)session.GetBusinessObject(BoObjectTypes.oItems);
foreach (Items item in items.Browse("select itemcode from oitm where invntitem='Y'"))
{
    // do something with the item
}

The solution: .NET 3.5 and extension methods

I recently started to study some of the interesting new features in .NET 3.5 and Visual Studio 2008, such as language integrated query (LINQ) and extension methods. It turned out that they can be of real benefit also in the context of DI API development. For instance, the kind of enumeration support that I described above can rather easily be retrofitted to DI API as extension methods.

In order to put my money where my mouth is, I decided to implement the said improvements. All that is needed is actually a single source code file that has to be included in each Visual Studio 2008 project. When it is included in your project, all the DI API classes that either have a property called "Browser" or a method called "SetCurrentLine" will gain a new method called "Browse". As a nice side effect of adding the enumeration support, you can now also use LINQ for querying DI API objects.
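As a quick illustration of the LINQ side (my own hypothetical example, assuming the generated extension file is in your project and doc is a loaded invoice), a query over the document lines could look like this:

```csharp
// Collect the item codes of all lines with a quantity above 100.
// ToList() forces immediate evaluation, which matters here because
// Browse() yields the same underlying COM object on every iteration,
// so deferred evaluation would read values from the wrong line.
List<string> bigItems = (from line in doc.Lines.Browse()
                         where line.Quantity > 100
                         select line.ItemCode).ToList();
```

The immediate-evaluation caveat applies to any LINQ operator you use over a Browse() sequence: extract the scalar values you need inside the query rather than holding on to the line objects themselves.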

I wanted to make the solution as open as possible as well as to guarantee maximum compatibility with previous, current and future releases of DI API. Thus, I implemented my solution as a code generator. The tool is called DI Class Extender (DICE). The full package containing binaries plus full source code can be downloaded here. You can download the binaries also from the DICE download page maintained by Profiz.
If you wish to get a quick start, you will also find a pre-generated extension file made from the Interop file for SBO 2005 SP1 PL31 on the download page. This file should be compatible with any DI API starting from 2005 SP1 PL0. However, in case there are new objects in more recent DI API versions, those will not have the extension methods unless you generate a new extension file with DICE.

 

DICE UI after startup 

Image 1.The DICE user interface immediately after startup 

 

DICE UI after generating source code

Image 2. The DICE user interface after selecting the DLL and generating the extension source code

 

When starting up, you only need to specify the path to the DI API Interop file (usually named "Interop.SAPbobsCOM.dll") you wish to use. DICE reads the file, slices and dices it, and then generates a single C# class file that contains the required extension methods. When you include this file in your project, Visual Studio will take care of adding the new extension methods to each relevant class. These extension methods will even show up in the IntelliSense dialogs, as you can see from the images below.

 

Intellisense

 

 

Intellisense 

Images 3 and 4. The extension methods shown in the Visual Studio 2008 Intellisense dialog.

You have a choice of setting the namespace for the source file when doing the generation. If you specify the same namespace that you are using in your solution, it will work with no additions. If you use the default namespace (DICE) or anything other than the namespace of your solution, you will need to add a "using" declaration at the beginning of each source file where you wish to use the extension methods:

 

using DICE;

 

As a bonus, I also implemented another extension method - GetLine - in the classes Documents and JournalEntries. Currently, DI API only allows getting a row by index. The problem is that the index is most often the same as the line number, but not always. This is because it is possible to delete, say, an order row after booking the order. Let's say that the order originally had rows with line numbers 0, 1 and 2. If you delete the middle row, you end up with line numbers 0 and 2. SetCurrentLine(1) would then get you the line with line number 2. In order to find the row with a given line number, you could of course iterate through all the lines and check each row's line number. But without good support for enumeration, that's quite painful.

Thus, I added a command that would do this for me with a single line of code:

 

Document_Lines lines=doc.GetLine(2);

 

...this will give you the line with linenumber 2 (if it exists, otherwise it will give you null) regardless of its current index.

This was only added to the classes Documents and JournalEntries because the DI API contains no metadata that would reveal the unique key field in each object. Therefore, I was unable to use generic classes in the way I did for the Browse method and had to hardcode these two methods. I chose these specific classes because they are - at least for me - the most often used classes that involve browsing through lines.
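For the curious, a hand-written equivalent of the Documents version might look something like the following sketch (the code DICE actually generates may differ in details):

```csharp
using SAPbobsCOM;

public static class DocumentsExtensions
{
    // Returns the line with the given LineNum, or null if no such line exists.
    public static Document_Lines GetLine(this Documents doc, int lineNum)
    {
        for (int index = 0; index < doc.Lines.Count; index++)
        {
            doc.Lines.SetCurrentLine(index);
            if (doc.Lines.LineNum == lineNum)
            {
                return doc.Lines;
            }
        }
        return null;
    }
}
```

This is exactly the "iterate and compare line numbers" loop described above, just packaged so the caller only needs one line of code.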

Discussion and indications for further research

If you simply want to be able to use these methods, this blog contains all you need to know. Please note that the solution only works with Visual Studio 2008 and .NET Framework 3.5 - it will not work with earlier .NET versions.

If you want to take a deeper look into how this solution actually works, or perhaps extend it for your own purposes, just download the source code. The solution is simple and straightforward. The class that does the code generation contains less than 50 lines of code. The core of the code generation solution is built on a couple of LINQ queries and a template engine called StringTemplate. The template file is less than 250 lines of code. Compare these figures with the generated extension file, which is more than 2300 lines of code! If you take a look at the template file (DICETemplate.st), you will easily get a grasp of how it works and how you could add your own extensions. The generated extension file contains a couple of generic classes and a whole bunch of static wrapper methods.

I could imagine that a similar solution pattern could be useful in improving the notorious UI API. If you wish to do that, please go ahead. However, in my opinion the only decent thing to do with the UI API would be to toss it and implement a completely new solution for UI development, preferably one based on Windows Presentation Foundation.

Dear Santa,

My colleagues and I have been really hardworking this year, struggling to make ends meet in our customers' B1 implementation projects. You know, it's not always easy, but somehow we get by.

I've learned to live with the idea that year on year we are getting less and less new bells and whistles in B1. That's understandable now that you've got the new ByDesign family and all that stuff. Therefore, I will make my wishlist real short. I'm also only including wishes that have a general appeal to the B1 community at large.

Wish 1: Make the item group and business partner group fields longer

I'm starting with a real no-brainer, just to get you warmed up.

Surely I'm not the only one who's seen their customers suffer from the current maximum length of item group names and business partner group names. I mean, come on man, 20 characters? I'm sure you can do better than that.

While you're at it, it would be nice to have a separate key field for the item group code as well. Currently there's just the system-assigned internal key that starts from 101. Many customers are used to a certain numbering system for the item groups from their previous system. When migrating to B1, quite a few of our customers have ended up adding this code in front of the item group name. Using this workaround, they can get the item group names listed in the order they want, but on the other hand they are forced to make do with even fewer characters for the actual name. Let's start with something simple like this:

Item Group Code: 837
Item Group Name: Twisted Pair Cables

To make that fit into 20 chars, I would use something like this: "837 Twisted Pr Cbls"

But what should the customer call an item group such as "Heavy duty Zirconium-coated platinum tweezers" ? You think I'm exaggerating? Maybe a little.

Let's move on to business partner groups. There are many relevant ways to group customers and suppliers, but one of the most usual is to group them by industry. For instance, SAP expects me to select the customer's industry from a list each time I issue an order for software licenses. I just checked the industry classification of one of my favourite customers in the SAP SMB portal:

Automotive
Industrial Machinery
Industrial Trucks, Tractors, Trailers, and Stackers

As we know, the business partner grouping system in B1 is flat. For simplicity, let's just forget the two higher-level groupings and concentrate on "Industrial Trucks, Tractors, Trailers, and Stackers". That's 51 characters. Now, try squeezing that into 20 characters !!


Wish 2: Provide us with structured address information in documents

In the business partner card, it is possible to specify several billing and/or shipping addresses. Each of these addresses is structured in the sense that there are separate fields for street, zipcode, city and so on.

When creating a document such as an order or invoice, this address information is used as the default value for the billing and shipping addresses on the document. It is also possible to manually modify or completely replace either of these addresses. That's essential, as the customer might want the goods shipped to another location just this once (and thus it would be overkill to create a new address in the BP record). However, the addresses that are stored with the document are no longer structured. Each address is stored in a 254-character alphanumeric field, with lines separated by carriage return symbols. Well, if you only need to print that information on a piece of paper, that's pretty OK (even though the 254-character space is less than the total number of characters possible in the street, block, city and zipcode fields, it's usually enough). The trouble starts when you need to pass the address information forward electronically.

We have lots of customers who are either using EDI solutions or sending XML-based electronic invoices to their customers. The standards that these solutions are based on require that the addresses are structured in the same manner as in the BP records. OK, in some cases you could manage by just using the addresses specified in the BP record, but in quite many cases the customer wants to be able to specify the address each time they create a new order. We've learned to circumvent this problem by creating our own structured address fields on the document header level and using those instead of the standard ones. It's ugly and kludgy, but it works. The default address is then retrieved into these fields by using formatted search, which brings me to my next topic.

Wish 3: Fix the formatted search

Formatted search is a feature that looks really fantastic at first sight. With a good grasp of SQL and a bit of imagination, there's some really fancy stuff you can do with it. However, it also has some really annoying shortcomings. One is that you can only attach a single trigger to each formatted search. Also, a row-level trigger is only fired when the user exits a field, not when the value is changed. Let's say that I want to recalculate a value in a certain row-level field each time either the quantity or the price is changed. I just can't. Well, I could attach the trigger to a header-level field, say DocTotal, and specify that it is "refreshed regularly". Sometimes that works, sometimes it doesn't.

Another annoying property of formatted search is that it moves the field focus. If I have lots of columns in the document and a formatted search is triggered in some field at the east end of the grid (one that I normally need to scroll to in order to see), the whole grid is refocused so that I no longer see the most important fields (such as itemcode and quantity) when adding a new row.

The worst thing about formatted search is that I can never trust that it will work the same way after an upgrade. The biggest problems have been related to how formatted search is triggered in fields that are either hidden or inactive. They used to work fine. Then, at some point around SBO 2004 PL21, they stopped working. By popular demand, they came back a few patch levels later. Now, we have again faced the same problem in SBO 2005 SP1 PL31. A few days back we heard from the always helpful SAP Support Helpdesk that the "system works as specified", as formatted search is "not officially supported in hidden or inactive fields". That just about eliminates half of the solutions that B1 partners worldwide have built with formatted search in order to overcome shortcomings in the standard functionality of B1. Recently I have also been told that formatted search is not officially supported in the system fields with orange arrows. Now that's another bummer that renders formatted search pretty useless in many cases. I mean, what's wrong with you guys? I thought formatted search was one of those killer features that made B1 so powerful. I've also heard that one of the greatest features of B1 is that the upgrade is a breeze and everything will work the same after an upgrade. What a lot of baloney !

Please Santa, could you at least try to convince the dev team of the importance of formatted search functioning consistently between different patch levels (and fix the current mess with the hidden/inactive fields ASAP) ?

Wish 4: Stop consolidating lines in journal entries based on other documents

So far this has mainly been a nuisance for our existing customers, but this year we lost one big case (with plans to roll out B1 to multiple locations worldwide) to Dynamics NAV, solely because B1 only allows two financial reporting dimensions (project and cost center) to be automatically copied from sales and purchase documents to journal entries. To make things worse, B1 always consolidates the automatically posted journal entry lines per project, cost center and tax group. If B1 generated separate lines for each source document line and stored the base document and line number, there wouldn't be any problem, as we could always trace back from the journal entry line to the document line. Currently, we can't.

It can't be so hard to add a possibility to have B1 add a separate line in the journal entry for each document line, can it? This can't be a database space conservation issue, as B1 is rather wasteful in many other less important places.

Conclusion: get your act together

I really hope that you will get your act together in 2008. I hate to say this, but otherwise I'm afraid that next Christmas I'll be addressing my wishlist to the Big Bad Santa that lives in Redmond with his Dynamics family.

Introduction: The 'secret' of SAP Business One

Compared with other ERP products in the market, the version upgrade procedure of B1 is among the smoothest and most reliable. Quite often the actual upgrade procedure can be done in a matter of hours.

The 'secret' behind the simplicity of upgrading SAP Business One lies in some rather nice design choices. The most important of these is not allowing resellers or customers to mess with the source code of the core product. Instead, most of the customization is done by combining the simple yet powerful built-in features of Business One. These include:

  • User-Defined Fields (UDF)
  • User-Defined Tables (UDT)
  • User-Defined Objects (UDO)
  • Parameterized queries
  • Formatted Search (event-triggered parameterized queries)

Customizations based on the above-mentioned features persist from version to version, so there's no need to reimplement them after an upgrade.

With a bit of creativity, you can build amazing stuff with the above-mentioned built-in features of B1, and you can do it lightning fast. It's a real jaw-dropper in customer demos when you put together the functionality the customer is asking for in a matter of minutes. However, sometimes you will need to go a bit further than that. For more complex customizations, there are two programming interfaces that can be used to extend the functionality of the system. The Data Interface API allows creating and modifying data in the database in a controlled fashion (direct manipulation of the database is not permitted). The User Interface API, on the other hand, allows manipulation and extension of the B1 client's user interface. The API-based approach makes the management of even these more complex extensions quite easy and straightforward. When a new major version is released, usually all that has to be done is to recompile the customization modules against the new API. Even this is not always required; sometimes the old add-ons work directly with the new version.

The worst upgrade scenarios are usually related to those ERP products in which the source code of the core product can be directly manipulated to cater for the needs of individual customers. With such products, the upgrade project may cause as much work as the initial implementation project.

To upgrade or not to upgrade - that is the question

Because of the considerable ease of upgrading SAP Business One, one might sometimes be tempted to do 'hot' upgrades directly in the customer's system without lengthy preparations. However, upgrading a live ERP system should never be done without proper preparation or a good reason. As each customer has a slightly different configuration and different processes, no amount of generic testing at the software vendor will completely erase the possibility of problems arising in some of those production environments. While most upgrades will work like an angel, sooner or later you will run into one that's from a bit further down South.

Let me introduce Michael Jackson's laws of code optimization (Jackson, 1975):

  • The First Rule of Program Optimization: Don't do it.
  • The Second Rule of Program Optimization - for experts only: Don't do it yet.

Today - more than 30 years later - these rules are still valid for code optimization, perhaps even more so than they were back then. More importantly for us, these laws also apply to the art of upgrading ERP systems:

  • The First Rule of ERP Upgrades: Don't do it.
  • The Second Rule of ERP Upgrades - for experts only: Don't do it yet.

In this context, the 'yet' means that you should not do it without a real reason, in a hurry, or without proper preparations.

The Road to Success and Longevity: Four Rules of Upgrading SAP Business One

A telltale sign of someone being an IT rookie is their burning over-enthusiasm for installing the latest release/patch/beta of whatever software gizmo they are using or supporting. Over time, as experience accumulates, this over-enthusiasm usually gives way to a more mature stance on new products and versions. This doesn't have to mean becoming an old **** who's completely lost touch with the latest technologies. There's nothing wrong with keeping your knowledge up to date, but there's a time and place for everything. The customer's production environment is not the place to try out the latest and greatest gizmos.

Unless you really fancy those irregular extra heartbeats and crave to be constantly at the true bleeding edge, you will find life much easier (and customers happier) by obeying the following rules.

Rule 1: Do not upgrade a production environment at all without a decent reason

Above all, be honest with yourself about the real reason for doing the upgrade. I have only found three truly valid reasons for upgrading.

Valid reasons for upgrading
A significant new functionality that helps the customer enhance their business processes

Hopefully the reason your customer implemented the ERP system in the first place was to enhance their business processes. If you can provide the customer with more value, that's good news for both of you.

A major flaw in the currently installed version that you know is fixed in the new version

Before proceeding, make sure it really is fixed. While you're at it, make sure the new version is not introducing new flaws.

The currently installed version is no longer supported

I guess there's no arguing with that.

Dubious reasons
"Let's see what it can do" : pure curiosity

This is something you should do in your own sandbox, not in the production environment. Curiosity killed the cat.

"We need the bloody money" : greed or financial pressure

Running short of cash? To the uninitiated, offering to upgrade the customer's system might sound like an easy way to squeeze some extra turnover from your existing customer base. However, if the upgrade was recommended by you and it all goes wrong, who do you think will do the cleaning up? Will you be able to invoice the customer for all those wasted hours? There goes your margin, down the drain. What will you respond when the customer asks: "whose idea was this in the first place?" In ERP business, reputation and customer satisfaction are everything. You wouldn't want to sell your reputation for a couple of lousy grand, would you?

The only way to do it in style is not to offer upgrades for upgrades' sake. If you can't show the customer a genuine business benefit as a result of the upgrade, it's better to forget it. Never forget to inform the customer that neglecting a proper testing procedure introduces a small but real chance of a potentially catastrophic end result.

"They told me so" : blind obedience to SAP Support Desk

The most popular panacea in IT helpdesks all over the world is "just reboot it and try again". Quite often this miracle mantra really works. A somewhat similar - though sadly much less effective - mantra at the SAP Support Desk is: "Please upgrade to the latest patch xx". They may suggest upgrading the system even without being able to verify that it's a version-related problem (it seems to me that sometimes they do it just to keep you busy for a while and to make their performance statistics look better). Always demand verification that they have been able to reproduce your problem and that the problem does not manifest itself in the new version. In any case, test it yourself in a test environment before upgrading the live environment.

It will be you as a reseller having to explain to the customer if the upgrade turns sour. The guy at the Support Desk will just shrug and say "this will be fixed in the patch that'll be released after 30 days". That's not what your customer wants to hear.

Rule 2: Wait for a few patches before upgrading to a new release or service pack level

Let the others do the guinea pig stuff.

Rule 3: Don't take your live customers into ramp-up

A ramp-up may be justified if you haven't yet gone live with the system and the new version includes some features which are important for the customer. With a live system, steer clear of ramp-up.

Last but not least, don't count too much on the 'extended support'.

Rule 4: Test, test, test

Whatever the version/patch level, never underestimate the importance of testing. By doing a blind upgrade without testing, you risk introducing new problems that are far worse than the one you were trying to cure.

To make things more complicated, the most vicious problems often emerge only after the upgraded system has already been in use for a few days. With even a moderate volume of transactions, there's no turning back. Unless you're a true sadomasochist, that's not the kind of situation where you would want to end up.

Proper testing will buy you time to react and pre-emptively tackle the identified issues: you get to decide when to deal with them, instead of going cold turkey with the furious customer watching behind your back.

Upgrade Approval Testing Procedure

If you're serious about implementing ERP, approval testing is an important step in your implementation projects. In the approval test, the customer should go through all their main processes making sure that the system works as expected. As a preparation step for upgrade, the test procedure should include all the same tests as were done during approval testing in the initial project. Thus, constructing good test cases during the implementation will make your life easier during upgrades as well.

The test environment

With bigger ERP systems such as R/3, there are typically three server environments: one for production, another for testing/quality assurance and a third for development. While this would be obvious overkill for most B1 customers, it would be advisable to have two servers: one for production and one for testing. The test server can also act as a backup server that can be quickly taken into use should there be a technical failure on the production server.

Getting the users involved in testing

One of the most common mistakes I've seen is that a test environment is properly set up for the customer and the main users or end users are told to 'test at will'. In practice, everyone always has some 'real work' to do, and that will always be seen as more important than testing. As a result, the testing procedure is only followed on paper. The real testing is then done only after the upgrade. With this kind of test procedure, we might just as well save some time and money by going cold turkey, right? The saddest thing is that while the customer's employees actually blew it themselves, you will be the scapegoat doing the cleaning up in the aftermath.

If you can sell the customer the benefits of having a real test session, life will become much easier for everyone. If the test can be arranged outside the normal work environment of the users, that is even better. In their normal environment, they may still be more attached to their daily activities than to doing the testing. It might also be a good idea to have some kind of a measure for the testing (such as a number of documents completed) to get the people more involved.

Recording and processing test results

While sometimes the testing session can simultaneously act as an end-user training session, the mindset should be a bit different. While normal training usually concentrates on drilling the users in on the most familiar cases, the testing session should try to cover as much ground as possible, including all the exceptions to the rule that can be foreseen. For instance, if the test case is about processing an invoice, you should think about all the possible cases that are different from the standard case but relevant to the customer's business process. Here are just some examples to give you an idea:

  • Item Price is different from the delivery/order on which the invoice was based
  • The invoice is based on multiple documents 
  • The invoice is in foreign currency and the rate has changed between the delivery date and invoice date.
  • There are extra costs related to the invoice
  • The payment of the invoice is to be done in three instalments
  • etc.

During testing, all exceptions - no matter how small - from the expected functioning of the system should be recorded for analysis. Another classic user mistake is assuming "Well, this is only the test system. It will work in the production system". I've seen this happen more than once. If it doesn't work in the test environment, expect the worst from the production environment.

Based on the results, a list of found issues should be constructed and each of these issues should be processed before the final decision to go live is made. Please note: the whole idea of upgrade testing is to gain information for making the go/no go decision. If you end up deciding to postpone the upgrade based on the results of the test, it actually saved you from a lot of trouble. Instead, if you go through the testing with the mindset that you're going to upgrade no matter what the test results are, you're just wasting time.

APPENDIX A: SAP Business One Upgrade Dictionary

Release version

A new major version such as 6.5, 2004, 2005, 2007. Normally, these are supposed to introduce significant new features (although versions 2005 and 2007 were somewhat disappointing in this sense). 

Service Pack

The unwritten law of software upgrades says that you should always wait at least until the first service pack is released before even considering the update.

It is good to keep in mind that SAP seems to have a somewhat different interpretation of the term "service pack". B1 service packs often have more new functionality than the "real" releases. The reason seems to be that the release deadlines for major releases are dictated by the commercial people. The techies then need to cut corners to make these deadlines, and the easiest way to do that is to drop some of the planned functionality from the initial release. It is then introduced in the so-called service pack, which may actually contain more new stuff than the initial release version. It's better to treat service packs with the same caution as major releases. Thus, it's safer to wait for a couple of rounds of patching on top of the service pack before upgrading.

Patch Upgrade

These used to be called Emergency Fixes (even the version number had the acronym EF## instead of PL##). It doesn't matter what they call them, that's exactly what they are.

There is no fool-proof way to tell the good patches from the bad ones before trying them out. Conventional wisdom suggests that later patches have fewer problems than the earlier ones. Most often this is true, which may increase the temptation to upgrade without testing when it's a late patch. However, sometimes even a late patch can introduce nasty regression bugs.

In the past, the SAP portal only provided the latest patch. Because of this, we used to store locally all the installation packages for the versions we had installed. I would rather use a patch that I know has worked well with other customers than go for the latest patch, no matter what they say at the Support Desk. Luckily, the older patches are now also available in the Service Marketplace, so we no longer need to store them ourselves.

It would be really handy if SAP provided partners with an Amazon.com-style rating and comment mechanism for rating individual patches. Currently there's only the info file (which does not list all the changes) and the SAP Notes. I'm sure this would also benefit the Support Helpdesk in the long run.

What do you think?

Ramp-up

When it comes to B1, ramp-up is just a fancy name for beta testing in the production environment (they actually require you to upgrade the production system in ramp-up). If you think getting the first released version is not adventurous enough for you, apply for ramp-up. Otherwise, steer clear of it.

Preview version

These are provided to resellers only, to show what's coming in the future release. It's not allowed to install these in a production environment at all (not that some hothead rookie wouldn't try to do just that - it has happened in my organization ;-).


APPENDIX B: Potential upgrade trouble spots to look out for

The general rule of bugs and problems is that they will turn up in the places where you least expect them to and when you least expect them to. This is why thorough testing is so important.

That said, there are a couple of places that I've found more vulnerable to version changes than others.  

Add-ons based on UI API

While I'm in general quite fond of Business One, UI API is my pet hate. Why? Well, it's not only clumsy, it's flimsy too. You only need to have a quick glance at the B1 SDK Forum to realize that 90% of the issues discussed are problems related to or caused by the UI API.

As UI API is used to modify the user interface, vulnerability to versions is sort of built-in. For instance, if you are moving a certain field to another, currently unoccupied place on the form, there's no guarantee that the same spot will be unoccupied in the new version. The good thing about these kinds of problems is that they're quite easy to test by the developer (you don't need to bother the end user with them). Things get more complicated if it's a 3rd party add-on and you don't have access to the source code.

The more annoying feature of UI API is its constant stability problems. During the five years I've been working with SAP Business One, I haven't seen one UI API add-on that is truly stable and reliable. It's not that the add-ons are badly programmed; it's the whole foundation of UI API that is flawed. My general rule for UI API add-ons is: the fewer of them you have, the better. I have gone to great lengths to avoid having to use UI API, although sometimes it's the only way.

I once attended training for a fairly extensive SAP-certified B1 add-on solution that works both as a standalone client (that uses only DI API) and as an embedded version inside B1 (using both UI API and DI API). I noticed that their representative was always using the standalone client and asked why he's not using it from B1. He whispered: "you know, the embedded version is just for marketing purposes (wink, wink). The standalone client is much more stable". He sure was right about that.

Applications based on DI API and DI Server usually work fine when upgrading (and between the upgrades as well ;-). For instance, I have been maintaining a DI API-based reference payments processing tool for SAP Finland since B1 6.2. The only time I've had to change the code that handles the core functionality of the application was with B1 2005 SP1 patch levels 23-29, during which there was a slight change in the way the DI API processed payments. Since pl 31, DI API was fixed and the old version of the add-on was working once again. Otherwise I've only done a rebuild against the new DI API for major new releases.

Formatted search

In the past, there has been quite a lot of variation in how the formatted search events are triggered after upgrades (even patch level upgrades). Sometimes an event started to trigger when not supposed to or vice versa after an upgrade. However, usually these could be easily fixed by fine-tuning the formatted search settings. It's been getting better all the time - also the performance of formatted search is nowadays much better. Anyway, if your solution uses formatted search extensively, it is advisable to test them thoroughly.

There used to be problems with the way B1 parsed some of the more complex user queries (that are used in formatted search) after upgrading. However, I haven't run into these problems lately. The best way to inoculate your solution against these kinds of problems is to hide the more complex queries behind MS SQL views or user-defined functions. This way you're only calling a very simple query from B1 and letting MSSQL take care of the rest.

Layout designer

On a couple of occasions, a release-level upgrade caused some of the formatting in the customer's layouts to look a bit silly. While this was no major problem, it took the customer some hours to get it fixed.

While SAP does not encourage using system variables in layouts, sometimes they are handy. However, there's no guarantee that the same system variables will work in new versions.

User form settings

While a rather minor problem, it has quite often happened after a version upgrade that the user's individual form settings are lost and have to be configured again. The people at Support Desk have denied that this would be a result of upgrading. However, I've witnessed it quite a number of times - especially with major version upgrades.


Get it done fast'n'easy

[UPDATE February 13, 2011]

PLEASE NOTE!

DI Commander has been embedded into DI Construction Kit, which also supports MS SQL 2008 and SAP B1 8.8.

You can download DI Construction Kit with documentation and full source code from these sites:

http://www.mediafire.com/di_construction_kit

http://tinyurl.com/4pcsccr

Please Note! A new patch version (released on September 21, 2007) of DIC is now available on the DIC download page. The new version fixes the login problem with B1 2007 & MSSQL 2005 and introduces a new helper object.

If you have been involved with SAP Business One implementation projects, I'm sure you have at some point faced a situation where you needed to quickly modify a large amount of records in the customer's database. Here are some real-world examples from my past projects:

  • Change the default warehouse for 2,000 items in the database
  • Remove 4,000 items that were imported with an incorrect item number
  • Remove a few thousand items (from a total item count of 50,000) that have not been used in any transaction
  • Copy the account settings from one item group to all the rest of the item groups (about 200 of them)

Being familiar with relational databases, I've pretty often found myself mumbling something like this:
"Dude, if they would just allow me to run a simple SQL update query in the database, this task would be completed in a few seconds."
Well, this is prohibited by SAP for rather obvious reasons. We need to get by with somewhat less powerful tools.

The most usual alternatives available to the consultant (ordered by hairiness factor, not by preference) to handle these tasks are:

  • The Monkey Method: brute force
      Just do it manually using the B1 client. Don't get me wrong: despite the name, this method should not be overlooked. If the number of records to modify is small enough, the Monkey Method may well give the best effort/result ratio. I use it all the time, with good results. Especially during the implementation project, typing in a bunch of customer or item records might be an excellent exercise for the users of the new system.
  • The Layman Method I: B1 Import
      SAP Business One client includes a rough but easy-to-use tool for importing some of the basic stuff such as items, business partners etc. Performance-wise, this is by far the fastest method, as it seems to bypass the DI API and import the data directly into the database. I haven't used this tool much, but I've found it rather useful for doing massive price list updates. The main problem with B1 import is that there are so few objects it can manipulate, and even for those objects, only a limited subset of fields is exposed.
  • The Layman Method II: Data Transfer Workbench
      DTW is most often used in the initial master data migration, but it can also be used to modify data in a production database. One appealing feature of DTW is that the import files can be built with spreadsheet tools such as OpenOffice Calc or MS Excel, which are familiar to both end users and ERP consultants. While DTW is more flexible than B1 import, there are still lots of areas it leaves uncovered. Perhaps the most notable limitation is that while you can add and modify records with DTW, you cannot delete records. Sometimes doing a reimport also results in kinky stuff that you really want to avoid in a production environment (such as duplicate invoicing addresses in a customer record).
  • The Geek Method: DI API
      When it comes to handling master data or transactions in the B1 database, the Data Interface API can do 'almost' everything that can be done via the B1 client. It is implemented as a COM object and thus can be accessed with any language and development tool that supports COM. The most typical languages used are Visual Basic and C#, but you could even use VBA from inside MS Office.
      Unlike the notorious UI API, DI API is actually a rather solid and stable piece of software. Interestingly enough, Data Transfer Workbench is actually built on top of DI API and just adds the possibility to map CSV files or SQL resultsets to the fields of DI API objects. This means that anything that can be done in DTW can also be done in DI API, but not necessarily vice versa.
      With DI API, the biggest problem is not the API itself, but the fact that it's pretty much targeted only at people with a developer background - that is, the 'geeks'. To use DI API directly, you need to have a good grasp of elementary programming concepts and you need to be able to read API documentation. The tools required for writing and debugging code (such as Visual Studio or Delphi) are not always available (not installed and/or no license bought) in the customer's production environment.
      For a single-shot data modification task, there's quite a lot of overhead in doing the full cycle of writing, compiling, testing, deploying and finally running the code in the customer's environment. There is also quite a lot of 'plumbing code' involved for such basic stuff as initiating the B1 session. Of course, the basic plumbing can be copied from another project, but it's still a bit messy.
          
      For those things that DI API cannot do, one might try some even hairier kludges such as the SendKeys method in either the UI API or from .NET Framework. However, these are even less approachable to the wider audience than the DI API.

None of the methods discussed above is intrinsically superior to the others. Sometimes a Caterpillar is better than a shovel and a shovel is better than a spoon, but not always. The selection of the optimal tools depends on several variables, such as:

  • The number of records that need to be modified
  • The kind of modification required (create/change/delete)
  • The overall complexity of the task
  • The skills of the people available for the job
  • The tools available in the customer's environment
  • Recurrence: will this task be executed just once or perhaps repeated later?

For instance, if you only need to delete 20 items once, the fastest method is obviously just to do it manually. However, when the number of items rises or the task is more complex and needs to be repeated over time (for instance, monthly contract invoicing), we are quite soon in a situation where taking the time to write a small application might save you valuable time and effort. That is, of course, if you have a developer available. While there are a good number of B1 consultants out there who do have a developer background, I think it's safe to assume that most of them don't.

Introducing DI Commander

Wouldn't it be nice to have a tool that had the full power of DI API but was about as easy to use as the Data Transfer Workbench or SQL queries? Something that could be learned and used by a non-developer who is somewhat familiar with tools such as spreadsheet formulas/macros, SQL queries and/or command-line shells. This is where DI Commander (DIC) comes into the picture. DIC makes it easier, faster and more flexible to execute data modification tasks such as those mentioned above, while remaining within the safe boundaries provided by DI API. While the main target audience is consultants who are not developers, I think there are lots of developers who might also find some use for DIC.

Get straight to the business

With DIC, you don't need to care about initiating B1 sessions, as DIC does it for you. DIC supports multiple simultaneous sessions, so you can, for example, read stuff from one B1 instance and write to another. The most interesting part of DIC, however, is the command host. My aim was to hide the complexities of "traditional" languages such as C# and VB in order to provide a more business-logic-oriented user experience. While you can't totally avoid writing code when using DIC, there are obvious quantitative and qualitative differences. In the quantitative sense, there is a lot less code to be written in order to get a job done. The code is also much more transparent, revealing what is happening on the business-logic level. This should make it much more approachable for non-developers. DIC has a code editor window, but it also has a shell window that imitates a command prompt. For many potential users, the command prompt may be more familiar than a developer-oriented IDE.

I am not endorsing DIC as a replacement for real task-specific business applications. However, if you just want a job done quick'n'dirty, it will provide you with the tools of the trade.

DIC = IronPython with extensions 

When I was reflecting on all the stuff I had previously done with C# and DI API, I realized that it was pretty much always a variation on the same theme:

  1. Get a handle of one or several B1 business objects 
  2. Run a query or read a file
  3. Based on the results of step 2 and using the handle retrieved in step 1, either create new records or modify/delete existing ones (or perhaps transfer the retrieved data into another system).
  4. Repeat your selected mixture of steps 1-3 until you're done.
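The recurring pattern above can be sketched in plain Python. Everything in this snippet is illustrative: get(), query() and update() are hypothetical stand-ins for DIC's built-in functions, and a dict plays the role of the B1 database.

```python
# Plain-Python sketch of the recurring pattern. The functions below
# are stand-ins for DIC's built-ins, and the dict is an invented
# in-memory "database"; real DIC scripts talk to DI API instead.
db = {("ITEM", "A001"): {"DfltWH": "01"},
      ("ITEM", "A002"): {"DfltWH": "01"}}

def query(sql):
    # Step 2: pretend to run a query, returning the matching keys
    return [key for key in db if key[0] == "ITEM"]

def get(objtype, key):
    # Step 1: fetch a handle to an existing object
    return db[(objtype, key)]

def update(obj):
    # Step 3: in DIC this would post the change; our dict mutates in place
    pass

# Steps 1-3 repeated until done: iterate the results and modify each record
for _, code in query("SELECT ItemCode FROM OITM"):
    item = get("ITEM", code)
    item["DfltWH"] = "02"          # e.g. change the default warehouse
    update(item)
```

The point is the shape of the loop, not the stand-ins: fetch, iterate, modify, post, repeat.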

Based on these findings, I decided to equip DIC with some extras in order to get the much needed performance boost for the mentioned tasks. To be more precise, I added the following functions to DIC by modifying the embedded IronPython engine:

get()

The get function is used for retrieving existing objects from the database or handles for creating new objects. It has four different variants:

  • get(objecttype) ==> this will retrieve a handle for the specified objecttype from the default session.
  • get(objecttype, key) ==> this will retrieve the instance of the specified objecttype with the specified key, if it exists in the database.
  • get(objecttype, session) ==> this will retrieve a handle for the specified objecttype from the specified session.
  • get(objecttype, key, session) ==> this will retrieve the instance of the specified objecttype with the specified key from the specified session, if it exists in the database.

For instance, to get a handle to the order nr. 50 that exists in the database, all you need to type is:

myOrder=get(ORDER, 50)

Now that you've got the handle, you could access the values like this:

myOrder.CardCode

...or perhaps update some value in the order:

myOrder.Comments="Just testing"
update(myOrder)

The objecttype parameter is a member of the BoObjectTypes enumeration. DIC generates a helper constant for each BoObjectType in DI API as follows:

  • oInvoices ==> INVOICE
  • oItems ==> ITEM
  • oBusinessPartners ==> BUSINESSPARTNER
  • oAccountSegmentationCategories ==> ACCOUNTSEGMENTATIONCATEGORY
  • etc.

If you are familiar with the DI API documentation or DTW, you may have noticed that the helper constants lack the 'o' prefix and are in singular form for clarity. If you prefer the original DI API notation for BoObjectTypes, you can use that as well. Just remember to type the whole namespace (for instance, SAPbobsCOM.BoObjectTypes.oInvoices instead of INVOICE).
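The renaming convention can be sketched as a tiny function. This is my own reconstruction of the rule, not DIC's actual code, and it only handles the regular plural forms shown in the list above.

```python
def helper_constant(bo_object_type):
    # Reconstruction of DIC's naming rule (not its actual code):
    # drop the leading 'o', singularize the plural, upper-case the result.
    name = bo_object_type[1:]          # oInvoices -> Invoices
    if name.endswith("ies"):
        name = name[:-3] + "y"         # Categories -> Category
    elif name.endswith("s"):
        name = name[:-1]               # Invoices -> Invoice
    return name.upper()
```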

browse()

By far the most important of the added functions is browse. Because DI API is implemented as a COM object, its enumeration support in the .NET environment leaves a lot of room for improvement. The browse function creates a multipurpose .NET enumeration wrapper for a variety of B1 objects:

  • Objects that have the Browser property that can be mapped to a resultset
  • "Line" objects that have the property Count and the method SetCurrentLine. For instance, Document_Lines and BPAddresses.
  • Resultsets

With the enumeration interface in place, these object collections can be treated as any list in Python. With remarkable ease, that is.

There are five variants of browse():

  • browse(objectinstance) ==> The specified object instance must have the property Count and the method SetCurrentLine (for instance, Document_Lines and BPAddresses). It will return an enumeration of all the line instances of the given object instance.
  • browse(objecttype, recordset) ==> Gets a handle of the specified object type and returns an enumeration of the object instances specified by the recordset (using the default session).
  • browse(objecttype, recordset, session) ==> As above, but uses the specified session instead of the default session.
  • browse(recordset) ==> Returns an enumeration of the first column in each line contained in the recordset.
  • browse(recordset, columnname) ==> Returns an enumeration of the specified column in each line contained in the recordset.

For instance, to iterate the items in an order, you might type:

myOrder=get(ORDER, 50)
for myline in browse(myOrder):
  print myline.ItemCode
  print ","

query()

The query function is used to retrieve a Recordset from DI API according to the specified query. It has two variants:

  • query(querystring) ==> this will retrieve a Recordset object from the default session using the specified query.
  • query(querystring, session) ==> this will retrieve a Recordset object from the specified session using the specified query.

The Recordset retrieved by the query function can then be enumerated using the browse function.
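Put together, query() and the column variant of browse() make enumerating a column a one-liner. The snippet below mimics that combination in plain Python: query() and browse() here are stand-ins for DIC's built-ins, and the rows are invented sample data.

```python
# Stand-ins for DIC's query() and browse(recordset, columnname);
# the rows are made-up sample data, not a real B1 resultset.
rows = [{"ItemCode": "A001"}, {"ItemCode": "A002"}]

def query(querystring):
    return rows                          # pretend Recordset

def browse(recordset, columnname):
    # enumerate the specified column of each line in the recordset
    return [r[columnname] for r in recordset]

codes = list(browse(query("SELECT ItemCode FROM OITM"), "ItemCode"))
```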

add()

The only reason for the existence of the add function is syntactical consistency with the other new functions. All it does is call the Add() method of the specified object instance:

  • add(objectinstance)

update()

As with the add function, the only reason for the existence of the update function is syntactical consistency with the other new functions. All it does is call the Update() method of the specified object instance:

  • update(objectinstance)

remove()

The remove function can be used to remove any object that has the Remove() method. Naturally, the object can only be deleted if the consistency rules of B1 allow it to be deleted.

There are three variants of remove:

  • remove(objecttype, key) ==> Removes the object instance of the specified type if found by the key using the default session.
  • remove(objecttype, key, session) ==> Removes the object instance of the specified type if found by the key using the specified session.
  • remove(objectinstance) ==> Added only for syntactical consistency. Simply calls the Remove() method of the given object instance.

The DIC user interface  

image

As you can see from the image above, the current user interface of DIC is rather spartan.

Top of the screen
Shell tab

The shell window imitates a typical command prompt. It includes such functions as command history that can be accessed using the arrow keys. Currently, the shell window does not support statements that span several lines. However, you can combine multiple commands on a single line by separating them with semicolons.
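For instance, assuming the shell accepts standard Python statement separators, what would otherwise be three shell lines can be entered as one:

```python
# Three statements combined on a single line with semicolons
a = 1; b = a + 1; total = a + b
```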

Trace tab

The shell window will only show a short error message when an internal exception is caught. The Trace window will contain more details (stack traces etc). Normally you should not need to use the trace tab at all.

Bottom of the screen
B1 Sessions tab

DIC can do the standard Python tricks even without logging in. However, in order to use the DI API, you need to initiate at least one B1 session by using the login screen (To be precise, you could do it straight from the code, but why bother?). DIC also supports the promiscuous mode, in which you can initiate multiple simultaneous sessions into different databases and do cross-database operations. For each session you initiate, you need to select a handle that can be used to refer to the session from the code. If you only have one session open, you don't need to use the handle (as the single session is automatically defined as the default session), but even then you need to give the handle when logging in. The system suggests "session1" as the handle, but you can change it if you wish.

Code tab

This is an alternative to the Shell window. It works like a typical text editor: in addition to typing text, you can for instance copy and paste stuff around. Executing the code works a bit like the Query Analyzer window in MS SQL 2000: if nothing is selected, the application tries to execute every bit of code on the screen. If something is selected, the application will only execute the selected commands. Even when using the code window, the executed commands will be added to the command history of the shell.

Compared with the shell, the code window has a couple of bonuses. First of all, it supports multiple-line statements (just remember the indentation). Second, it also provides colour coding for the standard Python commands as well as for the functions added in DIC. Sorry folks, there's currently no IntelliSense function in DIC. However, in the Shell window you can get a list of the available fields and methods of any object by using the dir() function (please see the image below). I guess you could call that poor man's IntelliSense.

Using the dir() function
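Since dir() is a standard Python built-in, the trick works in any Python shell, not just DIC. For example, to discover what you can call on a string, filter out the underscore-prefixed names to keep just the public members:

```python
# List the public members of the built-in str type; names starting
# with an underscore are filtered out to reduce noise.
public = [name for name in dir(str) if not name.startswith("_")]
```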

Working with Python

I will only scratch the surface of Python here. If you wish to get a more thorough overview of the language, check out the Python.org website.

I chose Python as the language of DIC because Python has a very clear, human-readable syntax and it also lets you interact with live objects as you build your code.

As there are loads of excellent documents on Python out there on the web, I don't want to write another Python tutorial here. Instead, I will just give a couple of pointers to some of the available tutorials:

If you prefer real books, I warmly recommend Learning Python by Mark Lutz and David Ascher (published by O'Reilly).

Learning to program Python is not really required in order to benefit from DIC. Basically you just need to know the new functions added in DIC and some very basic commands in Python to get by. That's definitely not any harder than learning SQL.

I hope to be able to establish a public library of well-written DIC snippets in the future. This would make DIC even more approachable for the layman.

Playing around with DIC

Before diving deeper into the sample scripts, there are a few issues and conventions you should be aware of.

First of all, indentation (spaces and tabs) matters a lot in Python. It is used for identifying blocks of code that span several lines but belong to a single statement, such as for loops, function and class definitions etc. When you see an indentation in the sample code, it is there for a purpose. Omitting it will cause the code to fail (please see the following image for an example of incorrect and correct indentation).

(Image: incorrect vs. correct indentation)
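In plain Python terms, the rule looks like this (a minimal standalone sketch):

```python
# Correct: both indented statements belong to the body of the for loop.
total = 0
for n in [1, 2, 3]:
    total = total + n
    print(total)

# Removing the indentation from "total = total + n" would raise an
# IndentationError, because Python could no longer tell where the
# loop body begins and ends.
```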

If you are a seasoned Python developer, you should also notice that there are a couple of Python conventions that are not currently supported by DIC (although they are supported in IronPython):

  • While line spanning is supported in def and class statements, DIC does not currently support spanning lines ending with a backslash (\)
  • The 'open pairs' rule is not supported. Thus:
MyList = ["A","B","C"]  

...is valid but the following isn't (although it is valid according to Python conventions):

MyList = ["A",
"B",
"C"]

I apologize for these limitations. I was forced to skip some cleaning up in order to get DIC out in a decent amount of time. After all, remember that I'm distributing DIC for free and doing it just for fun.

Adding stuff

Let's add a new order (with two lines) to the system:

myorder=get(ORDER)                  # start a new, empty order document
myorder.CardCode='CUSTOMER1'
myorder.DocDate=System.DateTime.Now
myorder.DocDueDate=System.DateTime.Now
myorder.Lines.ItemCode='DEMOITEM1'  # fill in the first row
myorder.Lines.Quantity=10
myorder.Lines.UnitPrice=5.5
add(myorder.Lines)                  # append a new, empty row to the Lines collection
myorder.Lines.ItemCode='DEMOITEM2'  # fill in the second row
myorder.Lines.Quantity=8
myorder.Lines.UnitPrice=3.7
add(myorder)                        # commit the whole document to the database

Modifying existing stuff

To update a couple of fields for a single Business Partner with the CardCode 'CUSTOMER1', you might type something like this:

bp1=get(BUSINESSPARTNER, 'CUSTOMER1')
bp1.Cellular="+358 50 4324 1332"
bp1.Notes="Just testing."
update(bp1)

To do a similar change for all the business partners who belong to the default customer group:

for customer in browse(BUSINESSPARTNER, query("select cardcode from ocrd where groupcode=100")):  
  customer.Notes="Testing batch." 
  customer.Freetext="Testing batch."
  update(customer)

To change the password for all users in the database (for instance after making a copy of the production database for testing purposes):

for user in browse(USER, query("select user_code from ousr")):  
  user.UserPassword="test"
  if update(user)==0:
    print user.UserName + ":OK "
  else:
    print user.UserName + ":Update failed "

To lock the warehouse 01 on all items in the database:

for itm in browse(ITEM, query("select itemcode from oitm")):
  for whs in browse(itm.WhsInfo):
    if whs.WarehouseCode=="01":
      print " Locking warehouse 01 for item "+itm.ItemCode+":"
      whs.Locked=SAPbobsCOM.BoYesNoEnum.tYES
  update(itm)

Deleting stuff

The following examples focus on deleting items from the database. This is something you cannot do with the DTW.

To delete a single item with ItemCode 'DEMOITEM1' from the database, just type:

remove(ITEM,'DEMOITEM1')

...you may also handle several itemcodes at once by looping over a list:

for itemcode in ['A123','B124','C125']:
  remove(ITEM,itemcode)

The above samples are nice for testing, but in a real-world scenario you might want to read the itemcodes from a text file with one itemcode per line:

for itemcode in open("c:\\itemstodelete.txt"):
  remove(ITEM, itemcode.strip())  # strip() drops the trailing newline from each line
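The strip() call matters because iterating over a file in Python yields each line with its trailing newline still attached. A standalone sketch (the file name here is just for illustration):

```python
# Write a small sample file, then read it back line by line.
f = open("itemstodelete.txt", "w")
f.write("A123\nB124\n")
f.close()

raw = [line for line in open("itemstodelete.txt")]
clean = [line.strip() for line in open("itemstodelete.txt")]
print(raw)    # prints ['A123\n', 'B124\n']
print(clean)  # prints ['A123', 'B124']
```

Without strip(), the value passed to remove() would contain the newline character and would not match any real itemcode.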

...OR you might wish to use a resultset from a query as a source for the item codes:

rs=query("select itemcode from oitm where qrygroup64='Y'")
for itemcode in browse(rs):
   remove(ITEM, itemcode)

Function calls can be nested for a more compact expression:

for itemcode in browse(query("select itemcode from oitm where qrygroup64='Y'")):  remove(ITEM, itemcode)

(As the above can be expressed on a single line, you can use it in the shell window)

Naturally, any of the above samples could just as well have been done for any other DI API object type (BusinessPartners, Quotation documents etc). Simply replace the value of the objecttype parameter with the one you need and it works out of the box (as long as the objecttype in question has the Remove() method in DI API).

For instance:

for cardcode in browse(query("select cardcode from ocrd where cardtype='L'")):
   remove(BUSINESSPARTNER, cardcode)

...this will remove all the BPs with type 'Lead' from your database. (Please note! The same limitations apply as for the Remove() method in DI API. Thus, if there are transactions linked to a BP, it cannot be deleted.)

Installing DIC

Click here to get the latest version (R1.0 PL 3) of DIC. Currently only the binary version is available. I have not yet decided whether to publish the source code.

Installation is about as quick'n'dirty as you might expect: just unzip it and you're ready to rock'n'roll.

System Prerequisites

In order to get DIC up and running, you need to have a correct version of DI API installed. DIC was compiled against DI API 2005 SP1 PL23, but hopefully it will work fine against any version of DI API 2005.

In addition to DI API, you also need to have version 2.0 of the .NET Framework installed.

UPDATE: Are there safety issues with DIC ?

I've recently received comments and warnings from several people who've assumed that DIC is poking the database directly using SQL queries. I thought it would become clear from all that's written above, but obviously it wasn't: ALL THE DATA MANIPULATION OPERATIONS DONE WITH DIC ARE EXECUTED VIA THE DI API CALLING THE Update(), Add() and Remove() METHODS PROVIDED BY THE DI API OBJECTS. Thus, DIC is safe as milk. Or at least as safe as the DI API.

UPDATE 2: The name is lame, but the tool works all the same (Jan 18, 2008)

Recently, I was contacted by the moderators of this portal. It turned out that some people had somehow associated the acronym formed from the previous name (B1 Turbo Command Host) with some kind of an insult. Well, I apologize for that.

Still, if we start going down that road, it quickly becomes a slippery slope. There are simply too many words with several meanings and too many acronyms that may give wrong impressions when pronounced as a word. Try replacing "1" with "i" and pronouncing some of the most common acronyms used in the B1 Forum, then look up those words in Wikipedia. You might be surprised by what you find.

Anyway, I decided that it's better to change the name than to be totally censored. Thus, B1 Turbo Command Host is now DI Commander. I sincerely hope that this new name does not aggravate anyone. If it does, then the Beauty is certainly in the Eye of the Beholder.

Disclaimer

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

DI Server has been available to SBO developers ever since SBO 2004 was released. I remember being pretty excited when I first read about DI Server, as it would enable me to write robust web-based applications with ASP.NET (or whatever other platform I would choose) - something that wasn't really possible to do with plain old DI API.



 

My first encounter with DI Server was somewhat frustrating. I had the service up and running, but it was difficult to figure out how to create my first "Hello World" application on top of it. I was expecting DI Server to provide its own http layer for handling the SOAP requests. It seems that quite many people I've met have had the same misconception. In fact, DI Server is just another COM object that accepts your SOAP requests as string parameters through the Interact method and also returns the response messages as strings. You need to implement your own network access layer on top of it, using whatever protocol you choose. It doesn't necessarily have to be http.



Sure enough, there was the DI Server help file already with the first release. However, the help document pretty much presumes that you are already past the point of making your first successful SOAP call to the DI Server. Luckily, in late 2004 I got a chance to participate in a developer meeting in Norway with some SBO Dev Team people from Israel. There I finally got the missing pieces I needed to get up and running with DI Server (thank you, Gali :-). All it took was three simple files with less than 30 lines of script that together provided a quick-and-dirty setup for accessing the DI Server from a web browser.



It seems that the product has so far not really taken off as expected among SBO developers. Anyway, my personal experiences have been quite different from what many commentators describe in the SAP Business One discussion forum. Even though there are some limitations and bugs, the product has proven successful for the uses I originally conceived for it. During the past 18 months, I have successfully developed several web applications with ASP.NET, C# and DI Server. The functionality in these applications ranges from business partner data maintenance to booking new sales orders, reviewing order/delivery/invoice history etc. The feedback from customers has been rather positive. How many other ERP vendors provide you with tools to build a production-grade web user interface to some of the key functionalities in about 10 days ?



 

So, what can DI Server do that you couldn't program yourself on top of DI API? First of all, opening a new connection in DI API takes a long time, way too long to be usable in a web application context. DI Server has a connection pool which effectively eliminates this problem (well, you could implement your own connection pool, but is it really worth it?). DI Server is also much more stable under the load of a typical web application (lots of simultaneous requests coming in). Last but not least, DI Server has a nice per-CPU licensing scheme with no per-user limits on the number of active sessions.

 

Installation and configuration gotchas

 

 

Here are some issues I've run into in real-life setups.



    • DI Server is more version-dependent than the other server tools (license service, backup service and messaging service). For instance, it doesn't usually matter which SBO 2004 patch level your license server comes from. You could easily run SBO 2004 PL 18 with the license server from the PL 5 distribution without ever noticing the difference. Because of this, it is quite easy to simply skip upgrading the server tools when upgrading the databases to a new patch level. Once you start using DI Server, this may cause you some unnecessary headaches. I once spent several hours trying to figure out why a customer's DI Server installation was returning "General error" for my SOAP requests. The reason turned out to be an out-of-sync version of DI Server.


    • If you are upgrading or making a fresh Server Tools installation on Windows Server 2003, you will need to change the DCOM configuration for SBODI_Server with dcomcnfg. The procedure is explained for License server in SAP Note 833798. The same steps are required for DI Server as well, simply replace "SBOLicense" with "SBODI_Server" and follow the same steps.


    • DI Server does not necessarily have to be installed on the same computer where your SBO databases are located. If you are developing an application with IIS as your application server, there might be some other, conflicting IIS application already in use on the SBO server. Microsoft SharePoint Server, in particular, is known to be troublesome when it comes to coexistence in an IIS environment.


    • While you probably don't want to use the quick'n'dirty test setup as part of your production application, it is very useful in checking that the infrastructure is properly configured when you are installing your application in a new system environment.


    • Be aware that when DI Server connects to the license server, the address you are using for license server connection will be recorded in the SLIC table in SBO-Common. All SBO clients (including DI Server) that connect to the database will be using this address as default, unless the license server is separately specified (in that case, the SLIC table is once again modified). Because of this, you should avoid such things as using a localhost address (127.0.0.1) for license server, even though that might otherwise seem logical (as DI Server and License server are typically installed on the same machine). To be sure, you should always use the same license server address for all connections. I once spent several hours trying to figure out why DI Server gave me the dreadful -2147023174 error when logging in (even though I had all the DCOM access rights settings done correctly). The reason was that after restarting the DI Server, another DIS application had used a different IP address (public IP address, while I was using a private IP address) for connecting. The problem only occurred when I gave the license server address in the login message. Once I omitted the license server element from the message, I was once again able to log in.


 

Debugging in DI Server

 

 

Once you are able to get the SOAP messages going back and forth between your application and the DI Server, debugging is rather straightforward. To see all the requests and responses in the log file, simply set the log level to "Debug" as instructed in the DI Server help file. After changing the configuration file, you will need to restart the DI Server. Whenever you restart the DI Server, you will need to restart the license service as well. Once your application is in production, you should set the logging level back to "Error", otherwise your log file will quickly become huge.



 

In principle, you could browse the log file with any text editor. However, the trick in log watching is that you are typically most interested in the last entries of the file. A normal text editor would require you to keep reloading the file to see the latest results. A more sophisticated solution is to use a log tailing tool. Tail is originally a UN*X command, but there are dozens of tail implementations for Windows. Having tried out a whole bunch of them, I personally prefer BareTail. It's compact (186k), it does everything a nice tail application should (word wrapping, color coding etc.) and it's free.

 

The quick'n'dirty test setup

 

 

These three files contain everything you need to start sending messages to your DI Server.



Global.asa contains the code for creating the SBODI_Server.Node object and storing it in the session.

default.htm produces the simple input form for receiving your SOAP requests.
WebSAPManageSoap.asp handles the request sent from default.htm and shows the results received from DI Server.

Simply save the files in your IIS root directory and you're ready to go.

PLEASE NOTE! These files form a "traditional" ASP application, not an ASP.NET application. Support for ASP pages (in addition to ASP.NET) must be selected separately when installing/upgrading the IIS/Application Server component in Windows Server environment.

 

file 1:global.asa

 



Sub Session_OnStart
  Session.Timeout = 1
  Set Session("SBODI_Server") = Server.CreateObject("SBODI_Server.Node")
End Sub

Sub Session_OnEnd
  Set Session("SBODI_Server") = Nothing
End Sub

 

file 2:default.htm

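The original markup of default.htm was lost from this page. Reconstructed from file 3 below (which reads a form field named HTTP_SOAPACTION), a minimal version of the input form might look like this; treat it as a sketch consistent with the other two files, not the original file:

```html
<html>
  <body>
    <!-- Paste a SOAP request for DI Server into the textarea and submit. -->
    <form method="post" action="WebSAPManageSoap.asp">
      <textarea name="HTTP_SOAPACTION" rows="20" cols="80"></textarea>
      <br>
      <input type="submit" value="Send to DI Server">
    </form>
  </body>
</html>
```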
 

file 3:WebSAPManageSoap.asp

 

<%
dim requestStr
dim outStr
dim obj
requestStr = Request.Form.Item("HTTP_SOAPACTION")
set obj = Session("SBODI_Server")
outStr = obj.Interact(requestStr)
Response.Write outStr
Response.End
%>
