

Community User

SAP TechEd Recap, Day 1

Posted by Community User Sep 18, 2011

Mark Yolton and Courtney Bjorlin were kind enough to sit down with me for a few minutes at the end of the first day of TechEd Las Vegas, and share their thoughts on how the conference had been going so far, as well as what to expect next. We also got a great HANA surprise from Mark, so don't miss it!

You can watch the embedded video below, or you can download it at the "Download Media" link above, or subscribe via iTunes.

I had planned to do more of these at TechEd, but camera difficulties prevailed, unfortunately. Of course there is a large amount of video content available, in case you missed the show, at SAP TechEd Online, as well as from other SAP Mentors and bloggers, such as the excellent content from Jon Reed and Dennis Howlett, and plenty of others as well.

Which way to steer your ship?

Have you ever taken database performance metrics, downloaded them to Excel, and created charts and graphs so that trends became clearer?

Of course we all know management likes pictures to convey information, but it helps us techies as well to be able to visualize how performance is trending over time. Sometimes we get so caught up in the daily operation of our systems that small increases add up and become issues before we know it, even when we're watching the system daily.

Not to mention, sometimes it isn't until you look at data on a graph that patterns become visible.

(I'm not being specific about *which* data to look for, because every system and database platform is unique, and you really need to understand yours and how they're used to determine which data are the most important to keep an eye on.)
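To make the trend point concrete, here is a small, hypothetical Python sketch (the numbers are synthetic, not from any real system) showing how a simple moving average surfaces a slow drift that day-to-day readings hide:

```python
# Illustrative only: synthetic daily DB response times (ms) with a slow
# upward drift of ~0.5 ms/day plus a weekly spike.
daily_ms = [100 + day * 0.5 + (5 if day % 7 == 0 else 0) for day in range(60)]

def moving_average(values, window):
    """Trailing moving average; smooths daily noise so trends stand out."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

smoothed = moving_average(daily_ms, 7)

# The drift is invisible from one day to the next, but clear over two months.
growth = smoothed[-1] - smoothed[0]
print(f"Change over period: {growth:.1f} ms")
```

Looking at the smoothed series on a chart makes the same point visually: the graph climbs steadily even though no single day looks alarming.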


The treasure you can find

SAP has developed Solution Manager with the capability to store data in BI cubes, and this is the perfect place to report on the performance of the systems in your landscape. And with the Database Performance Warehouse, part of the DBA Cockpit, you can get detailed data and charts to support reporting on the database performance metrics that are important to you.

Enabling this functionality is simple, and I'll show you how to do this now.


The map to guide you

The first step is to log in to Solution Manager and go to transaction 'solman_setup'. From there, click on Managed System Configuration > Configure Technical System.

Managed System Configuration



Select a database from the drop-down menu, and validate the populated information or change it as needed:

Wizard Input Screen 1


Enter any information requested, then click Next:

Wizard Input Screen 2


The configuration should now be deployed automatically, including the structures in the target database, and the collectors to retrieve the data and populate the BI cubes in the Solution Manager system. If all goes well, you should see the following screen, as I did:

Wizard Input Screen 3


The next step is to let the collectors collect some data so that you can display it. Once that happens, we'll open up the DBA Cockpit again with transaction 'dbacockpit.' Select the database you've just deployed the configuration to, make sure you get connected, then navigate to Performance > Performance Warehouse > Reporting.

DBA Cockpit Navigation Screen 


In my system with NetWeaver 7.0 Enhancement Package 2, I get the following screen (NW 7.0 EhP1 will open a new browser window, but EhP2 opens in the sapgui for me):

Buffer Pool Quality


So from here on out, all of the graphs are based on embedded BI queries that may or may not exist for you, depending on your database platform. The good news is, if you don't see the charts you want to see, you can create your own simply by using the standard BI query development tools to create a new query, or by modifying an existing query.

Here's one for Database I/O read and write times:

DB I/O Read and Write times


And of course, as these are embedded BEx queries, all of the standard filter and drill-down capabilities exist by right-clicking on the data:

Context Menu


Query Properties:

Query Properties


We have reached our destination, but this is only the beginning of the journey

So that's basically it. For those on an Oracle platform, these charts may be very similar to what Oracle Enterprise Manager offers, though OEM did not, last I worked with it, offer any sort of analytical capabilities on that data. I am on the DB2 platform now though, and IBM does not offer any free server-based tools like OEM that are always watching your databases and monitoring performance. I do not know what SQL Server or MaxDB have to offer, so I cannot comment there.

The good thing about Solution Manager's Database Performance Warehouse is that, if you support multiple database platforms, whether as a consultant for various clients or because you run multiple platforms in your own environment, it allows some standardization across those platforms for database performance reporting.

So that's it, I hope this helps, and please comment below if you have experiences with database performance warehouse to share, or questions you need answers to.



Database Performance Warehouse Webinar presented by SAP Labs

As you can see here, Direct Read Requests, which was truncated to "38066", is shown here as READDIRCNT = 12,438,066. And Sequential Read Requests, truncated to "11105", is shown as READSEQCNT = 60,611,105.

So there you have it - the data is there, it's just knowing how to get it. :)
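As the two examples above suggest, the truncated display keeps only the trailing five digits of the full counter. A quick Python sketch (the five-digit width is inferred from the examples, not documented):

```python
# The truncated display shows only the trailing digits of the full counter.
def truncated(value, digits=5):
    """Return the last `digits` digits of a counter, as the short display shows it."""
    return str(value)[-digits:]

print(truncated(12438066))  # READDIRCNT -> 38066
print(truncated(60611105))  # READSEQCNT -> 11105
```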

Of course, this isn't the only function module that can read stat files, but this is the one I'm used to using, so I thought I would share with everyone.


Community User

How to Cover Your Basis

Posted by Community User Aug 16, 2011

I am excited to announce the first of what will hopefully be a long series of podcasts dedicated to SAP technologies, with an emphasis on administration and infrastructure. The podcast is called Cover Your Basis, and will only be successful with the contribution of many in the community.

In the first episode, available in both video and audio formats via iTunes or at the Cover Your Basis website, I am joined by Courtney Bjorlin of ASUG News to talk about the great content ASUG News has to offer to anyone interested in SAP, not just ASUG members.

Next, I talk with Jon Reed of JonERP.com, an SAP Mentor and long-time SAP community member to discuss some things Basis Administrators can do to be successful, as well as what to look for at the upcoming SAP TechEd conference.

Lastly, I welcome Chris Kernaghan of Capgemini Consulting. Chris brings many years of Basis experience to the conversation and talks about some of his recent projects, including a combined upgrade and unicode conversion (check out his brilliant blog series on that topic, Architecting an SAP Upgrade and Unicode conversion (CUUC Series Part 1)), as well as efforts to utilize the Amazon public cloud. (On a side note, I just found out Chris is a freshly minted SAP Mentor in the New SAP Mentor Cubs Summer 2011 class - congratulations, Chris!)

You can watch the video podcast here:


Or, feel free to subscribe in iTunes to the audio or video podcasts.

And please visit Cover Your Basis for Basis-related content, including additional podcasts coming in the near future!

A manager's job is a multi-faceted one. As a manager you have to organise, monitor, motivate and support your staff.

However, one of the most important roles of a manager is to delegate. Delegation is an essential skill if a manager is to be successful and wants to advance. Successful delegation shows that you are willing and able to take on greater and more important tasks. However, like any skill, delegation needs to be worked at.

If you want to delegate successfully, follow these seven rules:

1. Effective and clear communication is essential for delegating effectively. When delegating work to an employee, be as specific as possible. For example, tell them exactly what needs to be done, how you want it done, and when you want it by. Ensure the person understands what you have said and ask them if they have any queries.

2. When delegating a task to a team member, it is important to advise them why it needs to be done and how it will affect things. If there are any possible implications that may arise out of doing this work, then they need to know about those as well.

3. It is important, when delegating a task, that you clearly communicate the standards and quality required to complete it. Obviously these standards need to be realistic and achievable given the skill and experience of the team member.

4. It is important to trust your team and give them a level of autonomy to take responsibility for the task. This also means that you should give them a level of authority to get the job done without creating any obstacles.

5. There may be certain tasks that cannot be successfully completed with the resources currently available. Therefore, you need to identify and provide the support needed to ensure the employee can get the job done. This support could be more training, an increased budget or greater access to you.

6. When you have spoken to your team member about the task, make sure they understand what is expected of them and that you have their commitment to do it.

7. It is important that you give them as much free rein as possible to complete the task; however, this does not mean that you disappear. Make it clear that they can have access to you if they have questions or need advice to overcome an obstacle. Providing support is important, even if it is just an email.

When we installed CA Wily Introscope for the first time in our SAP environment, it was installed by a consultant, and it would not even run when he left. Someone determined it was due to a lack of memory, so they doubled the amount of allocated RAM. It still wouldn't run, so it was doubled again. Lather, rinse, repeat.


To make a (very) long story short, we finally found the right sizing process, and went through a sizing to determine how much we actually needed. But the sizing made many assumptions that we could not verify and had to use ballpark numbers for. As a result, we found we had to watch the system very closely to see if it was performing as expected.


What we found was that, on occasion, we would see gaps in metrics. This was most noticeable when looking at graphs with a 15-second interval, and it looks something like the following, where the gaps between the dots indicate that dots (metrics) are missing. It's sort of like connect-the-dots: if a dot is missing, you skip it and connect the next two:


Graph of HTTP Users
In fact, all of our graphs during this period of time had gaps in the metrics just as this one did.


So, on advice from someone in our company knowledgeable about Introscope, we found that there are metrics one can look at to determine the health of the Introscope server itself. These metrics will tell you whether Introscope has enough resources to do its job correctly. It became obvious that ours did not, as evidenced by the following graph of Harvest Duration, a metric that should mostly stay below 3,000 ms; ours was averaging over 20,000 ms:
Harvest Duration
So, we restarted Introscope (not a "fix", but that's what was done in this case), and things returned to normal, as you can see in the following graph. Again, this is Harvest Duration, with no gaps in metrics, and only one spike over 8,000 ms, which is acceptable.


New Harvest Duration
 All of the recommended values to watch out for with respect to the health of Introscope can be found in the Introscope Investigator. And here is how to get to them, along with the general rules to go by.
  1. Open an Investigator window.
  2. Navigate to *SuperDomain*>Custom Metric Host (Virtual)>Custom Metric Process (Virtual)>Custom Metric Agent (Virtual)(*SuperDomain*)>Enterprise Manager.
  3. Review the metrics below; you will likely want to look at a 24-hour or 1-week window of time to see whether the EM has historically stayed within acceptable working parameters.
  • Health>Harvest Capacity (%) – recommended <= 75%; in trouble if it's constantly > 75%. Spikes are OK.
  • Health>Heap Capacity (%) – recommended <= 75%; in trouble if it's constantly > 75%. Spikes are OK.
  • Health>Incoming Data Capacity (%) – recommended <= 75%; in trouble if it's constantly > 75%. Spikes are OK.
  • Health>SmartStor Capacity (%) – recommended <= 75%; in trouble if it's constantly > 75%. Spikes are OK.
  • Tasks>Harvest Duration (ms) – recommended < 3,000 ms; in trouble if > 7,500 ms.
  • Tasks>SmartStor Duration (ms) – recommended < 3,500 ms; in trouble if >= 15,000 ms.
  • Overall Capacity (%) – recommended <= 75%; in trouble if it's constantly > 75%. Spikes are OK.
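As a rough illustration, the rules of thumb above can be encoded in a simple health check. This is a hypothetical Python sketch of the thresholds, not anything Introscope ships; the function name and sample values are my own:

```python
# Thresholds taken from the rules of thumb above (capacity and harvest duration).
CAPACITY_LIMIT_PCT = 75    # sustained values above this mean trouble; spikes are OK
HARVEST_WARN_MS = 3000     # harvest duration should mostly stay below this
HARVEST_TROUBLE_MS = 7500  # sustained values above this mean trouble

def em_health(avg_capacity_pct, avg_harvest_ms):
    """Classify Enterprise Manager health from sustained averages over a window."""
    if avg_capacity_pct > CAPACITY_LIMIT_PCT or avg_harvest_ms > HARVEST_TROUBLE_MS:
        return "in trouble"
    if avg_harvest_ms >= HARVEST_WARN_MS:
        return "watch"
    return "healthy"

print(em_health(40, 1200))   # normal operation
print(em_health(50, 20000))  # our pre-restart state: harvest averaging over 20,000 ms
```

Run against a 24-hour or 1-week average, this cleanly separates our pre-restart state (harvest over 20,000 ms) from the healthy post-restart one.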
Community User

Back to Basics

Posted by Community User May 5, 2011

Back in the mid-nineties when I was in consulting, our average SAP implementation took three years, with hundreds of consultants on two or more continents. These projects were very large and very complex and difficult to manage as it was. This is why every project manager dreaded the term: scope creep.

It was simple, relatively speaking, to lay out the deliverables in the beginning of a project and get agreement as to each party's responsibilities. However, over the life of a project, issues come up constantly and, in an effort to make progress under the pressure of project deadlines, people jump in and tackle them, often without regard to whether the issue was "in scope" or not.

I'm sure most of you are quite familiar with this scenario and how quickly it can spiral out of control. I don't need to go into the horror stories. Even if it's fun sometimes. :)

What I think most people don't consider is that it's really no different in the internals of a customer organization. Just as in a client-consultant relationship, teams have initial expectations of each other that may be well-defined early on, but over time and successive implementations, the landscape grows in size and complexity, and the responsibilities of teams often creep out of control without regard to what may be the optimal way to grow.

Recently, after a few reorganizations, my team - the SAP Infrastructure team (some call us Basis, some call us Architects, some call us... well, we won't go there in public) - came under new management not familiar with SAP, and in trying to explain what our team did and why, I realized that a lot of it was hard to explain and there was no clear rationale other than "that's just the way we've come to do things over time." And since I came in well after the group was established, even I didn't understand it all.

So I embarked on a process to get back to basics - to start over and document what we do and why - and it was very enlightening!

Having been indoctrinated with ITIL, I set out to design a service portfolio. I met with our group's stakeholders and asked them, not being confined by what we had done in the past, what services did it make sense for our group to provide going forward? What did we do well? What did we not do well? In what additional areas could we add value, and where were we just getting in the way?

This process is still getting started. We have gathered requirements and put together a service portfolio, and are in the process of presenting that to senior management. But the feedback we have already been getting has been overwhelmingly positive. Relationships formerly bordering on adversarial are turning around quickly. When you open yourselves up to discussion with stakeholders about what it is that you can do to deliver more value to them, they are eager to help. You don't have to know anything about ITIL to know that being customer-focused is the key to delivering value.

Of course, this can't be the end of the process. Unlike in consulting, where once the implementation was done I got to leave, this has to be seen through, and service management practices must be implemented to ensure quality and consistency, not to mention improvement.

I plan to report back on our progress but, in the meantime, if you want to hear more about our experiences, and you'll be attending the SAPPHIRE NOW / ASUG Annual Conference in Orlando, you can attend my session "Most Valuable Functions in Solution Manager" where I will discuss how to use Application Lifecycle Management practices to deliver value to your stakeholders. I hope to see you there or, at the very least, back here. :)






Based on SAP EAM Benchmarking, companies (across all industries) with average performance have around 60% Weekly Maintenance Schedule Attainment Rate, whereas the Top 25% companies have close to 87% Weekly Schedule Attainment Rate.



Source: SAP EAM Benchmarking

Participants in SAP's EAM Benchmarking program have access to industry-specific values for this metric.




  • Maintenance management process is streamlined through the automation of monitoring, notification, and maintenance problem resolution processes
  • A majority of maintenance work orders are generated from the preventive and predictive maintenance inspections
  • Resource level planning is integrated with work plans, equipment and materials for efficient workload analysis
  • Maintenance planning takes into account skill required, material required, tools required, and specific job instructions



  • Wrench Time (in %)
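For reference, the schedule attainment metric above is commonly computed as the share of scheduled work orders completed within the week. SAP's benchmarking program may define it slightly differently, so treat this Python sketch as illustrative:

```python
def schedule_attainment_pct(completed_on_schedule, scheduled):
    """Weekly schedule attainment: share of scheduled work orders completed in the week.
    (Common definition; benchmarking programs may vary in the details.)"""
    return 100.0 * completed_on_schedule / scheduled

# Example week: 52 of 60 scheduled orders completed on time, roughly the
# top-quartile 87% figure cited above.
print(round(schedule_attainment_pct(52, 60)))
```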

Are you fed up with hanging queues in CRM because a colleague changed some customizing and now, for example, your materials no longer arrive?

Well, then you may have already scheduled a regular download of the customizing objects. But still... Usually you do the download twice a day, but someone was quicker, and now there is a hanging queue.

What would you say to a little report that restarts the queues once the download of your customizing objects is done?

This will probably save you some time and spare you the annoying calls claiming that CRM is broken and that the hanging queues are your fault. You surely know it's not your fault :-)

So what you will have to do is put the report in your CRM system and add a second step to your customizing download job. After you have fired "smof_download", execute this report, and from then on you should have much less trouble with hanging queues.


The usage is simple: the first parameter is the load objects, entered just as in smof_download; the second parameter is the queue name (use a generic one, e.g. "R3AD_MATERI"); and the third parameter is the number of retries the report waits for the download objects to finish. Each retry waits 5 seconds, so something like 10 is a good start.

So this is not really genius, but sometimes the little things are helpful.

So have fun, and now your colleagues owe you a coffee.

*&---------------------------------------------------------------------*
*& Report  ZCAUK_QUEUE_RESTART
*&---------------------------------------------------------------------*
*& Restart qRFC queue after initial download
*&---------------------------------------------------------------------*
*& DATE       NAME                     DESCRIPTION
*& 2011-02-23 Rolf Mueller, Ciber AG   Initial version
*&---------------------------------------------------------------------*



REPORT zcauk_queue_restart.

TABLES: smofdstat, smofobject.                              "#EC *
TABLES: trfcqin, trfcqstate, trfcqdata.                     "#EC *

SELECT-OPTIONS s_object FOR smofobject-objname.
PARAMETERS: "  p_object LIKE smofobject-objname,
            p_qname   LIKE trfcqin-qname OBLIGATORY,
            p_time(3) TYPE n DEFAULT 10.

DATA: gt_smofobject  LIKE smofobject OCCURS 0 WITH HEADER LINE,
      gt_smofobj_tmp LIKE smofobject OCCURS 0 WITH HEADER LINE,
      gt_smofdstat   LIKE smofdstat  OCCURS 0 WITH HEADER LINE.

DATA: exeluw LIKE sy-index,
      astate LIKE trfcqin-qstate.

START-OF-SELECTION.

* Only queue administrators may restart queues. (The AUTHORITY-CHECK
* statement was partly lost in extraction; reconstructed from the
* surviving ID/FIELD fragment.)
  AUTHORITY-CHECK OBJECT 'S_ADMI_FCD'
                  ID 'S_ADMI_FCD' FIELD 'NADM'.
  IF sy-subrc <> 0.
    MESSAGE e149(00) WITH 'S_ADMI_FCD / NADM'(ath).
  ENDIF.








* Select the active download objects matching the user's entry
  SELECT * INTO TABLE gt_smofobject FROM smofobject
           WHERE objname IN s_object
             AND inactive EQ space.
  IF sy-subrc = 4.
    REFRESH gt_smofobject.
  ENDIF.

  gt_smofobj_tmp[] = gt_smofobject[].

* Read the download status of the selected objects in any state.
* (The CALL FUNCTION statement itself was lost in extraction; only its
* parameters survived, so the function module name is left open here.)
  CALL FUNCTION '...'
    EXPORTING
      i_abort       = space
      i_waiting     = 'X'
      i_running     = 'X'
      i_done        = 'X'
    TABLES
      ti_smofobject = gt_smofobject
      to_smofdstat  = gt_smofdstat.

* Remove every object that reported a download status at all
  LOOP AT gt_smofdstat.
    READ TABLE gt_smofobj_tmp WITH KEY objname = gt_smofdstat-objname.
    IF sy-subrc = 0.
      DELETE gt_smofobj_tmp INDEX sy-tabix.
    ENDIF.
  ENDLOOP.

* Whatever is left was never downloaded and cannot be waited for
  IF gt_smofobj_tmp[] IS NOT INITIAL.
    LOOP AT gt_smofobj_tmp.
      WRITE:/ 'Object:', gt_smofobj_tmp-objname, 'never downloaded!'.
    ENDLOOP.
  ENDIF.





* Poll until all selected objects report status 'done' (5 seconds per try)
  DO p_time TIMES.
    WAIT UP TO 5 SECONDS.

    gt_smofobj_tmp[] = gt_smofobject[].

*   Same status read as above, this time counting only finished objects
*   (function module name lost in extraction, as before)
    CALL FUNCTION '...'
      EXPORTING
        i_abort       = space
        i_waiting     = space
        i_running     = space
        i_done        = 'X'
      TABLES
        ti_smofobject = gt_smofobject
        to_smofdstat  = gt_smofdstat.

    LOOP AT gt_smofdstat.
      READ TABLE gt_smofobj_tmp WITH KEY objname = gt_smofdstat-objname.
      IF sy-subrc = 0.
        DELETE gt_smofobj_tmp INDEX sy-tabix.
      ENDIF.
    ENDLOOP.

*   All objects are done: restart the queue and stop polling
    IF gt_smofobj_tmp[] IS INITIAL.
      WRITE:/ 'Loop:', sy-index.
      PERFORM restart_queue.
      EXIT.
    ENDIF.
  ENDDO.








*&---------------------------------------------------------------------*
*&      Form  RESTART_QUEUE
*&---------------------------------------------------------------------*
*       Restart the inbound queue once the download is finished
*----------------------------------------------------------------------*
FORM restart_queue .

* Queue activation call (the function module name was lost in
* extraction; only its parameters and exceptions survived)
  CALL FUNCTION '...'
    EXPORTING
      qname                = p_qname
    IMPORTING
      exeluw               = exeluw
      astate               = astate
    EXCEPTIONS
      invalid_parameter    = 1
      system_failed        = 2
      communication_failed = 3.

  WRITE: /2 'QUEUE NAME    : ', p_qname.
  SKIP 1.

  CASE sy-subrc.
    WHEN 0.
      WRITE: /2 'No. of activated LUWs:', exeluw.
      WRITE: /2 'LUW state            :', astate.
    WHEN 1.
      WRITE: /2 'ERROR:  Invalid Queue Name'.
    WHEN 2.
      WRITE: /2 'ERROR:  System Failure'.
    WHEN 3.
      WRITE: /2 'ERROR:  Communication Failure'.
  ENDCASE.

ENDFORM.                    " RESTART_QUEUE



1) Alerting to BI Launch Pad.   Moving alerts from the report level to the User Interface

2) Charting tooltips in Webi

3) Join multiple data sources in new UNX design tool at a Data Services Layer

4) The "Data Preview" pane in the new UNX design tool that allows you to see your data as you test an object query

5) Text Analysis moved into Data Services

6) Schedule report packages of Crystal and Webi Reports

7) For SAP Crystal Reports, you can embed the content of one report in an email.

8) For Web Intelligence documents, you can embed the content of one report tab in an email.

9) Export reports to the Excel 2007 workbook

10) How new versions of enterprise software create real jobs

Continuing the Universe Designer deep-dive, I rediscovered a few good parameters worth being reminded of.

Check out the JOIN_BY_SQL article by Dave Rathbun: http://www.dagira.com/2009/07/14/what-does-join_by_sql-do/

A few new parameters in the 4.0 release docs are interesting:

AUTO_UPDATE_QUERY - controls what happens when an object in a query is not available to a user profile.

BACK_QUOTE_SUPPORTED - controls whether the SQL uses back quotes to enclose table or column names containing spaces or special characters.

SMART_AGGREGATE - allows overriding the handling of aggregate table selection logic. This one needs a separate article in itself...


Another item is that the PRM file parameters are now documented in an easier-to-follow section of the universe design document. There are still database-specific options in the separate, and probably rarely read, "Data Access Guide".

Since the PRM files are "still used for parameters that are database specific", they should be reviewed at least once in a lifetime against your universe design requirements.

Items like LIGHT_DETECT_CARDINALITY would have helped me check my cardinality on that one universe a few years ago, if only I had known that everything is negotiable.

There are also two new @VARIABLE options as part of the expanded multi-language support.



The full doc, in its 594-page glory, is on the SAP web site: http://help.sap.com/businessobject/product_guides/boexir4/en/xi4_universe_design_tool_en.pdf


Kevin McManus


Community User

Universe Designer 4.0

Posted by Community User Feb 5, 2011

I reviewed the Universe Designer 4.0 documentation for those of us not going to the new Information Design Tool, and was reminded of the importance of correctly configuring the universe parameters for your database. This topic focuses on the use of ANSI92-SQL and GROUP BY settings, and also applies to the 3.x platform.

While it's from the last century, I love ANSI-92 SQL. It took a while for me to get it back then, but now I hate it when a universe isn't set up to use it. I am still somewhat amazed that we are mostly using ANSI-89 SQL, which was defined when Madonna and Prince were still on top of the charts.

In reviewing the settings, I was reminded that to take advantage of ANSI 92 there are multiple settings you need to be knowledgeable about.

The first is, of course, ANSI92, which is set to NO by default and needs to be overridden to YES.

FILTER_IN_FROM allows the outer join filter to be placed in the FROM clause; it can be set at the join-properties level. However, it can also be set to YES in the universe parameters so that it defaults to "All objects in FROM".

The second is INNERJOIN_IN_WHERE, which has to be added if you want to use it. It forces the system to generate SQL with all the inner joins in the WHERE clause when ANSI92 is set to YES. This is sometimes a nice way to show that a query is not using any outer joins, as all the joins, being inner, will revert to the WHERE clause.

SELFJOINS_IN_WHERE can also override the ON clause and move self-joins back to the WHERE clause, even if ANSI92 is set to YES.

If you don't have aggregate functions (e.g. SUM, AVG) in your measure objects to force a GROUP BY on your queries, then you had better consider the DISTINCT_VALUES parameter. The key thing to remember is that it's only invoked when the option "Do not retrieve duplicate rows" is active in your report. If you want GROUP BY instead of DISTINCT, you have to override the default in the universe parameters.

The last thing I will mention is that there are other database-specific settings still managed in the PRM files. I think many new designers who never had to manage the PRM file in 6.x and earlier may not even know where that file is or what it does.

But that's for another post.

Kevin McManus









Based on SAP EAM Benchmarking, companies (across all industries) with average performance have around 73% OEE, whereas the Top 25% companies have close to 89% OEE.


Source: SAP EAM Benchmarking

Participants in SAP's EAM Benchmarking program have access to industry-specific values for this metric.



  • Operational equipment effectiveness is maximized, breakdowns minimized and maintenance costs reduced through predictive and preventive maintenance
  • Operating asset performance can be monitored in real time and can be analyzed to identify trends as well as risks, to increase asset reliability and safety
  • Sophisticated reliability based maintenance procedures and tools are utilized on a regular basis to increase asset availability


  • Weekly Maintenance Schedule Attainment Rate (in %)
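As a side note, OEE is conventionally computed as the product of the availability, performance and quality rates. A quick Python sketch with illustrative numbers (not from the benchmark data above):

```python
def oee_pct(availability, performance, quality):
    """Overall Equipment Effectiveness as conventionally defined:
    the product of availability, performance, and quality rates (each 0..1)."""
    return 100.0 * availability * performance * quality

# Example: 92% availability, 95% performance, 99% quality -> ~86.5% OEE,
# in the neighborhood of the top-quartile figure cited above.
print(round(oee_pct(0.92, 0.95, 0.99), 1))
```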

AET is the starting point

If you are thinking about enhancing the CRM Web UI (of CRM 7.0), you will have heard about the application enhancement toolset (AET). It offers a flexible and easy way to add custom fields to the Web UI screens. The tool itself even runs within the Web UI, so it's even a kind of WYSIWYG.

Extending CRM is nice, but you will surely come to the point where you need to exchange those fields with the ERP backend system.
This is what we will do for the sales order header (VBAK) as a sample. It works pretty much the same way for the item level (VBAP).
Please note that other objects are implemented totally differently in CRM's "R/3 Adapter", so the solution shown here will not work for each and every object.

Checking the box, that's it!

When reading through the documentation, you might get the impression that all you need to do to exchange a field with the ERP backend is check the checkbox in AET.
Well, this is a good starting point (you do have to check the checkbox, yes!) but, as you will notice, your data will not appear in ERP.

So there is something more to do. From here on, step by step...


0. Understanding the Salesorder exchange object

As the sales order in CRM is a "one order" document, it is transferred through the "BUS_TRANS_MESSAGE" BDoc. As this BDoc is not only used to transfer sales documents, SAP uses the BAdI "CRM_BUS20001_R3A" to offer flexible upload functionality for the different document types. The implementation "CRM_SALESDOCU_R3A" is the one in charge of uploading sales documents, so whenever you have trouble with the upload, this BAdI is a good starting point for debugging.

For the mapping between the BDoc and BAPIMTCS structures there is another BAdI you should know: "CRM_DATAEXCHG_BADI".
The implementation "CRM_BTX_FIELDEXT" does the mapping for the fields you added in AET AND (!) in ERP; this is what I will show. (If this implementation is not active, activate it, otherwise nothing will work!)
Besides that, there is a sample implementation "CRM_BTX_EEW_DATAEX" coming from the "easy enhancement workbench" (which was never easy :-)). You can use it for your own mapping implementations, or if you have restrictions that prevent you from using the generic mapper. Please implement note 1458476 before copying the sample code, as it contains errors :-)



You should get familiar with these two function modules.
Please read through the documentation of the BAPI, especially the part where the enhancement of VBAK, VBAP and VBEP is described, because this is what the CRM adapter uses to hand over the fields to ERP!

"CRS_SEND_TO_SERVER" is the generic outbound module used when data is sent from ERP to CRM, so whenever you are missing data on the CRM side, check whether it has been packed into the BAPIMTCS container by setting a breakpoint here.


1. Extend the ERP Structures

If you have read through the documentation of "BAPI_SALESORDER_CHANGE", this should be pretty clear. You will have to do the following:

- Add the fields created with AET to a customer structure (Z-structure); keep the generated names and the order of the fields!!

- Append your structure to the table vbak

- Append your structure to the structure VBAKKOZ

- Append your structure to the structure BAPE_VBAK

- Create a second structure with the same name as your first structure followed by an "x". This is the checkbox structure. Add the fields once again, keep the order of the fields, and use a character-1 data type for every field.

- Append the X-Structure to the structures VBAKKOZX and BAPE_VBAKX


2. Extend the CRM-Structures

Usually you don't have to do anything here, but you should at least check the structures "CRMT_BTX_EEW_BAPE_VBAK" and "CRMT_BTX_EEW_BAPE_VBAKX".
Your fields should be there as well; if not, did you really check the R/3 checkbox when creating the field in AET??

Either way, you could still append other custom fields here that you added manually rather than with AET. They will be exchanged as well, but of course they have to exist within the BDoc!


3. Implement note 578653

YES! I'm not joking. You may say that this note is not valid any more, but in fact it is. Read through it, check the tables and the download object, and you will see!


4. Et voilà

That's it; now you should see your data being exchanged between CRM and ERP.


5. Troubleshooting

Ok, if it still doesn't work, some hints for troubleshooting.

First, identify the direction causing the trouble. If it is from ERP to CRM:
Put a breakpoint in "CRS_SEND_TO_SERVER". Change a custom field using the transaction (VA02) and check whether there is a "BAPIPAREX" line in the BAPIMTCS container table. If not, carefully check the structures described in step 1. If BAPIPAREX is there, check whether it still exists after the call of the filter module; if it disappears, go back to step 3.
That usually covers it.

If there is trouble when you change the field in CRM, i.e. it is not downloaded to ERP, please check the BDoc in SMW01.
If it is filled correctly there, place a breakpoint in method CRM_DATAEXCH_AFTER_BAPI_FILL of class CL_IM_CRM_BTX_FIELDEXT. The coding is difficult to read, as it is very generic, but you should still see your fields getting mapped into the ct_bapiparex table. If not, check step 2.

Community User


Posted by Community User Jan 7, 2011

Late last year we started a Roadmap SIG as part of the ASUG BusinessObjects Strategic SIGs; it works directly with the SAP BusinessObjects solution and project managers to make sure that the most up-to-date and accurate information is presented on SDN.

We have started a work group and are creating presentations and webinars over the next few months. You will start to see more communications about this, and posts on SCN and other forums, where you can raise your questions so that roadmaps are at the right level for your needs.



Kevin McManus

