
SAP Master Data Governance


The purpose of this blog is to give you a quick overview of the consulting solution that is now available to govern retail article data using MDG.

 

MDG as a solution has been validated to work with the SAP IS Retail system for more than a year now, but so far the content (data model, UI, and logic) available on the MDG framework covered only customers, suppliers, business partners, financial objects, and materials. Customers could of course build their own article data model and UI on top of the MDG framework, but were left with many open questions around the architecture to follow, how the UI should be modelled, and so on, and had to deal with the complexity of bringing retail business logic into MDG. On top of this, the effort and time involved in building a model as complex as the one in SAP IS Retail was too much, and not really suitable for customers who wanted to get it running quickly.

 

With the new consulting solution now on offer, retail articles can also be governed using MDG’s powerful data governance and enrichment platform.

 

The solution, now in its second version, not only provides the article data model and UI for staging and governing retail articles, but also benefits from tight integration with the IS Retail system to reuse the business logic and customizing settings already maintained there. This way, data quality is enforced exactly when data is captured in the governance process, ensuring that data governed in MDG is fully compliant with your retail business rules when it is replicated to the IS Retail system.


What is the architecture?

 

For those of you who are familiar with the MDG architecture, MDG for Articles uses a ‘reuse’ architecture. This means the solution fully leverages the IS Retail customizing logic, including concepts such as reference handling, which is an integral part of the IS Retail system, and is automatically able to replicate into your retail tables after governance.


In short, the architecture is kept aligned with that of the other MDG domains, such as MDG for Materials, Suppliers, etc.

 

The solution uses the underlying MDG framework to provide governance functionality such as change documents, workflow, customizable UI, and tight integration into the business rules framework for additional validations and defaulting. And of course the architecture is extensible, so you can extend the pre-delivered content with your landscape-specific extensions to the data model and the UI.


Feature Highlights:

   

  • Data governance for structured and unstructured retail article types, including Single, Generic, Set, Prepack, and Display
  • Supports the reference handling logic of the IS Retail system
  • Support for full products, empties, and BoMs (for structured articles)
  • Characteristics handling and variant creation
  • Data views for Basic, Purchasing, Listing, and Logistics (more views to come)

 

Figure 1: The Article Change Request screen, showing some of the fields and the article image in the side panel.

 

 

The overall idea of this consulting solution is to make it easier for customers to use MDG for their retail master data and to decrease the time to value. The solution covers the complexities of designing the architecture and core retail functionality on top of the MDG framework, so that you can focus on your business-specific requirements without worrying about bringing in the core retail logic.


Hope this information was useful. If you have questions, feel free to post them on the blog or write to me with your queries.

Data governance, which essentially means establishing control over data from source to obsolescence, has become a necessity for many organizations. I would be surprised if any CIO does not mention establishing or enhancing data governance as one of his primary IT investment objectives in the short or long term. Many organizations carry the baggage of a multitude of IT systems, each of which serves a specific business need and was designed to source the data it needs all by itself. Organizations constantly weigh the conflicting needs of rationalizing IT systems versus building more robustness into the interfaces, so that they can hold on to best-of-breed IT systems even though, collectively, these create many weak links. If an organization is able to rationalize its IT systems, then data governance is taken care of by the rationalization itself; but where best-of-breed IT systems must be maintained, data governance becomes paramount, because weak links can break and exponentially increase the pain of maintaining such heterogeneity.

 

So the point of data governance is to rein in the sources of proliferation of data elements and data values and bring in much-needed control; basically, to strengthen the weak links in the heterogeneous IT landscape mentioned earlier. As anybody can guess, this is easier said than done. Additionally, data governance needs harmonization as a prerequisite, which can also turn out to be tough to crack. Achieving one source of truth and one value across all IT systems is a tall order, but is it not the fundamental requirement to be achieved through data governance? It is here that having flexibility in data governance bears significance. I would say "Flexible Data Governance" is indeed an oxymoron, but it is a practical need too. Let me explain with an example.

 

In a recent data governance implementation project, we came across the field "Division" with the available values 10, 20, 30, and 40 in one SAP system, while other SAP systems had additional values such as 60 and 15, and even alphanumeric values of two-character length. Keep in mind that all the systems involved were SAP, so it should have been a piece of cake to harmonize this, and that is how it started. We standardized the values as 10, 20, 30, and 40 in the data governance system and mapped the additional values from all systems to these four values. But then we found hard coding in interface programs, middleware programs, and enhancement programs, and even values in this field being used in user exits to trigger specific activities, which ruled out harmonization of one value after another. So what to do? Continuing with harmonization would have meant costly program changes, elaborate testing efforts, the risk of introducing production issues, and so on. Here comes the concept of "flexible data governance": we introduced scalable governance wherein, within a master data object, values are harmonized and controlled for certain fields, while other fields are allowed to have different values in different systems. So the data object is part of data governance, but not all fields in it are controlled by the data governance tool.
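The field-level split we ended up with can be sketched in a few lines of Python. This is an illustrative model only, not MDG code; the field names, the value mapping, and the small API are all hypothetical. Governed fields are mapped to the harmonized value set, while deliberately flexible fields pass through untouched.

```python
# Hypothetical sketch of "flexible data governance": some fields are
# harmonized centrally, others keep system-specific values.

# Harmonized value set for governed fields (values from the example above).
HARMONIZED_VALUES = {"division": {"10", "20", "30", "40"}}

# Central mapping of legacy values to the harmonized standard (made up).
DIVISION_MAP = {"60": "30", "15": "10", "A1": "40"}

# Fields deliberately left outside governance control (made-up name).
FLEXIBLE_FIELDS = {"local_sales_office"}

def govern_record(record: dict) -> dict:
    """Return the governed view of a master-data record."""
    governed = {}
    for field, value in record.items():
        if field in FLEXIBLE_FIELDS:
            governed[field] = value  # pass through untouched
        elif field in HARMONIZED_VALUES:
            mapped = DIVISION_MAP.get(value, value)
            if mapped not in HARMONIZED_VALUES[field]:
                raise ValueError(f"{value!r} not harmonizable for {field}")
            governed[field] = mapped
        else:
            governed[field] = value
    return governed
```

The point of the sketch: the object as a whole is under governance, but only the fields listed as harmonized are forced onto one value set.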

 

I am sure each of us has seen such conflicting requirements. In a data governance project, where the fundamental need is conformity, flexibility is a bad word; but then, life thrives in the gray. Please share such examples from your project experience.


The purpose of this post is to give detailed steps on how to configure the import scenario mentioned in the SAP documentation, and to answer some frequently asked questions.

Data Transfer:

Setting up data transfer, i.e., export and import configuration.

 

  1. The first step of a data transfer is the export of the master data to an
    IDoc-XML file, which you can save locally or on your application server.
  2. In the second step, you import the data into your central MDG system.

Data Export from the Source System: configure the logical system for IDoc-XML export to the application server file system.

  1. Create an XML-file port: Use transaction WE21 to create an XML-file
    port for IDoc processing. Ensure that you have network access from your local
    client to the directory configured in the XML-file port.

Untitled.png

Enter the function module EDI_PATH_CREATE_CLIENT_DOCNUM. On the Outbound Trigger tab, enter the RFC destination LOCAL_EXEC.

Untitled1.png

2. Create Logical System

Open transaction SALE and go to Basic Settings -> Logical Systems to create a new logical system.

Untitled3.png

3. Maintain Distribution Model

Open transaction SALE and go to Modeling and Implementing Business Processes -> Maintain Distribution Model and Distribute Views. You can also use transaction BD64 for this.

a. Switch to change mode and choose Create Model View to create a new entry. Enter a short text and a technical identifier.
b. Choose Add Message Type for the newly created model. Enter a logical source system name and a destination system name, and choose the message types MATMAS and CLFMAS.

Untitled4.png

4. Create Partner Profile

Run transaction SALE and go to Partner Profiles -> Generate Partner Profiles. Alternatively, you can use transaction BD82.

a. Select the newly created model using the input help for the technical name, and then select the logical destination system.

b. Enter the authorized user and the following values:

 Version: 3

 Pack.Size: 100

 Output Mode: Immediate Transfer

 Inbound Processing: Immediately

c. Choose Execute. You can ignore the port error that appears.

 

Untitled5.png

5. Call transaction WE20 and make the following settings:

a. Open the Partner Type LS folder and select the partner profile you created above.

b. Update the message types MATMAS and CLFMAS in the Outbound Parameters section. The Receiver Port is the XML-file port from the first step above. In the Basic Type field, enter MATMAS05 for MATMAS and CLFMAS02 for CLFMAS.

Untitled6.png

6. The Receiver Port is the XML-file port from the first step above. In the Basic Type field, enter MATMAS05 for MATMAS and CLFMAS02 for CLFMAS.

Untitled7.png

7. Test the creation of IDoc-XML

a. Generate the IDoc-XML for a material using transaction BD10.

Untitled8.png

Untitled9.png

 

 

8. Check the newly generated IDocs using transaction WE02 or BD87. You can use the receiver port as the filter criterion in the Partner Port field.

Untitled10.png

 

9. Use transaction AL11 to find the XML files in the directory of your XML-file port.

10. To download the file to a local directory for analysis purposes, use transaction CG3Y.

Untitled11.png

Data Import into Target System (MDG Hub)

1. To be able to import IDoc-XML files, the following setup activities need to be carried out:

a. Use transaction IDX1 to create two ports in the IDoc adapter: one for sending and one for receiving. Enter the port, client, description, and RFC destination for each port. Both ports should have the RFC destination of the MDG hub. Check that the port names match the SNDPOR and RCVPOR names in your IDoc-XML file.

Note: The sender and receiver port names must match the incoming XML file; otherwise the file will not be picked up (shown in the XML file below).
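Before troubleshooting in the system, it can be handy to pre-check the file itself. The following Python sketch is not an SAP tool, and the port names in the sample are made up; it parses the EDI_DC40 control record of an IDoc-XML file and verifies that SNDPOR and RCVPOR match the configured port names:

```python
# Sketch: pre-check an IDoc-XML file before import. The SNDPOR/RCVPOR
# values in the EDI_DC40 control record must match the IDX1 port names,
# otherwise the file is not picked up.
import xml.etree.ElementTree as ET

def check_ports(idoc_xml: str, sender_port: str, receiver_port: str) -> bool:
    """Return True if the control record matches the configured ports."""
    root = ET.fromstring(idoc_xml)
    control = root.find(".//EDI_DC40")  # IDoc control record segment
    sndpor = control.findtext("SNDPOR", "")
    rcvpor = control.findtext("RCVPOR", "")
    return sndpor == sender_port and rcvpor == receiver_port

# Minimal sample file with hypothetical port names.
sample = """<MATMAS05><IDOC BEGIN="1">
  <EDI_DC40 SEGMENT="1"><SNDPOR>SAPQA1</SNDPOR><RCVPOR>SAPMDG</RCVPOR></EDI_DC40>
</IDOC></MATMAS05>"""
```

A mismatch reported by such a check points directly at the IDX1 configuration rather than at the file transfer.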

Untitled12.png

Untitled13.png

 

 

2. In transaction WE21, enter the receiver XML port using the same name as in step 1 above. Enter the port name under the folder XML File, and enter a description and a physical directory. In the function module field, enter EDI_PATH_CREATE_CLIENT_DOCNUM. On the Outbound: Trigger tab, in the RFC destination field, enter LOCAL_EXEC.

Untitled14.png

3. In transaction FILE, create the logical file path name. Enter a Logical File and a Name. In the Physical File field, enter <PARAM_1>. In the Data Format field, enter BIN. In the Application Area field, enter CA. In the Logical Path field, enter the logical file path.

Note: If the value in the Physical File field is not recognized, the file will not be picked up, so check it carefully.
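To see why a wrong Physical File value breaks the pickup, it helps to recall how the logical path is resolved: the <PARAM_1> placeholder in the Physical File field is substituted with the actual file name at runtime. A minimal Python sketch of this substitution (the directory and file names are made up, and this simplifies what transaction FILE does):

```python
# Sketch of logical-to-physical file name resolution: the physical file
# template contains a placeholder such as <PARAM_1> that is replaced at
# runtime with the actual file name.

def resolve_physical_file(logical_path: str, physical_file: str,
                          param_1: str) -> str:
    """Join the logical path's directory with the resolved file name."""
    file_name = physical_file.replace("<PARAM_1>", param_1)
    return logical_path.rstrip("/") + "/" + file_name
```

If the template does not resolve to the file actually lying in the directory, the import job finds nothing to pick up.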

Untitled15.png

4. Open the Configuration activity General Settings -> Data Transfer -> Define File Source and Archive Directories for Data Transfer, and assign your new logical file path name as a directory for data transfer.

Untitled16.png

Untitled17.png

5. In transaction AL11, make sure that the IDoc-XML files are stored under the logical path and that no other files are stored in that directory. Double-click the path to view the existing IDoc-XML file. You can use transaction CG3Z to copy a local IDoc-XML file to the path.

Untitled11.png

6. Open the MDG work center and provide the details, including a change request (CR) if you want to govern the data, as shown below.

Untitled18.png

Select Show Directory Content to verify that the file has arrived in the path.

Untitled19.png

Check the logs to see whether the files were picked up.

Untitled19.png

Go back to the MDG work center; if the file was picked up, you can see the CR.

Untitled20.png

Click on the CR and activate it.

Untitled21.png

Check the status of the CR; it should be 'Final Check Approved'. Then go to MM03 to check the material.

Untitled22.png

 

Hope this will help.

Hi


Introduction

SAP delivers with SAP MDG a so-called “Vendor Like UI”, which hides the complexity of the Business Partner model from end users. Customers who do not use or know the SAP Business Partner concept are especially used to the simple creation of a vendor via transaction XK01. The Vendor Like UI looks quite similar and simplifies the creation of an ERP vendor via SAP MDG. For more information, please visit http://help.sap.com/mdg70

 

Link to the click through demo

http://demo.tdc.sap.com/SpeedDemo/841101508357468b



Storyboard

VendorLikeUIStory.jpg


Key Points:

  • Create Vendor using the "Vendor Like UI"


Screenshots

In these screenshots you can see that the complexity of BP is hidden from the end user.

VendorLikeUIScreenshot1.jpg

VendorLikeUIScreenshot2.jpg


Best regards

Steffen

Hi,

 

Introduction

SAP delivers with SAP MDG a so-called “Customer Like UI”, which hides the complexity of the Business Partner model from end users. Customers who do not use or know the SAP Business Partner concept are especially used to the simple creation of a customer via transaction XD01. The Customer Like UI looks quite similar and simplifies the creation of an ERP customer via SAP MDG.

For more information please visit http://help.sap.com/mdg70

 

Link to the click through demo

http://demo.tdc.sap.com/SpeedDemo/93cb9d046af1e202


Storyboard

CustomerLikeUIStory.jpg

 

Key Points:

  • Create Customer using "Customer Like UI"
  • One Step Contact Person creation

 

Screenshots

In these screenshots you can see that the complexity of BP is hidden from the end user.

CustomerLikeUIScreenshot1.jpg

 

CustomerLikeUIScreenshot2.jpg

 

Best regards

Steffen

Predefined master data models delivered by standard

  • Material — MM
  • Customer — BP
  • Supplier — BP
  • Financials — 0G
    • Profit Center
    • Cost Center
    • GL Account
    • Cost Element

Custom master data model

For any client business use case that demands governance of master data objects not delivered by the standard, SAP provides a framework to build such governance.

Sample business cases that give rise to the creation of custom data model:

  • To govern the creation of reference data, e.g., bank codes, plants, and storage locations
  • To govern the creation of master data, e.g., services, employee data, and WBS elements

The client requirement is to govern the Service Master; the following sections outline the development process.

 

Note: The development process below was built and tested in MDG 6.1.

 

Technical solution

Key design considerations for building a custom data model

  • Data modeling
  • Mode
  • Replication strategy

 

Data modelling

As with any application design, the first step in building a custom model is to decide on the underlying tables/fields and the relationships between them.

In MDG, the entities, attributes, and relationships of a data model are defined, which the system in turn uses to generate the database tables in which the master data is stored.

 

Mode

Active Area: Holds data that is ready to be used by applications

Staging: Holds data that is not yet approved, i.e., data in the current change request

  • Reuse Mode (Reuse Active Area)
    • Existing data structures (i.e., database tables) of applications are reused. For example, MDG-M uses the MARA tables in ECC to store the data of an activated CR.
  • Flex Mode (Generated active area)
    • Tables as defined in the MDG (Data Model) are used to store both the active and inactive data. For example, MDG-F makes use of Flex Mode.
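The difference between the two modes can be sketched roughly as follows. This is a hypothetical illustration in Python, not the MDG API: in flex mode both active and staging data live in MDG-generated tables, while in reuse mode reads of active data are delegated to the application's own tables through an access class.

```python
# Hypothetical sketch (not the MDG API) of reuse vs. flex mode.

class FlexModel:
    """Flex mode: MDG-generated tables hold both staging and active data."""
    def __init__(self):
        self.staging = {}  # inactive data, per change request
        self.active = {}   # generated active-area tables

    def read(self, key, active=True):
        return self.active.get(key) if active else self.staging.get(key)

class ReuseModel:
    """Reuse mode: active reads go to the application's own tables."""
    def __init__(self, access_class):
        self.staging = {}
        self.access_class = access_class  # wraps existing read logic/BAPIs

    def read(self, key, active=True):
        if active:
            # e.g. for MDG-M this would resolve to MARA in ECC
            return self.access_class.read_active(key)
        return self.staging.get(key)
```

With this split, activating a change request in reuse mode means handing the staged record to the access class for posting (e.g., via a BAPI), rather than copying it into an MDG-owned active table.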

Replication strategy

Deployment options:

a.   Hub-based

b.   Co-deployment

 

If Hub-based:

  1. All the data models, irrespective of Reuse or Flex, need to be replicated to operational ECC.

If Co-deployment:

  1. Only data models that are being operated in Flex mode need to be replicated, so that data gets saved in respective operational database ECC Tables for transactional use.

 

Client requirement

As-is process

  • Microsoft Excel templates and email communication to create and maintain Service data (Transactions: AC01/AC02 in ECC)
  • Master Data Control Group to manually create/update the data in the ECC system

To-be process

  • Requests with custom validations/requirements to create/update service master data in reuse mode
  • Requests to be routed through custom workflow
  • Automatic data replication to ECC upon approval

 

Solution implementation

Prerequisites

  • Relevant Master Data configuration should be in place. For Service Master, all the relevant configuration in SPRO-> Material Management -> Purchasing -> External Services Management -> Service Master should be in place
  • If replication is needed, then all relevant IDOC configurations should be in place, like Ports, Partner profiles, Distribution Models, etc.

 

Step 1: Data Model Creation

  • Start SAP GUI and login to transaction ‘MDGIMG’.
  • Navigate to the path MDGIMG->General Settings->Data Modeling->Edit Data Model.
  • Create a new data model by clicking the ‘New Entries’ option.
  • Enter the data Model name as ‘ZS’ and description as ‘SERVICE MASTER’.
  • Navigate to the data model and enter the active area as ‘ZSERVICE’.*

1.png

  • Click on entity types to add new entity types:

| Entity Type | Storage Type | Data Element | Key Assignment | Attachment | Search Help | Active Area | Deletion Allowed | Temporary Keys |
| ----------- | ------------ | ------------ | -------------- | ---------- | ----------- | ----------- | ---------------- | -------------- |
| LANGUCODE   | 3            | SPRAST       | 1              |            |             |             | Deletion allowed |                |
| SERVICE     | 1            | ASNUM        | 4              | Yes        | ASMDT       | ZSERVICE*   | Deletion allowed | LEISTUNG       |
| SER_LTXT    | 4            |              | 1              |            |             |             | Deletion allowed |                |

  • Storage Type:
    • 1 — Changeable via Change Request; Generated Database Tables
    • 3 — Not Changeable via MDG; No Generated Tables
    • 4 — Changeable via Other Entity Type; Generated Database Tables
  • Key Assignment:
    • 1 — Key Cannot Be Changed; No Internal Key Assignment
    • 4 — Key Can Be Changed; Internal Key Assignment Is Possible

 

  • Attributes:

| Entity Type | Attribute | Data Element | Required Entry | Search Help | Description |
| ----------- | --------- | ------------ | -------------- | ----------- | ----------- |
| SERVICE     | ASKTX     | ASKTX        | Yes            |             | Short Text Description |
| SERVICE     | ASTYP     | ASTYP        |                |             | Service Category |
| SERVICE     | BEGRU     | BEGRU        |                |             | Authorization Group |
| SERVICE     | BKLAS     | BKLAS        |                |             | Valuation Class |
| SERVICE     | GLACC     | CHAR10       |                | SAKO        | GL Account |
| SERVICE     | LVORM     | LVORM        |                |             | Deletion Indicator - Service Master |
| SERVICE     | MATKL     | MATKL        |                | H_T023      | Commodity Service Code |
| SERVICE     | MEINS     | MEINS        |                | H_T006      | Unit of Measure |
| SERVICE     | SPART     | SPART        |                |             | Division |
| LANGUCODE   | N/A       | N/A          |                | N/A         | N/A |
| SER_LTXT    | NOTE_SLTX | SYSTRING     |                |             | Service Long Text |

  • Relationships:

| From-Entity Type | Relationship | To-Entity Type | Relationship Type | Cardinality | Description |
| ---------------- | ------------ | -------------- | ----------------- | ----------- | ----------- |
| LANGUCODE        | LANG_LTXT    | SER_LTXT       | Qualifying        | 1:N         | Qualifying Language Code for Service Long Text |
| SERVICE          | SER_LTXT     | SER_LTXT       | Leading           | 1:N         | Leading Service for Long Text |

  • Access Class — ZSMCL9_MDG_ACCESS_CLASS_SER: create an access class implementing the interface IF_USMD_PP_ACCESS.
  • Save and activate the data model.
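To make the model above easier to scan, here is the same definition captured as plain Python data structures. This is just a summary of the tables in this post, not an MDG artifact:

```python
# The ZS data model for the Service Master, transcribed from the
# entity/attribute/relationship tables above (for readability only).

ZS_MODEL = {
    "entities": {
        "SERVICE":   {"storage_type": 1, "data_element": "ASNUM"},
        "LANGUCODE": {"storage_type": 3, "data_element": "SPRAST"},
        "SER_LTXT":  {"storage_type": 4},
    },
    "attributes": {
        "SERVICE": ["ASKTX", "ASTYP", "BEGRU", "BKLAS", "GLACC",
                    "LVORM", "MATKL", "MEINS", "SPART"],
        "SER_LTXT": ["NOTE_SLTX"],
    },
    # (from_entity, relationship, to_entity, type, cardinality)
    "relationships": [
        ("LANGUCODE", "LANG_LTXT", "SER_LTXT", "qualifying", "1:N"),
        ("SERVICE", "SER_LTXT", "SER_LTXT", "leading", "1:N"),
    ],
}
```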

 

Step 2: UI Creation

  • Navigate to the path MDGIMG->General settings->UI Modeling->Edit UI Configuration.
  • Create a new UI ZSM_USMD_ENTITY_VALUE2 by doing a deep copy of USMD_ENTITY_VALUE2 and arrange the attributes on UI as per the requirements.

 

Step 3: Business Objects

  • Start transaction MDGIMG.
  • Navigate to path MDGIMG->General Settings->Data Replication->Enhance default settings for outbound implementations->Define business objects and object identifiers->Define Business Objects.
  • Create a new Business Object ‘ZS_BO’.
  • Assign the Business Object Type to the Data Model created in Step 1.

 

Step 4: Business Activities

Create business activities as required by the functionality.

Start transaction MDGIMG and navigate to the path MDGIMG->General Settings->Process Modeling->Business Activities->Create Business Activity.

| Business Activity | Description | Data Model | Business Object | Logical Action |
| ----------------- | ----------- | ---------- | --------------- | -------------- |
| ZSCR              | ZS: Create Service Master | ZS | ZS_BO | CREATE |
| ZSUP              | ZS: Update Service Master | ZS | ZS_BO | CHANGE |

 

 

Step 5: Create Change Request

Any workflow that suits the requirement can be assigned to the change request.

  • Navigate to the path MDGIMG->General Settings->Process Modeling->Change Requests->Create Change Request Type.

| No. | Request | Request Type | Create/Update | Workflow | UI Entity | Business Activities |
| --- | ------- | ------------ | ------------- | -------- | --------- | ------------------- |
| 1   | Create Service Master | ZSMCRE01 | Create | WSXXXX | ZSM_USMD_ENTITY_VALUE2 | ZSCR |
| 2   | Update Service Master | ZSMUPD01 | Update | WSXXXX | ZSM_USMD_ENTITY_VALUE2 | ZSUP |

 

 

Step 6: Access Class*

The access class ‘ZSMCL9_MDG_ACCESS_CLASS_SER’ is created implementing the interface IF_USMD_PP_ACCESS.

This interface must be implemented for integrating a reuse active area in the custom data model. This interface can be used for the following activities:

  • Reading of (active) data from the reuse active area
  • Writing of (active) data from a change request to the reuse active area
  • Consistency checks that are specific for the reuse active area
  • Data derivations that are specific for the reuse active area
  • Authorization checks for data in the reuse active area
  • Locking of data in the reuse active area
  • Reading of change documents from the reuse active area

Implementations in the reuse active area (for example, BAPIs) typically already exist for these activities, which can be reused in the custom implementations.

For Service Master Data, BAPI ‘BAPI_SERVICE_CREATE’ and ‘BAPI_SERVICE_UPDATE’ can be used to create/update the data.

 

Step 7: Data Replication Framework

  • Define Object Identifiers
    • Path: Transaction DRFIMG -> Enhance Default Settings for Outbound Implementations -> Define Business Objects and Object Identifiers -> Define Object Identifiers

 

  • Define Business Objects
    • Path: Transaction DRFIMG -> Enhance Default Settings for Outbound Implementations -> Define Business Objects and Object Identifiers -> Define Business Objects

 

  • Define Object Nodes
    • Path: Transaction MDGIMG-> General Settings -> Key Mapping -> Enhance Key Mapping Content -> Define Object Nodes

 

  • Assign key structures to Identifiers
    • Path: Transaction DRFIMG-> Enhance Default Settings for Outbound Implementations -> Define Business Objects and Object Identifiers -> Assign Key Structures to Object Identifiers

 

  • Create Outbound Implementation
    • Path: Transaction DRFIMG-> Enhance Default Settings for Outbound Implementations -> Define Outbound Implementations

 

  • Enter the outbound Interface Model name, the outbound implementation class, and the communication channel (options: IDoc, file, RFC, or web services).
  • Generic Outbound Implementation class CL_MDG_OIF_DRF_OUTBOUND_IMPL can be used.
  • Create Filter Object
    • Path: Transaction DRFIMG-> Enhance Default Settings for Outbound Implementations -> Define Filter Objects
    • Assign this filter to Outbound Implementation in the above step.

 

  • Define Replication Model
    • Path: Transaction DRFIMG-> Define Custom Settings for Data Replication -> Define Replication Models.
    • Create a Replication Model using Outbound Implementation from the above step and activate the replication model.

 

 

  • Define Interface Model
    • Path: Transaction DRFIMG-> Enhance Default Settings for Outbound Implementations -> Define Outbound Interface Models
    • This interface model will create a function module in the background that has all the necessary staging data as importing parameters.
    • This FM should host the code for posting the data to the ECC system. It can be an RFC call or IDoc posting, whichever suits the client's needs.

 

* Note: This step is optional. If the requirement is to directly replicate the data to the operational ECC box on activation of the change request (i.e., Flex mode), this step need not be performed.

 

Challenges faced

a.   The framework does not allow assigning a custom feeder class for the UI; hence, any UI requirements (not achievable via BAdIs) should be implemented using post-exits of the feeder class CL_USMD_ENTITY_VALUE2_PROXY.

b.   The data replication framework only helps in tracking the status of the CR with respect to replication to ECC; for a custom data model, all the replication logic needs to be implemented in the interface model, apart from specifying the communication channel.

 

Skill set needed

| Resource   | Skill set/primary activities |
| ---------- | ---------------------------- |
| Basis      | Installing MDG, downloading the latest software package, and implementing SAP Notes. |
| Functional | Configuration regarding the Service Master (SPRO -> External Services Management -> Service Master). |
| Technical  | Custom development using BAPIs and transferring the IDoc. The resource needs to be aware of FPM (Floor Plan Manager) and workflows. |

Hi,

As promised: a click-through demo for the new MDG 7.0 function "Governance of Business Partner", i.e., creating a new Business Partner with MDG-BP. As Markus Kuppe wrote in his blog, with MDG 7.0 "there are out-of-the-box governance processes for the generic SAP Business Partner. That means in addition to Supplier and Customer that were provided earlier, companies can now also govern the central data of different types of SAP Business Partners".

 

 

Goal of this demo

At the end of this demo you will be able to demo the new Business Partner Governance feature to customers.

Key Points:

- Create a simple Business Partner via MDG only (without customer or supplier)

- Generate IBAN out of bank details

- One Step Contact Person creation

 

Process Flow

The following graphic shows the process flow.

 

MDG-BP Demo Process.jpg

 

1. Requestor: Claire Thomas creates a new MDG change request and submits the request.

2. Approver: Mike reviews and approves the request. The system activates the new Business Partner.

3. Business/ERP User: In this demo, Mike also checks the existence of the new Business Partner.

 

Link to the Demo:

http://demo.tdc.sap.com/SpeedDemo/c26b1d8d40066b8f

 

 

Some statements to the scenario from technical perspective

1. End users will see the following entry-point navigation item if you assign the user to the corresponding new PFCG role:

04-03-2014 11-45-30.jpg

2. With MDG 7.0, IBAN conversion is possible in both directions:

04-03-2014 11-47-45.jpg
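The "generate IBAN out of bank details" feature mentioned above rests on the ISO 13616 mod-97 rule, which is what makes conversion in both directions possible: the check digits are computed from the country code and BBAN, and the same rule validates a given IBAN. Here is a generic Python sketch of the rule; this is the standard algorithm, not MDG's implementation, and the sample value is the widely used German example IBAN:

```python
# ISO 13616 mod-97 rule behind IBAN generation and validation.

def to_digits(s: str) -> str:
    # Letters map to numbers: A -> 10, ..., Z -> 35; digits stay as-is.
    return "".join(str(int(c, 36)) for c in s)

def make_iban(country: str, bban: str) -> str:
    # Check digits: 98 minus (BBAN + country code + "00") mod 97.
    check = 98 - int(to_digits(bban + country + "00")) % 97
    return f"{country}{check:02d}{bban}"

def is_valid_iban(iban: str) -> bool:
    # Move the first four characters to the end; a valid IBAN gives mod 97 == 1.
    return int(to_digits(iban[4:] + iban[:4])) % 97 == 1
```

For example, `make_iban("DE", "370400440532013000")` yields the well-known example IBAN DE89370400440532013000, and `is_valid_iban` confirms it.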

3. With MDG 7.0 it is possible to add a second (or even more) Business Partner record into the same Change Request via the "One Step Contact Person Creation" function. It's standard!

   a. How to add a new contact person:

04-03-2014 11-49-17.jpg

 

   b. How it looks in "My Change Requests":

 

04-03-2014 11-49-42.jpg

 

 

Best Regards

Steffen

SAP Master Data Governance ensures that master data is accurate and consistent already at the point of creation. As such, it facilitates the creation and maintenance of a single point of truth, which also supports SAP® Business Suite powered by SAP HANA® software in leading the way to new real-time business practices. In addition, with the latest release (7.0, currently in ramp-up shipment), SAP Master Data Governance can use SAP HANA search capabilities (multi-attribute drill-down search as well as fuzzy search) that help fuel the master data governance process itself.

 

You can find out more about these aspects in a new SAP Master Data Governance factbook that is part of the Suite on HANA Fact Book series at www.factbook.saphana.com under the Technology section.

 

Best regards,

 

Markus Ganser

Dear followers of the SCN community,

I am new in the community and this is my first blog, so I want to present you my first customer solution in MDG!

The topic of my post is how to change the user interface of the worklist. There are just a few steps to follow, and I promise that you won't regret a closer look at the pictures.

Let's get started!


The first screenshot below shows the standard worklist view. The "Subject" column especially is full of information and looks overloaded. Therefore, adapting this view is increasingly becoming part of customers' requirements. The second picture below is one example of an adapted view, which is clear and limited to the essential information.


 

Standard:

 

 

Adapted:

 


But how can you adapt this?
The following steps should help you handle it:


Agenda

  1. Basics
  2. POWER List Design
  3. POWER List Implementation
  4. POWER List Registration
  5. POWER List UI configuration
  6. Role assignment
  7. Test it



1.    Basics

1.1    Basic Concept

The POWER List is a framework that can list business objects and allows specific actions based on these objects. The central concept is that all properties can be specified within one central class: the feeder class. The feeder class mediates between the database and the POWER List, so it is the most important place when developing and modifying a POWER List.


1.2    Role Dependency

The roles are the access point to all POWER Lists in the system. A so-called APPLID determines which POWER List will be called; therefore, the assignment of APPLIDs to a role determines which POWER Lists are available for that role.



2.    POWER List Design

2.1    Creating a structure

Create a new structure within the transaction [SE11] and include the standard structure of the POWL. You can also add additional components.


structure.gif


2.2    Creating a table type

Copy the standard table type of the POWL within the transaction [SE11] and replace the line type with your recently created structure.


tabletype.gif



3.    POWER List Implementation

3.1    Creating a feeder class

Copy the standard feeder class within the transaction [SE24]. To adapt the design of the POWL, you have to change the references to the standard table type and structure. So just fix the two method implementations you can see in the following animation.



copyfeeder.gif

 

 

3.2    Adapt Content

Coding of method: IF_POWL_FEEDER~GET_OBJECTS 

Within the method GET_OBJECTS you can adapt the content of the POWL. This implementation is an example of how to change the columns and their content. Just delete the existing coding and replace it with my short demo implementation. It might help you understand how this method works.


  METHOD if_powl_feeder~get_objects.

* Structures
    DATA ls_workitem      TYPE ibo_s_wf_facade_inbox_wi.
    DATA ls_result        TYPE ibo_s_inbox_workitem.
    DATA ld_crequest      TYPE usmd_crequest.
    DATA ls_crequest_ext  TYPE usmd_s_crequest_ext.
    DATA ls_creq_type_txt TYPE usmd110t.
    DATA ls_creq_stat_txt TYPE usmd130t.

* Objects
    DATA lo_model         TYPE REF TO if_usmd_model.
    DATA lo_crequest_util TYPE REF TO cl_usmd_crequest_util.

* Field symbols
    FIELD-SYMBOLS <ls_wi> TYPE zmdg_s_crequest_powl.

* Tables
    DATA lt_creq_type_txt TYPE STANDARD TABLE OF usmd110t.
    DATA lt_creq_stat_txt TYPE STANDARD TABLE OF usmd130t.

*--------------------------------------------------------------------*
* Get C/R status texts
*--------------------------------------------------------------------*
    SELECT * INTO CORRESPONDING FIELDS OF TABLE lt_creq_stat_txt
      FROM usmd130t
      WHERE langu = sy-langu.

*--------------------------------------------------------------------*
* Get C/R type texts
*--------------------------------------------------------------------*
    SELECT * INTO CORRESPONDING FIELDS OF TABLE lt_creq_type_txt
      FROM usmd110t
      WHERE langu = sy-langu.

*--------------------------------------------------------------------*
* Get POWL parameters
*--------------------------------------------------------------------*
    CALL METHOD super->if_powl_feeder~get_objects
      EXPORTING
        i_username              = i_username
        i_applid                = i_applid
        i_type                  = i_type
        i_selcrit_values        = i_selcrit_values
        i_langu                 = i_langu
        i_visible_fields        = i_visible_fields
      IMPORTING
        e_results               = e_results
        e_messages              = e_messages
        e_workflow_result_count = e_workflow_result_count.

*--------------------------------------------------------------------*
* Get instance of MDG data model
*--------------------------------------------------------------------*
    CALL METHOD cl_usmd_model=>get_instance
      EXPORTING
        i_usmd_model = space
      IMPORTING
        eo_instance  = lo_model.

*--------------------------------------------------------------------*
* Get workflow data
*--------------------------------------------------------------------*
    CREATE OBJECT lo_crequest_util.
    LOOP AT e_results ASSIGNING <ls_wi>.
      CLEAR ld_crequest.
      CLEAR ls_crequest_ext.
      CALL METHOD cl_usmd_wf_service=>get_wi_crequest
        EXPORTING
          id_wi_id    = <ls_wi>-wi_id
        IMPORTING
          ed_crequest = ld_crequest.

*--------------------------------------------------------------------*
* Get C/R header data
*--------------------------------------------------------------------*
      IF ld_crequest IS NOT INITIAL.
        CALL METHOD lo_crequest_util->get_crequest
          EXPORTING
            id_crequest     = ld_crequest
            io_model        = lo_model
          IMPORTING
            es_crequest_ext = ls_crequest_ext.
      ENDIF.

*--------------------------------------------------------------------*
* Assign/overwrite values of the target structure
*--------------------------------------------------------------------*
      READ TABLE lt_creq_type_txt INTO ls_creq_type_txt
        WITH KEY usmd_creq_type = ls_crequest_ext-usmd_creq_type.

      MOVE-CORRESPONDING ls_crequest_ext TO <ls_wi>.

* Assign C/R information to POWL fields:
      <ls_wi>-wi_text           = ls_crequest_ext-usmd_creq_text.        "Subject
      <ls_wi>-wi_created_by     = ls_crequest_ext-usmd_created_by.       "User ID
      <ls_wi>-wi_end_ts         = ls_crequest_ext-usmd_created_at.       "Sent on
      <ls_wi>-usmd_status_descr = ls_crequest_ext-usmd_creq_status_text. "Status
      <ls_wi>-crequest          = ls_crequest_ext-usmd_crequest.
    ENDLOOP.

  ENDMETHOD.


3.3    Fixing action links

Within the methods HANDLE_SPECIAL_ACTION and GET_ACTIONS you have to adapt the action links. The following coding handles the action linking the same way as the standard.


Coding of method: IF_POWL_FEEDER~GET_ACTIONS

 

METHOD if_powl_feeder~get_actions.

*--------------------------------------------------------------------*
* Adapt link actions of own POWL.
*--------------------------------------------------------------------*
  DATA lv_applid TYPE powl_applid_ty.
  DATA lv_type   TYPE powl_type_ty.

  IF i_applid = 'ZMDG_CREQUEST_POWL_APPL' AND i_type = 'ZMDG_CREQUEST_WI'.
    lv_applid = 'FIN_MDM_CREQUEST'.
    lv_type   = 'USMD_CREQUEST_WI'.
  ENDIF.

  CALL METHOD super->if_powl_feeder~get_actions
    EXPORTING
      i_username     = i_username
      i_applid       = lv_applid
      i_type         = lv_type
      i_selcrit_para = i_selcrit_para
*     i_langu        = sy-langu
*   IMPORTING
*     e_actions_changed =
    CHANGING
      c_action_defs  = c_action_defs.
ENDMETHOD.


4.    POWER List Registration

4.1    Creating an application Id

Create a new application id within the transaction [FPB_MAINTAIN_HIER].


applid.gif


4.2    Creating POWL type

Create a new POWL type within the transaction [POWL_TYPE]. Fill in the feeder class field with your recently created feeder class.


powltype.gif

4.3    Creating POWL query

Create a new POWL query within the transaction [POWL_QUERY] and fill in the POWL type field with your recently created POWL type.

After saving, you can choose the corresponding button to open the customizing view of the POWL. In this case, however, we do not need any further customizing.

powlquery.gif

 

 

5.    POWER List UI configuration
Open the Web Dynpro component IBO_WDC_INBOX within transaction [SE80]. Start the component configuration USMD_CREQUEST_POWL. Make a copy of the standard and change the application ID of the configuration context. After that, start the configuration mode of the application configuration USMD_CREQUEST_POWL. Now you can assign your own component configuration, as shown in the animation.


5.1    Component configuration

ui_comp_conifg.gif


5.2    Application configuration


ui_app_conifg.gif


6.    Role Assignment

6.1    Creating POWL role assignment

Create a new role assignment within the transaction [POWL_TYPER]. Fill in the lines with your own application ID and POWL type, then assign it to a role, e.g. SAP_MDGC_REQ_03 in this case. Create another role assignment within the transaction [POWL_QUERYR] and fill in the required fields. The category can be set as customized in the standard (USMD_CREQUEST_DEFAULT). Attention: do not forget to set the activation flag!


roleassign.gif


6.2    Assign application configuration to role

Within the transaction [PFCG] you can now assign your application configuration to the role you chose in the previous step. Double-click the change request view service in the menu tree to show its details, then simply change the standard application configuration to your own.


addtorole.gif



7.    Test it!

testit.gif



Best regards,

Flo



Hi,

with the newest MDG 7.0 release, SAP supports parallel change requests. SAP MDG already supported parallel steps within one change request, but it was not possible to start a second change request for the same record. With MDG 7.0, exactly this is now possible!

The concept of parallel change requests arguably makes the most sense in the context of materials, but it is supported for all domains, including custom objects.

 

Some bullet points on the topic: "Support of Parallel Change Requests - Multiple Change Requests per Material at the same time"

  • Why?
    • MDG's comprehensive data models:
      MDG-M has many views and more than 400 attributes, including organization-dependent data
    • One object, multiple owners & maintainers
      During a change, data of plant A is processed and approved in parallel, independently of data of plant B and by other people
      GOAL: Faster organizations are not required to wait for the slowest!
    • Multiple & concurrent reasons for changes
      All in parallel: adding new plant data (long duration!), updating dimensions, urgent corrections of a wrong description
      GOAL: Long-term changes do not block urgent corrections!
  • What?
    • Parallel change requests as a standard feature in MDG 7.0
    • Initiate and process concurrent changes to the same object
    • Independent processing of each change request, e.g. own notes, own log, own process flow and processors, individual approval, rejection and finalization
    • Consistency by preventing the same attributes from being changed by concurrent change requests

 

The following click-trough demo goes through this process:

13-02-2014 09-07-17.jpg

Please open the click through demo here:

 

http://demo.tdc.sap.com/SpeedDemo/4f69e432d69aa404

 

Best Regards

Steffen

Due to the large number of elements in a BRF+ application, the dependencies among the objects, and the multiple times an object may have to be activated or deactivated, inconsistencies can easily end up in the transport: an inactive object may be present in the customizing transport, or an object that was deleted but still had dependencies might be included. In such cases, the transport gets into a lock status when it is released.

Unfortunately, the transport log will not show any entries, as the error is not captured at the application level; the transport was simply not released by the system because it ran into errors. Basis administrators can check the OS-level logs, which will show the inconsistencies in the transport. To avoid these issues, it is best practice to use the transport analysis tool before releasing any BRF+-related transport.

The transport analyzer tool can be accessed using a BRF+ administrator role and selecting 'Transport Analysis'.


Six main areas of importance are highlighted in the screenshot; the details of each number are provided in the list below.

BRF_Transport_Analyzer_2.jpg

1. This option lists all the transports under the user ID specified in the field highlighted as #2 in the screenshot.

2. This input field allows you to select the user ID responsible for the transport requests.

3. The column titled 'Name' lists all the BRF+ components present in the transport request. Using this list, each component can be analyzed in detail: the links open the component-specific information, and each can further provide version history details.

4. The 'Object State' column shows a graphical icon for the object status, i.e. whether it is active or inactive.

5. The 'Error Status' column highlights whether there is an error or a warning associated with the object.

6. The 'Check and Correct' option activates all the objects within the transport and ensures the transport entries are consistent.


 

 

Introduction

 

 

 

The MDG framework provides the FPM_LIST_UIBB for displaying list values in MDG 6.1 custom objects. This document describes how you can set a different colour for every cell in the list.

 

  

 

 

Process

 

 

 

Let us assume that we have a column COLUMN2 containing numeric values that represent the colours to be applied in COLUMN1. The MDG framework uses the standard feeder class CL_USMD_ENTITY_VALUE2_PROXY. Open the method IF_FPM_GUIBB_LIST~GET_DEFINITION, place the cursor at the last line, and create an enhancement point. Provide the following code there.

 

 

 

*** Check whether your desired UI config, entity and entity_cont is called
DATA: wa_field_description LIKE LINE OF et_field_description,
      wa_dt_ffix           LIKE LINE OF dt_ffix.

READ TABLE dt_ffix INTO wa_dt_ffix WITH KEY fieldname = 'CONFIG_ID'.
IF wa_dt_ffix-value = <your UI config>.
  READ TABLE dt_ffix INTO wa_dt_ffix WITH KEY fieldname = 'USMD_ENTITY'.
  IF wa_dt_ffix-value = <your entity>.
    READ TABLE dt_ffix INTO wa_dt_ffix WITH KEY fieldname = 'USMD_ENTITY_CONT'.
    IF wa_dt_ffix-value = <your entity_cont>.
      READ TABLE et_field_description INTO wa_field_description
        WITH KEY name = <field for which the colour has to be filled>.
      IF sy-subrc EQ 0.
        wa_field_description-cell_design_ref =
          '<field in which the value of the colour is stored>'.
        MODIFY et_field_description FROM wa_field_description INDEX sy-tabix.
      ENDIF.
    ENDIF.
  ENDIF.
ENDIF.
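The field named in CELL_DESIGN_REF must exist in the list data and carry a cell-design value for each row. The following is a hedged sketch of how that colour column could be filled when the list data is built; the structure, field names, and the design codes ('01', '02') are assumptions for illustration, so check the fixed values of the FPM cell-design domain in your system.

* Hypothetical sketch: fill the colour column (COLUMN2) that
* CELL_DESIGN_REF points to, based on the business value in COLUMN1.
TYPES: BEGIN OF ty_row,
         column1 TYPE i,
         column2 TYPE char2, " cell-design code read via CELL_DESIGN_REF
       END OF ty_row.
DATA lt_data TYPE STANDARD TABLE OF ty_row.
FIELD-SYMBOLS <ls_row> TYPE ty_row.

LOOP AT lt_data ASSIGNING <ls_row>.
  IF <ls_row>-column1 > 100.
    <ls_row>-column2 = '02'. " assumed code for a highlighted cell design
  ELSE.
    <ls_row>-column2 = '01'. " assumed code for the standard cell design
  ENDIF.
ENDLOOP.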

 

 

1.jpg

Introduction

 

 

 

The MDG framework provides a POWL list as the change request inbox. If we need a different colour coding for a specific column, we can achieve this by following this document.

 

  

Current Change request Inbox

 

1.jpg

 

 

Process

 

 

 

Now let us see how to set a colour for the Sent On column. The standard class controlling this inbox is CL_USMD_CREQUEST_POWL. We need to inherit from this class so that we can redefine some of its methods to set the colour. We create a new class ZCL_USMD_CREQUEST_POWL that inherits from the superclass CL_USMD_CREQUEST_POWL.

 

 

 

  

 

Create a Constructor method with the following coding:

 

super->constructor(
  EXPORTING
    io_cust_helper = io_cust_helper ).

initialize_feeder( ).

 

 

 

Now redefine the method READ_CUST_FIELDCAT with the following code:

 

 

 

DATA wa_fieldcat LIKE LINE OF et_fieldcat.

TRY.
    CALL METHOD super->read_cust_fieldcat
      EXPORTING
        iv_type     = iv_type
        iv_langu    = iv_langu
      IMPORTING
        et_fieldcat = et_fieldcat.
  CATCH cx_ibo_powl_no_fieldcatalog_se.
ENDTRY.

READ TABLE et_fieldcat INTO wa_fieldcat WITH KEY colid = 'WI_CD_TS'.
IF sy-subrc = 0.
  wa_fieldcat-color = '02'.
  MODIFY et_fieldcat FROM wa_fieldcat INDEX sy-tabix.
ENDIF.

 

 

 

This displays the column in green.

 

 

Customizing

 

 

 

Now go to transaction code POWL_TYPE and change the entry for USMD_CREQUEST_WI from CL_USMD_CREQUEST_POWL to ZCL_USMD_CREQUEST_POWL.

 

1.jpg

 

1.jpg

 

Note:

This works fine in NWBC, but when used from a Portal screen you might face some performance issues.

 

 

Hi,

 

Maybe you have read the blog Innovations in SAP Master Data Governance 7.0? by Markus Kuppe.

If not, I recommend reading that fantastic blog, because it will give you all the "must know" information on MDG 7.0.


As a follow-up, I would like to share some click-through demos in this blog series. The plan is to share more simple demos on the new MDG 7.0 features over the next months.


 


This is the list of demos:


1. BCV Side Panel apps

MDG-C: http://demo.tdc.sap.com/SpeedDemo/01c2946bff5b080a
MDG-S: http://demo.tdc.sap.com/SpeedDemo/f974eeda5e85b210
MDG-M: http://demo.tdc.sap.com/SpeedDemo/47d7e221a072b3bd

SAP MDG 7.0 delivers several BCV CHIP applications. These click-through demos show you the standard content. Example: in the MDG-C click-through demo you search for an existing customer record; after opening the details you display the open sales orders in the Side Panel.

2. MDG Cleansing Cases

MDG-C: http://demo.tdc.sap.com/SpeedDemo/91f50d23edc2bcba

New with this release, there is also merge functionality for customers and suppliers. Whenever a user identifies potential duplicates in a search result list, or when the system identifies them in a duplicate check during change request processing, the user can create a cleansing case that contains all potential duplicates. Using an underlying change request, this cleansing case can be processed by an expert to merge the data from all duplicates into one target record.*

* from Markus's blog Innovations in SAP Master Data Governance 7.0

3. Parallel Change Requests (MDG-M)

http://demo.tdc.sap.com/SpeedDemo/4f69e432d69aa404

Please read this blog: Demo on parallel change requests in MDG-M

4. MDG-BP

Blog: MDG 7.0: Click-through demo for new MDG-BP scenario
Direct link: http://demo.tdc.sap.com/SpeedDemo/c26b1d8d40066b8f/

Create a new business partner with MDG-BP. With MDG 7.0 "there are out-of-the-box governance processes for the generic SAP Business Partner. That means in addition to Supplier and Customer that were provided earlier, companies can now also govern the central data of different types of SAP Business Partners"*

* from Markus's blog Innovations in SAP Master Data Governance 7.0

5. MDG-C with Customer Like UI

Blog: MDG 7.0: Click-through demo for MDG-C scenario with Customer Like UI
Direct link: http://demo.tdc.sap.com/SpeedDemo/93cb9d046af1e202

Demo with Customer Like UI

6. MDG-S with Vendor Like UI

Blog: MDG 7.0: Click-through demo for MDG-S scenario with Vendor Like UI
Direct link: http://demo.tdc.sap.com/SpeedDemo/841101508357468b

Demo with Vendor Like UI

7. DQR Scenario: MDG with Information Steward

Please read this blog:
http://scn.sap.com/community/mdm/master-data-governance/blog/2013/10/22/sap-mdg-sap-information-steward-a-perfect-combination-for-data-quality-remediation-dqr-scenarios#!
Direct link to the click-through:
http://demo.tdc.sap.com/speeddemo/9d0e6063076c7a35/

Although this was already available in SAP MDG 6.1, I would like to mention it in this list as well. Please also read the official information:
https://help.sap.com/erp_mdg_addon61/helpdata/en/1c/9a6acb29e1484386d24b9f8850cf85/content.htm?frameset=/en/6a/3110a501c14d0ba3d8e81516af03ec/frameset.htm

8. MDG-M Multiple Record Maintenance

Coming soon. New MDG 7.0 functionality.

9. ...



How to use the click-through demos:

Generally the click-through demos are self-guiding. Just click on a link in the list above and you will be forwarded to the corresponding page. There you will typically see some numbers in the middle of the screen; click on one number and you will be guided through the first scene. Within the screenshots you will see a dotted rectangular shape; click within this shape to see the next screenshot.

22-01-2014 18-20-09.jpg


If you have any questions on the shared information, please contact me via SCN. All the demos are also available for a live demonstration on www.sapdemocloud.com, and our pre-sales experts are happy to show them to you.


Best Regards

Steffen




Note: If you are new to SAP MDG, you might want to read my blog post on the Enhancement Package 6 version of SAP Master Data Governance first. That blog post provides an overall introduction to SAP MDG. This post here focuses solely on the additional value provided by release 7.0. There is also a blog post on SAP MDG 6.1.

 

Introduction

 

SAP Master Data Governance (SAP MDG) offers governance applications for master data domains like Financials, Supplier, Customer, and Material, all tailored for centralized data maintenance. With these applications, you can manage master data that is ready to use within SAP environments, but also beyond.

 

We have just recently released the latest version SAP MDG 7.0 into Ramp-Up shipment in November 2013. We allow installation of SAP MDG 7.0 on top of Enhancement Package 6 for SAP ERP 6.0 as well as on top of Enhancement Package 7 for SAP ERP 6.0. For existing installations of SAP ERP 6.0 EhP6 or SAP MDG 6.1, this means that you do not need to upgrade to any higher Enhancement Package, but can just upgrade to SAP MDG 7.0 in that system.

 

SAP MDG 7.0 ships enhancements for SAP MDG for the mentioned master data domains, this time with a special focus on Financials. It also provides improvements of the MDG Application Foundation, allowing for the extension of standard content and the creation of governance processes for your custom objects.

 

What is new in SAP MDG 7.0?

 

Release 7.0 of SAP MDG focuses on three things. Firstly, it provides a more flexible MDG Application Foundation that allows for refined control in the governance processes, leading to more flexibility and higher efficiency in the business. Secondly, it provides better usability through additional role-based Work Centers and improved user interfaces across the various master data domains and for custom objects. Thirdly, it gives you the option of using SAP HANA for advanced duplicate detection and search, and it provides matching and cleansing capabilities.

 

As with every release, with SAP MDG 7.0 we have improved MDG’s core capabilities. The MDG Application Foundation has been enhanced, for example to allow for parallel change request processes on different parts of the same master data object, or for a more flexible “Edition” management. The already very broad out-of-the-box delivery for SAP MDG’s master data domains has been enhanced even further. The domains of Material, Supplier, and Customer master data were already considerably enhanced in SAP MDG 6.1. While these three have been extended even further in SAP MDG 7.0, a special focus has been put on the Financial master data domains. See below for further information.

 

01_blog_MDG70_Material_Create.jpg

Figure 1: The business consistency check has found issues, when trying to change a material master record

 

In SAP MDG 7.0, the mass data import has also been improved, in particular for loading data into MDG’s “flexible mode” models, like for financial master data and custom objects. The integration scenarios have been enriched, for example for exchanging customer master data with SAP CRM, or for distributing SAP Configuration data that is managed in custom-built MDG objects and processes.

 

In addition, investments have been made to help companies increase their reach with SAP MDG across their enterprise. SAP MDG now allows for data cleansing through merging and correcting duplicate business partners. The usability has also been further improved: for example, now there are dedicated role-based Work Centers for Accounting, Controlling and Consolidation data, as well as improved user interfaces for the maintenance of financial master data and custom objects. And there is a dedicated user interface for multiple-record processing with a first focus on material master data. Also, there are out-of-the-box governance processes for the generic SAP Business Partner. That means in addition to Supplier and Customer that were provided earlier, companies can now also govern the central data of different types of SAP Business Partners. SAP MDG 7.0 also supports the usage of SAP ERP Document Management System for attachments to material master data.

 

Let us have a more detailed look at some of these enhancements.

 

Further data model extension for all domains, but especially for financial master data

 

SAP MDG 6.1 already contained a substantial increase of the data model scope provided by SAP MDG out-of-the-box. In SAP MDG 7.0, this has been extended even further. For example, the material data model now covers close to 400 attributes across basic and classification data, logistics data dependent on organizational units, as well as valuation and costing. For material, similar to the extended supplier and customer models, we would expect that by far most of the SAP standard attributes you may want to put under central governance are readily available in SAP MDG now, and only a few would have to be added using SAP MDG's extensibility capabilities.

 

As mentioned above, a special focus was on extending the financial master data domains. This refers to both the introduction of new master data objects (like financial reporting structures) and broader coverage within the already delivered data models (like enhanced address and communication data for companies, cost center, or profit centers).

 

02_blog_MDG70_Model_Coverage.jpg

Figure 2: Extended out-of-the-box data model coverage in SAP Master Data Governance 7.0

 

More flexible foundation for higher business efficiency and refined process control

 

As mentioned above, the Master Data Governance Application Foundation is the framework underneath all MDG applications, which allows extending the MDG standard content as well as building self-defined master data objects and the corresponding governance processes. To use MDG for custom objects, you use the framework to define the appropriate data models, and then generate the staging area and user interfaces based on these models. Then you define the appropriate workflows and the roles that will provide access to the user interfaces. You can use the Data Replication Framework (DRF) to distribute the data that has been maintained. You can build your own validations, or extend existing ones based on the Business Rules Framework (BRF+). Finally, there are also Business Add-Ins (BAdIs) provided to include custom ABAP code into MDG’s processes.

 

Whenever we in SAP development make additional investments in the Application Foundation, the main focus is on extensibility, flexibility, usability, and ease of consumption. That means that we want to allow companies to create very flexible governance processes, with role-based user interfaces, but with very reasonable implementation efforts. Let us discuss some of such new capabilities in SAP MDG 7.0. In particular, we will cover the more flexible Edition management and Parallel Change Requests.

 

The new flexible Edition management has four key capabilities that define its business value: an easier and more flexible scheduling of changes, a very intuitive access to the different states of master data valid in certain timeframes, better transparency of past and planned changes, and a more granular control over replication timing.

 

With the improved concept, you can now use and combine as many editions as you need, and you can also reschedule planned changes across Editions.

 

03_blog_MDG70_Editions.jpg


Figure 3: Overlapping Editions and the resulting validity of the master data managed in these Editions

 

The simple example above shows how several open Editions can handle the same objects. When you create or change an account, the valid-from date of the Edition defines the valid-from date of the change. The valid-to date is defined by the “next change” (i.e. in a later Edition) of the same account. See “Account A” in Figure 3. If there is no future planned change, the valid-to date is unlimited – see “Account B”. You can reschedule open change requests with the related inactive data to another edition. This is useful, for example, when you want to release an Edition, but not all related change requests have yet been approved and activated. See “Account D” in Figure 3.
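The valid-from/valid-to rule described above can be sketched in code. This is a hypothetical illustration, not an MDG API: for each planned change of an account, the valid-to date is derived from the valid-from date of the next planned change, and the last change remains valid indefinitely.

* Hypothetical sketch of the Edition validity rule - not MDG-internal code.
TYPES: BEGIN OF ty_change,
         valid_from TYPE d,
         valid_to   TYPE d,
       END OF ty_change.
DATA lt_changes TYPE STANDARD TABLE OF ty_change.
DATA lv_index   TYPE i.
FIELD-SYMBOLS: <ls_change> TYPE ty_change,
               <ls_next>   TYPE ty_change.

* lt_changes: all planned changes of one account, sorted by valid_from
SORT lt_changes BY valid_from.
LOOP AT lt_changes ASSIGNING <ls_change>.
  lv_index = sy-tabix + 1.
  READ TABLE lt_changes ASSIGNING <ls_next> INDEX lv_index.
  IF sy-subrc = 0.
    " Valid until the day before the next planned change takes effect
    <ls_change>-valid_to = <ls_next>-valid_from - 1.
  ELSE.
    " No future planned change: validity is unlimited
    <ls_change>-valid_to = '99991231'.
  ENDIF.
ENDLOOP.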

 

A new search parameter (“valid on”) allows you to very intuitively search and display master data with the status it had or will have on a certain date. And when displaying any change request in an Edition, the user will now get full transparency about other (planned) changes: the system shows the next already planned and approved change of the same master data object and allows the user to directly jump to it. It also provides a link to any pending change that has not been approved yet. Finally, you can now decide which replication timing is allowed for each Edition: replicate all approved change requests together with the Edition, replicate each change request separately and immediately when approved, or let the user decide for each change request whether it shall be replicated immediately or held back and replicated together with the Edition.

 

The new flexible Edition management is used by SAP MDG for Financial Data, but can also be used for (time-dependent) custom objects if they are modeled using the Flexibility Option of the MDG Application Foundation.

 

SAP MDG 7.0 now also supports Parallel Change Requests. Let me explain what we mean by that using a simple example: You decide you want to produce a certain product in an additional factory. Accordingly, the product manager requests the extension of a material master for that additional plant. The gathering of the required data and the approval are handled in an MDG change request that involves several colleagues and some manual processing steps. The whole change request process might perhaps take a week or two to complete. During that time, a change of the packaging requires an urgent update to logistics data to make sure the material can still be correctly shipped from all other plants.

 

Without parallel change requests, you would have to update the logistics data in the same change request that is already blocking other changes to this product. You would therefore not be able to approve and activate these changes before being ready to approve and activate the complete changes, including the data for the additional plant. Typical consequences: urgent changes are either delayed, manually "enforced" without governance, or mixed with other changes – resulting in trade-offs between the agility to respond to business needs and the enforcement of governance.

With parallel change requests now being enabled, you can create a second dedicated change request for the urgent change in the logistics data. The already started plant-extension can be carried out as planned. Both change requests can be independently processed and approved. This enables customers to respond to business needs as they occur, while still keeping the governance processes in place.

 

04_blog_MDG70_PCR.jpg


Figure 4: Executing multiple change requests for the same object in parallel allows for more agile governance

 

Please note: Without parallel change requests, it also is possible to have one change request with sub-workflows (several simultaneous work items). This is useful if more than one group of processors needs to work on the change request simultaneously, for example one group to add financial data and one group to add logistics data – but both of them are working on the same request. The fundamental difference is that, as long as this is one change request, there will only be one final approval and the final activation will activate all data. With parallel change requests, each change request and its scope is activated individually.

 

Of course, when creating parallel change requests on the same master data object, you still want to avoid conflicting changes to the same master data attributes. The good news is that the system will prevent these. This is provided in two ways – once during design-time and once during run-time: At design-time, you define the scope of a data model that can become part of a certain change request type. However, these data models may still contain identical attributes in multiple change request types. (For example, there might be different change request types for “creating a new material” containing all attributes, as well as for “extending a material to a new plant” containing just the plant-dependent attributes, and for “changes to general material attributes” containing just those.) At run-time, the system will prevent you from creating a change request that contains attribute sets that are already contained in another open change request. For example, this means when there is one open change request adding a new plant A, you can still create a new change request to change general data or to add another plant B, but you cannot create a change request that would change data for plant A for this material. This blocking of certain parts of a master data object by a change request is referred to as “interlocking”. For the more technical readers, I would like to add that interlocking is done per entity (not per entity type or attribute).
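The run-time interlocking described above can be sketched as a simple set check. This is a hypothetical illustration, not the actual MDG implementation: a new change request is rejected if any entity it wants to change is already contained in an open change request for the same object.

* Hypothetical sketch of run-time interlocking - not MDG-internal code.
TYPES ty_entities TYPE SORTED TABLE OF string WITH UNIQUE KEY table_line.
DATA lt_open_entities TYPE ty_entities. " entities locked by open CRs
DATA lt_new_entities  TYPE ty_entities. " entities of the new CR
DATA lv_blocked       TYPE abap_bool.

FIELD-SYMBOLS <lv_entity> TYPE string.

lv_blocked = abap_false.
LOOP AT lt_new_entities ASSIGNING <lv_entity>.
  READ TABLE lt_open_entities TRANSPORTING NO FIELDS
    WITH TABLE KEY table_line = <lv_entity>.
  IF sy-subrc = 0.
    " Entity already part of an open change request -> interlocked
    lv_blocked = abap_true.
    EXIT.
  ENDIF.
ENDLOOP.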

 

There are more enhancements to the MDG Application Foundation in SAP MDG 7.0, like for example an improved mass data import into objects created with the MDG Flexibility Option. It would go beyond the scope of this blog to mention all of them. I will just discuss the enhanced usability in single-object processing in the context of financial data below, and multi-record processing will be explained in the context of material data.

 

Many companies use the MDG Application Foundation to build governance processes for Reference Data. That is data with a harmonized definition and aligned values across the complete company, very similar to master data. But reference data sometimes needs to be compliant with external standards (such as ISO). It is often referenced by diverse master data and processes. It may be simpler in structure, lower in volume, and less frequently changed than master data. Often, reference data is stored as SAP Customizing Data. These are, for example, country code, currency code, material group, plant, location, payment terms, purchasing organization, and so on. SAP MDG 7.0 provides many improvements that support projects in creating governance processes for reference data. These processes are defined using SAP MDG Custom Objects. The new user interface for Custom Objects (and Financials), which we will discuss in more detail below, provides enhanced usability and configuration options. The new data import I just mentioned above for Custom Objects - built using the Flexibility Option - allows for enhanced monitoring, scheduling and usability compared to the file upload. The flexible Edition management provides a more intuitive access, transparency about changes, and control of the replication timing. In addition, there are improved capabilities for replication of reference data that is SAP customizing. All this will help MDG customers build governance processes for reference data with a low implementation effort.

 

Let me also mention one additional important concept of SAP MDG again: search as a starting point for most activities. When we talk to users of SAP MDG, most of them start many tasks with a search. Obviously, you want to avoid creating duplicates when creating a new object. Hence you first search whether it already exists by entering a few attributes in the MDG search screen. If you find the object, you might want to change it. MDG provides the link to do so. If you do not find it, you might want to create it. And when you select that button, for example to create a new supplier, MDG will take the information from the search screen and prepopulate the fields in the change request. There is no need for the user to retype the information they already entered before. You can also search and then select multiple entries in the result list and trigger a mass change for all selected objects. You can start a change request to block objects, or to mark them for deletion. You can access all past changes to an object with the change log, or look at the object's replication status in the Data Replication Framework (DRF), and (re-)start the replication if necessary. All this is possible directly from the search result screen. The search screen can now also be configured in the Floorplan Manager and personalized by the users.

 



Figure 5: The search result screen can be used as a starting point for many activities

 

Better usability for all standard and custom-defined master data objects

 

Given the now broader scope in the financial domain, with SAP MDG 7.0 the structure of the role-based Work Centers for this domain has been redesigned. We now provide separate dedicated roles and Work Centers for governance of Financial Accounting, Controlling, and Consolidation master data. The Work Centers provide easy access as well as better transparency in the respective areas. The menus now also follow the approach of using Business Activities to determine the right change request type for the user interaction. This is the same concept as introduced earlier in the Business Partner and Material domains. Since the new structure contains suitable templates for authorization roles, you can also grant users more granular authorizations.

 



Figure 6: Role-based Work Center for Financial Accounting governance – one of the three new Work Centers

 

With SAP MDG 7.0 for Financial Data, there are also new user interfaces for single-object maintenance. These new user interfaces allow for easy personalization and flexible configuration. For example, you can configure specific user interfaces for each workflow step or each processor. You can use context-based adaptation (CBA) to provide dynamic user interfaces that change at run-time dependent on attribute values like change request type or object type. The same user interface flexibility had been introduced with Enhancement Package 6 for the Customer, Supplier, and Material domains, and is now newly available for financial master data and Custom Objects designed with the MDG Application Foundation. Additionally, a G/L Account and the related Company Codes can now be maintained in one process step, similar to the Financial Reporting Structure and the related FRS Item. For business partners, customers, and suppliers, it is now also possible to create an organization and contact persons in one step and in the same change request. You can now also generate an IBAN from already provided bank details or, the other way around, generate bank details after entering the IBAN.
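The IBAN generation mentioned above is based on the standardized ISO 13616 check-digit scheme (mod-97-10), so it can be sketched independently of any SAP code. The following Python snippet is an illustration of the check-digit calculation only; a real implementation would additionally validate country-specific BBAN formats and lengths:

```python
def generate_iban(country: str, bban: str) -> str:
    """Compute the two IBAN check digits per ISO 13616 (mod-97-10).
    Illustrative sketch only: no country-specific BBAN validation."""
    # Move country code to the end with placeholder check digits "00",
    # then convert letters to numbers (A=10 ... Z=35).
    rearranged = bban + country + "00"
    numeric = "".join(str(int(ch, 36)) for ch in rearranged)
    check = 98 - int(numeric) % 97
    return f"{country}{check:02d}{bban}"

def split_iban(iban: str):
    """The reverse direction: recover country, check digits, and BBAN."""
    return iban[:2], iban[2:4], iban[4:]

iban = generate_iban("GB", "WEST12345698765432")
```

Here `"WEST12345698765432"` is the BBAN (bank code plus account number) of the well-known example IBAN from the ISO 13616 scheme.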

 

Newly with this release, there is also a merge functionality for customers and suppliers. Whenever a user identifies potential duplicates in a search result list, or when the system identifies them in a duplicate check during change request processing, the user can create a Cleansing Case that contains all potential duplicates. Using an underlying change request, this cleansing case can be processed by an expert to merge the data from all duplicates into one target record. Then, the changes can be approved – typically by a second person. A typical flow might be: create a new cleansing case, search and assign potential duplicates, check the details of the potential duplicate records, identify one target record, decide which potential duplicates to keep in the cleansing case and which to mark as “not duplicates”, start cleansing, and create a change request. During the cleansing step, the expert will browse through the superset of all data from all duplicates and will decide which parts of which duplicate will be taken over into the target record and which will be dropped.
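The cleansing step described above can be pictured as picking, attribute by attribute, which duplicate's value survives into the target record. The following Python sketch models that decision (all record structures and names are hypothetical, not MDG data structures):

```python
# Conceptual sketch of a cleansing-case merge: the expert decides, per
# attribute, which duplicate's value is taken over into the target record.
# Attributes not explicitly decided keep the target's original value.

def merge_records(target: dict, duplicates: list, keep: dict) -> dict:
    """keep maps attribute name -> index of the duplicate whose value wins."""
    merged = dict(target)
    for attr, dup_index in keep.items():
        merged[attr] = duplicates[dup_index][attr]
    return merged

target = {"id": "S-1", "name": "ACME Corp", "phone": None}
duplicates = [{"id": "S-7", "name": "Acme Corporation", "phone": "+1 555 0100"}]

# Expert decision: take the phone number from duplicate 0, keep the name.
golden = merge_records(target, duplicates, {"phone": 0})
```

In the real process this decision happens inside a change request, so the resulting golden record still passes through the usual approval step.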

 

Another usability improvement that I want to mention is the new Multi-Record Processing. This feature enables you to change multiple master data objects simultaneously in one single user interface. You can select multiple materials from a search result screen, and then start the multi-record processing. This will create a change request in the background to fully integrate this maintenance into your governance processes. In a tabular maintenance screen, you can then change all these materials at the same time. For example, you can directly change values in single table cells, or attributes can be changed to the same value across several selected materials at once. You can also copy a value from one material to other selected materials. To select the materials in the table that should be maintained together, the system offers, for example, automatic selection of all materials with a certain value in a certain attribute. There is also a search and replace function to change values across several materials. All changes you have made since opening the maintenance screen are highlighted in one color. All saved changes, or changes that users have made earlier in the same change request, are highlighted in a different color. The user interface is, of course, built using the Floorplan Manager for ABAP WebDynpro. It therefore also allows for easy configuration to your users’ needs.
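The search-and-replace part of multi-record processing is easy to picture as a small function over the selected records. This Python sketch is purely illustrative (field names and record shapes are hypothetical); it also returns the keys of the changed records, mirroring how the UI highlights what was just changed:

```python
# Illustrative search-and-replace across selected records, as in
# multi-record processing. Returns the IDs of changed records so a UI
# could highlight them. Not actual MDG code.

def mass_replace(records: list, attr: str, old, new) -> list:
    changed = []
    for rec in records:
        if rec.get(attr) == old:
            rec[attr] = new
            changed.append(rec["id"])
    return changed

materials = [
    {"id": "M-1", "material_group": "01"},
    {"id": "M-2", "material_group": "02"},
    {"id": "M-3", "material_group": "01"},
]
changed_ids = mass_replace(materials, "material_group", "01", "09")
```

In MDG itself, of course, these changes are staged inside the background change request rather than applied directly to the active data.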

 


Figure 7: Multi-record processing - change many materials simultaneously in one user interface

 

You have probably already heard of the side panel concept in SAP applications. This is a great way of providing users with additional context when working in an SAP MDG user interface. The side panel is an area attached to the right-hand side of the standard screen where the company can configure additional context information. The users can then open or expand this area at any time. The side panel interacts with the main screen and will, for example, dynamically display information about the current supplier or material that the user is changing in the main screen. For example, you might want to display a list of all open purchase orders with a certain supplier that the user currently wants to mark for deletion. The side panel is a very open concept, where typically our customers themselves define the information to be displayed. This is easily possible, since new demand by the business for additional context information does not require any changes to the MDG processes or user interfaces. Instead, the small information snippets can be built separately and just attached to the screen's side panel. SAP also delivers some side panel content as Business Context Viewer (BCV) CHIPs that can be attached to SAP MDG screens. For example, newly with SAP MDG 7.0, we deliver a sales overview that displays all sales orders created for the current material, a production overview that shows production orders created for the current material, a purchasing overview that displays purchase orders created for the current material, and a CHIP that displays all changes against the active material that are proposed by the current MDG change request. Again, these are just examples. Typically, companies will add to them and create their own.

 


Figure 8: Sample side panel content delivered with SAP MDG 7.0

 

Faster search with SAP HANA, duplicate detection, and multi-attribute drill-down

 

SAP MDG 7.0 can be installed on top of either Enhancement Package 6 (EhP6) or Enhancement Package 7 (EhP7) for SAP ERP 6.0. If it is installed on EhP7, it can run on SAP HANA as its primary database. In any case, regardless of the Enhancement Package SAP MDG is installed on, an SAP HANA database can always be used side by side with SAP MDG. You would then set up near real-time replication of the information from MDG’s primary database into SAP HANA in order to make use of the information in SAP HANA.

 


Figure 9: If installed on EhP7 for SAP ERP 6.0, SAP MDG 7.0 can run on SAP HANA as primary database

 

Once the MDG information is contained in SAP HANA – whether as the primary database or via replication – you can use SAP HANA’s capabilities for duplicate detection and similarity-ranked search. This search method comes in addition to other options like plain database search or Enterprise Search / TREX. SAP HANA fuzzy search allows for both free-style, Google-like search with search terms and attribute-based search with dedicated thresholds for each attribute. SAP HANA calculates a similarity rank for all search hits and allows you to sort the result by a weighted score across all attributes. This is very helpful in the MDG search application as well as when it is integrated into MDG processes for duplicate detection. Based on the high detection quality of SAP HANA fuzzy search and matching, this will even better prevent the creation of duplicates. The calculated similarity score helps the user identify and sort potential duplicates.
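To illustrate the idea of a weighted similarity score across attributes, here is a small Python sketch. It uses the standard library's `difflib.SequenceMatcher` as a stand-in for SAP HANA's fuzzy scoring (which uses its own, more sophisticated algorithms); the attribute weights and records are hypothetical:

```python
from difflib import SequenceMatcher

def weighted_score(query: dict, record: dict, weights: dict) -> float:
    """Weighted similarity of a record against the query across several
    attributes; each per-attribute similarity is in [0, 1], so the
    weighted average is too. difflib stands in for HANA fuzzy scoring."""
    total_weight = sum(weights.values())
    score = 0.0
    for attr, weight in weights.items():
        sim = SequenceMatcher(None, query[attr].lower(), record[attr].lower()).ratio()
        score += weight * sim
    return score / total_weight

query = {"name": "Jon Smith", "city": "New York"}
likely_duplicate = {"name": "John Smith", "city": "New York"}
unrelated = {"name": "Mary Jones", "city": "Boston"}

# Name similarity counts twice as much as city similarity.
weights = {"name": 2.0, "city": 1.0}
ranked = sorted([likely_duplicate, unrelated],
                key=lambda rec: weighted_score(query, rec, weights),
                reverse=True)
```

Sorting candidates by this score is exactly what makes a ranked duplicate list useful: near-matches float to the top, and a threshold can cut off the noise.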

 

When your master data is stored in SAP HANA, you can also benefit from SAP HANA being an in-memory, column-based database. Column-based means that you can easily access all different values of a master data attribute, like for example all states in a selected country across all your customers’ addresses. This is how the SAP HANA database works: there is a column dictionary that lists all distinct values for an attribute, and the single entries point to this dictionary. Since this dictionary is itself held in memory, you can access this information in virtually no time. This allows you to first display all attribute values of all of your master data. It also allows you to slice and dice, that is, drill down and filter by attribute values, extremely fast. SAP MDG 7.0 provides you with a dedicated application to do exactly this. In the multi-attribute drill-down based on SAP HANA, you can filter and analyze the intrinsic structures in your master data. For example, you can easily find attribute values that need company-wide harmonization. Let us assume that you find 62 different values in the attribute State for customers in the US. There must obviously be spelling mistakes for some of these US states, since only 50 exist.

 

This is a completely different approach to searching master data. Users will actually find instead of search, since the system displays only attributes and objects that actually exist. There is no need for guessing; you simply find what is there. This drill-down search can even be combined with fuzzy search. That means that users can enter a search term that will filter the total master data based on similarity thresholds. The users can then slice and dice all the remaining master data along its attribute values.

 


Figure 10: Multi-attribute drill-down based on SAP HANA in SAP MDG 7.0

 

Outlook and broader context

 

This concludes my post on SAP MDG 7.0 capabilities. Please expect more to come in future releases. As mentioned earlier, we will continue to further extend the standard scope for SAP MDG’s master data domains in the future. Usability is always a key theme for us. This will also be supported by additional investments in SAP MDG’s Application Foundation regarding extensibility and flexibility, while at the same time always driving ease of implementation. We will continue to invest in analytical capabilities and integration with SAP’s Information Management and Information Governance portfolio. Also, SAP HANA will help us provide additional innovative capabilities.

 

We see a trend to exchange master data information between business partners via business networks. Together with SAP’s acquisitions of Ariba and hybris, we have additional opportunities to help automate the provisioning and the consumption of master data between companies. Especially for Consumer master data, integrating information from social networks is another trend we see and intend to support in the future. And to mention the Cloud here as well: SAP MDG supports the trend towards Cloud-based applications already today. MDG integrates with Cloud-based applications, for example by provisioning master data into the Cloud. And SAP MDG can, of course, also be consumed from the Cloud, for example via the SAP HANA Enterprise Cloud.

 

You might also want to revisit my earlier posts that are linked above. There I describe how the focus of SAP MDG is on comprehensive master data ready for use in business processes, through ready-to-run governance applications for specific master data domains. I also describe how SAP MDG allows for custom-defined master data objects, processes, and user interfaces, and how it can distribute master data to SAP and non-SAP systems. I also explain in detail how SAP MDG is an important part of SAP’s Information Management portfolio, and how you would want to combine MDG with additional tools like the SAP Information Steward for monitoring of master data quality in all relevant systems and correction in SAP MDG, or like SAP Data Services for data quality services, validations and data enrichment.

 

You can find more information about SAP MDG on the SAP Master Data Governance page on SCN.
