
SAP Information Steward


I recently ran into an issue with an Information Steward installation (IS 4.2 SP5) that took much longer to complete than expected, even though all the prerequisites were in place. After some basic research, I found that the culprit was the mapped network drives on the system. The workaround for this issue is covered in the IS release notes.


Here are the details


Issue ID :



Issue Description:

When you select Disk Cost in the Select features step of the installation, the installer calculates cost using all drives, including mapped drives. This may result in additional installation time if, for example, the network is slow.



Workaround: Unmap the network drives before installing the software.

To build custom metadata reports, we often need to know where Information Steward stores its metadata in the repository database. SAP provides limited information about the Information Steward metadata tables in its documentation. If the repository resides in SQL Server, one way of tracing where IS stores a given piece of metadata is to query the sys.dm_db_index_usage_stats view.


This is a Dynamic Management View (DMV) in SQL Server that keeps track of when each table was last updated or scanned.


SELECT OBJECT_NAME(object_id) AS Table_Name, last_user_update, *
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('DB_IS_Repo')
ORDER BY last_user_update DESC


* Since you are querying a system view, make sure you have the necessary rights (VIEW SERVER STATE permission) to access it.


You can also use SQL Server Profiler for the tracing (I haven't tried this approach).

To use either approach, you perform the action you are interested in and then run the query. For example, suppose you want to know which metadata tables store the IS rule or rule-binding information: create a rule and bind it to a table column, then run the above query (or analyze the Profiler trace), which will list the tables that were most recently updated or scanned.
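For instance, after creating and binding the rule, a query along the following lines narrows the list to tables touched in the last few minutes. This is only a sketch; the repository database name DB_IS_Repo is carried over from the earlier example and the 10-minute window is arbitrary:

```sql
-- List only the tables updated in the last 10 minutes,
-- i.e. since you performed the action you want to trace.
SELECT OBJECT_NAME(object_id) AS Table_Name,
       last_user_update,
       last_user_scan
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('DB_IS_Repo')
  AND last_user_update >= DATEADD(MINUTE, -10, GETDATE())
ORDER BY last_user_update DESC
```

Note that this DMV is reset when the SQL Server instance restarts, so run the trace in one sitting.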

In one of my recent projects, I used this querying approach extensively, with a fair amount of success, to build custom metadata reports.

It is possible to create a rule without a quality dimension and submit it for approval, but as soon as you click refresh, it is no longer visible under the rule menu. I'm not sure whether SAP has already addressed this issue in SP6, which was released in late November 2015.



To find such rules with no dimension associated with them, you need to query the repository metadata tables MMT_Rule and MMT_Custom_Field_Value:


select * from
(
select distinct
       t1.business_name as Rule_name
      ,t2.value as Quality_dimension
      ,t1.effective_dt as Rule_created_Date
from MMT_Rule t1
left join MMT_Custom_Field_Value t2 on t1.rule_id = t2.[object_id]
 and t2.is_current_version = 'Y' and t2.field_name = 'CA.rule dimension'
) T
where Quality_dimension is null

Here's a query that lists all the rules and the associated projects, tables and dimensions within the Information Steward repository. It also provides the rule creation date.




Tested with Information Steward 4.2.5 and SQL Server 2012:





select distinct
       t5.technical_name as Project_name
      ,t1.business_name as Rule_name
      ,t2.value as Quality_dimension
      ,t4.technical_name as Table_name
      ,t1.effective_dt as Rule_created_Date
from MMT_Rule t1
 join MMT_Custom_Field_Value t2 on t1.rule_id = t2.[object_id] and t2.is_current_version = 'Y'
 join MMT_Relationship t3 on t1.rule_id = t3.[object_id] and t3.user_flag1 is null and t3.relationship_subtype_cd = 'PRRB' and t3.is_current_version = 'Y'
 join MMT_Data_Group t5 on t5.data_group_id = t3.container_id and t5.is_current_version = 'Y'
 join MMT_Data_Group t4 on t4.data_group_id = t3.related_object_id and t4.is_current_version = 'Y'
where t1.is_current_version = 'Y'

Here's a SQL query that lists the Information Steward projects, connections, tables and the associated tasks (profile/rule). The query accesses the Information Steward metadata tables available in the Information Steward repository. It was developed and tested with SQL Server 2012 and Information Steward 4.2; you can tweak it a little to make it work with other databases such as Oracle, MySQL or DB2.



select distinct
       t1.technical_name as Project_name
      ,t3.technical_name as Connection_name
      ,t4.technical_name as Table_name
      ,t6.technical_name as Task_name
      ,t6.transformation_package_type_cd as Task_type
from MMT_Data_Group t1
 join MMT_Relationship t2 on t1.configuration_id = t2.configuration_id and t2.is_current_version = 'Y' and t1.data_group_type_cd = 'PPRJ'
 join MMT_Connection t3 on t2.user_bigint = t3.connection_id and t3.is_current_version = 'Y' and t3.sub_type is not null
 join MMT_Data_Group t4 on t4.connection_id = t2.connection_id and t4.data_group_id = t2.related_object_id and t4.is_current_version = 'Y' and t4.data_group_type_cd in ('RTBL','VVIW')
 join MMT_Relationship t5 on t2.related_object_id = t5.related_object_id and t5.is_current_version = 'Y' and t2.configuration_id = t5.configuration_id
 join MMT_Transformation_Package t6 on t5.[object_id] = t6.transformation_package_id and t6.is_current_version = 'Y'
where isnull(t6.transformation_package_type_cd,'') <> 'JOB'

If you find a simpler query to achieve the same result, please share it in the comments section.

After configuring email notification, you may still not receive emails for failed or passed scheduled jobs. The failed scheduled job's error log shows the message below, with no indication of an actual error; the configuration is incorrect. See the screenshot below for the correct configuration.

[Screenshot: Error Log]

Check the configuration to ensure the correct host and domain are in place. This applies only to the SAP Information Steward internal box.




Hope this helps. Any input is greatly appreciated.



Hello Everyone,


Last week I was searching for how to export Users and Groups in IS but unfortunately could not find the answer, so I created a thread where one of the members answered it. I believe that sharing this solution will help other developers avoid the same struggle in the future. Hope you will find it useful.


So, here is the step-by-step procedure to export and import Users and Groups in Information Steward:



Suppose you want to promote Users and Groups from the DEV environment to the Production environment. You simply don't want to re-create everything in the new environment, and would rather reuse the Users and Groups defined in the DEV environment.


Step 1: Log in to the Central Management Console of the DEV environment from which you want to export custom Users and Groups, and go to the Promotion Management tab as shown below.



Step 2: Right-click on the Promotion Jobs folder and select New Job as shown below.



Step 3: Name the job (e.g. Test_Promotion_Job) and then select 'Login to a New CMS' as the source, as shown below. You will need to enter the credentials of your DEV CMC: select the source system and enter your user ID and password.




Step 4: You will see a green check adjacent to the source name if you entered the credentials correctly.


Then, Select 'Output to LCMBIAR File' as a Destination as shown below. Then click on the Create button.


Step 5: It will load all the objects which can be promoted. Go to User Groups as shown below and check all the groups you want to promote to the Production environment, as highlighted below. Then click the 'Add and Close' button.




Step 6: Now User Groups are ready to promote. Click on the Promote Button as shown below.


Step 7: Select the source system and enter the credentials of the CMS. Then select 'Output to LCMBIAR File'. Then click the Export button as shown below and save the LCMBIAR file to your local machine. You have now successfully exported the User Groups.



Step 8: Now, log in to the CMC of your Production environment, where you want to import these User Groups. Click on the Promotion Management tab.



Step 9: Right-click on the Promotion Jobs folder and then select Import file as shown below. Select the LCMBIAR file which you saved in the earlier step from your local machine.



Step 10: Then, in the destination, select 'Login to New CMS' and enter your system name, user and password. Then click the Create button as shown below.


Step 11: So you are ready to promote the User Groups from DEV to Production. Click on the Promote button as shown below.



So that's it. You can go to CMC home and click on Users and Groups to verify if all are successfully promoted to the new environment.


This is how you can export existing User Groups from one environment to a new one.




Ansari MS

The Failed Data Repository

If you are looking to build some custom reports on the results of your data quality assessment - beyond what is available with the Information Steward's Data Quality Dashboards - you can leverage the Failed Data Repository as the database to meet your custom reporting needs.

The Failed Data Repository provides you information about failed data from your validation rules within a supported relational database system.  Information includes:

  • Information about the project, connection and tables which generated the failed data (IS_FD_TABLES_MAP table)
  • Execution history of all the tasks which generated the failed data (IS_FD_RUN_HISTORY table)
  • All failed rules for a given run (IS_FD_RULE_INFO table)
  • All the rows that failed one or more rules (<table_alias>_FD table)
  • All the rules which failed a given row (<table_alias>_FR table)

For detailed information about the above referenced tables, see the section on "Accessing additional failed data information" in the Information Steward User Guide.  The diagram below shows the relationships between the failed data tables.

As an example, the total rows that were validated during the run is available in the IS_FD_RUN_HISTORY table (TOTAL_ROWS column) for each IS task.  And, you can join the *_FD tables to get at the failed data counts per rule/task.
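As a sketch of such a join (hedged: the key column names below, such as run_id, rule_id and rule_name, are assumptions, and CUSTOMER_FR stands in for your <table_alias>_FR table; check the actual column names in your Failed Data Repository against the User Guide):

```sql
-- Hypothetical example: failed-row counts and failure rate per rule, per run.
select h.run_id,
       r.rule_name,
       count(*) as failed_rows,
       h.total_rows,
       100.0 * count(*) / h.total_rows as failure_pct
from IS_FD_RUN_HISTORY h
 join CUSTOMER_FR fr on fr.run_id = h.run_id        -- your <table_alias>_FR table
 join IS_FD_RULE_INFO r on r.rule_id = fr.rule_id
group by h.run_id, r.rule_name, h.total_rows
order by h.run_id, failure_pct desc
```

A query like this is a natural base view for a custom dashboard that trends failure rates per rule across runs.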

Setting Up the Failed Data Repository

To leverage a Failed Data Repository for custom reporting, you must first establish a connection to the database within the Central Management Console (CMC).  Ensure that the connection type is "For data that failed rules."  The image below shows an example of the connection parameters, many of which will change depending on the Database Type selected.  For the most current listing of supported databases, please check out the SAP Information Steward PAM.

Specifying the Failed Data Repository

When executing a rule or set of rules (per task), you can select to save the failed data to one of the Failed Data Repositories that you have previously configured.

Viewing Failed Data from Information Steward

This is the Data Quality Scorecard Detailed View; from here you can view the failed data:

This brings up the Failed Data screen. Once you have set up the Failed Data Repository, it will also give you access to "View More Failed Data" to get beyond the 500-record sample size.

The Information Steward Repository Views

Although the Information Steward Repository is not a supported means to extract data for custom reporting, here are a few Information Steward Repository views that may contain some additional information to meet your needs:

  • MMB_Key_Data_Domain
  • MMB_Key_Data_Domain_Score
  • MMB_Key_Data_Domain_Score_Type
    • Key Data Domain, Quality Dimension, or Rule level
  • MMB_Domain_Value
    • Quality Dimension descriptions
  • MMB_Rule
    • Rule definition/description
  • MMB_Data_Group
    • Project Names

Related Content

How to Create Detailed Failed Data Reports as Part of your Data Quality Analysis

  • How to use SAP BusinessObjects Information Steward, along with the SAP Business Intelligence platform components SAP BusinessObjects Information Design Tool and SAP BusinessObjects BI launch pad, to produce a Web Intelligence report that will help you analyze the quality of your data.

SAP Information Steward Failed Data Reporting

SAP Information Steward:  DQ Reporting of the SAP HANA Live Layer with Lumira Demo

In this blog, we will explore the Business Value Analysis capability of the SAP Information Steward 4.2 release. 


Poor data quality can affect businesses in many ways. The impact can be felt in finances, customer perception, operational efficiency, brand recognition, regulatory compliance, and so on. Here we see some examples of issues that arise due to bad data and how they affect various day-to-day aspects of the business.


  • Difficult to determine the right recipients for marketing campaigns (Operational Efficiency)
  • Inaccurate order information causes delayed or lost shipments and lower customer satisfaction (Customer Satisfaction)
  • Sales representatives are not able to identify relevant accounts (Operational Efficiency)
  • Costs are high due to account duplication, while response rates are low (Operational Efficiency, Customer Acquisition)
  • Potential customers are annoyed by redundant mail, emails, and phone calls (Customer Satisfaction)
  • Total revenue and profitability of products and services is reduced (Financial)
  • Reporting uses wrong data, which leads to wrong conclusions and decisions (Operational Efficiency, Customer Satisfaction)
  • Inaccurate statutory reporting (Legal)
  • Carrier stop charges for incorrect or incomplete addresses (Customer Satisfaction)
  • Misalignment between vendors and defined terms due to system inaccuracies (Financial)
  • Poor spend visibility due to unstandardized, duplicate data (Operational Efficiency)
  • Unable to find the right product / material due to unstandardized, duplicate data (Financial, Operational Efficiency)
  • Items are purchased off contract at premium prices due to poor quality supplier data (Financial, Operational Efficiency)

And, there can be many more such issues that arise from specific industries and business processes. For any organization, it is important to understand and quantify this impact. By assigning a dollar amount to poor data quality, the business awareness of the downstream and bottom line impact of bad data is increased.  It puts value on clean, accurate data and can be used to justify additional funding of your information governance initiatives.  Sure, an organization may know in theory there is a cost associated with bad data, but to be able to put actual numbers behind it - this can really give the information governance cause the credibility that it needs.

SAP Information Steward's Business Value Analysis enables business-oriented data stewards (or data stewards in collaboration with LOB representatives) to connect financial ROI to the organization's data quality and information governance initiatives.


Information Steward's Business Value Analysis feature allows the organization to see the overall trend in the cost of poor data at various levels for root cause analysis. The business can also perform 'what if' analysis to identify potential savings or losses if the bad data were cleaned, and accordingly focus their data quality and information governance efforts in the areas that will benefit them most.


With the Information Steward 4.2 release, the process of validation rule development now includes the ability to define an itemized cost per failure. So as you add a new set of business rules, the financial impact associated with the data that failed against each rule is immediately taken into account within Data Quality Business Value Analysis.

There are two types of costs that can be considered when you calculate the cost per failure. Some costs are incurred in terms of a human resource spending time on addressing the issue or performing root cause analysis. These are called resource-dependent costs. Then there are costs that are resource-independent in nature.  Here you can see a few examples of different cost types:

Resource-independent costs

  • Cash flow: Additional costs incurred to the organization’s cash flow, such as delays in recognizing revenue or making supplier payments
  • Fees: Costs associated with any additional expenses or direct fees, such as those resulting from regulatory compliance failures
  • Fixed overhead: The fixed overhead cost distributed per failure, such as storage costs due to returned shipments
  • Revenue: Loss of direct revenue, such as lost customers or new sales
  • Sales: Additional costs associated with selling goods, such as sales organizations following erroneous leads
  • Other: User-supplied costs

Resource-dependent costs

  • Labor: Costs associated with the loss of productivity; for example, a resource or person will have to spend a specified amount of time to capture and remediate quality issues
  • Other: Costs that are not covered by the existing types
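To make the arithmetic concrete, here is a purely illustrative calculation (all figures are invented, not drawn from the product): if each failed row costs 15 minutes of an analyst's time at 40 USD/hour (resource-dependent) plus a flat 2 USD shipping overhead (resource-independent), then a run with 1,200 failed rows carries:

```sql
-- Illustrative only: 1,200 failures * (0.25 h * 40 USD/h + 2 USD) = 14,400 USD
select 1200 * (0.25 * 40 + 2) as total_cost_usd
```

Business Value Analysis performs this kind of multiplication for you once a cost per failure is attached to each rule.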

This is by no means an exhaustive list. The idea is to provoke thinking about such costs when trying to understand impact.

If you would like to find out more about SAP Information Steward's Business Value Analysis feature, here are some additional resources available to you:


Information Steward's Metapedia promotes proactive data governance with common understanding and agreement on business taxonomies related to business data. With Metapedia, users have a central location for defining standard business vocabulary of words, phrases, or business concepts and relating that terminology to metadata objects (e.g. systems, tables, reports) as well as Data Insight quality rules and scorecards. Once the business glossary is established, business users can easily search the content and concepts using familiar business vocabulary and synonyms as well as access related terms from BI reports, data quality scorecards, … well, from really just about anywhere (see Exploring and Accessing Metapedia terms).


So, why might you want to start a Metapedia initiative?  Let’s explore some use cases.


Business Intelligence

Think about the value of giving your BI consumers – especially if those consumers are across multiple lines of business – access to a standard definition of the data used to drive reports and dashboards.  For example, let’s say a new HR report is created that displays information about Org Divisions, Dept Units, Career Grades and Levels, Local and Global Titles, etc. Wouldn't it be great if consumers, especially outside of the Human Resources team, could gain access to the accepted and HR-approved understanding of each of these concepts represented on the report?



Data Migration

Looking outside of the reporting world, how about a data migration project that involves the consolidation of two different companies’ data assets?  As you sit down and map common data elements during a data migration project, what a great asset to be able to capture the common understanding of that data as standardized business terminology.  Then, link the agreed upon definition back to the disparate data sources while maintaining associations to each company’s unique lingo through synonyms and keywords (see Metapedia Categorization, Association and Custom Attributes) to make searching more efficient for both entities.


Information Governance

Some organizations have used Metapedia to capture additional information about their quality rules, documenting a more robust rule definition and business impact in a manner that is easily accessible by rule owners.  Check out the following example presented during a recent ASUG presentation.



Beyond Business Terms to Business Concepts

Metapedia is not just for capturing the definition and categorization of words and phrases. Metapedia can be extended to meet the needs of almost any business concept.  For example, business measures and metrics can be managed in Information Steward Metapedia, leveraging custom attributes to store unique characteristics such as the actual calculation equation, the business process where they are used and the system of record the calculation is derived from.  Think about measures such as “Quarterly Sales Growth” and “Operating Income by Division” as well as metrics such as “On-time Delivery” and “Time-to-Develop”, and creating a central repository of not only common definitions of those measures and metrics, but also the attributes that explain how they are calculated.  Then, link those concepts back to the metadata objects involved in the calculation as well as the reports that display the results.


<Insert Your Use Case Here>

We would love to hear your ideas and use cases for where Metapedia does or could provide value as a central glossary of terms, business content or concepts.


Please share via the comments.

A new Information Steward FAQ page has been added to the Information Steward wiki.  Here is a look at the initial questions that have been posted and answered.


Please bookmark the page and the wiki for future reference as questions arise, and contribute additional answers to commonly asked questions.




SAP Information Steward and SAP Data Services and Data Quality are indeed inseparable and complementary solutions.  In this blog post, we are going to cover both an internal and external view as to why.  We will explore use cases, features and architecture that make these two solutions the very best of friends.


Typical Use Case Scenarios

Some of the typical use case scenarios where Information Steward and Data Services work together to provide a solution include ETL/Data Warehousing, Data Quality Management, Enterprise Master Data Management, IT/System Maintenance, Business Intelligence/Reporting and Data Migration.  The examples below show how the products fulfill use case requirements.


  • ETL / Data Warehousing: Information Steward analyzes source and target data to help with mappings and transformations; Data Services creates the data flows for extraction, consolidation/transformation and load.
  • Data Quality Management: Information Steward gives initial insight into data content to understand cleansing requirements; Data Services performs cleansing and matching, in batch and real time.
  • Enterprise Master Data Management: Information Steward gives initial insight into and continuous monitoring of master data quality; Data Services cleanses, consolidates and loads the master data repository.
  • IT / System Maintenance: Information Steward helps you understand the quality, impact and lineage of data (where is data used up/downstream?); Data Services moves data for system upgrades.
  • Business Intelligence / Reporting: Information Steward helps you understand the quality, impact and lineage of data (is the data fit for use?); Data Services populates the business warehouse for reporting.
  • Data Migration: Information Steward identifies different data representations and quality across systems; Data Services migrates data into the new system and merges acquired data.


Let's focus on two use case scenarios in particular: data warehousing and data migration.  For data warehousing, Information Steward is going to support you in analyzing your source data to understand what content is available at the source as well as the quality of that data.  Profiling results such as word, value and pattern distributions will help you understand the need for mapping tables, or perhaps standardization of the data during the ETL process.  In addition, advanced profiling can help you to identify referential integrity problems.  For example, Information Steward could highlight the fact that the ORDER_DETAIL table contains Part IDs that do not exist in the PARTS table.  With a data migration project, let's say one that arose as part of an acquisition, Information Steward will help you gain familiarity with the newly acquired source system through data profiling, helping you to understand:

  • Is the content in the newly acquired source system of a similar format, structure or type as your corporate system(s)?
  • Again, is there a need for mapping tables or data standardization to be used as a part of the data migration process?


You can also perform a data assessment by running the new source system against your already-established data standards and quality rules within Information Steward. If cleansing needs to occur on the source system due to poor quality, the Data Quality Advisor and Cleansing Package Builder can help you quickly and easily develop the needed cleansing and matching rules.  If there are duplicate customer or product records found across systems, those records (or a portion of them) can be manually reviewed with Information Steward's Match Review feature.


In both use case scenarios, Data Services is going to provide you the broad connectivity to databases, applications, legacy systems, and file formats that is needed to support your requirements for data extraction and loading.  Then, based on the results of the data profiling and assessment, Data Services can be used to transform and standardize the data from multiple sources to meet a common data warehouse or system schema.  Data Services can additionally be used to cleanse the newly acquired data to meet the quality standards your organization has in place.  De-duplication can be performed when redundancy needs to be eliminated while bringing together multiple sources of similar data.  And, in the case of that data warehouse, Data Services provides you the means to capture change in order to perform delta loads on a regular basis.


Complementary Features

When we focus specifically on Data Quality, Data Services and Information Steward are complementary solutions.  Below is what we like to call the "Data Quality Wheel“.



To start the process, you are assessing the data to identify issues and determine overall health.  And, on the back end, monitoring is in place to keep an eye on the ongoing health of the data.  This is where SAP Information Steward provides your solution.  Information Steward provides the platform for the business-oriented user to gain the necessary insight and visibility into the trustworthiness of their data, allowing them to understand the root cause of poor data quality as well as recognize errors, inconsistencies and omissions across data sources.


The next step in the process takes action with SAP Data Services Data Quality capabilities to guarantee clean and accurate data by automatically cleansing your data based on reference data and data cleansing rules, enhancing your data with additional attributes and information, finding duplicates and relationships in the data, and merging your duplicate or related records into one consolidated, best record.  Data Services enables the technical user with broad data access and the ability to transform, improve, and enrich that data.  This process can occur in a batch mode or as a real-time point of entry quality check against business requirements.


And, there is some overlap.  Information Steward additionally gives that business-oriented user the tools to help with improvement efforts – with intuitive interfaces for developing data quality rules as well as cleansing rules that work within a Data Services ETL data flow to improve enterprise data quality.  Information Steward also supports those business users in manually reviewing the results of the match and consolidation process, to spot check and validate duplicates that are flagged in Data Services as low confidence matches.


Let's look specifically at some of the product features that support the concept of sharing, the type of sharing that we would expect with best friends.


Sharing Validation Rules



Validation or quality rules defined in Information Steward to assess and monitor the quality of your information assets can additionally be published to Data Services to be included as part of a batch or real-time data flow, performing the same quality or consistency checks during various ETL activities.


Sharing Cleansing Rules


Information Steward’s Cleansing Package Builder empowers data stewards and data analysts to develop custom data cleansing solutions for any data domain using an intuitive, drag-and-drop interface.  Cleansing Package Builder allows users to create parsing and standardization rules according to their business needs and to visualize the impact of these rules on their data (as rules are being developed and as changes are being made).  Once the data analyst has developed the custom data cleansing solution, the Cleansing Package is published to Data Services for use with the Data Cleanse transform, which parses and standardizes incoming data into discrete data components to meet the defined business and target system requirements.



Information Steward's Data Quality Advisor guides data stewards to rapidly develop a solution to measure and improve the quality of their information assets. This is done with built-in intelligence to analyze and assess the data and make a recommendation on a cleansing solution, with the simplicity to allow a data steward to further review and tune the rules to get even better results.  When satisfied with the rules and results, the data steward can publish the data cleansing configuration to Data Services, allowing the IT developer to use, or consume, that solution within the context of a larger production data set and ETL data flow.


Reviewing Match Results


Information Steward’s Match Review is a critical step within the overall matching process. While the Match transform in Data Services provides a very comprehensive set of mechanisms to automatically match and group duplicate records, matching still remains a combination of art and science. While you can get close to accurate matching using the Data Quality Advisor (in Information Steward) and the Match transform (in Data Services), there may be results (a gray area) that would benefit from additional, manual review.  Information Steward provides a business user-centric interface to review suspect or low-confidence match groups that consist of duplicate or potentially duplicate records.  Based on their domain expertise, users can confirm the results of the automated matching process or make changes, such as identifying non-matching records.  In addition, business users can review and pick and choose fields from different records within the match group to fine-tune the pre-configured, consolidated best record.  The review results are available in the staging area; you can configure whether the results should be made available at the completion of the review task or incrementally as each match group is processed. The downstream job or process can read the results from the staging repository and integrate them into the target system.


Discovering Data Services Metadata

Information Steward’s Metadata Management capabilities discover and consolidate metadata from various sources into a central metadata repository. This allows users to manage metadata from various data sources, data integration technologies, and Business Intelligence systems, to understand where the data used in applications comes from and how it is transformed, and to assess the impact of a change from the source to the target, reports or applications.  SAP Data Services objects show up under the Data Integration category of Information Steward’s Metadata Management.  Information Steward has a native Metadata Integrator that can discover a vast array of metadata objects - including projects, jobs, work flows, data flows, data stores (source and target information), custom functions, table & column instances, etc. - as well as understand the relationships between these metadata objects and up- and downstream systems.



By performing data lineage analysis in Information Steward, we can see how data from the source has been extracted, transformed and loaded into the target using Data Services.  In this example, you can drill in to determine how LOS, or length of stay, was calculated and which source fields ultimately make up the patient name.


An Architectural Perspective


Architecturally, Information Steward and Data Services are inseparable in that Information Steward relies on Data Services.  They also have a lot in common: both leverage Information Platform Services (IPS).  Information Steward and Data Services both rely on CMS services for centralized user and group management, security, administrative housekeeping, RFC Server hosting and services for integrating with other SAP BusinessObjects software (i.e. BI Launch Pad).  A dedicated EIM Adaptive Processing Server is shared between Data Services and Information Steward.  Services deployed to the EIM Adaptive Processing Server include:

  • RFC Server - Used for BW loading and reading via Open Hub
  • The View Data and Metadata Browsing Services - Provide connectivity to browse metadata (show tables and columns) and view data
  • Administrator Service - Used for cleaning up the log files and history based on log retention period


In terms of being inseparable, the Data Services Job Server is required for the Information Steward Job Server to work.  With this dependency also comes benefit, as Information Steward is able to leverage the capabilities that Data Services has to offer.  For example, Information Steward scales by leveraging Data Services' ability to distribute workload across servers as well as across CPUs.  In addition, Information Steward leverages Data Services for direct access to a broad range of source connectivity, including direct access to SAP sources like SAP ECC.  Information Steward uses Data Services as its core engine not only to access data but also to execute profiling and validation rules against that data.  With that being said, Data Services' and Information Steward's source connectivity capabilities closely mirror each other, where it makes sense and where there are no technical limitations in doing so.


So, what do you say?  SAP Information Steward and Data Services: inseparable, complementary, best friends...?  How about, they complete each other?  In any event, what a match!

Hi Team,


I'm getting an error when I view data on IS view. This error occurs on QA and PROD environments but it is fine on DEV.


Please see screenshot attached.


Please assist.


Thank you.


In this blog, we will explore how Metapedia and the native Metadata Integrators bundled with Information Steward can support you in managing your SAP landscape’s metadata.

In general, Information Steward Metadata Management collects metadata information from your enterprise systems, information such as:

  • Attributes (name, size and data type)
  • Structures (length, fields and columns)
  • Properties (where it is located, how it is associated, and who owns it)
  • Descriptive information (about the context, quality and condition, or characters of the data)


Information Steward then organizes metadata objects to allow you to:

  • Browse, search and explore the metadata
  • Understand relationships between different objects within the same source and across different sources
  • Customize metadata objects and relationships with annotations and custom attributes and relationships




SAP BusinessObjects Enterprise and Some Basics on Data Impact/Lineage and Metapedia


In support of your SAP BusinessObjects Enterprise environment, the Metadata Management module of SAP Information Steward can discover metadata about universes, reports (including Crystal Reports, Web and Desktop Intelligence documents), dashboards, CMS folders and systems, server instances, application users and user groups, and SAP BW systems.  And, for each object, there is additional associated metadata.  For example, for a universe, the associated metadata may include queries, universe classes and connections, objects, and filters.  For reports, metadata could include universe objects, InfoObjects, queries, report fields, columns, variables, SQL expression fields, formula fields, and running total fields.




Impact analysis will allow you to view the objects that are affected by data within a particular object.  For example, the rather simple impact diagram above shows that the Calculate Totals report field impacts the Charts and Dials report.  When you hover your mouse over an element in the impact diagram – in this case the Calculate Totals field – additional information about that metadata object appears.




The impact analysis for a universe object lists the objects that are affected by the data within the universe object.  In this example, you can see that two reports, InvoiceSummary and Revenue, are affected by the data in the “Revenue Amt Func” universe object.  You can also take note of the report consumers, both users and user groups, that have permission to access these particular reports.  And, why is this important?  Simple.  It answers the question: what is the downstream impact if I change this universe or that query?  And, this includes not only what it impacts, but also who and how many.
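Conceptually, this kind of impact analysis is a reachability question over a directed metadata graph: start at an object and follow "feeds data to" edges downstream. The sketch below is not Information Steward's implementation, just a minimal illustration of the idea; the edge list mirrors the hypothetical example above (a universe object feeding two reports, one of which is accessed by a user group).

```python
from collections import defaultdict, deque

def downstream_impact(edges, start):
    """Return every metadata object reachable downstream of `start`.

    `edges` is a list of (source, consumer) pairs; a breadth-first
    traversal collects all transitive consumers of `start`.
    """
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical edges modeled on the example in the text: the
# "Revenue Amt Func" universe object feeds two reports, and the
# InvoiceSummary report is consumed by a user group.
edges = [
    ("Revenue Amt Func", "InvoiceSummary"),
    ("Revenue Amt Func", "Revenue"),
    ("InvoiceSummary", "Sales User Group"),
]

print(sorted(downstream_impact(edges, "Revenue Amt Func")))
# → ['InvoiceSummary', 'Revenue', 'Sales User Group']
```

Lineage analysis is simply the same traversal with the edges reversed: instead of asking "who consumes this object?", you ask "where does this object's data come from?".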




Data Lineage enables you to view a list of sources from which an object obtains its data.  In this example, the report lineage diagram shows that the universe objects in the BOE Business Intelligence system came from tables that were loaded by the DS_Repo_RapidMarts Data Services data integration job.  The dashed lines between each column name in the BOE Business Intelligence and DS_Repo_RapidMarts systems indicate that the columns are the same (a Same As relationship). And, you could explore the data lineage further to see the source database or business warehouse of the Data Services data flow.




Adding to the potential for insight, SAP Information Steward Metadata Management information can be accessed directly from within BI Launch Pad to view the data lineage of a Crystal report or Web Intelligence document, enabling direct access for report developers and consumers to understand where the data is coming from and how that data is being transformed.




And, not only can you help report developers and consumers to understand where the data is coming from, you can additionally instill a degree of trust in that data by allowing them to see how good the data really is. The lineage information provided via the BOE and Information Steward integration includes and highlights the quality scores of the specific data assets and allows the user to drill into those scores to see the details as to the data quality rules, failed data as well as profiling results, if these rules and results are available.




Metapedia terms can also be associated with report metadata objects.




And again, a link within the BI Launch Pad allows BI users to access the business terms that have been associated with a particular Crystal Report or Web Intelligence document directly from the BI Launch Pad.  This promotes a common understanding of business concepts and terminology through Metapedia as your central location for defining a standard business vocabulary (words, phrases, or business concepts).  So, why might you want to start a Metapedia initiative?  Well, think of your report/dashboard consumers, especially if those consumers are across multiple lines of business.  For example, let’s say Human Resources has created a new report that displays information about Org. Units, Dept. Units, Functional Areas, Career Grades, and Levels.  Wouldn't it be great if consumers outside of the HR team could gain access to the accepted and "HR-approved" understanding of each of the concepts represented on the report?  Or, looking outside of the reporting world, think about a data migration project to bring two companies' data assets together.  As you sit down together and map common data elements, what a great asset to be able to capture the common understanding of the data – data that may be technically named differently – in business terminology and link that definition back to the disparate sources.




Continuing with the data migration example, what if you want to expose your central repository of business terms to additional applications or locations to promote a common understanding across the two newly joined companies?  Good news!  Metapedia content can also be accessed via web services, which include APIs that support searching Metapedia repository terms, descriptions, authors, synonyms, categories, etc.  Above is an example MS-Word plugin created using the Metapedia web service API (see the Information Steward Developer's Guide for more information).
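To make the idea of a plugin calling the Metapedia web service a little more concrete, here is a minimal sketch of building a SOAP request for a term search. The operation and element names (`searchTerms`, `keyword`) are hypothetical placeholders, not the actual Metapedia WSDL; consult the Information Steward Developer's Guide for the real operation names and endpoint.

```python
from xml.sax.saxutils import escape

def build_term_search_envelope(keyword):
    """Build a SOAP 1.1 envelope for a hypothetical Metapedia term search.

    Only the general request shape is shown; `searchTerms` and `keyword`
    are illustrative names, not the documented Metapedia API.
    """
    return (
        '<soapenv:Envelope '
        'xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soapenv:Body>"
        "<searchTerms>"  # hypothetical operation name
        f"<keyword>{escape(keyword)}</keyword>"  # escape reserved XML chars
        "</searchTerms>"
        "</soapenv:Body>"
        "</soapenv:Envelope>"
    )

print(build_term_search_envelope("Functional Area"))
```

A client such as the MS-Word plugin would POST an envelope like this to the service endpoint and parse the matching terms, descriptions, and synonyms out of the SOAP response.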



SAP NetWeaver Business Warehouse (SAP NW BW)


Okay, so we are going to dive back down into the metadata with a look at SAP NetWeaver Business Warehouse (SAP NW BW).  Besides relational databases and data warehouses, BOE universes and reports can also access data from SAP NW BW.  However, SAP NW BW is a “black box” for BOE BI users: it is not possible for them to see how the data crosses between the BOE BI and SAP NW BW environments.  The Information Steward SAP NW BW metadata integrator removes the barrier between these two environments (BI to BW) by exposing the objects inside the SAP NW BW environment, thus providing the transparency and traceability needed.  This allows questions such as "If I change the definition of a specific SAP NW BW object, what universes or reports are affected?" or "From what SAP NetWeaver BW source does the universe obtain its data?" to be answered.  The SAP NW BW metadata objects and relationships supported by the Information Steward SAP NetWeaver Business Warehouse metadata integrator are displayed in the light-blue boxes in the above diagram.





The Information Steward HANA metadata integrator has the ability to collect standard relational objects and information models in HANA.  It collects information about service instances, databases, packages, views (including Attribute Views, Analytic Views, and Calculation Views), tables, columns, measures, variables, etc.  It also collects the relationships between schemas, tables, and columns, as well as between attributes and measures, in your SAP HANA database sources.  And, of course, all of the relationships upstream and downstream of your HANA instance.



SAP Data Services


SAP Data Services objects show up under the Data Integration category of Metadata Management.  Metadata objects applicable for Data Services include projects, jobs, work flows, data flows, datastores (source and target information), custom functions, table and column instances, etc.




With Data Services data lineage analysis, we can see how data from the source has been extracted, transformed, and loaded into the target.

In the example above, you can see the ETL process in action, following the data from report to source.  Specifically, you can drill in to determine how LOS, or length of stay, was calculated and what source fields ultimately make up the PATIENT_NAME.




While analyzing the lineage of a Data Services integrator source column, you can view the Data Services Auto Documentation report for the associated data flow objects.  The Auto Documentation report allows you to see another view of the ETL process.




If you click the data flow hyperlink, it will launch the Data Services Management Console and allow you to navigate to the data flow details.



SAP PowerDesigner

The Information Steward SAP PowerDesigner integrator is new with the Information Steward 4.2 release.  With this out-of-the-box capability, users have access to all the metadata related to PowerDesigner, improving collaboration between Data Modelers and Data Stewards by extending impact and lineage analysis to the design models available in PowerDesigner.  Once the current state (the operational view) has been aligned with the architectural view, Data Stewards can inform the Data Modelers and architects where the root of quality concerns comes from, so that these quality concerns can be addressed at the source as the next generation of business applications is designed.  Data Stewards also have easy access to the data quality rules and domains defined as part of these PowerDesigner models, which they can leverage to implement actual validation rules within Information Steward.  In addition, the business terms defined in PowerDesigner can be integrated with Information Steward’s Metapedia so that all business concepts are captured in a central location.




PowerDesigner metadata is collected for conceptual, logical, and physical data models.  In the image above, the left side shows the view in the PowerDesigner client and the right side shows the corresponding objects in Information Steward.  Note that the intent is not to replicate every possible object from PowerDesigner to Information Steward.  Only the basic properties of conceptual and logical models are captured, along with the relationships between the conceptual, logical, and physical models.  The connecting entity between PowerDesigner design-time metadata and Information Steward operational metadata is the physical model, so that is where the focus is.  Details about basic properties, physical diagrams, business rules, domains, references, tables, and server instances are collected for PowerDesigner.




The above is an example of how the impact/lineage diagram shows up.  In this example, the database was created using a script generated by PowerDesigner itself.  On the BOE side, there was a universe built on top of that database, which is being used by the report.  On the PowerDesigner side, there were domains and business rules that were associated with a few columns being used by the report fields. Hence, the lineage is shown as report > report fields > universe objects > PowerDesigner columns > domain/rules.




The Business Glossary in PowerDesigner is very similar to Information Steward's Metapedia, so it is very easy to map concepts from one to the other.  You can import the content of PowerDesigner's Business Glossary into Metapedia. If the glossary terms were associated with other objects in PowerDesigner, those associations are maintained in Metapedia as well.





So, what about the SAP Business Suite?  Well, there is more work to be done here specific to Information Steward's Metadata Management capabilities.  Currently, Information Steward gives you native connectivity to SAP ECC within Data Insight for data profiling and data validation.  This gives you access to browse SAP ECC metadata down to the column level; similar capabilities exist within Data Services.  It also allows you to relate your Metapedia business terms to SAP ECC metadata (we covered this capability, object association, earlier).  However, in terms of Metadata Management and the ability to discover objects and relationships all the way to the SAP Business Suite, this item is currently on our Information Steward roadmap (SMP Roadmaps, go to the Database & Technology area).  The goal is to provide complete metadata management for your SAP landscape, from data definition (via PowerDesigner) all the way to your operational systems and business processes (SAP ECC).  Watch for more great capabilities to come with Information Steward's Metadata Management!

Hi Team,


How do I display a change log in SAP IS or if something has been removed/deleted, how do I recover it?


Example, I created a View and next thing it has been removed/deleted.


Please help.


Thank you.



