Our goal is to copy a BPC appset from our QA to our development server using the BPC NW 7.5 Appset Backup/Restore -
(tcode UJBR or program UJT_BACKUP_RESTORE_UI)
Issue: The restore of the appset to BWD works properly for transaction data and master data, but we are having issues restoring the metadata. The restore program aborts with a short dump when metadata is selected (we tried the restore with transaction/meta/master data together and with metadata alone).
This is more than likely a resource issue with our development server, since the same restore succeeds on the production server. However, we have 86 GB of memory on the DEV server and our appset is not that large (the metadata file is only about 1 GB). We suspect the issue may be related to the fact that we run in a virtual environment (VMware), which causes differences in memory management.
1. What BPC areas / tables are backed up under the metadata option of the restore program UJT_BACKUP_RESTORE_UI?
2. Is there a way to optimize the metadata restore, e.g. run it in multiple steps?
3. Any other suggestions or workarounds to let us complete this restore into BWD? (We would rather avoid doing a system refresh from BWQ to BWD since we have a lot of development in progress in BW Dev.)
Thank you in advance,
Yes, there are some issues with the restore process at certain support package levels. If you are on an SP below maybe SP06, there are issues with the way large data volumes are handled, and that can cause memory problems. The biggest problem we have seen thus far is when the customer has a lot of data manager data files, with very large contents, in the file service. So the first thing I would suggest is to remove the large data files from the BPC file service on the source system, run another backup, and then try the restore again with the new backup file.

You can get an idea of which files are large by looking at the UJF_DOC table in SE16. There is a field called DOC_CONTENT_DB; if this field contains a table name, the document is a data manager data file. The DOC_LENGTH field gives the number of rows contained in that table, so the files with the largest values should be removed first. You can find the file name and path in the DOCNAME field. You can then either delete them from the front end, or use transaction UJFS to delete them directly.
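To make the triage step concrete, here is a minimal sketch (in Python, not ABAP) of the selection logic described above: given UJF_DOC rows exported from SE16, keep only the rows where DOC_CONTENT_DB holds a table name and rank them by DOC_LENGTH. The field names (DOCNAME, DOC_CONTENT_DB, DOC_LENGTH) come from the post; the helper function and the sample rows are purely illustrative, not SAP code.

```python
# Illustrative sketch: rank exported UJF_DOC rows to find the largest
# data manager data files. Field names are from the post; the function
# name and sample data are hypothetical.

def largest_dm_files(ujf_doc_rows, top_n=10):
    """Return up to top_n data manager data files, largest DOC_LENGTH first."""
    # A row represents a data manager data file when DOC_CONTENT_DB
    # holds a (generated) table name rather than being empty.
    dm_files = [r for r in ujf_doc_rows if r.get("DOC_CONTENT_DB")]
    # DOC_LENGTH is the number of rows stored in that table.
    dm_files.sort(key=lambda r: int(r["DOC_LENGTH"]), reverse=True)
    return [(r["DOCNAME"], int(r["DOC_LENGTH"])) for r in dm_files[:top_n]]

# Example with made-up rows (paths and table names are invented):
rows = [
    {"DOCNAME": "/WEBFOLDERS/APSHELL/DataManager/big_load.txt",
     "DOC_CONTENT_DB": "/1CPMB/ABC123", "DOC_LENGTH": "2500000"},
    {"DOCNAME": "/WEBFOLDERS/APSHELL/Reports/report.xls",
     "DOC_CONTENT_DB": "", "DOC_LENGTH": "120"},
    {"DOCNAME": "/WEBFOLDERS/APSHELL/DataManager/small_load.txt",
     "DOC_CONTENT_DB": "/1CPMB/DEF456", "DOC_LENGTH": "900"},
]
print(largest_dm_files(rows))
```

The largest entries in the result are the deletion candidates; the DOCNAME tells you what to look for in the front end or in UJFS.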
All data in the UJ* tables, as well as the generated /1CPMB/* tables, is backed up when backing up metadata. This includes work status locks, the file service, journals, audit data, etc. Most of the time spent during the restore actually goes to restoring the data manager data files, so if you get rid of those, the metadata restore should be a lot faster as well.
Hope this helps!
Thank you for your quick and detailed response Rich (and taking time away from your vacation to answer). That was very helpful as this process does not seem to be well documented.
I have not yet been able to confirm that your suggestion worked, as I have not deleted all the files/documents due to the large number involved (over 500).
Issue: In UJFS I can only delete one file at a time (each delete takes over 30 seconds), and I do not see an option to select/delete multiple files/documents. I also tried to delete the document-related records directly in table UJF_DOC, but delete is not an option there (greyed out).
1. Is there a way to delete multiple documents from UJFS?
2. Is there a table from which I can do a mass delete of these documents?
3. Where in the BPC front end can you delete documents, and can you delete multiple documents?
Thank you again,
You may test these two programs/transaction codes:
UJF_FILE_SERVICE_DLT_DM_FILES - deletion of DM files
ZUJF_DLT_DM_LOG_FILES - deletion of log files
If you would like to delete the DM files from the BPC front end, an option you may test would be: Data download, browse to the source file folder, highlight the files, and delete.
Thank you Rich and Al for your help and detailed instructions.
Sorry for the delay in responding. My question has been answered and we were finally able to restore the application set.
The deletion of the logs and DM files helped us significantly reduce the size of the file to be restored.
Happy New Year !
Quick question about performance.
We now have two appsets in production:
1- The original productive appset (financial planning) used by the users - 2+ million records
2- A copy of the productive appset (using the BPC appset backup/restore function) - 2+ million records
The copy appset may seem redundant, but the benefit is that it can be used by the BPC developers for testing and development in production (due to transport issues with BPC).
a/ Do you anticipate any significant performance impact for BPC reports and/or templates if we keep/use a copy of the productive BPC appset in production?
b/ Since all templates are loaded into the cache when logging in to BPC, is the login time increased/doubled because we have twice as many templates (the original and the copy)?