
Hope this simple guide will help when you want to schedule PAS Administrator load procedures.

Guide to scheduling loading processes within PAS Administrator

You must create a dedicated user to run the scheduled load procedures (this is the recommendation in the official SAP manual); the user must have administrator permissions (security level = supervisor).

For procedures that run automatically and load only the current period, that is, data for the current day or month (depending on the frequency), two lines of code are used: the first indicates whether the period is monthly, daily, etc., with SET PERIOD MONTHLY, SET PERIOD DAILY, and so on, and the second tells it to pull the current month or day with SET PERIOD TODAY.

These lines go in the load procedure, in the section where the KPI, the dimensions to refresh, and the time period to be loaded are specified, and before the connection and the query.
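
For example, taking the commands named above at face value, the pair of lines for a monthly load would look like this (the rest of the procedure, the KPI, dimensions, connection and query, is not shown here):

SET PERIOD MONTHLY
SET PERIOD TODAY

or, for a daily load:

SET PERIOD DAILY
SET PERIOD TODAY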

You must create a file (or several) with the instructions to execute during the load / scheduling. To make things easier, this file is placed in the home directory location within the PAS Administrator. For Muniguate, the full path and name of the test command file is:

C:\Program Files\SAP BusinessObjects\Strategy Management\ApplicationServer\home\ejecutarprocedures.batch

If you do not put it in this HOME directory, then when scheduling it or running it from the console you must pass the full path and not just the name. This file contains the script you want to schedule, and it can contain multiple calls to load procedures, not just one, so you could call all the procedures from one file, or create a separate file for each procedure (a minimal single-procedure file is sketched further below).

The script should contain the same instructions for running procedures that you would use from the PAS Administrator command line, i.e. USE to select which database / cube to use and JOB to execute a procedure within that cube. In addition, before selecting the database / cube, the script requires that all connections to the cube be killed, which is achieved with:

SUP KILL CONN TO cube_name

You can run procedures for multiple cubes within a single file; you only need one SUP KILL CONN TO and one USE EXC for each cube.



When you finish making the procedure calls, indicate the end of the log and of the file with the following lines:

TRACE OFF BOTH
EXIT CLEAR
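
As a minimal sketch of the "one file per procedure" option mentioned earlier, a file that runs a single procedure against a single cube could contain nothing more than the following (the log file name, cube name and procedure name are placeholders, and the command forms are the ones used throughout this guide):

TRACE BOTH 'cargadiaria.txt' UPDATE
SUP KILL CONN TO cubo1
USE EXC cubo1
JOB procedimiento1
TRACE OFF BOTH
EXIT CLEAR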




Below, you can see the outline or structure of a file that executes various procedures within multiple cubes, as follows:

... Turn on log generation and name the log file

TRACE BOTH 'ejecutarprocedures.txt' UPDATE

... Make sure there are no connections to the cube that will be updated

SUP KILL CONN TO cubo1

... Use the model in exclusive mode

USE EXC cubo1

... Call the procedures to be executed in the cube

JOB procedimiento1
JOB procedimiento2
JOB procedimiento3

... Call more procedures

... Perform steps similar to the above to run procedures in another cube

SUP KILL CONN TO Cubo2

USE EXC Cubo2

JOB procedimiento4
JOB procedimiento5

... Stop log generation

TRACE OFF BOTH

... Exit and clean up

EXIT CLEAR