Adaptive Processing Server DSL max heap and Number of WebI Processing Servers

former_member196781
Participant

Hello,

While reading the BI4 Sizing guide again after some time, I was wondering about two things:

- The info about APS DSL maximum heap sizing (-Xmx), which appears in several places, e.g. page 40:

"Memory is calculated based on the number of expected active concurrent users as follows:

0.25 GB per active concurrent user

8 GB minimum

30 GB maximum"

This is strange, since in the openSAP training "BI 4 Platform Innovation and Implementation" we were told that the maximum heap size for any APS shouldn't be higher than 8 GB, to avoid garbage collection issues.

But when looking at the given calculation formula, our APS DSL should have around 12.5 GB.
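For illustration, here is the guide's formula as a quick Python sketch (the clamping to the 8 GB / 30 GB bounds is my reading of the guide; the 50-ACU input is our own figure):

def dsl_aps_heap_gb(active_concurrent_users: int) -> float:
    # Sizing guide formula: 0.25 GB per active concurrent user,
    # clamped to the stated 8 GB minimum and 30 GB maximum.
    return min(max(0.25 * active_concurrent_users, 8.0), 30.0)

print(dsl_aps_heap_gb(50))   # 12.5 -> our case
print(dsl_aps_heap_gb(20))   # 8.0  -> minimum applies
print(dsl_aps_heap_gb(150))  # 30.0 -> maximum applies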

What is your experience with that?

- The number of Web Intelligence Processing Servers (WIPS), e.g. page 39:

"When adding additional WebI instances, it is recommended that each instance run on a separate

machine in order to maximize the I/O capacity available to the servers."

Because we had a lot of issues with WebI performance in the past, we were advised by SAP consulting to increase the number of WIPS on our BIP. This should also ensure (better) load balancing.

Now we have two WIPS for DEV/QA and four for PRD, but these are all single-node systems.

What is your experience with this?

Is the mentioned I/O bottleneck really there when running more than one WIPS on the same node?

On the other hand: are any issues possible when having only one WIPS in the PRD deployment (load balancing, etc.)?

Since we have around 50 ACU, one WIPS should be enough according to the sizing guide, but I'm wondering about the side effects.

(We are running BI 4.1 SP2 Patch 1 on Windows Server 2008 R2.)

Best Regards

Moritz

Accepted Solutions (1)


Hi Moritz,

With regard to your APS question: these openSAP courses are pretty good. If the trainer told you that 8 GB should be the maximum for any APS, then that is fine. To be honest, I have never configured more than 8 GB per APS either.

If, based on your calculation, you need to address 12.5 GB of RAM, then I would recommend configuring two APS services hosting the DSL Bridge. For better load balancing and fault tolerance you could configure two DSL APS services per node.
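To put numbers on that (my own arithmetic, not from the sizing guide): splitting the calculated heap across two instances keeps each one under the 8 GB mark from the training.

# Split the sizing-guide heap requirement across multiple APS instances.
def heap_per_instance_gb(total_heap_gb: float, instances: int) -> float:
    return total_heap_gb / instances

print(heap_per_instance_gb(12.5, 2))  # 6.25 GB per DSL Bridge APS, below 8 GB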

I never had major I/O issues when running multiple WIPS on one physical/virtual host, so I can't really comment on that.

By default, one WIPS handles 50 sessions. If you have 50 ACUs, that would be fine. I would rather decrease those 50 sessions to 25 per WIPS and then run multiple WIPS.
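As a rough sketch of that arithmetic (the helper is hypothetical; the cap corresponds to the Maximum Connections setting on the WIPS, 50 by default):

import math

# WIPS instances needed so each stays within its connection cap.
def wips_count(active_concurrent_users: int, max_connections: int = 50) -> int:
    return max(1, math.ceil(active_concurrent_users / max_connections))

print(wips_count(50))      # 1 WIPS at the default cap of 50
print(wips_count(50, 25))  # 2 WIPS with the cap lowered to 25, as suggested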

Hope this helps.

Regards

-Seb.

former_member196781
Participant

Hi Sebastian,

Thanks for the info provided!

Regards

Moritz

Answers (2)

Former Member

Hi Moritz,

I think the best way to size your environment is through load testing and using different tools to monitor the platform.

I would use jVisualVM to monitor the heap on the APS servers, and depending on what you load test, I'd monitor that APS. For example, I would monitor the DSL Bridge APS while load testing WebI, while at the same time monitoring the I/O, CPU and memory of the actual machine. Ideally you want those to be used as much as you can without causing a bottleneck. Having one WebI server on one machine which is 100% busy is great, except if you have 16 cores of which 15 are not doing anything. You could add a few more WebI servers to help distribute the load, maximise the usage of the machine, etc.
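If you want a scriptable complement to jVisualVM while the test runs, something like this can log machine-level usage (a rough sketch; psutil is a third-party Python library, my choice rather than anything BI-specific):

import time
import psutil  # third-party: pip install psutil

# Log machine-level CPU, memory and disk I/O once per second during a load test.
prev = psutil.disk_io_counters()
for _ in range(60):  # one minute of samples
    time.sleep(1)
    io = psutil.disk_io_counters()
    print("cpu=%5.1f%% mem=%5.1f%% read=%d B/s write=%d B/s" % (
        psutil.cpu_percent(),
        psutil.virtual_memory().percent,
        io.read_bytes - prev.read_bytes,
        io.write_bytes - prev.write_bytes))
    prev = io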

In the end, a lot of the information that is out there is really just guidance. It's a starting point, and you need to use the tools to your advantage to get the best out of your platform.

--

Patrick

former_member196781
Participant

Hi Patrick,

Thanks for the info.

Krgds

Moritz

Henry_Banks
Product and Topic Expert

Hey,

I have (in some cases) had to whack up the -Xmx to some silly large numbers (e.g. 30 GB on an Explorer indexing server) to see if it could 'swallow the pill'.

In that case, it didn't, and so I had to review the kind of requests that were being expected of the tool.

In your situation, I don't envisage a problem with a DSL APS of 10 GB or above. But you're really going to want two, and ideally you want your PROD architecture to be a (properly) fault-tolerant cluster.

regards,

H

former_member196781
Participant

Hi Henry,

Thanks for the info!

I will proceed with the fault-tolerant approach and set up a second DSL APS.

Regarding the WIPS:

What is your recommendation?

Having one WIPS with 50+ connections to accommodate the 50 ACU, because having multiple WIPS on the same node can cause I/O issues,

or

having two WIPS with 25+ connections each, for fault tolerance/load balancing, while accepting the possible I/O issues that could occur from running two WIPS on the same node?

Regards

Moritz

Former Member

Hi Moritz,

First thing: multiple WIPS should not cause I/O issues. There's no more disk activity than with one large WIPS, and the comms activity should not be much.

I would go for two WIPS on your single server with the default 50 connections and monitor it. It will all depend on the usage: report sizes, complexity, frequency of activity, etc.

No need to limit it until you hit issues, I would say.

former_member196781
Participant

Hi Bill,

Thanks for the info.

I have reduced the number of WIPS from 4 to 2 with the default 50 connections, and at first glance (after ~two weeks) everything is working fine.

Regards

Moritz