on 11-24-2014 2:09 PM
Dear Experts,
reuse count value for "number of open objects" is higher, which indicates to increase the parameter value.
But "sp_countmetadata" says the server already have more than required (150000 configured)
Does the server need our intervention here?
Also, any comment on "permission cache entries" from the below output will be highly appreciated.
1> sp_countmetadata "open objects"
2> go
There are 98743 user objects in all database(s), requiring 325538 Kbytes of memory. The 'open objects' configuration parameter is currently set to 150000
1> sp_monitorconfig "all"
2> go
Usage information at date and time: Nov 24 2014 5:52PM.
Name Num_free Num_active Pct_act Max_Used Reuse_cnt Instance_Name
------------------------- ----------- ----------- ------- ----------- ----------- -----------------------------------------------------------------------
number of open indexes 31131 168869 84.43 188078 0 NULL
number of open objects 124 149876 99.92 150000 18585 NULL
number of open partitions 51131 168869 76.76 188078 0 NULL
number of remote connecti 20 0 0.00 2 0 NULL
number of remote logins 20 0 0.00 2 0 NULL
number of remote sites 10 0 0.00 1 0 NULL
number of sort buffers 10976 10144 48.03 21120 0 NULL
number of user connection 102 148 59.20 186 0 NULL
number of worker processe 32 0 0.00 0 0 NULL
partition groups 1024 0 0.00 0 0 NULL
permission cache entries 89 39 30.47 57 124361 NULL
procedure cache size 3375482 1867398 35.62 2329224 0 NULL
ASE 15.7 SP122 on AIX 7.1
Regards,
Rajesh
Well, one problem with sp_countmetadata is that its result depends on when it is run. For example, lightweight stored procs (LWPs) are objects, and since SAP uses a ton of fully prepared statements, this can cause the 'number of open objects' requirement to fluctuate quite a bit. In addition, temp tables (including work tables) also have an impact. So in your case, I would go with the output of sp_monitorconfig "all".
One thing that might help is to limit each NW worker process to 300 statements in cache...this might benefit you in several ways - one of which might be to reduce the amount of reuse.
As far as reuse goes, remember that what you are configuring is how much internal memory ASE will allocate to track certain metadata structures. Obviously, you could try to allocate enough for every table/index/rule/default/LWP, etc., but you would waste a lot of memory. Sometimes a rarely used object is accessed and grabs a metadata structure, and when it is done, the structure is put on the scavenge list. For example, you may have a table that is only queried once a day; dedicating a metadata descriptor to it is a bit of a waste. When the query is done, that table's structure sits on the scavenge list, and when ASE needs a structure for another table, it might "reuse" the one you just finished with. In a case like that, reuse is no real big deal.
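To make the mechanism concrete, here is a minimal sketch of how a fixed pool of descriptors with a scavenge list behaves. This is illustrative pseudocode of the idea described above, not actual ASE internals; all names (DescriptorPool, scavenge, reuse_cnt) are made up for the example.

```python
# Illustrative model: a fixed descriptor pool where idle descriptors go
# onto a scavenge list and may be reused for other objects, bumping a
# reuse counter (loosely analogous to Reuse_cnt in sp_monitorconfig).
from collections import OrderedDict

class DescriptorPool:
    def __init__(self, size):
        self.size = size               # analogous to 'number of open objects'
        self.active = {}               # objects with an in-use descriptor
        self.scavenge = OrderedDict()  # idle descriptors, oldest first
        self.reuse_cnt = 0

    def open(self, obj):
        if obj in self.active:
            return
        if obj in self.scavenge:       # still cached: reactivate, no reuse
            self.scavenge.pop(obj)
            self.active[obj] = True
            return
        if len(self.active) + len(self.scavenge) < self.size:
            self.active[obj] = True    # a free descriptor is available
            return
        if self.scavenge:              # reuse the oldest idle descriptor
            self.scavenge.popitem(last=False)
            self.active[obj] = True
            self.reuse_cnt += 1
            return
        raise RuntimeError("all descriptors active; cannot open object")

    def close(self, obj):
        if self.active.pop(obj, None):
            self.scavenge[obj] = True  # park the descriptor for scavenging

pool = DescriptorPool(size=2)
pool.open("t1"); pool.close("t1")
pool.open("t2"); pool.close("t2")
pool.open("t3")            # pool full, so t1's idle descriptor is reused
print(pool.reuse_cnt)      # 1
```

Note that reuse here only evicts idle descriptors; the painful case is when everything is active and nothing can be scavenged, which is when the errorlog complaints appear.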
So I suspect a lot of the reuse in your case is due to reuse of object descriptors used by LWPs. I could be wrong; there is no way to tell for certain. However, if you start seeing errors about it in the error log, then it is really bad, and for 'open objects' in your case I might be tempted to bump it up. For the permissions cache, that is usually per SPID, so I am not sure what the numbers above are showing: the average across all SPIDs or just the current SPID? I think the former. Regardless, if you think about it, 100 permission cache entries means that as long as a process doesn't access more than 100 different objects, the permissions are in cache and we don't have to go look them up. However, considering the schema size, any NW process (especially a dialog process) will likely hit thousands of tables. Would we want to cache the permissions for all of those tables? Nah, it would waste a ton of memory. So we settle on a number that lets us cache quite a few tables, so that if we hit the same set of tables within a logical unit of work, we don't keep looking up the same permissions each time. Yes, that means we will probably see a lot of reuse for the permission cache as dialog processes move from one logical unit of work to another. 100 is a good starting number; only you know how big your logical units of work are and how many objects they access, so maybe you need 150 or maybe only 50, but based on the above I wouldn't change it.
Essentially - "reuse" is normal. What you don't want is so much reuse that it affects other applications. For example, to reuse a table's metadata structure, the table has to be flushed from cache. This will be reported in the errorlog as a DES scavenge - and is definitely a sign that the number of open objects is too low.
The general rule to avoid that situation: any time sp_monitorconfig shows Pct_act north of 90% over several samples (over several days), consider increasing the value; if the peak usage (Max_Used) is below 50% of the configured value, you could consider reducing it by 10-20%. In your case, I might increase 'number of open objects'. I would watch 'number of open indexes' as it is getting up there. And 'permission cache entries' of 100 might be a tad high, but reducing it is not really advisable at this stage: the max of 57 is ~60%, and having 100 gives you some slack for any surge of activity caused by an errant app. You can find out via sp_configure how much memory it consumes at 100, and how much you would gain back by reducing it to 80; unless you gain enough to substantially benefit the data cache (e.g. 10 MB or more), it probably isn't worth it.
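That rule of thumb can be written down as a small decision function. This is just a sketch of the heuristic from this post (the 90% and 50% thresholds are the rough guidelines above, not official SAP sizing advice), applied to two rows from the sp_monitorconfig output:

```python
# Sizing heuristic from the post: increase when Pct_act stays above 90%
# across samples; consider reducing when peak Max_Used is under half of
# the configured value; otherwise leave the parameter alone.
def sizing_advice(samples, configured):
    """samples: list of (pct_act, max_used) tuples from repeated
    sp_monitorconfig runs; configured: current parameter value."""
    if all(pct > 90.0 for pct, _ in samples):
        return "increase"
    peak = max(max_used for _, max_used in samples)
    if peak < 0.5 * configured:
        return "reduce by 10-20%"
    return "leave as-is"

# 'number of open objects' from the output above: 99.92% active, peak 150000
print(sizing_advice([(99.92, 150000)], 150000))  # increase
# 'permission cache entries': 30.47% active, peak 57 against 100 configured
print(sizing_advice([(30.47, 57)], 100))         # leave as-is
```

In practice you would feed in several days of samples, not a single row, before acting on the advice.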
Hi Jeff,
Thank you for the detailed explanation.
I did not follow this part: "One thing that might help is to limit each NW worker process to 300 statements in cache". Can you please explain?
1> sp_configure "open objects"
2> go
Parameter Name Default Memory Used Config Value Run Value Unit
---------------------------------------------------------------------------------------------------------------------------
number of open objects 500 296748 150000 50000 number
Here, Memory Used is 296748 KB (~290 MB), right?
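The arithmetic checks out; a quick sanity check on the numbers from the sp_configure output above (sp_configure reports Memory Used in KB):

```python
# Values taken from the sp_configure "open objects" output in this thread.
memory_used_kb = 296748
configured = 150000

print(round(memory_used_kb / 1024))         # ~290 (MB)
# Rough per-descriptor cost at the configured value; this is a derived
# back-of-the-envelope figure, not a documented ASE constant.
print(round(memory_used_kb / configured, 2))  # ~1.98 (KB per open object)
```

So each additional open object descriptor costs roughly 2 KB, which is handy when estimating the memory impact of raising the parameter.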
I received the following error in the ASE errorlog 25 times since Nov 4, 2014
Increase the config parameter 'number of open objects' to avoid descriptor reuse. Reuse may result in performance degradation.
So I've decided to increase the 'open objects' parameter by another 18585.
Thanks for the heads up on open indexes; I will keep an eye on it.
Regards,
Rajesh
Rajesh,
As a simple rule of thumb, you should ideally set 'number of open objects' to the value reported by sp_countmetadata "open objects" plus 10-20% headroom, since descriptors are also needed by other processes (including dialog processes) and other transactions.
In the above case, increase 'number of open objects' to 170000 and you should not see the problem here.
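The 170000 figure lines up with the numbers earlier in the thread; a sketch of the arithmetic (using the thread's values, not an official sizing formula):

```python
# From the sp_monitorconfig output: everything configured was used, plus
# 18585 reuses, so the working set exceeded the configured value.
max_used, reuse_cnt = 150000, 18585
print(max_used + reuse_cnt)      # 168585, which rounds up to ~170000

# Alternative sizing from sp_countmetadata plus 20% headroom (the
# rule-of-thumb above); note this lands well below the observed demand,
# which is why the monitored numbers are the better guide here.
user_objects = 98743
print(int(user_objects * 1.2))   # 118491
```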
But don't bother about the 'permission cache entries' parameter; you should really be thinking about your procedure cache and statement cache parameters.
Cheers
Kiran Kumar A
SAP