
OS Error:

Former Member

Hi Everyone,

I am getting the following error when running the job.

Cannot write onto Disk for the file <D:\Program Files (x86)\SAP BusinessObjects\Data Services\log\pCache/ptbod1__bodq/924321CacheSplit3_6108_1>. OS Error: <No space left on device>

In the monitor log I see that this particular dataflow is pulling almost 4.8 million records.

DF_CUST_V /Query   PROCEED   4908000   750.639   1960.385

Also, an interesting detail: our production environment is clustered (4 job servers on separate machines), and we are facing this error only when the job runs on 2 particular job servers. I cannot really ask the developer to change the design, as the job runs perfectly fine on the other 2 servers. Because the environment is load balanced, this job can be kicked off from any one of the servers.

Few more details about the job:

  • Memory type is pageable
  • Max_Long_Data_In_Memory=2048 in DSConfig.txt. (Will increasing this and using in-memory cache as the default work? We have 26 GB of RAM on this box alone!)

Thanks,

Pramod

Accepted Solutions (1)


Former Member

Hi Pramod,

You can consider changing the log directory and pCache of the 2 job servers that are facing disk space problems to another disk/share with plenty of space.

To do this, see:

https://service.sap.com/sap/support/notes/1265817

https://service.sap.com/sap/support/notes/1372924

Regards,

Nawfal

Former Member

Hi Nawfal,

The disk space we have is 28 GB. The job servers on which the job is successful have about 22 GB. I will try this solution anyway and let you know how it goes.

Thanks,
Pramod

Former Member

Hi,

In that case, monitor the log and pCache directories to see their current size and how much they grow while the culprit job servers are being used. That would give some indication of what's happening.

Nawfal

Former Member

I will be monitoring it once this job is kicked off in production. It is scheduled to run on Sundays, and I cannot really kick off the job right now.

Former Member

Hi Nawfal,

Thanks for your help. Creating disk space has worked for us. I see that 40 GB of free space works for us, and I observed that the pCache folder size increases while the particular dataflow is in progress.

However, I had overlooked this because on the other server the disk space was smaller and the job was still able to complete without any issues. I will hopefully find the answer for that.

Thanks for your help, I appreciate it

Pramod

Answers (5)



Hi Everyone,

I am getting the following error when running the job.

Cannot write onto Disk for the file <D:\Program Files (x86)\SAP BusinessObjects\Data Services\log\pCache/bods_repo_admin/2621Sort1528_3604_111>. OS Error: <No space left on device>

Can anyone please help?

Former Member

Hi Pramod,

I am also getting the same issue you got, when triggering the BODS validation job after patching from 4.0 to 4.2.

When we place a file in the share folder path and trigger the job, the file data is moved to BODS staging, but the job fails with the error OS Error: <No space left on device>.

Could you please let me know how you resolved your issue?

Please give any suggestions on the error.

Thanks,

Subbu

Former Member

I have tried using the in-memory cache for the dataflow which is causing the issue, and it seems like it was using the whole of the server's memory (which is 28 GB). The server hung because of this. I have been forced to run the job on the server on which it runs without any issues for now.

My next try will be increasing the pageable buffer size to 16 GB to see if that works. As I will be away this Sunday for Thanksgiving, I will be working on this issue a fortnight later.

Do give me any suggestions in the meantime.

Thanks,
Pramod

Former Member

Can you check the various DSConfig.txt files and compare whether the working ones are using:

MAX_64BIT_PROCESS_VM_IN_MB=4096

Explanation: on 64-bit platforms, this determines the maximum size of virtual memory in MB (type int; valid values >2048; default 4096).

Or even if not, set it and see if it works.
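To compare the DSConfig.txt files across the four job servers, a small sketch like this could help (Python; it assumes DSConfig.txt parses as an INI-style file, so `configparser` is reused here; this is not an official tool):

```python
# Diff two INI-style DSConfig.txt files and report settings that differ or
# exist on only one server. Simplified sketch for comparing job servers.
import configparser

def load_settings(text):
    """Parse INI-style text into a {(section, key): value} dict, keeping key case."""
    cp = configparser.ConfigParser()
    cp.optionxform = str  # do not lowercase keys like MAX_64BIT_PROCESS_VM_IN_MB
    cp.read_string(text)
    return {(s, k): v for s in cp.sections() for k, v in cp[s].items()}

def diff_settings(a, b):
    """Map each differing (section, key) to its (value_in_a, value_in_b) pair."""
    return {key: (a.get(key), b.get(key))
            for key in set(a) | set(b)
            if a.get(key) != b.get(key)}
```

Read each server's DSConfig.txt into a string, load it, and diff a working server against a failing one; any key that comes back different is a candidate explanation.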

Regards

Norbert

Former Member

Hi Norbert,

All the working ones have the same configuration. But this week I will try increasing

MAX_64BIT_PROCESS_VM_IN_MB from 4096 to 8192.

I really appreciate your advice. I hope it works.

Thanks,

Pramod

Former Member

Hi Pramod,

What about the file system:

NTFS vs. FAT32 (4 GB file-size limit)?

Did you set quotas on NTFS for the user?

Regards

Norbert

Former Member

Hi Norbert,

There are no set quotas on the system.

@Everyone:

I will try doing the following:

  • Change the pCache directory
  • Run the job using in-memory cache (as we have 28 GB on this server)
  • Increase the pageable buffer pool size to 16 GB (which effectively means using 16 GB in memory)
  • Run this job with the option "Collect statistics for optimization" checked (as I do not know how the previous admin ran this job)
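Since free space turned out to be the limiting factor earlier in the thread, a pre-flight check before launching the job could be sketched like this (Python; the 40 GB threshold comes from the observation above, and the drive path in the usage comment is an assumption):

```python
# Pre-flight disk check: only start the job if the drive hosting pCache has
# enough free space. 40 GB is the figure that worked in this thread; tune it.
import shutil

def has_free_space(path, required_gb):
    """True if the filesystem containing `path` has at least `required_gb` free."""
    return shutil.disk_usage(path).free >= required_gb * 1024 ** 3

# Hypothetical usage before kicking off the job:
# if not has_free_space(r"D:\\", 40):
#     raise SystemExit("Not enough free disk for pCache; aborting job launch.")
```

Wiring something like this into the scheduler would at least turn the mid-run "No space left on device" failure into an up-front abort.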

I will be working on these every Sunday from now on (one change at a time) and will let you guys know if it worked.

But I still have this question: why does this job run fine on the other job servers, which have less RAM and disk space?

In the monitor log I see that this dataflow takes 650 s to process 4.8 million rows on the servers where it is able to run, while on the particular server I am trying to troubleshoot it takes 750-800 s. (I don't know if this will help anyone, but this is the observation for now.)

Former Member

Adding one more point:

All the production boxes share a common database, so the settings and the job are exactly the same as far as I checked.