TOPIC REVIEW
Adria
Posted - Feb 07 2012 : 4:34:29 PM
Hi,
I am running step 2 (fslvbm_2_template) on my data and I keep on getting this error in my output file for some images:

+ /usr/local/packages/fsl-4.1.8/bin/fast -R 0.3 -H 0.1 PATB_FSE480_struc_brain
Image Exception : #99 :: Out of memory
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
fslvbm2a: line 215: 22480 Aborted /usr/local/packages/fsl-4.1.8/bin/fast -R 0.3 -H 0.1 PATB_FSE480_struc_brain
+ /usr/local/packages/fsl-4.1.8/bin/immv PATB_FSE480_struc_brain_pve_1 PATB_FSE480_struc_GM
and then later in the file:
** ERROR (nifti_image_read): failed to find header file for 'PATB_FSE480_struc_GM'
** ERROR: nifti_image_open(PATB_FSE480_struc_GM): bad header info
ERROR: failed to open file PATB_FSE480_struc_GM
ERROR: Could not open image PATB_FSE480_struc_GM
Image Exception : #22 :: Failed to read volume PATB_FSE480_struc_GM
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
/usr/local/packages/fsl-4.1.8/bin/fsl_reg: line 124: 21260 Aborted ${FSLDIR}/bin/flirt -ref $REFERENCE -in $INPUT $INMASK -omat ${I2R}.mat $flirtopts
** ERROR (nifti_image_read): failed to find header file for 'PATB_FSE480_struc_GM'
** ERROR: nifti_image_open(PATB_FSE480_struc_GM): bad header info
ERROR: failed to open file PATB_FSE480_struc_GM
ERROR: Could not open image PATB_FSE480_struc_GM
Image Exception : #22 :: Failed to read volume PATB_FSE480_struc_GM
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
What could be the problem?
8 LATEST REPLIES (Newest First)
petty
Posted - Feb 24 2012 : 11:29:09 AM the maximum is 92G ... there are currently only two nodes that can handle this.
the max is dictated by how much physical RAM is installed in each node, since the RAM cannot be shared across the cluster.
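On an SGE-style cluster you can see how much physical RAM each node has with `qhost` (the MEMTOT column). The sketch below filters for large-memory nodes; the node names and sizes are made-up sample output, since these vary by cluster, and on a real cluster you would pipe `qhost` itself into the awk filter instead:

```shell
# Hypothetical `qhost` output -- node names and memory sizes are invented
# for illustration only; run `qhost` on your cluster for the real values.
sample_qhost_output="\
HOSTNAME    ARCH       NCPU  LOAD  MEMTOT  MEMUSE
node01      lx26-amd64    8  0.10   31.4G    2.1G
node02      lx26-amd64    8  0.05   31.4G    1.8G
node03      lx26-amd64   16  0.20   94.0G    5.0G
node04      lx26-amd64   16  0.15   94.0G    4.2G"

# Keep only nodes whose MEMTOT (column 5) is at least 90G; $5+0 coerces
# the "94.0G" string to its leading number, and NR > 1 skips the header.
echo "$sample_qhost_output" | awk 'NR > 1 && $5+0 >= 90 { print $1, $5 }'
```

Only jobs requesting more memory than the smaller nodes hold would need to wait for one of the nodes this filter prints.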
syam.gadde
Posted - Feb 24 2012 : 10:10:12 AM There is no theoretical maximum, though in practice the nodes in the cluster have only a finite amount of memory. Some nodes have a maximum of 30 gigabytes or so available to cluster jobs; others have more. Most users will never need more than 30 gigabytes. Generally, trial and error will show you the minimum you need to request. Requesting more than you need is OK, but it may also increase how long your job waits for a node with enough free resources.
Adria
Posted - Feb 24 2012 : 10:03:06 AM What is the maximum amount of memory I can request with qsub?
syam.gadde
Posted - Feb 10 2012 : 10:40:04 AM Yes, at the end or beginning of the qsub command is fine:
qsub -l h_vmem=6G ...other options...
Adria
Posted - Feb 10 2012 : 10:28:57 AM Thank you! So I can tack that onto the end of the command "qsub -v EXPERIMENT= etc."?
syam.gadde
Posted - Feb 10 2012 : 10:17:10 AM You will need to request more memory for these types of jobs. So when you submit you will need to add the following option to qsub:
-l h_vmem=6G
where 6G means 6 gigabytes of memory. The default on our cluster is 4G. If that doesn't work, you can try higher values.
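A complete submission might look like the sketch below. This is illustrative only: the EXPERIMENT value and the script name are placeholders, not taken from this thread, and the command is printed rather than executed:

```shell
# Sketch of a qsub submission requesting 6 GB via the SGE h_vmem resource.
# "my_experiment" and "fslvbm_2_template" are placeholder names.
MEM_REQUEST="h_vmem=6G"

# The -l option can appear anywhere among the qsub options:
QSUB_CMD="qsub -l $MEM_REQUEST -v EXPERIMENT=my_experiment fslvbm_2_template"
QSUB_CMD_ALT="qsub -v EXPERIMENT=my_experiment -l $MEM_REQUEST fslvbm_2_template"

# Print instead of submitting, since this sketch is not running on a cluster.
echo "$QSUB_CMD"
```

Both orderings request the same resource; SGE parses `-l` wherever it appears before the script name.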
Adria
Posted - Feb 10 2012 : 10:12:38 AM Is the memory issue from the cluster or the program?
petty
Posted - Feb 07 2012 : 5:21:41 PM The 1st one seems to indicate there isn't enough memory (RAM) to write the data.
The 2nd one indicates that the image being read is incomplete.
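Putting the two together: when `fast` aborts out of memory, the grey-matter image it should have produced is never written, so every later step that tries to read it fails. A minimal guard like the sketch below makes the missing output explicit (file names are taken from the thread; the `touch` exists only to make the sketch self-contained, standing in for a successful `fast` run):

```shell
# Guard: check that the segmentation output exists before continuing.
GM_IMAGE="PATB_FSE480_struc_GM"

# Stand-in for a successful `fast` run so this sketch runs anywhere;
# on real data the file would (or would not) already exist.
touch "${GM_IMAGE}.nii.gz"

if [ -e "${GM_IMAGE}.nii.gz" ] || [ -e "${GM_IMAGE}.nii" ]; then
    echo "segmentation output present, safe to continue"
else
    echo "missing ${GM_IMAGE} -- the fast step likely ran out of memory" >&2
    exit 1
fi
```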