Duke-UNC Brain Imaging and Analysis Center
BIAC Forums | Profile | Register | Active Topics | Members | Search | FAQ
 All Forums
 Support Forums
 Windows Support
 FEAT and saving disk space


T O P I C    R E V I E W
dvsmith Posted - Feb 18 2008 : 4:44:34 PM

Goldman has been relatively low on space. If you're running FEAT to analyze your fMRI data, there are several GBs' worth of redundant files you can delete if you're nearing your quota. The easiest way to do this is to add a few "rm" commands to your scripts so the files are removed as the analyses run, as in the examples below. You can do this at all three levels of processing, but make sure you keep one copy of the first-level filtered_func_data.nii.gz files so that you can go back and do peristimulus plots if you need time courses. If you do preprocessing separately, you can keep your preprocessed data as is and delete things in the FEAT scripts as shown below.
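If you want to see where the space is actually going before deleting anything, a quick check like this should work (the path below is just a placeholder for one of your FEAT output directories):

```shell
# List the ten largest files under a FEAT output tree.
# du -a prints the size of every file; sort -rh and head show the top offenders.
du -ah /path/to/analysis.feat 2>/dev/null | sort -rh | head -n 10
```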

In all of the cases below, $REALOUTPUT should be set to the output of FEAT, so it's essentially your $OUTPUT variable with .feat or .gfeat on the end. All of the "rm" commands occur after FEAT completes, so they all go below that line (make sure you don't add an "&" after the feat command, or the deletions could run before FEAT finishes writing its output).

I have not run into a case where I needed these files. Future analysis steps in FEAT do not call these particular files at any point. Most folks in Scott's lab have been doing this without any problems. If you do encounter a problem after deleting a file, it will most likely be an unrelated path issue.

Let me know if you have any questions.

Cheers,
David


#FIRST LEVEL

feat ${MAINOUTPUT}/FEAT_0${run}.fsf
cd ${REALOUTPUT}
rm -f filtered_func_data.nii.gz


#SECOND LEVEL

feat ${MAINDIR2}/2ndLvlFixed_${SUBJ}.fsf
cd $REALOUTPUT
for j in `seq 22`; do #replace the 22 with the number of copes you have at the second level.
	
	COPE=cope${j}.feat
	cd $COPE
	rm -f filtered_func_data.nii.gz
	rm -f var_filtered_func_data.nii.gz
	cd ..

done



#THIRD LEVEL

feat ${ANALYZED}/3rdLvl_${RUN}_${CON_NAME}.fsf
cd $REALOUTPUT
cd cope1.feat
rm -f filtered_func_data.nii.gz
rm -f var_filtered_func_data.nii.gz

5   L A T E S T    R E P L I E S    (Newest First)
dvsmith Posted - Jun 04 2011 : 2:00:36 PM
One minor update... You can also delete the corrections.nii.gz file at the first level (it's about the same size as the biggest files in your first-level output).

feat ${MAINOUTPUT}/FEAT_0${run}.fsf
cd ${OUTPUT}.feat
rm -f filtered_func_data.nii.gz
rm -f stats/res4d.nii.gz
rm -f stats/corrections.nii.gz


Nobody has found a use for these files, and no subsequent step in the FSL processing pipeline uses them. So, do not hesitate to delete them.
dvsmith Posted - Aug 26 2008 : 2:35:10 PM
One other big file that can be deleted is res4d.nii.gz, found in the stats directory of each *.feat directory (higher level or first level). This file just contains the residuals; the only data files that are really needed from one level to the next are the cope*.nii.gz and varcope*.nii.gz files in the stats directory.

With Rich's code, this would look like the following if you've already run your jobs:

cd $FSL_Output_Dir_to_clean # eg, Level 1, 2, or 3
find . -name res4d.nii.gz -exec rm -f {} \;
find . -name filtered_func_data.nii.gz -exec rm -f {} \;
find . -name var_filtered_func_data.nii.gz -exec rm -f {} \;


And if you're still running your jobs, it would look like this:

(REALOUTPUT should be the output of your analysis, which will end in either ".feat" or ".gfeat")

#FIRST LEVEL
feat ${MAINOUTPUT}/FEAT_0${run}.fsf
cd ${REALOUTPUT}
rm -f filtered_func_data.nii.gz
rm -f stats/res4d.nii.gz


#SECOND LEVEL
feat ${MAINDIR2}/2ndLvlFixed_${SUBJ}.fsf
cd $REALOUTPUT
for j in `seq 22`; do # replace the 22 with the number of copes you have at the second level; you might also be able to do this with a simple wildcard (*)
	
	COPE=cope${j}.feat
	cd $COPE
	rm -f filtered_func_data.nii.gz
	rm -f var_filtered_func_data.nii.gz
	rm -f stats/res4d.nii.gz
	cd ..

done
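As a sketch of the wildcard approach mentioned in the comment above, the loop can iterate over whatever cope*.feat directories exist instead of hard-coding the count (assuming all of the second-level cope directories follow the cope*.feat naming pattern):

```shell
# Loop over every cope*.feat directory rather than counting copes by hand.
cd $REALOUTPUT
for COPE in cope*.feat; do
	rm -f ${COPE}/filtered_func_data.nii.gz
	rm -f ${COPE}/var_filtered_func_data.nii.gz
	rm -f ${COPE}/stats/res4d.nii.gz
done
```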




#THIRD LEVEL
feat ${ANALYZED}/3rdLvl_${RUN}_${CON_NAME}.fsf
cd $REALOUTPUT
cd cope1.feat
rm -f filtered_func_data.nii.gz
rm -f var_filtered_func_data.nii.gz
rm -f stats/res4d.nii.gz
yaxley Posted - Apr 11 2008 : 3:53:03 PM
For FSL models that have already been run, the simple bash commands below will delete the unneeded files.

$ cd $FSL_Output_Dir_to_clean # eg, Level 1, 2, or 3
$ find . -name filtered_func_data.nii.gz -exec rm -f {} \;
$ find . -name var_filtered_func_data.nii.gz -exec rm -f {} \;



dvsmith Posted - Feb 25 2008 : 3:43:30 PM
Another thing people can do to conserve space (now and in the future) is to delete all of the reg_standard folders in your first-level directories. FEAT will delete these automatically if you tell it to do so when it runs higher-level analyses, which is what I normally do. If you have tons of these sitting around, you may as well trash them, since they serve no purpose except to save time, and that isn't critical since the cluster can redo registration in a few minutes.
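For directories that are already sitting around, a find command along these lines should sweep them all out in one pass (the path is a placeholder for wherever your first-level *.feat directories live):

```shell
# Remove every reg_standard directory under the first-level outputs.
# -prune stops find from descending into the directory it is about to delete.
find /path/to/first_level_dirs -type d -name reg_standard -prune -exec rm -rf {} +
```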

David

clithero Posted - Feb 25 2008 : 3:25:26 PM
Goldman is once again incredibly low on space...seems to be hovering around 30-40 GB.
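For anyone who wants to check the free space themselves rather than guess, df should report it for the volume your analysis directory sits on:

```shell
# Show free space (human-readable) on the filesystem holding the current directory.
df -h .
```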

BIAC Forums © 2000-2010 Brain Imaging and Analysis Center