| T O P I C R E V I E W |
| luke.vicens |
Posted - Nov 27 2006 : 4:40:51 PM I'm trying to run a 3rd-level analysis with FEAT but it keeps dying after running for a few minutes. The last line in the report.log file reads:
/usr/local/packages/fsl/bin/avwmaths mean_func -Tmean mean_func
and the following error message pops up in my terminal window when FEAT exits:
syntax error in expression "1.000000e+00 * ": premature end of expression
    while executing
"expr $mean_lcon_incr * [ exec sh -c "cat $feat_files($session)/design.lcon" ]"
    (procedure "feat5:proc" line 289)
    invoked from within
"feat5:proc [ file rootname [ lindex $argv 0 ] ]"
    (file "/usr/local/packages/fsl/bin/feat" line 16)
Does anyone know what would cause this or have any suggestions as to where I should begin troubleshooting it?
Thanks,
Luke |
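[Aside: the traceback shows Tcl evaluating `expr $mean_lcon_incr * [cat .../design.lcon]`, so an empty `design.lcon` would yield exactly `"1.000000e+00 * "` and this premature-end error. A quick way to scan inputs for that is sketched below; the `demo/` layout is fabricated for illustration — point the glob at your real input FEAT directories.]

```shell
# Build a throwaway mock layout just to demonstrate the check.
mkdir -p demo/run1.feat demo/run2.feat
printf '1.0\n' > demo/run1.feat/design.lcon
: > demo/run2.feat/design.lcon            # empty file: the suspected culprit

# Report any design.lcon that is empty or missing (-s tests "exists and non-empty").
for d in demo/*.feat; do
  if [ ! -s "$d/design.lcon" ]; then
    echo "EMPTY or MISSING: $d/design.lcon"
  fi
done
```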
| 15 L A T E S T R E P L I E S (Newest First) |
| syam.gadde |
Posted - Feb 02 2007 : 3:30:40 PM When you specify directories like "cope1.feat" as the inputs, you need to choose "Inputs are lower-level FEAT directories". If you specify the actual images like "cope1.img", then you choose "Inputs are 3D cope images from FEAT directories". It's confusing.
If you change to "inputs are lower-level FEAT directories", it should work. |
| syam.gadde |
Posted - Feb 02 2007 : 3:03:20 PM Maybe you can send me your third-level analysis .fsf file and I will take a look. |
| tankersley |
Posted - Feb 02 2007 : 3:01:20 PM There are lower-level .feat directories, but they're not on the same pathway. Also, if it is looking in all those subdirectories, I've looked in all 120 of them, and they all have the example_func2standard.mat file. Do you think I need to put the .feat directories in the same pathway as the .gfeat files that the level_3 analysis is taking as input?
Thanks,
Dharol |
| syam.gadde |
Posted - Feb 02 2007 : 2:56:28 PM It kind of looks like it from the code. Though are there lower-level .feat directories in the second-level analysis outputs? Maybe it is looking in those subdirectories for one of those files? |
| tankersley |
Posted - Feb 02 2007 : 1:58:41 PM Maybe I'm getting confused on where the files should be. When I run what FEAT calls "First-level analysis", it produces $featdirname/reg/example_func2standard.mat in every output directory.
When I run a higher level analysis that averages over runs within a subject, it does not produce a $featdirname/reg/example_func2standard.mat, only a mean_func.nii.gz.
It's when I try to run the next higher level analysis that averages across subjects, that I'm getting this error message. Is the second level analysis also supposed to produce a $featdirname/reg/example_func2standard.mat ?
Thanks,
Dharol
|
| syam.gadde |
Posted - Feb 02 2007 : 12:37:00 PM Sorry, I misread the code. It will complain only if all three files are missing. Are any of your inputs missing (for example) reg/example_func2standard.mat? |
| tankersley |
Posted - Feb 02 2007 : 11:14:14 AM I have the file $featdirname/reg/example_func2standard.mat
I don't have the other two files; however, a data set I've successfully run through level 3 only has the first of these three files, too. Has there been a change in the past couple of months in what FSL requires?
Thanks,
Dharol |
| syam.gadde |
Posted - Feb 02 2007 : 10:17:33 AM FEAT looks for these three files in all the second-level feat directories:
$featdirname/reg/example_func2standard.mat
$featdirname/example_func2standard.mat
$featdirname/design.lev
If any of them are missing, it gives you the error message you mentioned.
|
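[Aside: a small shell loop can report, for each second-level FEAT directory, which of those three files are absent. The `gfeat_demo/` layout below is made up for the demo; substitute your own directories.]

```shell
# Mock layout for demonstration: subj1 has one of the three files, subj2 has none.
mkdir -p gfeat_demo/subj1.feat/reg gfeat_demo/subj2.feat
touch gfeat_demo/subj1.feat/reg/example_func2standard.mat

# For each FEAT directory, print every one of the three files that is absent.
for d in gfeat_demo/*.feat; do
  for f in reg/example_func2standard.mat example_func2standard.mat design.lev; do
    [ -e "$d/$f" ] || echo "missing: $d/$f"
  done
done
```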
| tankersley |
Posted - Feb 02 2007 : 10:06:52 AM I've run a data set through level 1 and level 2 FEAT and am getting the following error when I try to run level 3:
Registration has not been run for all of the FEAT directories that you have selected for group analysis. Please turn on and setup registration.
I've looked at all the first-level registration reports and everything seems to be registering correctly for func2highres, highres2standard and func2standard.
The reports at 2nd level look normal as well. Any suggestions on where to look next?
Thanks,
Dharol
|
| syam.gadde |
Posted - Dec 07 2006 : 10:28:27 AM Iused (inodes used) is related to the *number* of files and directories on the volume, but it's generally not useful for determining whether you are hitting space issues (we hardly ever run out of inodes).
I think space might not be the most likely cause of a failed mkdir. My guess is that you hit the annoying "file system disappears from under you" error. Sharity, which is used to mount most file servers onto golgi, will lose connection once in a while. This is more likely to hit long-running jobs. If some of your analyses worked, I would try running the failed ones again to see if they make it through this time. |
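[Aside: to see the two numbers side by side yourself — %Used counts data blocks, %Iused counts inodes — you can query them directly. Flag support varies by OS; AIX df prints Iused by default, while GNU df needs -i, so the second line below is hedged accordingly.]

```shell
# Block (space) usage for the filesystem holding the current directory:
df -k .
# Inode usage, where the -i flag is supported (GNU/Linux, BSD); harmless if not:
df -i . 2>/dev/null || true
```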
| luke.vicens |
Posted - Dec 07 2006 : 10:09:31 AM I was wondering if disk space might be an issue. Here's the output I get from df:
[vicens@golgi: ~] $ df
Filesystem             512-blocks      Free %Used   Iused %Iused Mounted on
/dev/hd4                   262144         0  100%    2688     5% /
/dev/hd2                  4194304    279256   94%   47342    10% /usr
/dev/hd9var                262144    105160   60%    1013     4% /var
/dev/hd3                   262144    252896    4%      97     1% /tmp
/proc                           -         -     -       -      - /proc
/dev/hd10opt              1048576    360320   66%   10882     9% /opt
/dev/usrlocallv          40632320    712232   99%  137005     3% /usr/local
/dev/homelv              62914560   4141024   94%  276639     4% /home
/dev/datalv             836763648 293897520   65% 1172385     9% /data
/dev/lv00                  131072    126056    4%      35     1% /var/loadl
152.3.98.130:/vol/vol0 2857266528 192584192   94% 6514211    21% /mnt/huxley/data
152.3.98.131:/vol/vol0 2857266528 642152248   78% 6275149    20% /mnt/hodgkin/data
152.3.98.138:/vol/vol0 2857266528  82683512   98% 6454269    21% /mnt/katz/data
localhost:                     20         0  100%       -      - /CIFS
localhost:x-browser:          200         0  100%  100000   100% /Sharity3
Any idea what the difference between %Used and %Iused is? From %Used, it looks like a few volumes are close to full, but from %Iused there would appear to be plenty of space. More importantly, if disk space is an issue, is there anything I can do to resolve this (e.g. move some of my data elsewhere to free up space)? |
| petty |
Posted - Dec 07 2006 : 08:51:11 AM Looks like the disk is potentially full, since it could not make a directory. |
| luke.vicens |
Posted - Dec 06 2006 : 8:43:20 PM I was able to rerun the second level analyses and then rerun the 3rd level analyses and everything worked great. Unfortunately I'm now getting a different error when I rerun some 1st level analyses. The following is the error information that I'm getting in the report.log files:
/bin/mkdir mc ; /bin/mv -f prefiltered_func_data_mcf.mat prefiltered_func_data_mcf.par prefiltered_func_data_mcf_abs.rms prefiltered_func_data_mcf_abs_mean.rms prefiltered_func_data_mcf_rel.rms prefiltered_func_data_mcf_rel_mean.rms mc
mkdir: 0653-358 Cannot create mc.
mc: There is an input or output error.
Usage: mv [-i | -f] [-E{force|ignore|warn}] [--] src target
   or: mv [-i | -f] [-E{force|ignore|warn}] [--] src1 ... srcN directory
Any idea what would cause this? I thought maybe my temp directory was filling up, but I tried deleting all the files in /data/users/vicens/tmp before running feat, and this doesn't seem to resolve the issue.
Luke |
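[Aside: one way to separate "disk full" from a transient mount/I-O failure is a quick write probe in the directory where FEAT died. The mktemp directory below is just a stand-in for the demo — point DIR at your actual analysis output directory.]

```shell
# Stand-in target directory; replace with the directory where mkdir failed.
DIR=$(mktemp -d)

# Try to create a small file there. Failure on a volume that df says has free
# blocks points at a dead mount (or permissions) rather than a full disk.
if touch "$DIR/.write_test" 2>/dev/null; then
  echo "writable: $DIR"
  rm -f "$DIR/.write_test"
else
  echo "cannot write to $DIR -- check df and whether the mount is still alive"
fi
```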
| luke.vicens |
Posted - Nov 28 2006 : 2:19:20 PM Thanks for the detective work, guys. My analyses meet the criteria you described above, so I'll re-run the 2nd level analyses and then give the 3rd level one another shot.
Luke |
| syam.gadde |
Posted - Nov 28 2006 : 2:10:49 PM OK, I tracked this down (thanks to Joe Crozier for giving me access to his data). This is an AIX-specific bugfix that I had applied to FSL-3.3.5 but neglected to apply to FSL-3.3.7 when it was installed. I have applied this bug fix. If you are getting this error, and you ran second-level analyses after the move to FSL-3.3.7 (Nov. 17), and you intend to use the outputs of the second-level analyses as inputs to a third-level analysis, you will need to re-run the second-level analyses. It is my impression that the first-level analyses should be fine. |