Duke-UNC Brain Imaging and Analysis Center
Analysis Software Support
 "DOF cannot be zero or negative!" Error


T O P I C    R E V I E W
vvs4 Posted - Sep 30 2009 : 09:08:35 AM
I am running a third level (group) analysis, which I've done successfully before, but this time I see the following error in my cope1 log:


Higher-level stats

cat ../design.lcon | awk '{ print }' > design.lcon

/usr/local/fsl/bin/fslsplit mask tmpmask -z

/usr/local/fsl/bin/fslsplit filtered_func_data tmpcope -z

/usr/local/fsl/bin/fslsplit var_filtered_func_data tmpvarcope -z
/usr/local/fsl/bin/flame --cope=tmpcope0000 --vc=tmpvarcope0000 --mask=tmpmask0000 --ld=stats0000 --dm=design.mat --cs=design.grp --tc=design.con --ols --nj=10000 --bi=500 --se=1 --fm --zlt=100000 --zut=100000
DOF cannot be zero or negative!
DOF cannot be zero or negative!


It reports that particular DOF line 918 times, and then continues into what appears to be normal output, UNTIL I see this:



Setting up:
ntptsing=100.000000

No f contrasts
nevs=2
ntpts=100
ngs=1
nvoxels=4042
Running:
nmaskvoxels=8
njumps = 10000
burnin = 500
sampleevery = 1
nsamples = 9500

Metropolis Hasting Sampling
Number of voxels=8
Percentage done:
1
stdtr domain error

ndtri domain error
2
stdtr domain error

ndtri domain error
3
stdtr domain error

ndtri domain error
4
stdtr domain error

ndtri domain error
5
stdtr domain error

ndtri domain error
6
stdtr domain error

ndtri domain error
7
stdtr domain error

ndtri domain error
8
stdtr domain error

ndtri domain error

Saving results

Log directory was: stats0055
Log directory is: stats0056


It has this type of error for many different stats00XX directories -
so my next logical step was to look at "stats0055" and "stats0056". I ran a search and looked in the "logs" directory; the search turned up no results, and the logs directory for cope1 is filled with files called feat1, feat3aflame, etc.

I then looked at the Post Stats, and saw the following error:

/usr/local/fsl/bin/fslmaths stats/zstat1 -mas mask thresh_zstat1

echo 144554 > thresh_zstat1.vol
zstat1: DLH=nan VOLUME=144554 RESELS=nan
domain error: argument not in valid range
while executing
"expr int ( $fmri(VOLUME$rawstats) / $fmri(RESELS$rawstats) ) "
(procedure "feat5:proc_poststats" line 138)
invoked from within
"feat5:proc_poststats $RERUNNING $STDSPACE "
("-poststats" arm line 6)
invoked from within
"switch -- [ lindex $argv $argindex ] {

-I {
incr argindex 1
set session [ lindex $argv $argindex ]
}

-D {
incr argindex 1
set..."
("for" body line 2)
invoked from within
"for { set argindex 1 } { $argindex < $argc } { incr argindex 1 } {
switch -- [ lindex $argv $argindex ] {

-I {
incr argindex 1
set ses..."
(file "/usr/local/fsl/bin/feat" line 137)
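That Tcl "domain error" is NaN propagation: DLH came out as nan, so RESELS is nan, and `expr int(...)` cannot convert NaN to an integer. A minimal Python illustration of the same failure mode (this is not FEAT's actual code, just the same arithmetic):

```python
import math

dlh = float("nan")   # smoothness estimate came out as NaN
resels = 42.0 * dlh  # any arithmetic involving NaN stays NaN
volume = 144554

assert math.isnan(resels)  # NaN propagated through the arithmetic

# Like Tcl's `expr int($volume / $resels)`, converting NaN to an
# integer is a domain error.
try:
    result = int(volume / resels)
except ValueError as err:
    print("domain error:", err)
```

So the poststats error is a downstream symptom; the real question is why DLH was NaN in the first place.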


which again made me want to investigate this "feat5", for which I found two files similar to the following in the cope1 logs directory:

feat5_stop.o12656

and I have no clue what that is or how to investigate it.

I DID find the post from Bethany last year about this same error, saying that I should open the filtered_func_data.nii in showsrs2 and look for black dots that represent the subjects with NaN values - I did exactly this and all I saw was stark white for each volume (there were 100 - and I have 100 subjects - so I am assuming each "image" = 1 subject).
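That visual check can also be done programmatically. A minimal numpy sketch with a synthetic stand-in array (in practice you would load the real 4D file, e.g. with nibabel's `load(...).get_fdata()`):

```python
import numpy as np

# Hypothetical 4D array standing in for filtered_func_data
# (x, y, z, one volume per subject)
data = np.random.rand(4, 4, 3, 100)
data[1, 1, 1, 51] = np.nan  # plant a bad voxel in "subject" volume 51

# Flag every volume that contains any NaN voxel
bad = [t for t in range(data.shape[-1]) if np.isnan(data[..., t]).any()]
print("volumes with NaN voxels:", bad)  # [51]
```

A single NaN voxel is easy to miss by eye in showsrs2 but trivial to catch this way.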

Now I am sort of clueless where to start investigating. Is it random error, as that last post suggested, and I should re-run? Is it something about an EV, or a lower level feat directory? I want to note that I am running this third level analysis with lower level .feat directories as input, and NOT 2nd level COPES.

Any and all ideas and suggestions would be greatly appreciated!

Many thanks,

Vanessa
8   L A T E S T    R E P L I E S    (Newest First)
vvs4 Posted - Oct 02 2009 : 7:51:52 PM
Hi David,

Thanks for the link to the different output descriptions - it's funny how I've been on the fmrib site a million times but that particular page was new to me.

I wound up simply re-running the subject and the first level FEAT came out fine, and then the group analysis worked perfectly. It must have been one of those random-die jobs on the cluster. Had that not fixed the problem, I definitely like and agree with your suggestion to go back to step 1 and look at each step along the way to find the... culprit!

Best,

Vanessa
dvsmith Posted - Oct 02 2009 : 7:05:34 PM
Hi Vanessa,

I'm jumping in a bit late here, so sorry if I'm missing anything...

This subject (number 52) is clearly messed up. Any number of things could be wrong with it, but the problem should stand out once you go back to the beginning and start piecing this subject back together starting with each run of data. I would look at each 4D file from the run level to make sure the data isn't screwy. If that checks out, make sure there weren't any random errors in any of the 1st level FEAT reports for this guy. If all that looks good, then something happened at the 2nd level, and odds are, you can just re-run it and it will work. (Sorry, I know that sounds random, but jobs randomly die for no reason on our cluster and on other clusters.)

At the third level, filtered_func_data.nii.gz is the concatenation of all subjects' cope<num>.nii.gz files (i.e., the point estimate or "beta"). var_filtered_func_data.nii.gz is the concatenation of all subjects' varcope<num>.nii.gz files (i.e., basically the variance of the point estimate). These are the files in subject.gfeat/cope<num>.feat.
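The concatenation David describes (what FSL's fslmerge does on disk) can be sketched with numpy - hypothetical shapes and stand-in arrays, not real NIfTI files:

```python
import numpy as np

n_subjects = 5
# Stand-ins for each subject's 3D cope<num>.nii.gz point-estimate map
cope_maps = [np.full((4, 4, 3), fill_value=float(s)) for s in range(n_subjects)]

# The group-level filtered_func_data is these maps stacked along a 4th
# axis, so "time point" t in the group file is subject t's cope map.
group_4d = np.stack(cope_maps, axis=-1)
print(group_4d.shape)  # (4, 4, 3, 5): one volume per subject
```

That is why scrubbing through the group file with the time slider steps through subjects rather than time.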

Some of the other output descriptions: http://www.fmrib.ox.ac.uk/fsl/feat5/output.html

Hope this helps,
David

vvs4 Posted - Oct 02 2009 : 07:47:26 AM
Thanks again for everyone's help! Although it ran perfectly I decided to look into some of these things, just for fun, learning, and being better at this stuff in the future. I'll put questions in bold so they are easy to get back to.

First, a clarification for future purposes: seeing all white in the filtered_func_data is NOT normal! :O)
It's funny that subject 52 turned out to have that abnormally high cope image value, 8x10^32. I think I actually noticed this visually when I loaded the var_filtered_func_data in showsrs2: I saw this HUGE spike just after 50 (see image here: http://www.haririlab.com/Vanessa/Screenshots/spike.JPG). I only saw this when I used the var_filtered_func_data as an overlay. What is the difference between the filtered_func_data and the var_filtered_func_data? After finding this I went into the FEAT reports for subjects 50-55, but my eye wasn't as trained as yours, Josh, so I didn't see anything amiss. Now I see the error about "data bytes missing", which leads to calculated zstats that are "nan" - not a number - and in the timeseries plots of the zstats, at a certain point they plummet to a constant 0 - yeah, something is definitely not right! It makes sense that the group FEAT produced the error that it did, given the NaNs. This error has prompted me to add some additional checks to our first level FEATs - my checks were not extensive enough.

Syam - the colormap does show a spike, I believe, though I don't think it's an outlier - check it out here: http://www.haririlab.com/Vanessa/Screenshots/colormap.JPG What is the significance of the labels on the axis, and what does the colormap represent? I also opened up the tdof nifti, as you suggested, and didn't see any huge spikes or anything that would make an alarm bell go off in my head, but again, my untrained eye makes me as skilled at looking at imaging data as a dog trying to eat with dinner utensils without any fingers.

I didn't know that there was an FSL mailing list - cool! It's always great to find another troubleshooting resource.

Thanks again! Happy Friday

-Vanessa



syam.gadde Posted - Oct 01 2009 : 5:52:00 PM
quote:
Originally posted by josh.bizzell
I had to loop through all the subjects and find the index of the subject that had the maximum value of 8x10^32. In this case, that was subject index 52 (of 100). Then, you can go into the FSL report (report.html), click on "Inputs", and find which subject was the 52nd data set.



Just to clarify, by "loop[ing] through all the subjects", did you mean you looked at individual volumes in the filtered_func_data.nii file in the top(third?)-level analysis?
josh.bizzell Posted - Oct 01 2009 : 4:51:52 PM
quote:
Originally posted by vvs4

Quick question - where in the errored group FEAT were you able to look to figure out the subject that was causing the problem?



I had to loop through all the subjects and find the index of the subject that had the maximum value of 8x10^32. In this case, that was subject index 52 (of 100). Then, you can go into the FSL report (report.html), click on "Inputs", and find which subject was the 52nd data set.

-Josh
vvs4 Posted - Oct 01 2009 : 12:17:35 PM
I was able to re-run the first level analysis for that subject - and the group FEAT worked perfectly! Many thanks for everyone's help. Quick question - where in the errored group FEAT were you able to look to figure out the subject that was causing the problem?
josh.bizzell Posted - Oct 01 2009 : 10:54:11 AM
I helped track down the possible source of the error, in a very similar fashion to what Syam mentioned above. It appears to be faulty prestats with one particular subject.

Again, what you can do is load your 3rd-level (group analysis) filtered_func_data into showsrs2. This data is created by merging all of the cope images from every subject used in the group analysis. Loop through the subjects using the time slider to see if you can visually find any problems. In this case, the image was all white, and as Syam suggested, when looking at the histogram, I found that there were min and max values around +-8x10^32. Since it was just a few voxels in the offending subject, it was difficult to find visually. I had to create a loop that went through all the subjects' cope images, finding and displaying the maximum value. After doing this, there was one subject in the group whose cope image had a maximum value of 8x10^32. I went back to the 1st level report for that subject, and sure enough, there was a file I/O error in the prestats section of the log.
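A sketch of the kind of loop described here, in numpy with synthetic stand-in data (not the actual script used - in practice you would load the merged 4D file first):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(4, 4, 3, 100))  # stand-in for group filtered_func_data
data[2, 2, 1, 51] = 8e32                # plant the offending value in volume 51

# Per-volume maximum absolute value; the outlier volume is the bad subject
vol_max = np.abs(data).max(axis=(0, 1, 2))
worst = int(vol_max.argmax())
print(f"volume {worst}: max abs value {vol_max[worst]:.2e}")
```

Note that volume indices here are 0-based, while FEAT's report.html lists inputs 1-based, so be careful with the off-by-one when matching a volume index to a subject in the "Inputs" list.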

-Josh
syam.gadde Posted - Oct 01 2009 : 10:41:59 AM
I have not yet personally experienced this error, so these are just random suggestions. But if the "filtered" inputs to your 3rd-level analysis (filtered_func_data.nii) show up as all white, then something is definitely amiss. Does the histogram (under Config/Adjust Colormaps and Clipping) show any outliers, and do they appear in any volume in particular?

As was mentioned in another thread, sometimes re-running (first try 3rd-level, then try 2nd and 3rd level, etc.) will fix these things.

You could also consider looking at the tdof_* image files to see if there is anything interesting in there. There should be one volume for each input to that level of analysis. If only one volume looks weird, that is the 2nd-level input you need to look at.
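The tdof check could be automated the same way - flagging any volume whose DOF is zero or negative, which is exactly the condition flame complains about. A sketch with synthetic data standing in for a tdof_* image:

```python
import numpy as np

# Stand-in for a tdof_* file: one volume per input to this analysis level
tdof = np.full((4, 4, 3, 10), 20.0)
tdof[..., 7] = 0.0  # plant a degenerate input

# Flag inputs with any zero or negative DOF voxel
suspect = [t for t in range(tdof.shape[-1]) if (tdof[..., t] <= 0).any()]
print("inputs with zero/negative DOF:", suspect)  # [7]
```

Any input flagged here would be the lower-level directory to inspect first.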

Again, just random suggestions. There may be some nuggets of info on the FSL mailing list too:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?S2=FSL&q=dof+cannot+be+zero&s=&f=&a=&b=

BIAC Forums © 2000-2010 Brain Imaging and Analysis Center