Author | Topic
djt11
New Member

USA
19 Posts |
Posted - Mar 01 2011 : 11:40:14 AM
I managed to run the qsub job, which executed the BET command, and it says the files are stored here: "/mnt/users.q.shares/djt11.YETTMej9bTUu5Y51eVyh/Data/FSLVBM28022011/FA/slicesdir/index.html". But there is no directory created under /mnt/users.q.shares. It also says it cannot access fslvbm2b, which is part of the processing step. Can you please help me with those?
Thanks
petty
BIAC Staff
    
USA
453 Posts |
Posted - Mar 01 2011 : 1:04:53 PM
"/mnt/users.q.shares/djt11.YETTMej9bTUu5Y51eVyh" was your temporary mount location, so you'll need to look on your actual server at \\Munin\Lewis\FSE.01\Data/FSLVBM28022011/FA/slicesdir/index.html to get to the actual results.
And as I mentioned above, I highly recommend that you run these steps individually on an interactive node until you understand everything that's happening in each of them, especially the last one (randomise), because you have to set up your own models with GLM and feed them to your analysis. You can't just uncomment my line and expect it to work, because that part was specific to my study.
The easiest thing to do is run the individual steps separately, and once they work, use the script with your lines.
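For reference, the individual FSLVBM steps look roughly like this (a sketch only, assuming the FSL 4.1.x script names and that you run them from your study directory; the exact flags and the design files are study-specific):

```shell
cd /path/to/FSLVBM28022011        # your study directory (path illustrative)
fslvbm_1_bet -b                   # brain-extract all structurals; check slicesdir/index.html
fslvbm_2_template -n              # build the study-specific template (nonlinear registration)
fslvbm_3_proc                     # register to template, modulate, smooth
# Stats come last, and only once you have built your own design.mat/design.con:
# randomise -i GM_mod_merg_s3 -m GM_mask -d design.mat -t design.con -n 5000 -T
```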
djt11
New Member

USA
19 Posts |
Posted - Mar 01 2011 : 1:12:45 PM
Thanks again.
I have been working with fslvbm but haven't worked on the cluster before, so while I know I need to create design files and modify the randomise command appropriately, I don't know how these things normally work on the cluster.
Thanks a lot for all your help.
petty
BIAC Staff
    
USA
453 Posts |
Posted - Mar 01 2011 : 1:35:33 PM
They'll work exactly the same as on a local Linux box if you have your paths correct, which you do in this case. If you change into the directory where your data is (cd $RAWDIR in your case), then it should be no different from running it on your own local machine.
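A minimal sketch of that idea (the directory and file names here are made up): once the job changes into the data directory, relative paths resolve exactly as they would in a local shell.

```shell
#!/bin/sh
# Stand-in for the study's $RAWDIR; on the cluster this would be your mounted data path.
RAWDIR=$(mktemp -d)
cd "$RAWDIR" || exit 1
# Any command run now resolves relative names against $RAWDIR,
# exactly as it would on your own machine:
touch subj01_struc.nii.gz
ls subj01_struc.nii.gz
```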
Edited by - petty on Mar 01 2011 1:41:09 PM |
djt11
New Member

USA
19 Posts |
Posted - Mar 02 2011 : 3:08:56 PM
Thanks for the input. I was successful in running the brain extraction and part of the template creation, but when the script reaches fslvbm2c it gives me an error stating "permission denied". I have checked the permissions under FSL 4.1.5 and they seem to be all right, so I can't understand what could have caused this problem. I have prepared my design files and they are in the directory along with the scans. Can you please help me with this?
Thanks, Dipti
petty
BIAC Staff
    
USA
453 Posts |
Posted - Mar 02 2011 : 3:23:52 PM
I think that is the step which calls fslmerge to merge all your subject data together.
The scripts and outlog/errorlog are actually written to the struc directory inside your fslvbm directory, so you can look at the script to see why it may have failed.
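fslmerge concatenates the per-subject images into a single 4D file; the call looks something like this (the wildcard pattern and output name are illustrative, not necessarily the exact names the FSLVBM script uses):

```shell
# -t = concatenate along time; the output is one 4D volume, one subject per timepoint
fslmerge -t GM_merg struc/*_GM_to_template.nii.gz
```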
djt11
New Member

USA
19 Posts |
Posted - Mar 02 2011 : 3:33:59 PM
I have already gone through the output and error files, which just say "permission denied". Could something else be going wrong?
petty
BIAC Staff
    
USA
453 Posts |
Posted - Mar 02 2011 : 3:37:26 PM
I would suggest running fslvbm2c from the command line on the interactive node to see what happens.
djt11
New Member

USA
19 Posts |
Posted - Mar 03 2011 : 4:00:02 PM
Hey,
Can someone please tell me: if I need to edit an fslvbm script, how do I do it?
petty
BIAC Staff
    
USA
453 Posts |
Posted - Mar 03 2011 : 4:19:20 PM
You can't edit them directly. I did, however, have a thought about your "permission denied" error.
There are two parts of fslvbm that write scripts, so I looked into their code, and they try to chmod a script they've produced. A normal user wouldn't have permission to do this, hence "permission denied".
I edited the chmod lines in the fslvbm_2_template and fslvbm_3_proc steps (only in 4.1.5 currently). Give this a shot again and see if you get the same error.
petty
BIAC Staff
    
USA
453 Posts |
Posted - Mar 04 2011 : 07:54:58 AM
I peeked in your results directory and saw that it failed in the same place.
However, I was able to see the outputs in the struc directory before they were deleted, and I tried a couple of things.
I figured out why they were crashing (hopefully): it was the way they were trying to run the scripts. You can't run them that way on a CIFS-mounted file system. I changed the lines in the source code to a form that lets a script be executed on a CIFS system and tested it on the command line.
After that, I resubmitted steps 2 and 3 for you; let me know if it finishes.
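The failure mode can be illustrated without FSL at all (file names here are hypothetical): a generated script with no execute bit, as on a CIFS mount where chmod is ignored or rejected, cannot be run directly, but invoking the interpreter explicitly only needs read access.

```shell
#!/bin/sh
work=$(mktemp -d)
printf '#!/bin/sh\necho merged\n' > "$work/fslvbm2c_script.sh"
chmod a-x "$work/fslvbm2c_script.sh"      # simulate a mount where the exec bit never sticks
"$work/fslvbm2c_script.sh" 2>/dev/null \
  && echo "direct exec worked" \
  || echo "direct exec: permission denied"
sh "$work/fslvbm2c_script.sh"             # reads the script instead of exec'ing it
```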
djt11
New Member

USA
19 Posts |
Posted - Mar 04 2011 : 10:36:50 AM
Hey Chris,
Thanks a lot for the help.
It did go a little further than it had until yesterday, but it still crashed at fslvbm2c, and this time it gave "permission denied" and "command not found". I have also been reading that this could be because the time it is supposed to spend on each command is only 15 to 30, and I wanted to raise that to around 1000 to see if it makes a difference. So do I need to create completely different scripts for my fslvbm to run on the cluster?
Thanks, Dipti
petty
BIAC Staff
    
USA
453 Posts |
Posted - Mar 04 2011 : 11:20:35 AM
Dipti, the corrected version I created this morning is actually still running (under your name), so this is a very good sign for you.
Time should have no relevance here, because the time ($HOWLONG) the scripts reference only determines which cluster queue fsl_sub submits jobs to. We aren't using fsl_sub at BIAC at this point due to other constraints: their fsl_sub command assumes our SGE grid is set up exactly like theirs, which it isn't (yet).
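To make that concrete, here is a toy sketch of the kind of mapping fsl_sub-style submission does: the requested minutes only select a queue name, nothing else, so raising the value would not change what the command does (the thresholds and queue names below are invented for illustration, not FSL's or BIAC's actual values).

```shell
#!/bin/sh
# Map an estimated runtime in minutes (what $HOWLONG carries) to a queue name.
pick_queue() {
    mins=$1
    if   [ "$mins" -le 30 ];   then echo veryshort.q
    elif [ "$mins" -le 240 ];  then echo short.q
    elif [ "$mins" -le 1440 ]; then echo long.q
    else                            echo verylong.q
    fi
}
pick_queue 30      # a small estimate lands in the shortest queue
pick_queue 1000    # a larger one just picks a longer queue; the job itself is unchanged
```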
djt11
New Member

USA
19 Posts |
Posted - Mar 04 2011 : 11:28:20 AM
Thanks a lot.
I guess I was confused by the error files it produced yesterday. Sorry about that.
petty
BIAC Staff
    
USA
453 Posts |
Posted - Mar 04 2011 : 1:13:43 PM
Dipti, your VBM from this morning failed at the randomise step, but that's because the model had more points than the data.
If you create the correct model, you can just rerun the randomise step.
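Once the design matches, the final call is just the randomise step on the merged data; something like the following, using the FSLVBM default file names (the options shown are illustrative, not required):

```shell
# design.mat must have exactly as many rows as the 4D input has volumes (one per subject)
randomise -i GM_mod_merg_s3 -m GM_mask -d design.mat -t design.con -n 5000 -T
```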