Any Neuroscientists, Radiologists, or fMRI lab workers in the building?
15370 cr points
26 / M / United States
Posted 8/4/16
Historical insight: http://www.scientificamerican.com/article/after-another-statistical-speed-bump-is-the-science-of-fmri-learning-from-its-mistakes/
Article in question: http://www.pnas.org/content/113/28/7900.full
Simple read: https://www.sciencedaily.com/releases/2016/06/160627160927.htm

The biggest problem the field has had to contend with is statistics: faulty statistical methods can lead to false-positive results, and researchers in the field tend to agree that there are probably false positives out there.

The study in question found the following:

Functional MRI (fMRI) is 25 years old, yet surprisingly its most common statistical methods have not been validated using real data. Here, we used resting-state fMRI data from 499 healthy controls to conduct 3 million task group analyses. Using this null data with different experimental designs, we estimate the incidence of significant results. In theory, we should find 5% false positives (for a significance threshold of 5%), but instead we found that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results.
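To make the "in theory, we should find 5% false positives" part concrete, here is a quick toy simulation (my own sketch, nothing to do with the paper's actual pipeline): run a bunch of group comparisons on pure noise with a 5% threshold and roughly 5% of them come out "significant" by chance. The paper's point is that the common cluster-based methods in SPM/FSL/AFNI blow way past that baseline on real null data.

```python
# Toy illustration (not from the paper): under a true null, a 5% threshold
# should flag roughly 5% of tests as "significant" purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_subjects = 10_000, 20

false_positives = 0
for _ in range(n_tests):
    # two groups drawn from the SAME distribution, so any "effect" is noise
    a = rng.normal(0, 1, n_subjects)
    b = rng.normal(0, 1, n_subjects)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(false_positives / n_tests)  # ~0.05, the nominal rate the paper compares against
```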


Sooo....anyone in that field care to share their thoughts? That's a lot of studies that just got pissed down the drain. It hurts my feelings; that's a lot of cool studies I can't quote anymore, & it really makes me question all the fMRI studies I have read thus far....which is a lot
20728 cr points
Hoosierville
Posted 8/4/16
It goes to show you that studies are so easy to manipulate that entire fields of study can be based on faulty information. Always question and always wonder if your ways are wrong.
15370 cr points
26 / M / United States
Posted 8/4/16, edited 8/4/16


It wasn't that it was "manipulated" - the statistical modeling the software used had errors in it that make it hard to assert any clear finding, because the rate of error could be so high. It's not as if there was a mastermind or intent lol.

But yes, always be skeptical and always be prepared to find out your initial conclusions were wrong as new information comes in.
relt95
531 cr points
22 / M
Posted 8/4/16
A false positive is better than a false negative.

In most cases, if you get a positive result, they will typically do more tests to confirm it and get more data (I am not in this field, so please correct me if this is wrong). If you get a negative, you just get sent home.

Weighting the results away from false negatives is more important than avoiding false positives.
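Here is the kind of trade-off I mean, as a rough sketch with completely made-up numbers (not fMRI data): lower the threshold for calling a test "positive" and you miss fewer real cases (fewer false negatives), but you flag more healthy people (more false positives).

```python
# Rough sketch with made-up numbers: lowering a screening threshold trades
# false negatives for false positives.
import numpy as np

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 100_000)  # test scores of people without the condition
sick = rng.normal(2.0, 1.0, 100_000)     # test scores of people with the condition

for threshold in (2.5, 1.5, 0.5):
    fp_rate = np.mean(healthy > threshold)  # healthy people flagged as positive
    fn_rate = np.mean(sick <= threshold)    # sick people sent home
    print(f"threshold={threshold}: false positive rate={fp_rate:.2f}, "
          f"false negative rate={fn_rate:.2f}")
```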
15370 cr points
26 / M / United States
Posted 8/4/16, edited 8/4/16


Maybe in some areas and instances a false positive is better, but not in this particular case, where positives are used to verify brain activity & then to infer which region is being activated. False-positive rates are now ranging up to 70% in some cases (based on this paper's tests of the standard software on null data)...quite a jump from the nominal 5% that everyone had been assuming was the error rate.

Your statement in general is correct though. & keep in mind this criticism has just come out. It may or may not stick - idk. Last year Psychology had a similar debacle when a study came out that was unable to replicate over half of the 100 studies they tried...but after a few months the academic community pretty much picked apart the methodological flaws in the replication effort & it is not as big of a deal now.

Edit: It also occurs to me that you may be viewing false positive and false negative in a statistical hypothesis-testing light, but medical statistics are a little different. Whereas normal stats would consider a false positive an incorrect rejection of a true null hypothesis (a Type I error), in medical testing a false positive is more true to the name: the test indicates a condition is present when it actually is not.
relt95
531 cr points
22 / M
Posted 8/4/16


Sorry, I didn't know you were talking about a positive for brain activity; I just assumed you were talking about tumors (I didn't read enough). I was thinking more along the "positive means you have the medical condition" line of thought.

Also, a 70% error rate is basically useless in any situation. You would be better off flipping a coin than going to the doctor.
49012 cr points
28 / M
Posted 8/5/16, edited 8/5/16
... I am a PhD student in the field of cognition and neuroscience. I have some experience with this, but I do not specialize in fMRI work. The results of this study only reflect on prior results in the fMRI research literature. This has literally zero to do with clinical practices and applications. The ways a Dr. will use fMRI and MRI for diagnosis and other purposes are completely different from how they are typically used in this kind of research. So this paper has nothing to do with your Dr. and his ability to say you have a tumor, or are having a seizure, or happen to be brain dead, or have a clot. For practically anything of a clinical nature involving MRI and fMRI, your Dr. knows what they are doing, and the tests they do will not be subject to these issues.

Further, these results only apply to specific research methodologies; there are plenty of research techniques, even mentioned in the paper, that performed fine. It would be fair to say that a very popular set of methods (voxel cluster based methods) seems most vulnerable to this problem. The paper suggests that 40,000 fMRI papers may need to be viewed very skeptically now, but there are multiple hundreds of thousands of fMRI papers; it's a large field, and most older papers that used single voxel based analysis were not subject to this problem. Additionally, I'm not sure that these issues apply to multi-voxel pattern analysis, which uses a more machine-learning-oriented approach and something called cross-validation, and which may not be as susceptible to these problems because it tests whether a trained model can accurately predict things on data it has not seen, as opposed to trying to see if there is a statistically significant difference between conditions.
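For anyone curious what the cross-validation part buys you, here is a toy sketch with made-up data and plain scikit-learn (not any real MVPA pipeline): instead of asking whether two conditions differ significantly, you ask whether a classifier trained on part of the data can predict the condition on trials it has never seen.

```python
# Toy sketch of the cross-validation logic behind MVPA-style analyses
# (made-up data, plain scikit-learn, not a real fMRI pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 200, 50

X = rng.normal(0, 1, (n_trials, n_voxels))  # simulated voxel patterns per trial
y = np.repeat([0, 1], n_trials // 2)        # two experimental conditions
X[y == 1, :5] += 0.5                        # weak real signal in a handful of voxels

# 5-fold cross-validation: every fold is scored on trials it was never trained on
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())  # meaningfully above 0.5 (chance) only if the pattern generalizes
```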

There is another fun paper where software like this found face-selective regions in the brain of a dead fish put in an fMRI scanner. That paper was meant to illustrate that not correcting for multiple comparisons is almost inexcusable. This paper fits into a loooong lineage of papers that help fMRI researchers understand that there are issues with many common practices in the field that need to be questioned more. This is why we have methodologists (people who study methodologies) who specifically seek out these problems to help improve our fields. And it should be mentioned that most brain researchers are aware that results in fMRI often come with a litany of caveats and potential issues.
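The multiple-comparisons point is also easy to see in a toy simulation (pure noise, made-up numbers, no actual fish involved): test tens of thousands of voxels at p < 0.05 with no correction and thousands of them look "active"; even a blunt Bonferroni correction wipes almost all of that out.

```python
# Toy illustration of the multiple-comparisons problem: every voxel here is
# pure noise, so every "activation" found below is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_voxels, n_subjects = 50_000, 20

data = rng.normal(0, 1, (n_voxels, n_subjects))
_, p = stats.ttest_1samp(data, popmean=0, axis=1)

print((p < 0.05).sum())             # thousands of "active" voxels (~5% of 50,000)
print((p < 0.05 / n_voxels).sum())  # Bonferroni-corrected: almost always zero
```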

TL;DR
This is just science improving itself. You should already be skeptical of sciency headlines in news, because results are misreported and inflated as the norm. Your Dr. knows what they are doing. Don't go out and buy a Tin Foil hat, unless you believe dead fish have brain areas specifically for recognizing your face.

15370 cr points
26 / M / United States
Posted 8/5/16


ahhhh, thanks for the response. I only have a Minor in psych & was hoping someone would explain it in a little better detail to clarify.

Your statement distinguishing how MD's use it vs how Researchers use it was something I was not thinking of.

I love smart people.

What's your dissertation on? If you don't mind me prying.
49012 cr points
28 / M
Posted 8/5/16
I am actually in the middle of working on my second year project. So dissertation is a ways away. But in general I create neural network models of possible theories of human object recognition and test them against human behavioral effects. It's a lot of programming, math, and keeping up with the human visual neuroscience literature.

As a side note, my sympathy goes out to fMRI researchers who will be majorly affected by this. Most new researchers are taught a series of basic tasks, like how to do block designs in SPM and AFNI, and are given some rules of thumb to follow for setting the software up. fMRI is a pretty complicated technology, with very complicated math built on top of it in order to get inferential results. It is rare for people to understand the technology, the math, and experiment design to a degree that they can seriously question what they are told to do by established practitioners in the field. I do programming, matrix calculus, and tensor calculus as basic components of my work, and I think the statistical methods for fMRI are overly complicated and detailed. A certain amount of trust in one's tools is required to do much of anything, and some very common tools were just shown to be flawed in very serious ways, which can be a pretty big punch to the gut. Luckily it seems like there are good fixes for these problems, and in maybe 5 years fMRI will be back up to full steam and have much better and more replicable results. Sorry, that was a long side note.
49012 cr points
28 / M
Posted 8/7/16
Glad you asked the question!