Meta-Analytic Functional Decoding

# First, import the necessary modules and functions
import os

import matplotlib.pyplot as plt
from myst_nb import glue
from nilearn import plotting
from repo2data.repo2data import Repo2Data

# Install the data if running locally, or point to cached data if running on NeuroLibre
DATA_REQ_FILE = os.path.abspath("../binder/data_requirement.json")
repo2data = Repo2Data(DATA_REQ_FILE)
data_path = repo2data.install()
data_path = os.path.join(data_path[0], "data")

from nimare import dataset, meta

neurosynth_dset = dataset.Dataset.load(os.path.join(data_path, "neurosynth_dataset.pkl.gz"))

kern = meta.kernel.MKDAKernel(memory_limit="500mb")
neurosynth_dset_first500 = dataset.Dataset.load(
    os.path.join(data_path, "neurosynth_dataset_first500_with_mkda_ma.pkl.gz"),
)

# Collect features for decoding
# We use any features that appear in >10% and <90% of studies.
id_cols = ["id", "study_id", "contrast_id"]
frequency_threshold = 0.001
cols = neurosynth_dset_first500.annotations.columns
cols = [c for c in cols if c not in id_cols]
df = neurosynth_dset_first500.annotations.copy()[cols]
n_studies = df.shape[0]
feature_counts = (df >= frequency_threshold).sum(axis=0)
target_features = feature_counts.between(n_studies * 0.1, n_studies * 0.9)
target_features = target_features[target_features]
target_features = target_features.index.values
print(f"{len(target_features)} features selected.", flush=True)

continuous_map = os.path.join(data_path, "map_to_decode.nii.gz")
amygdala_roi = os.path.join(data_path, "amygdala_roi.nii.gz")
amygdala_ids = neurosynth_dset_first500.get_studies_by_mask(amygdala_roi)
95 features selected.

Functional decoding performed with meta-analytic data refers to methods that attempt to predict mental states from neuroimaging data using a large-scale meta-analytic database [Smith et al., 2009]. Such analyses may also be referred to as “informal reverse inference” [Poldrack, 2011], “functional characterization analysis” [Bzdok et al., 2013, Cieslik et al., 2013, Rottschy et al., 2013], “open-ended decoding” [Rubin et al., 2017], or simply “functional decoding” [Amft et al., 2015, Bzdok et al., 2013, Nickl-Jockschat et al., 2015]. While the terminology is far from standardized, we will refer to this method as meta-analytic functional decoding in order to distinguish it from alternative methods like multivariate decoding and model-based decoding [Poldrack, 2011]. Meta-analytic functional decoding is often used in conjunction with MACM, meta-analytic clustering, meta-analytic parcellation, and meta-ICA, in order to characterize resulting brain regions, clusters, or components. Meta-analytic functional decoding models have also been extended for the purpose of meta-analytic functional encoding, wherein text is used to generate statistical images [Rubin et al., 2017, Dockès et al., 2018, Nunes, 2018].

Four common approaches are correlation-based decoding, dot-product decoding, weight-sum decoding, and chi-square decoding. We will first discuss continuous decoding methods (i.e., correlation and dot-product), followed by discrete decoding methods (weight-sum and chi-square).

Decoding continuous inputs

When decoding unthresholded statistical maps (such as Fig. 16), the most common approaches are to simply correlate the input map with maps from the database, or to compute the dot product between the input map and each database map. In Neurosynth, meta-analyses are performed for each label (i.e., term or topic) in the database and then the input image is correlated with the resulting unthresholded statistical map from each meta-analysis. Performing statistical inference on the resulting correlations is not straightforward, however, as voxels display strong spatial correlations, and the true degrees of freedom are consequently unknown (and likely far smaller than the nominal number of voxels). In order to interpret the results of this decoding approach, users typically select some arbitrary number of top correlation coefficients ahead of time, and use the associated labels to describe the input map. However, such results should be interpreted with great caution.
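
To make the correlation approach concrete, the core computation can be sketched with nilearn and numpy. This is only an illustrative sketch, not NiMARE's implementation: the file names and the label_to_meta_map dictionary of per-label meta-analytic z-maps are hypothetical placeholders.

import numpy as np
from nilearn.masking import apply_mask

# Hypothetical inputs: a whole-brain mask and per-label meta-analytic z-maps
brain_mask = "brain_mask.nii.gz"
label_to_meta_map = {"fear": "fear_z.nii.gz", "memory": "memory_z.nii.gz"}

# Mask the input map and each meta-analytic map, then correlate them voxel-wise
input_data = apply_mask("input_map.nii.gz", brain_mask)
correlations = {}
for label, meta_map in label_to_meta_map.items():
    meta_data = apply_mask(meta_map, brain_mask)
    correlations[label] = np.corrcoef(input_data, meta_data)[0, 1]

# Describe the input map using the labels with the largest absolute correlations
top_labels = sorted(correlations, key=lambda lbl: abs(correlations[lbl]), reverse=True)[:10]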

fig, ax = plt.subplots(figsize=(10, 6))
plotting.plot_stat_map(continuous_map, axes=ax, figure=fig, annotate=False, draw_cross=False)
glue("figure_map_to_decode", fig, display=False)

Fig. 16 The unthresholded statistical map that will be used for continuous decoding.

This approach can also be applied to an image-based database like NeuroVault, either by correlating input data with meta-analyzed statistical maps, or by deriving distributions of correlation coefficients by grouping statistical maps in the database according to label. Using these distributions, it is possible to statistically compare labels in order to assess label significance. NiMARE includes methods for both correlation-based decoding and correlation distribution-based decoding, although the correlation-based decoding is better established and should be preferred over the correlation distribution-based decoding. As such, we will only show the CorrelationDecoder here.
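
Although we only demonstrate the CorrelationDecoder here, the general idea behind the distribution-based approach can be illustrated with synthetic data. In the sketch below, the correlation coefficients grouped under each label are made up, and the one-sample t-test on Fisher-z-transformed coefficients is just one plausible way to test a label's distribution; it is not necessarily how NiMARE's correlation distribution-based decoder performs inference.

import numpy as np
from scipy import stats

# Made-up correlation coefficients between the input map and every database
# map carrying a given label (e.g., grouped NeuroVault images)
rng = np.random.default_rng(0)
label_correlations = {
    "working memory": rng.normal(0.10, 0.05, size=50),
    "pain": rng.normal(0.02, 0.05, size=40),
}

# Fisher z-transform the coefficients, then test each label's distribution against zero
for label, rs in label_correlations.items():
    z_vals = np.arctanh(rs)
    t_stat, p_val = stats.ttest_1samp(z_vals, popmean=0)
    print(f"{label}: t={t_stat:.2f}, p={p_val:.4f}")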

Important

CorrelationDecoder currently runs very slowly. We strongly recommend running it on a subset of labels within the Dataset. It is also quite memory-intensive.

In this example, we have only run the decoder using features appearing in >10% and <90% of the first 500 studies in the Dataset. Additionally, we have pre-generated the results and will simply show the code that would generate those results, as the decoder requires too much memory for NeuroLibre’s servers.

from nimare import decode, meta

corr_decoder = decode.continuous.CorrelationDecoder(
    frequency_threshold=0.001,
    meta_estimator=meta.MKDADensity(kernel_transformer=kern, memory_limit=None),
    target_image="z",
    features=target_features,
    memory_limit="500mb",
)
corr_decoder.fit(neurosynth_dset_first500)
corr_df = corr_decoder.transform(continuous_map)
import pandas as pd

corr_df = pd.read_table(
    os.path.join(data_path, "correlation_decoder_results.tsv"),
    index_col="feature",
)
corr_df = corr_df.reindex(corr_df["r"].abs().sort_values(ascending=False).index)
corr_df = corr_df.iloc[:10]
glue("table_corr", corr_df)
feature                                    r
terms_abstract_tfidf__cingulate            0.448665
terms_abstract_tfidf__anterior cingulate   0.415396
terms_abstract_tfidf__anterior             0.391049
terms_abstract_tfidf__auditory             0.343746
terms_abstract_tfidf__conditions           0.338773
terms_abstract_tfidf__cortices             0.331308
terms_abstract_tfidf__information          0.330473
terms_abstract_tfidf__role                 0.326022
terms_abstract_tfidf__level                0.322515
terms_abstract_tfidf__healthy              0.319750

Fig. 17 The top ten terms, sorted by absolute correlation coefficient, from the correlation decoding method.

Decoding discrete inputs

Decoding regions of interest (ROIs) requires a different approach than decoding unthresholded statistical maps. One simple approach, used by GCLDA and implemented in the function gclda_decode_roi(), sums the P(topic|voxel) distribution across all voxels in the ROI to produce a value associated with each topic for the ROI. These weight-sum values are arbitrarily scaled and cannot be compared across ROIs. We will not demonstrate this method itself, both because of its simplicity and because it can currently only be applied to a GCLDA model, but the underlying operation is sketched below.
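
A minimal sketch of the weight-sum operation, using random placeholder arrays in place of a fitted GCLDA model's P(topic|voxel) distribution and a real ROI mask:

import numpy as np

# Placeholder arrays: a random topic-by-voxel distribution and a fake ROI mask
rng = np.random.default_rng(0)
n_voxels, n_topics = 10000, 50
p_topic_given_voxel = rng.dirichlet(np.ones(n_topics), size=n_voxels)  # (n_voxels, n_topics)
roi_mask = np.zeros(n_voxels, dtype=bool)
roi_mask[:250] = True  # pretend the first 250 voxels fall inside the ROI

# Sum P(topic|voxel) over the ROI's voxels to get one (arbitrarily scaled) weight per topic
topic_weights = p_topic_given_voxel[roi_mask].sum(axis=0)
top_topics = np.argsort(topic_weights)[::-1][:10]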

Before we dig into the other decoding methods that are available, let’s take a look at the ROI we want to decode.

fig, ax = plt.subplots(figsize=(10, 6))
plotting.plot_roi(amygdala_roi, axes=ax, figure=fig, annotate=False, draw_cross=False)
glue("figure_roi_to_decode", fig, display=False)

Fig. 18 The amygdala region of interest mask that will be used for discrete decoding.

One method that relies on correlations, much like the continuous correlation decoder, is the ROI association decoding method (ROIAssociationDecoder), originally implemented in the Neurosynth Python library. In this method, the coordinates from each study in the dataset are convolved with a kernel transformer to produce a modeled activation map. The resulting modeled activation maps are then masked with a region of interest (i.e., the target of the decoding), and the values are averaged within the ROI. These averaged modeled activation values are then correlated with the term weights for all labels in the dataset. This decoding method produces a single correlation coefficient for each of the dataset’s labels.
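
Before fitting the decoder, the computation it performs can be sketched with placeholder arrays; this is an illustration of the logic described above, not the ROIAssociationDecoder implementation itself.

import numpy as np

# Placeholder data: modeled activation maps (studies x voxels), a boolean ROI
# mask over voxels, and an annotations matrix of term weights (studies x labels)
rng = np.random.default_rng(0)
n_studies, n_voxels, n_labels = 500, 10000, 100
ma_maps = rng.random((n_studies, n_voxels))
roi = rng.random(n_voxels) < 0.02
annotations = rng.random((n_studies, n_labels))

# Average each study's modeled activation values within the ROI, then correlate
# the study-wise averages with each label's term weights
roi_means = ma_maps[:, roi].mean(axis=1)
r_per_label = np.array(
    [np.corrcoef(roi_means, annotations[:, j])[0, 1] for j in range(n_labels)]
)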

Important

Because the ROIAssociationDecoder generates modeled activation maps for all of the experiments in the Dataset, we will only fit this decoder to the first 500 studies.

from nimare import decode

assoc_decoder = decode.discrete.ROIAssociationDecoder(
    amygdala_roi,
    kernel_transformer=kern,
    u=0.05,
    correction="fdr_bh",
)
assoc_decoder.fit(neurosynth_dset_first500)
assoc_df = assoc_decoder.transform()
INFO:nimare.base:Retaining 2941/3228 features.
assoc_df = assoc_df.reindex(assoc_df["r"].abs().sort_values(ascending=False).index)
assoc_df = assoc_df.iloc[:10]
glue("table_assoc", assoc_df)
feature                                    r
terms_abstract_tfidf__amygdala             0.627186
terms_abstract_tfidf__fear                 0.334112
terms_abstract_tfidf__reinforcement        0.329516
terms_abstract_tfidf__neutral faces        0.320801
terms_abstract_tfidf__appraisal            0.317214
terms_abstract_tfidf__conditioning         0.286165
terms_abstract_tfidf__age sex              0.273753
terms_abstract_tfidf__neutral              0.272666
terms_abstract_tfidf__amygdala response    0.267777
terms_abstract_tfidf__olfactory            0.265687

Fig. 19 The top ten terms, sorted by absolute correlation coefficient, from the ROI association decoding method.

# Here we delete variables that are no longer needed, in order to reduce memory usage
del assoc_decoder, assoc_df

A more theoretically driven approach to ROI decoding is to use chi-square-based methods. The two methods which use chi-squared tests are the BrainMap decoding method and an adaptation of Neurosynth’s meta-analysis method.

In both chi-square-based methods, studies are first selected from a coordinate-based database according to some criterion. For example, if decoding a region of interest, users might select studies reporting at least one coordinate within 5 mm of the ROI. Metadata (such as ontological labels) for this subset of studies are then compared to those of the remaining, unselected portion of the database in a confusion matrix. For each label in the ontology, studies are divided into four groups: selected and label-positive (S+L+), selected and label-negative (S+L-), unselected and label-positive (S-L+), and unselected and label-negative (S-L-). Each method then compares these groups to evaluate both the consistency and the specificity of the relationship between the selection criterion and each label, in terms of both statistical significance and effect size.
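
As an illustration of this two-by-two comparison, a chi-squared test on made-up study counts could look like the following sketch. The counts are arbitrary; the actual decoders also compute the effect-size measures described in the following subsections.

from scipy.stats import chi2_contingency

# Made-up study counts for one label
n_selected_with_label = 40         # S+L+
n_selected_without_label = 60      # S+L-
n_unselected_with_label = 400      # S-L+
n_unselected_without_label = 9500  # S-L-

# Two-by-two contingency table: selection status x label status
table = [
    [n_selected_with_label, n_selected_without_label],
    [n_unselected_with_label, n_unselected_without_label],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2e}")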

BrainMap method

The BrainMap discrete decoding method, implemented in BrainMapDecoder, compares the distributions of studies with each label within the sample against those in a larger database while accounting for the number of foci from each study. Broadly speaking, this method assumes that the selection criterion is associated with one peak per study, which means that it is likely only appropriate for selection criteria based around foci, such as regions of interest. One common analysis, meta-analytic clustering, involves dividing studies within a database into meta-analytic groupings based on the spatial similarity of their modeled activation maps (i.e., study-wise pseudo-statistical maps produced by convolving coordinates with a kernel). The resulting sets of studies are often functionally decoded in order to build a functional profile associated with each meta-analytic grouping. While these groupings are defined as subsets of the database, they are not selected based on the location of an individual peak, and so weighting based on the number of foci would be inappropriate.

This decoding method produces four outputs for each label. First, the distribution of studies with the label within the sample is compared to the distributions of other labels within the sample. This consistency analysis produces both a measure of statistical significance (i.e., a p-value) and a measure of effect size (i.e., the likelihood of being selected given the presence of the label). Next, the studies in the sample are compared to the studies in the rest of the database. This specificity analysis produces a p-value and an effect size measure of the posterior probability of having the label given selection into the sample. A detailed algorithm description is presented in Appendix I: BrainMap Discrete Decoding.
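
The two effect-size measures can be approximated from simple study counts, as in the sketch below. This ignores the per-study foci weighting that the actual BrainMap algorithm applies, and expressing the consistency measure as a ratio relative to the base rate of selection is our own simplification, so this is only an illustration of the idea rather than a reimplementation of BrainMapDecoder, which is fit below.

# Made-up study counts for one label
n_selected_with_label = 40
n_selected = 100
n_with_label = 440
n_total = 10000

# Consistency-style effect size: probability of selection given the label,
# expressed here relative to the base rate of selection
p_selection_given_label = n_selected_with_label / n_with_label
p_selection = n_selected / n_total
likelihood = p_selection_given_label / p_selection

# Specificity-style effect size: posterior probability of the label given selection
p_label_given_selection = n_selected_with_label / n_selected
print(f"likelihood = {likelihood:.2f}, P(label | selection) = {p_label_given_selection:.3f}")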

brainmap_decoder = decode.discrete.BrainMapDecoder(
    frequency_threshold=0.001,
    u=0.05,
    correction="fdr_bh",
)
brainmap_decoder.fit(neurosynth_dset)
brainmap_df = brainmap_decoder.transform(amygdala_ids)
brainmap_df = brainmap_df.reindex(
    brainmap_df["probReverse"].abs().sort_values(ascending=False).index
)
brainmap_df = brainmap_df.iloc[:10]
glue("table_brainmap", brainmap_df)
Term                                       pForward      zForward   likelihoodForward  pReverse      zReverse   probReverse
terms_abstract_tfidf__magnetic             1.000000e+00  0.000000   1.627342           1.000000e+00  0.000000   0.011441
terms_abstract_tfidf__magnetic resonance   1.000000e+00  0.000000   1.569942           1.000000e+00  0.000000   0.010827
terms_abstract_tfidf__resonance            1.000000e+00  0.000000   1.560855           1.000000e+00  0.000000   0.010815
terms_abstract_tfidf__amygdala             6.176744e-07  4.985607   6.407830           2.152100e-18  8.749022   0.010609
terms_abstract_tfidf__functional magnetic  1.000000e+00  0.000000   1.664782           1.000000e+00  0.000000   0.010540
terms_abstract_tfidf__using                1.000000e+00  0.000000   1.205627           1.000000e+00  0.000000   0.007210
terms_abstract_tfidf__response             1.000000e+00  0.000000   1.902148           1.000000e+00  0.000000   0.007039
terms_abstract_tfidf__stimuli              1.000000e+00  0.000000   2.087218           1.000000e+00  0.000000   0.006286
terms_abstract_tfidf__human                1.000000e+00  0.000000   2.128483           1.000000e+00  0.000000   0.005356
terms_abstract_tfidf__neutral              5.556739e-03  2.772852   5.502956           2.753866e-06  4.688372   0.005320

Fig. 20 The top ten terms, sorted by reverse-inference posterior probability, from the BrainMap chi-squared decoding method.

# Here we delete variables that are no longer needed, in order to reduce memory usage
del brainmap_decoder, brainmap_df

Neurosynth method

The implementation of the MKDA Chi-squared meta-analysis method used by Neurosynth is quite similar to BrainMap’s method for decoding, if applied to annotations instead of modeled activation values. This method, implemented in NeurosynthDecoder, compares the distributions of studies with each label within the sample against those in a larger database, but, unlike the BrainMap method, does not take foci into account. For this reason, the Neurosynth method would likely be more appropriate for selection criteria not based on regions of interest (e.g., for characterizing meta-analytic groupings from a meta-analytic clustering analysis). However, the Neurosynth method requires user-provided information that BrainMap does not. Namely, in order to estimate probabilities for the consistency and specificity analyses with Bayes’ Theorem, the Neurosynth method requires a prior probability of a given label. Typically, a value of 0.5 is used (i.e., the estimated probability that an individual is undergoing a given mental process described by a label, barring any evidence from neuroimaging data, is predicted to be 50%). This is, admittedly, a poor prediction, which means that probabilities estimated based on this prior are not likely to be accurate, though they may still serve as useful estimates of effect size for the analysis.

Like the BrainMap method, this method produces four outputs for each label. For the consistency analysis, this method produces both a p-value and a conditional probability of selection given the presence of the label and the prior probability of having the label. For the specificity analysis, the Neurosynth method produces both a p-value and a posterior probability of presence of the label given selection and the prior probability of having the label. A detailed algorithm description is presented in Appendix II: Neurosynth Discrete Decoding.
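
The Bayes' theorem step, with the uniform prior of 0.5, can be sketched with made-up conditional probabilities. This illustrates how the prior enters the reverse-inference probability; it is not a reimplementation of NeurosynthDecoder, which is fit below with FDR correction.

# Made-up conditional probabilities for one label
prior = 0.5                        # assumed prior probability of the label
p_selection_given_label = 0.08     # estimated from studies with the label
p_selection_given_no_label = 0.02  # estimated from studies without the label

# P(label | selection) via Bayes' theorem
numerator = p_selection_given_label * prior
denominator = numerator + p_selection_given_no_label * (1 - prior)
p_label_given_selection = numerator / denominator
print(f"P(label | selection) = {p_label_given_selection:.3f}")  # -> 0.800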

neurosynth_decoder = decode.discrete.NeurosynthDecoder(
    frequency_threshold=0.001,
    u=0.05,
    correction="fdr_bh",
)
neurosynth_decoder.fit(neurosynth_dset)
neurosynth_df = neurosynth_decoder.transform(amygdala_ids)
neurosynth_df = neurosynth_df.reindex(
    neurosynth_df["probReverse"].abs().sort_values(ascending=False).index
)
neurosynth_df = neurosynth_df.iloc[:10]
glue("table_neurosynth", neurosynth_df)
Term                                       pForward       zForward   probForward  pReverse      zReverse   probReverse
terms_abstract_tfidf__neutral faces        6.163179e-12   6.875826   0.024702     4.685748e-31  11.588999  0.971583
terms_abstract_tfidf__conditioning         1.026669e-16   8.301660   0.022543     1.221816e-32  11.897339  0.970346
terms_abstract_tfidf__olfactory            1.482800e-02   2.436553   0.021075     2.058612e-12  7.030455   0.961833
terms_abstract_tfidf__conditioned          7.813282e-05   3.950056   0.017439     2.697558e-13  7.308692   0.955737
terms_abstract_tfidf__reinforcement        7.813282e-05   3.950056   0.016397     2.058612e-12  7.030455   0.952900
terms_abstract_tfidf__unpleasant           7.813282e-05   3.950056   0.015924     4.616361e-12  6.916899   0.951487
terms_abstract_tfidf__amygdala             1.953150e-123  23.628732  0.006013     2.152100e-18  8.749022   0.947992
terms_abstract_tfidf__amygdala response    1.482800e-02   2.436553   0.014081     1.526484e-07  5.249336   0.942719
terms_abstract_tfidf__differentiated       1.482800e-02   2.436553   0.014081     1.526484e-07  5.249336   0.942719
terms_abstract_tfidf__pleasant             1.482800e-02   2.436553   0.013307     5.206710e-07  5.018535   0.939358

Fig. 21 The top ten terms, sorted by reverse-inference posterior probability, from the Neurosynth chi-squared decoding method.

# Here we delete variables that are no longer needed, in order to reduce memory usage
del neurosynth_decoder, neurosynth_df

In both methods, the database acts as an estimate of the underlying distribution of labels in the real world, such that the probability of having a peak in an ROI given the presence of the label might be interpreted as the probability of activating a specific brain region given that an individual is experiencing a given mental state. This is a poor interpretation, because any database of neuroimaging results will be skewed toward the interests of the field rather than reflecting the distribution of mental states or processes experienced by humans, and so decoding results must be interpreted with extreme caution. It is important not to place too much emphasis on the results of functional decoding analyses; nevertheless, they are useful in that they provide a quantitative grounding for the kinds of interpretations generally included in discussion sections, which are otherwise backed only by informal literature searches or prior knowledge.

The meta-analytic functional decoding methods in NiMARE provide a very rudimentary approach for open-ended decoding (i.e., decoding across a very large range of mental states) that can be used with resources like NeuroVault. However, standard classification methods have also been applied to datasets from NeuroVault (e.g., Varoquaux et al. [2018]), although these methods do not fall under NiMARE’s scope.