(Advanced) Using a custom background mesh for native-space visualization¶
This notebook demonstrates how to generate a background mesh from a NIfTI image and use it in the plot_subcortical() and plot_tracts() functions.
Why use a custom background mesh? When your data lives in a space other than a template space (such as fsaverage), using the template as the background can produce misaligned structures. To circumvent this issue, we can generate a glass brain from a NIfTI image and use it as the background mesh.
Inputs and outputs¶
We start with:
- Atlas volume (.nii.gz): the 3D NIfTI file where each voxel contains an integer region ID.
- Atlas metadata (.txt, .csv, etc.): a file listing the string names for each integer ID. (Note: because every atlas creator formats this differently, we will write a quick loop to parse this into a Python dictionary before feeding it to the builder.)
- Tractography geometry (.trk or .tck): the streamline files.
- NIfTI image (.nii.gz): the 3D NIfTI file used to build the background mesh. This can be a preprocessed anatomical image, a brain mask, etc.
We need to generate:
- Surface meshes (.vtk): a dedicated folder containing an individual 3D mesh file for every extracted subcortical region. A complete tutorial is available in docs/tutorials/custom_subcortical_atlas.ipynb.
- Background mesh: a mesh derived from a NIfTI file.
# Define imports
import os
import pooch
import yabplot as yab
# define where your source NIfTI and text files are located
# you can download the atlas used in this tutorial from:
# https://www.gin.cnrs.fr/wp-content/uploads/AAL3v2_for_SPM12.tar.gz
aal_txt = "/Users/anthonygagnon/Downloads/AAL3/AAL3v1_1mm.nii.txt"
aal_nii = "/Users/anthonygagnon/Downloads/AAL3/AAL3v1_1mm.nii.gz"
# Define output folder for reconstructed meshes.
dir_full_subcortical = "./subcortical/AAL3v1"
# This section parses the AAL3 text file into a standard Python dictionary mapping integer IDs to region names.
atlas_labels = {}
with open(aal_txt, 'r') as f_in:
for line in f_in:
parts = line.strip().split()
if len(parts) >= 2:
try:
rid = int(parts[0])
name = parts[1].replace(' ', '_').replace('/', '-')
atlas_labels[rid] = name
except ValueError:
continue
print(f"successfully parsed {len(atlas_labels)} total regions from text file.")
print(atlas_labels)
# Filter the atlas labels to extract subcortical and cerebellar regions.
# define all subcortical keywords present in the mixed AAL3 atlas
subcortical_keywords = [
'Hippocampus', 'Amygdala', 'Caudate', 'Putamen', 'Pallidum', 'Thalamus', 'Thal',
'Cerebellum', 'Vermis', 'N_Acc', 'VTA', 'SN', 'Red_N', 'LC', 'Raphe'
]
print("--- building atlas 1: full subcortical (using include_list) ---")
yab.build_subcortical_atlas(
nii_path=aal_nii,
labels_dict=atlas_labels,
out_dir=dir_full_subcortical,
include_list=subcortical_keywords,
smooth_i=20, smooth_f=0.7
)
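The include_list above presumably keeps every region whose name contains at least one of the keywords. A minimal sketch of that substring-matching logic on a toy label dictionary (this mirrors the described behavior, not yabplot's actual code):

```python
# Toy subset of the parsed AAL3 label dictionary
labels = {
    1: 'Precentral_L',
    41: 'Hippocampus_L',
    42: 'Hippocampus_R',
    81: 'Thalamus_L',
    121: 'Thal_AV_L',
}
keywords = ['Hippocampus', 'Thal']

# keep every region whose name contains at least one keyword
filtered = {rid: name for rid, name in labels.items()
            if any(kw in name for kw in keywords)}
print(filtered)
# {41: 'Hippocampus_L', 42: 'Hippocampus_R', 81: 'Thalamus_L', 121: 'Thal_AV_L'}
```

Note that 'Thal' also matches 'Thalamus_L', so choose keywords with partial matches in mind.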
# check the number of regions
regions_full = yab.get_atlas_regions(atlas=None, category='subcortical', custom_atlas_path=dir_full_subcortical)
print(f"full atlas: found {len(regions_full)} meshes.")
successfully parsed 170 total regions from text file.
{1: 'Precentral_L', 2: 'Precentral_R', 3: 'Frontal_Sup_2_L', 4: 'Frontal_Sup_2_R', 5: 'Frontal_Mid_2_L', 6: 'Frontal_Mid_2_R', 7: 'Frontal_Inf_Oper_L', 8: 'Frontal_Inf_Oper_R', 9: 'Frontal_Inf_Tri_L', 10: 'Frontal_Inf_Tri_R', 11: 'Frontal_Inf_Orb_2_L', 12: 'Frontal_Inf_Orb_2_R', 13: 'Rolandic_Oper_L', 14: 'Rolandic_Oper_R', 15: 'Supp_Motor_Area_L', 16: 'Supp_Motor_Area_R', 17: 'Olfactory_L', 18: 'Olfactory_R', 19: 'Frontal_Sup_Medial_L', 20: 'Frontal_Sup_Medial_R', 21: 'Frontal_Med_Orb_L', 22: 'Frontal_Med_Orb_R', 23: 'Rectus_L', 24: 'Rectus_R', 25: 'OFCmed_L', 26: 'OFCmed_R', 27: 'OFCant_L', 28: 'OFCant_R', 29: 'OFCpost_L', 30: 'OFCpost_R', 31: 'OFClat_L', 32: 'OFClat_R', 33: 'Insula_L', 34: 'Insula_R', 35: 'Cingulate_Ant_L', 36: 'Cingulate_Ant_R', 37: 'Cingulate_Mid_L', 38: 'Cingulate_Mid_R', 39: 'Cingulate_Post_L', 40: 'Cingulate_Post_R', 41: 'Hippocampus_L', 42: 'Hippocampus_R', 43: 'ParaHippocampal_L', 44: 'ParaHippocampal_R', 45: 'Amygdala_L', 46: 'Amygdala_R', 47: 'Calcarine_L', 48: 'Calcarine_R', 49: 'Cuneus_L', 50: 'Cuneus_R', 51: 'Lingual_L', 52: 'Lingual_R', 53: 'Occipital_Sup_L', 54: 'Occipital_Sup_R', 55: 'Occipital_Mid_L', 56: 'Occipital_Mid_R', 57: 'Occipital_Inf_L', 58: 'Occipital_Inf_R', 59: 'Fusiform_L', 60: 'Fusiform_R', 61: 'Postcentral_L', 62: 'Postcentral_R', 63: 'Parietal_Sup_L', 64: 'Parietal_Sup_R', 65: 'Parietal_Inf_L', 66: 'Parietal_Inf_R', 67: 'SupraMarginal_L', 68: 'SupraMarginal_R', 69: 'Angular_L', 70: 'Angular_R', 71: 'Precuneus_L', 72: 'Precuneus_R', 73: 'Paracentral_Lobule_L', 74: 'Paracentral_Lobule_R', 75: 'Caudate_L', 76: 'Caudate_R', 77: 'Putamen_L', 78: 'Putamen_R', 79: 'Pallidum_L', 80: 'Pallidum_R', 81: 'Thalamus_L', 82: 'Thalamus_R', 83: 'Heschl_L', 84: 'Heschl_R', 85: 'Temporal_Sup_L', 86: 'Temporal_Sup_R', 87: 'Temporal_Pole_Sup_L', 88: 'Temporal_Pole_Sup_R', 89: 'Temporal_Mid_L', 90: 'Temporal_Mid_R', 91: 'Temporal_Pole_Mid_L', 92: 'Temporal_Pole_Mid_R', 93: 'Temporal_Inf_L', 94: 'Temporal_Inf_R', 95: 
'Cerebellum_Crus1_L', 96: 'Cerebellum_Crus1_R', 97: 'Cerebellum_Crus2_L', 98: 'Cerebellum_Crus2_R', 99: 'Cerebellum_3_L', 100: 'Cerebellum_3_R', 101: 'Cerebellum_4_5_L', 102: 'Cerebellum_4_5_R', 103: 'Cerebellum_6_L', 104: 'Cerebellum_6_R', 105: 'Cerebellum_7b_L', 106: 'Cerebellum_7b_R', 107: 'Cerebellum_8_L', 108: 'Cerebellum_8_R', 109: 'Cerebellum_9_L', 110: 'Cerebellum_9_R', 111: 'Cerebellum_10_L', 112: 'Cerebellum_10_R', 113: 'Vermis_1_2', 114: 'Vermis_3', 115: 'Vermis_4_5', 116: 'Vermis_6', 117: 'Vermis_7', 118: 'Vermis_8', 119: 'Vermis_9', 120: 'Vermis_10', 121: 'Thal_AV_L', 122: 'Thal_AV_R', 123: 'Thal_LP_L', 124: 'Thal_LP_R', 125: 'Thal_VA_L', 126: 'Thal_VA_R', 127: 'Thal_VL_L', 128: 'Thal_VL_R', 129: 'Thal_VPL_L', 130: 'Thal_VPL_R', 131: 'Thal_IL_L', 132: 'Thal_IL_R', 133: 'Thal_Re_L', 134: 'Thal_Re_R', 135: 'Thal_MDm_L', 136: 'Thal_MDm_R', 137: 'Thal_MDl_L', 138: 'Thal_MDl_R', 139: 'Thal_LGN_L', 140: 'Thal_LGN_R', 141: 'Thal_MGN_L', 142: 'Thal_MGN_R', 143: 'Thal_PuI_L', 144: 'Thal_PuI_R', 145: 'Thal_PuM_L', 146: 'Thal_PuM_R', 147: 'Thal_PuA_L', 148: 'Thal_PuA_R', 149: 'Thal_PuL_L', 150: 'Thal_PuL_R', 151: 'ACC_sub_L', 152: 'ACC_sub_R', 153: 'ACC_pre_L', 154: 'ACC_pre_R', 155: 'ACC_sup_L', 156: 'ACC_sup_R', 157: 'N_Acc_L', 158: 'N_Acc_R', 159: 'VTA_L', 160: 'VTA_R', 161: 'SN_pc_L', 162: 'SN_pc_R', 163: 'SN_pr_L', 164: 'SN_pr_R', 165: 'Red_N_L', 166: 'Red_N_R', 167: 'LC_L', 168: 'LC_R', 169: 'Raphe_D', 170: 'Raphe_M'}
--- building atlas 1: full subcortical (using include_list) ---
filtered down to 82 subcortical regions to extract.
extracting: Hippocampus_L (id 41)...
extracting: Hippocampus_R (id 42)...
extracting: Amygdala_L (id 45)...
extracting: Amygdala_R (id 46)...
extracting: Caudate_L (id 75)...
extracting: Caudate_R (id 76)...
extracting: Putamen_L (id 77)...
extracting: Putamen_R (id 78)...
extracting: Pallidum_L (id 79)...
extracting: Pallidum_R (id 80)...
[WARNING] Thalamus_L is empty in the volume!
[WARNING] Thalamus_R is empty in the volume!
extracting: Cerebellum_Crus1_L (id 95)...
extracting: Cerebellum_Crus1_R (id 96)...
extracting: Cerebellum_Crus2_L (id 97)...
extracting: Cerebellum_Crus2_R (id 98)...
extracting: Cerebellum_3_L (id 99)...
extracting: Cerebellum_3_R (id 100)...
extracting: Cerebellum_4_5_L (id 101)...
extracting: Cerebellum_4_5_R (id 102)...
extracting: Cerebellum_6_L (id 103)...
extracting: Cerebellum_6_R (id 104)...
extracting: Cerebellum_7b_L (id 105)...
extracting: Cerebellum_7b_R (id 106)...
extracting: Cerebellum_8_L (id 107)...
extracting: Cerebellum_8_R (id 108)...
extracting: Cerebellum_9_L (id 109)...
extracting: Cerebellum_9_R (id 110)...
extracting: Cerebellum_10_L (id 111)...
extracting: Cerebellum_10_R (id 112)...
extracting: Vermis_1_2 (id 113)...
extracting: Vermis_3 (id 114)...
extracting: Vermis_4_5 (id 115)...
extracting: Vermis_6 (id 116)...
extracting: Vermis_7 (id 117)...
extracting: Vermis_8 (id 118)...
extracting: Vermis_9 (id 119)...
extracting: Vermis_10 (id 120)...
extracting: Thal_AV_L (id 121)...
extracting: Thal_AV_R (id 122)...
extracting: Thal_LP_L (id 123)...
extracting: Thal_LP_R (id 124)...
extracting: Thal_VA_L (id 125)...
extracting: Thal_VA_R (id 126)...
extracting: Thal_VL_L (id 127)...
extracting: Thal_VL_R (id 128)...
extracting: Thal_VPL_L (id 129)...
extracting: Thal_VPL_R (id 130)...
extracting: Thal_IL_L (id 131)...
extracting: Thal_IL_R (id 132)...
extracting: Thal_Re_L (id 133)...
[WARNING] Thal_Re_L is too small to form a 3D mesh (volume: 0.0000 mm³). dropping from atlas.
extracting: Thal_Re_R (id 134)...
[WARNING] Thal_Re_R is too small to form a 3D mesh (volume: 0.0000 mm³). dropping from atlas.
extracting: Thal_MDm_L (id 135)...
extracting: Thal_MDm_R (id 136)...
extracting: Thal_MDl_L (id 137)...
extracting: Thal_MDl_R (id 138)...
extracting: Thal_LGN_L (id 139)...
extracting: Thal_LGN_R (id 140)...
extracting: Thal_MGN_L (id 141)...
extracting: Thal_MGN_R (id 142)...
extracting: Thal_PuI_L (id 143)...
extracting: Thal_PuI_R (id 144)...
extracting: Thal_PuM_L (id 145)...
extracting: Thal_PuM_R (id 146)...
extracting: Thal_PuA_L (id 147)...
extracting: Thal_PuA_R (id 148)...
extracting: Thal_PuL_L (id 149)...
extracting: Thal_PuL_R (id 150)...
extracting: N_Acc_L (id 157)...
extracting: N_Acc_R (id 158)...
extracting: VTA_L (id 159)...
extracting: VTA_R (id 160)...
extracting: SN_pc_L (id 161)...
extracting: SN_pc_R (id 162)...
extracting: SN_pr_L (id 163)...
extracting: SN_pr_R (id 164)...
extracting: Red_N_L (id 165)...
extracting: Red_N_R (id 166)...
extracting: LC_L (id 167)...
extracting: LC_R (id 168)...
extracting: Raphe_D (id 169)...
extracting: Raphe_M (id 170)...
subcortical atlas successfully saved to: ./subcortical/AAL3v1
full atlas: found 78 meshes.
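The warnings above (e.g. Thalamus_L reported empty) can be diagnosed by checking which integer IDs are actually present in the volume. A sketch using numpy; in practice you would load the real atlas with nibabel (assumed installed), but here a tiny synthetic label volume stands in:

```python
import numpy as np

# in practice: vol = nibabel.load(aal_nii).get_fdata().astype(int)
# here, a tiny synthetic label volume stands in for the atlas
vol = np.zeros((4, 4, 4), dtype=int)
vol[0, 0, 0] = 41   # Hippocampus_L present
vol[1, 1, 1] = 45   # Amygdala_L present

labels = {41: 'Hippocampus_L', 45: 'Amygdala_L', 81: 'Thalamus_L'}

present = set(np.unique(vol)) - {0}   # 0 is background
missing = [name for rid, name in labels.items() if rid not in present]
print(missing)  # ['Thalamus_L']
```

In AAL3, whole-thalamus labels are superseded by the thalamic subnuclei (Thal_*), which is why the whole-thalamus IDs come up empty here.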
Step 2: Building the background mesh¶
One can use almost any NIfTI image to build this background mesh. Brain-extracted images work best.
You can customize the behavior of the function by tweaking these parameters:
- smooth_i: number of smoothing iterations to perform. More iterations produce a smoother mesh.
- smooth_f: determines how aggressively vertices are moved during each smoothing iteration. Lower values are more stable and better suited to complex meshes.
- threshold: a blur is optionally applied before mesh creation; this threshold selects which voxels to keep after blurring.
- blur_sigma: standard deviation of the Gaussian blur (in voxel units).
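The interaction between blur_sigma and threshold can be illustrated on a synthetic mask: blurring then thresholding fills small holes and suppresses isolated specks before meshing. This sketch uses scipy and numpy (assumed installed) and mirrors the pre-meshing step conceptually, not yabplot's exact implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# binary mask: a solid block with a single-voxel hole in its center
mask = np.zeros((9, 9, 9))
mask[2:7, 2:7, 2:7] = 1.0
mask[4, 4, 4] = 0.0          # the hole

blurred = gaussian_filter(mask, sigma=1.5)

# thresholding after the blur closes the hole: neighboring voxels
# bleed intensity into it, pushing its value back above 0.5
kept = blurred > 0.5
print(bool(kept[4, 4, 4]))   # hole is filled
print(bool(kept[0, 0, 0]))   # far-away background stays excluded
```

Raising blur_sigma closes larger gaps but also rounds off fine anatomical detail, which is usually acceptable for a glass-brain background.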
# Use your desired NIfTI file to create the background mesh. Here, we will use the full AAL3 atlas file.
bg_mesh = yab.load_nii_as_mesh(
nii_path=aal_nii,
smooth_i=20,
smooth_f=0.1,
threshold=0.5,
blur_sigma=1.5
)
Step 3: Combine both into your final figure¶
Simply provide your newly created background mesh to the plot_subcortical function as the bmesh parameter.
ax = yab.plot_subcortical(
custom_atlas_path=dir_full_subcortical,
views=['superior', 'anterior', 'left_lateral'],
bmesh=bg_mesh,
bmesh_alpha=0.2,
bmesh_color='lightgray'
)
Using a custom background mesh for white matter tract visualization¶
In this section, we will apply the same procedure as for the subcortical atlas, but using the plot_tracts function to visualize white matter bundles.
Step 1: Fetch the atlas tracts¶
For the purpose of this example, we use bundles from an atlas. However, this could also be applied to custom bundles, by using the custom_atlas_path argument (similarly to what we did for the subcortical atlas section above). The first steps here are identical to the ones in docs/tutorials/plotting_tractometry.ipynb.
# fetch an example 3D fractional anisotropy (FA) volume from neurovault
fa_url = "https://neurovault.org/media/images/264/JHU-ICBM-FA-2mm.nii.gz"
fa_path = pooch.retrieve(url=fa_url, known_hash=None, path=pooch.os_cache("yabplot"),
fname="sample_fa_map.nii.gz")
# sample the volume across all tracts in the atlas
# this returns a dictionary mapping tract names to their 1D scalar arrays.
tract_data = yab.project_vol2tract_atlas(nii_path=fa_path, atlas="xtract_medium")
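Based on the description above, the returned structure maps each tract name to a 1D array of sampled scalar values. A hypothetical sketch of consuming such a dictionary (the tract names and values here are made up for illustration; numpy assumed installed):

```python
import numpy as np

# the described return structure: tract name -> 1D array of scalar values
tract_data = {
    'IFOF_L': np.array([0.32, 0.41, 0.38]),
    'CST_R':  np.array([0.55, 0.60]),
}

# e.g. summarize the mean sampled FA per tract
mean_fa = {name: float(vals.mean()) for name, vals in tract_data.items()}
print(mean_fa)
```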
Step 2: Build the background mesh from the FA map¶
Similar to what we did above, we will use the downloaded FA map to reconstruct a background mesh. One important note: we need to tweak the threshold value to account for the range of possible values in an FA map (0 to 1), so the default of 0.5 will not work. Setting it to a more conservative value of 0.01 should do the trick. We can also increase the blurring sigma; this smooths out the boundaries and reduces the effect of noise.
bg_mesh = yab.load_nii_as_mesh(
nii_path=fa_path,
smooth_i=20,
smooth_f=0.1,
threshold=0.01,
blur_sigma=3
)
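To see why the default threshold of 0.5 fails here, consider that most brain voxels in an FA map sit well below 0.5. A quick check on a synthetic FA-like array (values and sizes are made up for illustration; numpy assumed installed):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "FA map": background zeros plus brain voxels mostly below 0.6
fa = np.zeros(10_000)
fa[:4_000] = rng.uniform(0.1, 0.6, size=4_000)

# the default threshold of 0.5 discards most brain voxels,
# while a conservative 0.01 keeps the whole brain
print('kept at 0.5 :', int((fa > 0.5).sum()))
print('kept at 0.01:', int((fa > 0.01).sum()))
```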
Step 3: Combine your background mesh and the white matter bundles into a clean figure¶
To use your background mesh, simply pass your resulting bg_mesh to the bmesh argument of the plot_tracts() function.
ax = yab.plot_tracts(
atlas="xtract_medium",
data=tract_data,
views=["superior", "anterior", "left_lateral"],
bmesh=bg_mesh,
bmesh_alpha=0.2,
cmap="berlin"
)
Step 4: Visualize a single tract using the same background mesh¶
Now that we have a working background mesh, we can also visualize a single tract using the same method as described in docs/tutorials/plotting_tractometry.ipynb.
# locate the specific .trk file you want to map
atlas_dir = yab.data._resolve_resource_path('xtract_large', 'tracts')
tract_files = yab.data._find_tract_files(atlas_dir)
# let's get the path to the left inferior fronto-occipital fasciculus (IFOF)
ifof_l_path = tract_files['IFOF_L']
# sample the volume for just this one tract and add it to a dictionary
sampled_array = yab.project_vol2tract(trk_path=ifof_l_path, nii_path=fa_path)
single_tract_data = {'IFOF_L': sampled_array}
ax = yab.plot_tracts(
atlas="xtract_large",
data=single_tract_data,
views=["superior", "anterior", "left_lateral"],
bmesh=bg_mesh,
nan_alpha=0.0, # hide other tracts
bmesh_alpha=0.2,
cmap="magma"
)
ax.set_title("Single tract (IFOF_L) with custom background mesh")
Text(0.5, 1.0, 'Single tract (IFOF_L) with custom background mesh')