Axon Segmentation Analysis of Whole-Brain Fluorescence Images

[ ]:
from brainlit.BrainLine.util import (
    json_to_points,
    download_subvolumes,
)
from brainlit.BrainLine.parse_ara import *
import xml.etree.ElementTree as ET
from brainlit.BrainLine.imports import *
from brainlit.BrainLine.apply_ilastik import (
    ApplyIlastik,
    ApplyIlastik_LargeImage,
    plot_results,
    examine_threshold,
)
from brainlit.BrainLine.analyze_results import (
    AxonDistribution,
    collect_regional_segmentation,
)

%gui qt5

1. Before using this notebook

1a. Install brainlit and its dependencies

1b. Write images to s3 using CloudReg

- e.g. python -m cloudreg.scripts.create_precomputed_volumes --s3_input_paths <path-to-stitched-images> --s3_output_paths <s3-address-for-output> --voxel_size <x-resolution> <y-resolution> <z-resolution> --num_procs <num-cpus> --resample_iso <boolean-to-resample-isotropically>
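Once the volumes are written, a quick way to confirm a layer is readable is to open it with CloudVolume (a minimal sketch; the s3 path is a placeholder for your own --s3_output_paths address, and CloudVolume is available via the imports at the top of this notebook):

[ ]:
# Placeholder path: substitute the s3 address passed to --s3_output_paths above.
vol = CloudVolume("precomputed://s3://<bucket>/<sample>/antibody")
print(vol.shape, vol.resolution)  # volume dimensions (voxels) and voxel size (nm)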

1c. Make point annotations in neuroglancer to identify subvolumes for validation (and possibly training)

- instructions: https://neurodata.io/help/neuroglancer-pt-annotations/
- e.g. the point annotation layer appears in the neuroglancer JSON state as:

{
"type":"pointAnnotation",
"name": "val",
"points": []
}
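Such a state can then be parsed with json_to_points, which groups the annotated points by layer name (a minimal sketch; the URL is a placeholder for your own saved neuroglancer state link):

[ ]:
url = "https://<your-neuroglancer-state-link>"  # placeholder
points_by_layer = json_to_points(url)  # dict mapping layer name -> point coordinates
val_points = points_by_layer["val"]  # points from the annotation layer named "val"
print(f"Found {len(val_points)} validation points")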

1d. Update the axon_data.json file

[ ]:
brainlit_path = Path(os.path.abspath(""))
brainlit_path = brainlit_path.parents[3]
print(f"Path to brainlit: {brainlit_path}")
data_file = brainlit_path / "brainlit" / "BrainLine" / "data" / "axon_data.json"

with open(data_file) as f:
    data = json.load(f)
brain2paths = data["brain2paths"]
for sample_id in brain2paths.keys():
    if (
        "base" in brain2paths[sample_id].keys()
        and "val_info" in brain2paths[sample_id].keys()
    ):
        base = brain2paths[sample_id]["base"]
        if "http" in base:
            print(f"Sample {sample_id}: http in basepath, which may cause write errors")

        try:
            url = brain2paths[sample_id]["val_info"]["url"]
            layer = brain2paths[sample_id]["val_info"]["layer"]
            pts = json_to_points(url)[layer]
        except Exception:
            print(f"Sample {sample_id}: Error with val_info")

        if "train_info" in brain2paths[sample_id].keys():
            try:
                url = brain2paths[sample_id]["train_info"]["url"]
                layer = brain2paths[sample_id]["train_info"]["layer"]
                pts = json_to_points(url)[layer]
            except Exception:
                print(f"Sample {sample_id}: Error with train_info")
    else:
        print(f"Sample {sample_id}: Does not conform to desired format")

Steps 2 and 4-6 below can alternatively be run as a script: brainlit/BrainLine/scripts/axon_validate.py

2. Download benchmark data

*Inputs*

[ ]:
antibody_layer = "antibody"
background_layer = "background"
endogenous_layer = "endogenous"

brain = "test"  # brain ID
axon_data_dir = (
    str(brainlit_path.parents[0]) + "/"
)  # path to directory where training/validation data should be stored
dataset_to_save = "val"  # train or val

Set up paths

[ ]:
if brain not in brain2paths.keys():
    raise ValueError(f"brain {brain} not an entry in brain2paths in axon_data.json file")

if (
    dataset_to_save not in ["train", "val"]
    or f"{dataset_to_save}_info" not in brain2paths[brain].keys()
):
    raise ValueError(f"{dataset_to_save}_info not in brain2paths[{brain}].keys()")

cvol_base = brain2paths[brain]["base"]
layer_names = [antibody_layer, background_layer, endogenous_layer]

for layer in layer_names:
    try:
        CloudVolume(cvol_base + layer)
    except Exception:
        print(f"Sample {brain}: Layer {layer} not found in {cvol_base}")

Download data

[ ]:
download_subvolumes(
    axon_data_dir,
    brain_id=brain,
    data_file=data_file,
    layer_names=layer_names,
    dataset_to_save=dataset_to_save,
)

3. View downloaded data (optional)

*Inputs*

[ ]:
fname = "/Users/thomasathey/Documents/mimlab/mouselight/brainlit_parent/braintest/val/2931_4163_1602.h5"  # path to file for viewing
scale = [1.8, 1.8, 2]  # voxel size in microns
[ ]:
with h5py.File(fname, "r") as f:
    pred = f.get("image_3channel")
    image_bg = pred[0, :, :, :]
    image_fg = pred[1, :, :, :]
    image_endo = pred[2, :, :, :]

viewer = napari.Viewer(ndisplay=3)
viewer.add_image(image_fg, scale=scale)
viewer.add_image(image_bg, scale=scale)
viewer.add_image(image_endo, scale=scale)
viewer.scale_bar.visible = True
viewer.scale_bar.unit = "um"

4. Apply Ilastik to validation data

You will need to do two things:

- add annotations to the downloaded data (for me, partial labels on 3 of the z-slices using ilastik)
- apply the axon segmentation model to the downloaded data

Results should be located in the same directory as the subvolumes, with “_Probabilities” appended to the file names. You can do this programmatically (below), or with the ilastik GUI (which is sometimes faster).

Note: make sure foreground/background labels are matched between the model and your annotations (for me, blue/1 = axon, yellow/0 = background)

[ ]:
project_path = "/Users/thomasathey/Documents/mimlab/mouselight/ailey/detection_axon/axon_segmentation.ilp"  # path to ilastik model to be used
ilastik_path = (
    "/Applications/ilastik-1.4.0b21-OSX.app/Contents/ilastik-release/run_ilastik.sh"
)  # path to ilastik executable
brains = [brain]
[ ]:
applyilastik = ApplyIlastik(
    ilastk_path=ilastik_path,  # note: the keyword is spelled "ilastk_path" in the brainlit API
    project_path=project_path,
    brains_path=axon_data_dir,
    brains=brains,
)
applyilastik.process_subvols()
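After processing, it is worth confirming that every subvolume has a matching probabilities file (a sketch, assuming the brain<id>/val directory layout from the step 3 example path and .h5 output files):

[ ]:
val_dir = Path(axon_data_dir) / f"brain{brain}" / dataset_to_save  # assumed layout, see step 3
for im_path in val_dir.glob("*.h5"):
    if "Probabilities" in im_path.name:
        continue  # skip files that are themselves ilastik outputs
    prob_path = im_path.with_name(im_path.stem + "_Probabilities.h5")
    print(f"{prob_path.name}: {'found' if prob_path.exists() else 'MISSING'}")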

5. Check results

[ ]:
plot_results(
    data_dir=axon_data_dir, brain_ids=[brain], positive_channel=1, object_type="axon"
)

If the results above are not adequate, improve the model and try again

In my case, I identify more subvolumes from the sample at hand using the same process as for the validation data, add them to the model as training data, and retrain.
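For example, additional training subvolumes can be pulled exactly as in step 2, just with dataset_to_save="train" (a sketch, assuming a train_info entry exists for this sample in axon_data.json):

[ ]:
download_subvolumes(
    axon_data_dir,
    brain_id=brain,
    data_file=data_file,
    layer_names=layer_names,
    dataset_to_save="train",  # pull the subvolumes marked as training data
)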

Examine best threshold

[ ]:
examine_threshold(
    data_dir=axon_data_dir,
    brain_id=brain,
    threshold=0.38,
    object_type="axon",
    positive_channel=1,
)

6. Make annotation layers

Transformed layers

[ ]:
atlas_vol = CloudVolume(
    "precomputed://https://open-neurodata.s3.amazonaws.com/ara_2016/sagittal_10um/annotation_10um_2017"
)
for layer in [
    antibody_layer,
    background_layer,
    "axon_mask",
]:  # axon_mask is transformed into an image because nearest interpolation doesn't work well after downsampling
    layer_path = brain2paths[brain]["base"] + layer + "_transformed"
    info = CloudVolume.create_new_info(
        num_channels=1,
        layer_type="image",
        data_type="uint16",  # Channel images might be 'uint8'
        encoding="raw",  # raw, jpeg, compressed_segmentation, fpzip, kempressed
        resolution=atlas_vol.resolution,  # Voxel scaling, units are in nanometers
        voxel_offset=atlas_vol.voxel_offset,
        chunk_size=[32, 32, 32],  # units are voxels
        volume_size=atlas_vol.volume_size,  # e.g. a cubic millimeter dataset
    )
    vol_mask = CloudVolume(layer_path, info=info)
    vol_mask.commit_info()

7. Apply ilastik to whole image

This can alternatively be done via a script: brainlit/BrainLine/soma_detect_image

*Inputs*

[ ]:
threshold = 0.38  # threshold to use for ilastik
data_dir = (
    axon_data_dir + "brain_temp/"
)  # data_dir = "/data/tathey1/matt_wright/brain_temp/"  # directory to store temporary subvolumes for segmentation

# Ilastik will run in "headless mode", and the following paths are needed to do so:
ilastik_path = "/Applications/ilastik-1.4.0b21-OSX.app/Contents/ilastik-release/run_ilastik.sh"  # "/data/tathey1/matt_wright/ilastik/ilastik-1.4.0rc5-Linux/run_ilastik.sh"  # path to ilastik executable
ilastik_project = "/Users/thomasathey/Documents/mimlab/mouselight/ailey/detection_axon/axon_segmentation.ilp"  # "/data/tathey1/matt_wright/ilastik/model1/axon_segmentation.ilp"  # path to ilastik project


max_coords = [
    3072,
    4352,
    1792,
]  # max coords or -1 if you want to process everything along that dimension
ncpu = 1  # 16  # number of cores to use for detection
chunk_size = [256, 256, 256]  # [256, 256, 300]
[ ]:
layer_names = [antibody_layer, background_layer, endogenous_layer]
alli = ApplyIlastik_LargeImage(
    ilastik_path=ilastik_path,
    ilastik_project=ilastik_project,
    ncpu=ncpu,
    data_file=data_file,
)
# Uncomment to run segmentation over the whole image (compute-intensive):
# alli.apply_ilastik_parallel(
#     brain_id=brain,
#     layer_names=layer_names,
#     threshold=threshold,
#     data_dir=data_dir,
#     chunk_size=chunk_size,
#     max_coords=max_coords,
# )
alli.collect_axon_results(brain_id=brain, ng_layer_name="antibody")

8. Register volume and transform data to atlas space using CloudReg

8a. You need to find an initial affine alignment using cloudreg.scripts.registration.get_affine_matrix.

A link to the ARA parcellation is:

precomputed://https://open-neurodata.s3.amazonaws.com/ara_2016/sagittal_10um/annotation_10um_2017

And some Python commands to help with affine alignment are:

from cloudreg.scripts.registration import get_affine_matrix
get_affine_matrix([1,1,1], [15,0,0], "PIR", "RAI", 1.15, "precomputed://https://open-neurodata.s3.amazonaws.com/ara_2016/sagittal_10um/annotation_10um_2017")

8b. Run registration using cloudreg.scripts.registration. For example:

python -m cloudreg.scripts.registration --input_s3_path precomputed://s3://smartspim-precomputed-volumes/2022_11_01/8790/Ch_561 --output_s3_path precomputed://s3://smartspim-precomputed-volumes/2022_11_01/8790/atlas_to_target --atlas_s3_path https://open-neurodata.s3.amazonaws.com/ara_2016/sagittal_50um/average_50um --parcellation_s3_path https://open-neurodata.s3.amazonaws.com/ara_2016/sagittal_10um/annotation_10um_2017 --atlas_orientation PIR --orientation RAI --rotation 0 0 0 --translation 0 0 0 --fixed_scale 1.2 --log_s3_path precomputed://s3://smartspim-precomputed-volumes/2022_11_01/8790/atlas_to_target --missing_data_correction True --grid_correction False --bias_correction True --regularization 5000.0 --iterations 3000 --registration_resolution 100

8c. Transform segmentation to atlas space using CloudReg

python -m cloudreg.scripts.transform_data --target_layer_source precomputed://s3://smartspim-precomputed-volumes/2022_11_03/8589/axon_mask --transformed_layer_source precomputed://s3://smartspim-precomputed-volumes/2022_11_03/8589/axon_mask_transformed --affine_path /mnt/NAS/Neuroglancer\ Data/2021_11_03/8589/8589_Ch_561_registration/downloop_1_A.mat  --velocity_path /mnt/NAS/Neuroglancer\ Data/2021_11_03/8589/8589_Ch_561_registration/downloop_1_v.mat

This will write a layer to s3 with the transformed axon mask. The s3 path to this layer should be added to axon_data.json under the axon_mask_transformed key, as sketched below. Then the code below, or axon_brainrender.py, can be used to visualize the data.
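A hypothetical brain2paths entry after this step might look like the following (field names mirror the checks in step 1d; the bucket paths and state link are placeholders):

"<sample-id>": {
"base": "precomputed://s3://<bucket>/<sample>/",
"val_info": {"url": "<neuroglancer-state-link>", "layer": "val"},
"axon_mask_transformed": "precomputed://s3://<bucket>/<sample>/axon_mask_transformed"
}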

Steps 9-11 below can alternatively be run as a script: brainlit/BrainLine/scripts/axon_analyze.py

9. Combine registration and segmentation results

[ ]:
collect_regional_segmentation(
    brain_id=brain, data_file=data_file, outdir=axon_data_dir, max_coords=max_coords
)

10. View axon segmentation in brain space

*Inputs*

[ ]:
brain_ids = ["test"]

colors = {
    "test_type": "red",
}  # colors for different genotypes
fold_on = True  # whether to fold the two hemispheres together in displays
[ ]:
ad = AxonDistribution(
    brain_ids=brain_ids, data_file=data_file, regional_distribution_dir=axon_data_dir
)
[ ]:
ad.napari_coronal_section(z=1000, subtype_colors=colors, fold_on=fold_on)
[ ]:
ad.brainrender_axons(subtype_colors=colors)

11. Display bar charts

*Inputs*

[ ]:
wholebrain_results_dir = ""  # path to directory with whole-brain segmentation results

brains = [brain]  # list of sample IDs to be shown

regions = [
    688,  # cerebral cortex
    698,  # olfactory areas
    1089,  # hippocampal formation
    # 583, # claustrum
    477,  # striatum
    # 803, # pallidum
    351,  # bed nuclei of stria terminalis
    # 703, #cortical subplate
    1097,  # hypothalamus
    549,  # thalamus
    186,  # lateral habenula
    519,  # cerebellar nuclei
    313,  # midbrain
    1065,  # hindbrain
]  # allen atlas region IDs to be shown
# see: https://connectivity.brain-map.org/projection/experiment/480074702?imageId=480075280&initImage=TWO_PHOTON&x=17028&y=11704&z=3

composite_regions = {
    "Amygdalar Nuclei": [131, 295, 319, 780]
}  # Custom composite allen regions where key is region name and value is list of allen regions
[ ]:
ad.region_barchart(regions, composite_regions=composite_regions, normalize_region=872)