Measuring leakage through the blood-brain barrier

The brain is unusual among organs: it has a very dense vascular network, yet tightly restricts the passage of molecules from the blood into the brain tissue – a property referred to as the blood-brain barrier (BBB).  In many disease processes, from mild conditions through to traumatic brain injury, this barrier function weakens and molecules leak through.

Surface scans of mouse brain.

A current research focus at the Curtin Health Innovation Research Institute (CHIRI) is the relationship between leakage across the BBB and cognitive decline.  Several teams are examining the effect of dietary and lifestyle interventions, environmental toxins and medications on the function of the BBB, and its relationship to ageing, dementia and other neurodegenerative diseases.

In the laboratory of Professor John Mamo and Associate Professor Ryusuke Takechi, Dr Matt Albrecht, Dr Virginie Lam and Mr Corey Giles detect leakage across the BBB by measuring the spread of a blood-based protein that shouldn’t normally exist in brain tissue.  Microscope images of brain slices are collected from mouse models that have had interventions that may impair, restore or maintain BBB function.  Fluorescence staining of the brain tissue allows visualisation of the blood protein, both within capillary structures and in areas of leakage.  Capillaries appear as distinct branch-like structures with well defined edges, while leakage appears as diffuse unformed fluorescence.

Manually quantifying the leakage in these images is extremely time consuming and as studies become bigger, microscopes become better, and data acquisition becomes faster, the number of images generated can keep a researcher busy annotating them for months.  It is painstaking, tedious work, and consistency between different researcher classifications is difficult to achieve due to normal human variation in attention and strategy.

"It became a real bottleneck to the research", explains Giles.  "I used maybe several hundred images in my original BBB leakage studies.  But by the time I was doing my PhD I was recording about 200 images per mouse, looking for variations between five to eight different groups of ten mice each.  The workload was ridiculous, and I can’t emphasise enough how tedious manual annotation is either!"

The group teamed up with Dr Kevin Chai of the Curtin Institute for Computation (CIC) to see if machine learning could be used to automate capillary image processing and measurement of BBB leakage.  For one method of quantifying leakage, researchers create image ‘masks’ through a labour-intensive annotation process.  A mask is created for each image by manually identifying and blocking out the capillary segments that should not be measured, making it easier to measure the diffuse leakage that remains surrounding the capillaries.
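The mask-based measurement can be sketched in a few lines of NumPy.  This is a minimal illustration only – the function name, the toy 4×4 image and the use of a simple mean intensity are invented for this example and are not the team's actual pipeline:

```python
import numpy as np

def leakage_score(image, capillary_mask):
    """Mean fluorescence intensity outside the masked capillaries.

    image          -- 2-D array of fluorescence intensities
    capillary_mask -- boolean array, True where a capillary was annotated
    """
    outside = image[~capillary_mask]       # pixels not blocked out by the mask
    return float(outside.mean())           # diffuse-leakage signal per pixel

# Toy example: one bright "capillary" column plus faint diffuse background.
img = np.full((4, 4), 0.1)
img[:, 1] = 5.0                            # the capillary itself
mask = np.zeros((4, 4), dtype=bool)
mask[:, 1] = True                          # annotator blocks out the capillary
print(round(leakage_score(img, mask), 2))  # → 0.1
```

Because the bright capillary pixels are excluded, only the diffuse background contributes to the score – which is exactly why the manual masking step matters, and why it is so laborious to do by hand.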

Initially, the team considered training a machine learning model to generate these annotated masks. However, during early experimentation, they decided to simplify the problem and to try measuring leakage directly. Interestingly, the convolutional neural net they trained was able to take a raw image, automatically segment out the capillaries (effectively learning the masking technique), and then calculate an accurate leakage score.
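The shape of such a network – convolutional feature extraction followed by a regression head that outputs a single leakage score – can be sketched with a single-filter forward pass in plain NumPy.  This is a conceptual sketch under stated assumptions, not the team's architecture: the kernel here is a hand-picked stand-in for a learned filter, and a real model would stack many layers of learned filters:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def leakage_net(img, kernel, w_out):
    """One conv layer + ReLU + global average pooling + linear regression head."""
    feat = np.maximum(conv2d(img, kernel), 0.0)  # ReLU feature map
    pooled = feat.mean()                         # global average pooling
    return w_out * pooled                        # scalar leakage score

rng = np.random.default_rng(0)
img = rng.random((8, 8))                         # stand-in for a fluorescence image
kernel = np.ones((3, 3)) / 9.0                   # stand-in for a learned filter
print(leakage_net(img, kernel, w_out=1.0))
```

In training, the kernel and output weight would be fitted so the score matches the manually measured leakage – which is how the network ends up implicitly separating sharp-edged capillaries from diffuse signal.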

Validating the trained model against a new image set classified by Giles (who had manually masked and classified the training data) yielded a correlation of 96 per cent with the manual measurements.  Testing it against new images that had been classified by Albrecht gave a correlation of 80 per cent, highlighting the variability between individual researchers' manual annotations.
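Correlations like these compare the model's scores to a researcher's scores image by image.  A minimal version of that check, using NumPy's Pearson correlation on made-up numbers (the scores below are invented for illustration, not data from the study):

```python
import numpy as np

auto_scores   = np.array([0.12, 0.48, 0.33, 0.90, 0.05])  # hypothetical model outputs
manual_scores = np.array([0.10, 0.50, 0.30, 0.95, 0.07])  # hypothetical annotations

# Pearson correlation between automated and manual leakage scores.
r = np.corrcoef(auto_scores, manual_scores)[0, 1]
print(f"correlation: {r:.2f}")   # close to 1 for these nearly identical scores
```

A correlation of 96 per cent against the annotator who produced the training data, versus 80 per cent against a different annotator, is consistent with the model having learned one researcher's particular annotation style.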

"Each researcher has their own way of doing annotation, how you weight the importance of the details you see.  As a researcher, you’re probably not even aware of the clues you pick up that guide your annotation.  And a lot of variability is due to the time involved, as people get fatigued.  So being able to automate this sort of measurement with explicit parameters removes all of the guesswork and personal bias.  I’d rather trust computers than humans for this sort of work!" admits Albrecht.

Chai agrees: "In the past, developing a machine learning model to achieve human-level performance for these types of problems was very difficult.  However, developments in the field of machine learning now allow us to make effective use of large datasets to train very accurate models.  If a network is trained using data sets classified by teams of specialists, it can not only out-perform an individual specialist, but outperform even teams of specialists."  Albrecht can immediately give a pertinent example: "Medical publications are already demonstrating that neural nets can be used to classify melanomas with super high accuracy, as good or better than the specialists."

The team’s neural network for measuring BBB leakage can classify 1,000 capillary images in under two minutes, whereas an expert researcher would need approximately one month to complete the same task.  Additional stress testing and training of the model using different experimental image sets is expected to further improve its accuracy and applicability.

The power of this method is that, given an appropriate training data set, the model can be applied to many other medical research applications where the aim is to discriminate between well-defined edges, whether at the vessel or cellular scale, and more diffuse features.

Giles is enthusiastic about the possibilities (and also about not manually annotating any more images than he really needs to).  "The eye has a blood-retina barrier similar to the BBB, so the same principles would apply.  There is a lot of potential for using retinal images in this way, in diabetes research and management for example.  And you can take good pictures of the capillary structures through the pupil without having to cut people open!"

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature, edited for clarity, style and length. The views and opinions expressed are those of the author(s).