Motivating Problem

  • The current process for interpreting blood films within a haematology lab involves automated haematology analysers. These machines perform blood cell counts autonomously; however, when a sample is flagged, the blood smear must be reviewed manually, which is extremely labour-intensive.
  • Furthermore, in some rural/regional areas where haematology labs are too expensive, blood samples have to be sent to metropolitan areas to be analysed.
  • Our overarching goal is to be able to automate the identification of diseased white blood cells within a sample and present them to a haematologist for confirmation.

Computer Vision Types

There are three basic computer vision tasks that a model can be trained to perform.

  • Object classification, for instance identifying whether a WBC is an eosinophil or a basophil.
  • Object detection, for instance finding the location and bounding box of a WBC.
  • Object segmentation, which involves highlighting which pixels of an image belong to an object.
Examples of a mix of computer vision tasks.

Datasets

There were limitations to the datasets available. In an ideal world a dataset would have bounding boxes with very granular WBC-type classifications e.g. basophil, eosinophil, lymphocyte, monocyte, neutrophil. I had the following:

  • BCCD Dataset - contains bounding boxes for white blood cells (WBCs), red blood cells (RBCs) and platelets, which makes it great for training an object detection model. The downside is that it doesn't distinguish between WBC types, which present pathology in different ways.
  • Acevedo et al. Dataset - contains individual labelled cells but no bounding boxes. Great for training an object classification model.
  • Unlabelled images from a hematology lab at Royal Prince Alfred Hospital - importantly these were likely pathological.

The Approach

Stage 1 - Detecting WBCs

I used the BCCD dataset to train an object detection model to detect WBCs, RBCs and platelets and draw boxes around them. You can see that it struggles a little to separate overlapping RBCs because of their homogeneity. I'm not overly fussed, as the focus is on the WBCs.

-- CODE language-python --
# Install dependencies for YOLOv5
!pip install -r https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt

# Clone the repository containing the models: https://github.com/ultralytics/yolov5
!git clone 'https://github.com/ultralytics/yolov5.git'

# In-place stream edit of the YOLOv5s YAML, where 's/old/new/g' initiates the regexp and g means global.
# Replaces nc: 80 with nc: 3 (WBC, RBC, platelets)
!sed -i 's/nc: 80/nc: 3/g' ./yolov5/models/yolov5s.yaml

# Train the YOLOv5 model
%%time
!python ./yolov5/train.py --img 1280 --batch 8 --epochs 100 --data ./yolov5/data.yaml --cfg ./yolov5/models/yolov5s.yaml --name BCCM

# Visualise performance
!python ./yolov5/detect.py --source './RPA Blood Film Images' --weights "./best.pt" --imgsz 3520 --save-txt

Some examples of bounding box detection for WBC, RBC and platelets.

A challenge that cropped up was scaling. The model performed well at the image dimensions it was trained on, but applying it directly to an image where cells appear much smaller is not feasible. That's where Slicing Aided Hyper Inference (SAHI) came to the rescue: it divides the input image into overlapping patches, runs inference over each patch, and then scales the resulting bounding boxes back to the original image coordinates.

-- CODE language-python --
!pip install -U torch sahi yolov5

# Import required functions and classes
from sahi.utils.yolov5 import download_yolov5s6_model
from sahi.model import Yolov5DetectionModel
from sahi.utils.cv import read_image
from sahi.predict import get_prediction, get_sliced_prediction, predict

# Load the trained model
model_path = './4-01-2022-artemis.pt'
download_yolov5s6_model(destination_path=model_path)

# Instantiate a new YOLOv5 detection model object
model = Yolov5DetectionModel(
    model_path=model_path,
    confidence_threshold=0.3,
    device="cpu"
)

# Set the slice parameters and get a prediction on a test image
result = get_sliced_prediction(
    image="./40x.jpg",
    detection_model=model,
    slice_height=500,
    slice_width=500,
    overlap_width_ratio=0.2,
    overlap_height_ratio=0.2
)

YOLOv5s vs. YOLOv5s+SAHI

Stage 2 - Cropping out WBCs & Classification

Since I don't have a dataset with specific WBC-type labels on bounding boxes, I have to first crop the WBCs out of the original image and then train a separate classifier to differentiate between those WBCs. There are some surprising challenges with that:

  • How do you deal with a bounding box that runs off the edge of the image?
  • Classifiers need standard image sizes (e.g. all squares), but bounding boxes can be any dimension they please.
  • How do you deal with multiple WBCs overlapping each other?

I went with a square crop of side max(width, height), centred at the middle of the bounding box. Any crop that leaked off the edge of the image would be shifted back in. I won't bore you with the laborious batch-cropping code, but you can view the GitHub repository for more details.
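The cropping rule above can be sketched in a few lines. This is a minimal illustration (the function name and signature are mine, not from the repository): compute the square side as max(width, height), centre it on the box, and clamp the window back inside the image.

```python
def square_crop_box(img_w, img_h, x1, y1, x2, y2):
    """Return (left, top, side) of a square crop with side max(width, height),
    centred on the bounding box and shifted back inside the image if it
    would run off an edge."""
    side = max(x2 - x1, y2 - y1)
    # Centre of the bounding box
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    left, top = cx - side // 2, cy - side // 2
    # Clamp so the crop window stays fully inside the image
    left = min(max(left, 0), img_w - side)
    top = min(max(top, 0), img_h - side)
    return left, top, side

# A box hugging the right edge of a 100x100 image: the 20px square
# window is shifted left so it stays inside the frame
left, top, side = square_crop_box(100, 100, 90, 40, 99, 60)
```

The returned triple can then be used to slice a NumPy array (`image[top:top+side, left:left+side]`) or fed to PIL's `Image.crop`.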

The Acevedo et al. dataset was already labelled with detailed WBC types. I then cropped WBCs out of the BCCD dataset, which my supervisor Dr Alex Wong helped me label (we desperately needed training data). I trained a simple ResNet18 classifier in PyTorch on all these training samples. Did it work on the out-of-sample test set? Of course not. Most cells were being incorrectly labelled as monocytes. Back to the drawing board. A couple of reasons were possible:

  • Perhaps the relative size of the WBCs was actually important information that was lost when I cropped out all the images and resized them to a standard 224px by 224px. After all, monocytes are much larger.
  • Poor image quality in low-resolution scans might make differentiating WBC types difficult, because the granules (a big differentiating factor) are hard to make out.
  • Classification is often context-dependent, e.g. the relative colour of an eosinophil to a basophil helps with differentiating one from the other.
  • Using a wonderful little explainability package, LIME, you can also see that the positive detection isn't even using much of the nucleus at all! WBCs are meant to have very distinctive nuclei, e.g. the neutrophil with its horseshoe-shaped nucleus.
Positive detection was mostly NOT coming from features within the cell. That's problematic...

To retain relative size information, I tried to scale the WBC crop size by matching the average RBC size across images - effectively using RBC size as a yardstick. Nice idea in theory, but no dice. I got a bit of a break when I tried pre-processing image augmentation. A gauntlet of augmentations (Random Erasing + Random Resized Crop + Flip + Rotation + Hue/Saturation/Brightness) produced the best results, with an out-of-sample accuracy of 0.84 on a dataset from a completely different source than the training set, although these results are heavily skewed by the imbalanced dataset, which contains far more neutrophils.

Confusion matrix of white blood cell classification.


Stage 3 - Diseased WBCs & Future Work

I never quite got the chance to hit this stage, but it's still conceptually interesting to discuss; my Centenary Institute Summer Research scholarship concluded just as I reached it. The original plan was to build a classifier that could distinguish diseased from normal WBCs. However, since doing this project, I think there could be a more interesting, novel approach: with large amounts of unlabelled training data, we could apply contrastive learning and see if the latent features cluster nicely into disease/non-disease. Perhaps I'll revisit this project some time.
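To make the contrastive idea concrete, here is a minimal sketch of an NT-Xent (SimCLR-style) loss, one common choice for this kind of unlabelled setup. It is my own illustration of the general technique, not something from the project: two augmented views of the same WBC crop form a positive pair, and everything else in the batch acts as negatives.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss between two augmented views z1 and z2
    (each of shape [batch, dim]) of the same batch of unlabelled crops."""
    batch = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # unit-length embeddings
    sim = z @ z.T / temperature                         # scaled cosine similarity
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive target for row i is the other view of the same image
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)

# Two "views" of a batch of 4 embeddings
loss = nt_xent_loss(torch.randn(4, 16), torch.randn(4, 16))
```

Embeddings from a backbone trained this way could then be clustered or projected to see whether disease/non-disease separates in the latent space.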