How can we remove subjectivity from science, whilst keeping the human in the loop?
A very human problem
When imaging scientists look down the microscope, they are often presented with a bewildering choice. Required to select specific cells from hundreds, thousands or millions, they face an unavoidably subjective process. Due to time pressures and resource availability, it is likely that the scientist only has time to select a handful of cells for each condition. The question is, which of the many cells are the most appropriate to image in order to represent the problem at hand?
The subjectivity of scientists is discussed in the book ‘Thinking, Fast and Slow’ by Nobel Prize-winning author Daniel Kahneman: “Contrary to the rules of philosophers of science, who advise testing hypotheses by trying to refute them, people (and scientists, quite often) seek data that are likely to be compatible with the beliefs they currently hold”. Accordingly, it’s highly likely that the choices imaging scientists make will be unknowingly influenced to some degree by their hypothesis or the will of their supervisor. It should be the goal of every scientist to develop transparent methods that are open to critique by their contemporaries.
To address this bias issue in the realm of microscopy, I’m developing systems that will rapidly and intelligently survey the cells present across a slide, then provide information and insight regarding their overall distribution. This involves cellular identification using machine learning and computer vision, and characterisation through simple measurements such as intensity or size, or through more complicated higher-level cellular features. The idea is that, with an overall awareness of the cells present, individual cells can be selected fairly for detailed study. Figure 1 shows how bias could enter an experiment and how, through automation and coarse characterisation, scientists can perform more robust experiments.
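To give a flavour of what “coarse characterisation” means in practice, here is a minimal sketch (not the actual AMCA code, and the function name is my own) of how per-cell intensity and size can be measured once cells have been segmented into a labelled mask:

```python
import numpy as np

def per_cell_stats(labels, image):
    """Return (mean intensity, area in pixels) for each labelled cell.

    labels: integer mask where 0 is background and 1..N are cell IDs.
    image:  the corresponding intensity image.
    """
    ids = labels.ravel()
    areas = np.bincount(ids)[1:]                        # drop background
    sums = np.bincount(ids, weights=image.ravel())[1:]
    return sums / areas, areas

# Tiny synthetic example: two "cells" in a 2x3 field of view.
labels = np.array([[0, 1, 1],
                   [2, 2, 0]])
image = np.array([[5.0, 2.0, 4.0],
                  [10.0, 20.0, 7.0]])
means, areas = per_cell_stats(labels, image)  # means: [3.0, 15.0], areas: [2, 2]
```

Simple summary statistics like these are cheap to compute for every cell on a slide, which is what makes it feasible to place any individually imaged cell in the context of the whole population.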
Figure 1: Visualisation of the risks of biased sampling. A and B) Two scientists will choose cells under the microscope based on a preconception of what they expect to see, and their choices may be different. C) Using automated microscopy we can measure and visualise the whole distribution of cells (all points) and put observations in context (red and blue points). D) In real data (e.g. C127 cells treated with BrdU and stained with DAPI) there exists a bimodal distribution. Some cells have resisted the BrdU uptake and are brighter than those cells that have taken up the BrdU (green). In this example, only cells that have taken up the BrdU are of interest for subsequent measurements and comparisons. Coarse intensity measurements of all cells present can be used to advise a scientist of the overall distribution so they can make informed decisions for the remainder of their experiment.
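A bimodal population like the one in Figure 1D can be split automatically rather than by eye. As an illustration (this is my own sketch with simulated intensities, not the analysis used in the preprint), Otsu's method finds the threshold that best separates two intensity classes by maximising the between-class variance:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Threshold separating a bimodal distribution by maximising
    between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    weights = hist / hist.sum()
    w0 = np.cumsum(weights)              # class probabilities up to each bin
    w1 = 1.0 - w0
    mu = np.cumsum(weights * centers)    # cumulative class means
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu) ** 2 / (w0 * w1)
    return centers[np.nanargmax(between)]

# Simulated per-cell mean intensities (hypothetical numbers): a dim
# BrdU-positive population and a brighter population that resisted uptake.
rng = np.random.default_rng(0)
intensities = np.concatenate([
    rng.normal(40, 5, 300),   # BrdU-positive (dimmer)
    rng.normal(90, 8, 150),   # resisted uptake (brighter)
])
t = otsu_threshold(intensities)
positive = intensities[intensities < t]   # cells of interest for follow-up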
The challenge of developing such technologies is achieving rapid imaging of large numbers of cells using hardware that’s compatible with the variety of specific and custom microscopes used in modern research practices. Furthermore, the system should be available for scientists from all backgrounds, not limited to those who understand high-end analysis or robotics.
In my recent preprint I outline a microscopy system that can automatically locate and image cells, then return a distribution based on their properties: AMCA, the Automated Microscope Control Algorithm. The technique uses regular microscopy equipment (a camera and automated stage) and the special ingredients: machine learning and computer vision.
This system can be trained to identify and image cells in 3-D, based on simple 2-D annotation of cells (which can be performed by anybody). The annotation is simple, involving drawing bounding boxes around cells, but it yields a system that can be trained to independently recognise and localise cells on a slide. Although it can take several hours to image a whole slide, once trained and validated, the system can be left unsupervised to image cells across the whole specimen at night or over a lunch break. Once completed, the scientist may want to review the performance of the algorithm in specific areas, or be taken to a specific area and decide whether to image in more detail. This process is supported through an augmented reality system, which allows the user to preview the outputs of the analysis directly as they look down the eyepieces of the microscope (Figure 2).
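Imaging a whole slide unsupervised comes down to visiting every field of view with the automated stage. Purely as an illustration of the kind of traversal involved (the function and its parameters are hypothetical, not taken from AMCA), a serpentine scan visits fields row by row, alternating direction so the stage never makes a long return trip:

```python
def serpentine_scan(width_um, height_um, fov_um, overlap=0.1):
    """Yield stage (x, y) positions tiling a rectangular region in
    serpentine order, with adjacent fields of view overlapping slightly
    so no cell is missed at a tile boundary."""
    step = fov_um * (1 - overlap)
    xs = [i * step for i in range(int(width_um // step) + 1)]
    ys = [j * step for j in range(int(height_um // step) + 1)]
    for row, y in enumerate(ys):
        cols = xs if row % 2 == 0 else list(reversed(xs))
        for x in cols:
            yield (x, y)

# A 1000 x 500 um region with a 200 um field of view: 6 x 3 = 18 positions.
positions = list(serpentine_scan(1000, 500, 200))
```

At each position the detector would run on the captured frame and the resulting cells and measurements would be accumulated, which is what allows the system to be left alone overnight or over a lunch break.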
Figure 2: View down a microscope eye-piece as the focus is changed and the augmented reality system updates the output of the cell identification algorithm.
The future of AMCA et al
The future of this work is bold. I hope to develop a range of algorithms which can work on light-weight, compact, cheap and modular computers (like the Nvidia Jetson Nano system). With such technology it becomes feasible for every microscope to operate semi-autonomously, saving time and optimising resources.
I’m aiming to extend the capabilities of the imaging system to include dynamic imaging in response to visual cues. For example, a cell with an unusual appearance, perhaps one about to divide or undergo some other process, could be interesting to study in greater detail. I hope that my system will be able to spot these dynamic processes, adding an important dimension to its capabilities. I hope this research will have its first meaningful impact within the MRC WIMM, and I am excited to apply this system to your biological questions. I invite you to get in touch to collaborate and hopefully help make your imaging more efficient and objective.
This blog post was written by Dominic Waithe and edited by Alexandra Preston (Drakesmith group).