3D Tissue Imaging – How Much Data is Too Much Data?

Over the last decade we have gained the ability to transition from traditional two-dimensional slide-based histopathology for tissue characterization to a three-dimensional approach. This transition is possible due to advances in tissue clearing, fluorescent labeling, optical microscopy (e.g. light sheet, confocal, two-photon) and advanced serial sectioning devices (e.g. knife edge scanning microscopy).

However, before we dive into the specifics of 3D tissue imaging, it is important to recognize one very important detail – 3D tissue imaging (i.e. 3D histology) is not beneficial for all applications and will not replace slide-based imaging. 3D tissue imaging is generally only helpful for complex, heterogeneous tissues, or for answering questions that require spatial information (e.g. antibody penetration) where traditional 2D tissue imaging is limited.

The application of 3D tissue imaging allows whole tissues to be investigated in their entirety and complex features (e.g. vasculature, neurons) to be quantitatively assessed. With high-resolution optics and multi-channel imaging, terabytes of data can now be collected from a single tissue sample while providing a 3D context. Though this seems like a huge advantage for extracting actionable insights from tissues, this volume of data creates significant challenges for researchers, from data transfer to processing requirements.

We see this problem frequently, and it reflects a fundamental misunderstanding of 3D tissue imaging among many researchers. While we can now generate terabytes of data from tissues, the question we should ask ourselves before doing so is:

“What data set do I need to generate to answer my specific research question?”

The misunderstanding is that more data is always better; in reality, most of the data we generate from 3D imaging is superfluous. This matters so much for 3D tissue imaging because we are adding a third dimension to our imaging: moving from a 10X objective to a 40X objective scales your sampling by 4x along each axis, increasing your image acquisition time and data density by a factor of 64. From a practical perspective, this means going from a 4-hour imaging session to a roughly 256-hour (11-day) imaging session, and if you are using an imaging core at $100 an hour, that means $25,600 instead of $400.
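The scaling above is easy to sanity-check. A minimal sketch, assuming (as an illustration) that sampling scales with magnification along all three axes and using the session length and core rate from the example:

```python
# Back-of-the-envelope cost of moving to a higher-magnification objective.
# Illustrative assumption: voxel sampling scales with magnification in x, y, and z.

def scaling_factor(mag_low: float, mag_high: float) -> float:
    """Relative increase in voxel count (and, roughly, acquisition time)."""
    return (mag_high / mag_low) ** 3

factor = scaling_factor(10, 40)   # 4x per axis -> 4**3 = 64
hours = 4 * factor                # a 4-hour session, scaled up
cost = hours * 100                # at an assumed $100/hour core rate

print(f"{factor:.0f}x more data, {hours:.0f} h (~{hours / 24:.0f} days), ${cost:,.0f}")
```

The same cubic relationship applies to any parameter that tightens sampling in all three dimensions, which is why seemingly small changes to imaging settings dominate total acquisition cost.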

For this reason, we always suggest that researchers acquire as little data as possible to address their research question. Start with a small region of interest and determine the minimum tissue volume required for your research, as well as the minimum imaging parameters (objective, z-step size, pixel size, exposure time). In our experience, the only applications that require extremely large data sets are those that study extremely small features across large volumes, or virtual reality applications where high-resolution renderings are required for an optimal user experience.
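Those minimum parameters translate directly into data volume, so it is worth estimating the raw dataset size before booking scope time. A hedged sketch (the tissue dimensions, pixel size, z-step, and channel count below are illustrative assumptions, and the estimate ignores tile overlap and compression):

```python
# Rough uncompressed data-volume estimate for a 3D acquisition,
# to help pick the minimum imaging parameters before committing scope time.

def dataset_size_gb(x_um: float, y_um: float, z_um: float,
                    pixel_um: float, z_step_um: float,
                    channels: int = 1, bytes_per_pixel: int = 2) -> float:
    """Uncompressed size in GB for a tissue volume imaged at the given
    lateral pixel size and z-step (16-bit pixels by default)."""
    nx = x_um / pixel_um          # pixels along x
    ny = y_um / pixel_um          # pixels along y
    nz = z_um / z_step_um         # optical sections along z
    return nx * ny * nz * channels * bytes_per_pixel / 1e9

# Example: a 5 x 5 x 2 mm volume at 0.5 um pixels, 2 um z-step, 3 channels.
print(f"{dataset_size_gb(5000, 5000, 2000, 0.5, 2.0, channels=3):.0f} GB")
```

Running the numbers for a few candidate pixel sizes and z-steps makes it obvious where a coarser setting still answers the research question at a fraction of the data volume.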

Precision cut lung slice labeled with DAPI, CD68 and lectin - Z-projection.

Michael Johnson