TY - JOUR
T1 - Deep learning enables reference-free isotropic super-resolution for volumetric fluorescence microscopy
AU - Park, Hyoungjun
AU - Na, Myeongsu
AU - Kim, Bumju
AU - Park, Soohyun
AU - Kim, Ki Hean
AU - Chang, Sunghoe
AU - Ye, Jong Chul
N1 - Publisher Copyright:
© 2022, The Author(s).
PY - 2022/12
Y1 - 2022/12
N2 - Volumetric imaging by fluorescence microscopy is often limited by anisotropic spatial resolution, in which the axial resolution is inferior to the lateral resolution. To address this problem, we present a deep-learning-enabled unsupervised super-resolution technique that enhances anisotropic images in volumetric fluorescence microscopy. In contrast to existing deep learning approaches that require matched high-resolution target images, our method greatly reduces the effort required in practice, as training the network requires only a single 3D image stack, without a priori knowledge of the image formation process, registration of training data, or separate acquisition of target data. This is achieved with an optimal transport-driven cycle-consistent generative adversarial network that learns from an unpaired matching between high-resolution 2D images in the lateral image plane and low-resolution 2D images in other planes. Using fluorescence confocal microscopy and light-sheet microscopy, we demonstrate that the trained network not only enhances axial resolution but also restores suppressed visual details between the imaging planes and removes imaging artifacts.
UR - http://www.scopus.com/inward/record.url?scp=85131627261&partnerID=8YFLogxK
U2 - 10.1038/s41467-022-30949-6
DO - 10.1038/s41467-022-30949-6
M3 - Article
C2 - 35676288
AN - SCOPUS:85131627261
VL - 13
JO - Nature Communications
JF - Nature Communications
SN - 2041-1723
IS - 1
M1 - 3297
ER -