In response to the rapidly increasing demand for image clarity, smartphone and digital camera manufacturers have been exploring how to enhance image quality with AI models, a much less expensive approach than using high-end lenses. A similar idea has been applied by a Fudan research team led by YAN Bo, a professor at the School of Computer Science at Fudan University, to a common research tool in life sciences laboratories: the fluorescence microscope.
The team's research paper, titled “Pretraining a Foundation Model for Generalizable Fluorescence Microscopy-Based Image Restoration,” was published in the journal Nature Methods on April 12. It introduces a unified foundation model for fluorescence microscopy-based image restoration (UniFMIR) that overcomes the limitations of existing fluorescence microscopy image restoration (FMIR) methods.
A fluorescence microscope is used to observe molecules within cells by illuminating specimens, often labeled with fluorochromes, with light of a specific wavelength such as ultraviolet. It produces images at resolutions higher than those of ordinary optical microscopes, reaching the nanoscale needed for molecular observations. Since its emergence in 2006, super-resolution fluorescence microscopy has helped scientists worldwide develop more targeted treatments for neurodegenerative disorders such as Parkinson's, Alzheimer's and Huntington's diseases.
Over the past decade, scientists from the life sciences and computer science have worked together to explore AI approaches to enhancing image quality, addressing the challenges posed by the optical limitations of fluorescence microscopes and the sensitivity of biological samples to fluorescence imaging. However, the variety of imaging modalities, biological samples and image restoration tasks has made fluorescence microscopy-based image restoration extremely difficult to solve in general. Most scientists have therefore tackled one problem at a time, developing a specific AI model for each individual need.
To improve the generalizability of conventional FMIR models, Prof. Yan and his colleagues constructed UniFMIR, the first unified foundation model for fluorescence microscopy-based image restoration. Experimental results showed that UniFMIR performs well across five major tasks: image super-resolution, isotropic reconstruction, 3D image denoising, surface projection, and volumetric reconstruction.
UniFMIR employs feature enhancement modules based on an advanced Swin transformer structure to strengthen feature representations and reconstruct general, effective features for high-quality fluorescence microscopy-based image restoration. By pretraining the model on a large-scale dataset and fine-tuning it on different sub-datasets covering various degradation conditions, UniFMIR achieves higher image restoration precision and better generalization than prevailing models.
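To make the pretrain-then-fine-tune idea concrete, the following is a minimal sketch, not the authors' code, of a shared feature-enhancement trunk with task-specific reconstruction heads. The module names, single-channel input, layer sizes, task labels and learning rates are illustrative assumptions, and the Swin transformer's shifted-window attention is simplified here to a plain multi-head attention block to keep the sketch short.

```python
# Minimal sketch (illustrative assumptions throughout, not the UniFMIR implementation).
import torch
import torch.nn as nn

class FeatureEnhancement(nn.Module):
    """Stand-in for the Swin-transformer feature-enhancement trunk.

    A real implementation would use shifted-window self-attention blocks;
    plain multi-head attention over all pixels is used here for brevity.
    """
    def __init__(self, channels=64, heads=4, depth=2):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.MultiheadAttention(channels, heads, batch_first=True)
            for _ in range(depth)
        ])
        self.norms = nn.ModuleList([nn.LayerNorm(channels) for _ in range(depth)])

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
        for attn, norm in zip(self.blocks, self.norms):
            out, _ = attn(tokens, tokens, tokens)
            tokens = norm(tokens + out)        # residual connection + layer norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class UniRestorer(nn.Module):
    """Shared trunk plus one reconstruction head per restoration task (hypothetical names)."""
    def __init__(self, tasks=("super_resolution", "isotropic", "denoising_3d",
                              "projection", "volumetric"), channels=64):
        super().__init__()
        self.shallow = nn.Conv2d(1, channels, 3, padding=1)  # shallow feature extraction
        self.trunk = FeatureEnhancement(channels)            # shared, pretrained part
        self.heads = nn.ModuleDict({
            t: nn.Conv2d(channels, 1, 3, padding=1) for t in tasks
        })

    def forward(self, x, task):
        feats = self.trunk(self.shallow(x))
        return self.heads[task](feats)

model = UniRestorer()

# Stage 1 (pretraining): optimize all parameters on the pooled multi-task dataset.
pretrain_opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stage 2 (fine-tuning): freeze the shared trunk and adapt one task head
# on its sub-dataset, e.g. 3D denoising.
for p in model.trunk.parameters():
    p.requires_grad = False
finetune_opt = torch.optim.Adam(model.heads["denoising_3d"].parameters(), lr=1e-5)

# One illustrative fine-tuning step with random tensors standing in for image pairs.
noisy = torch.randn(2, 1, 64, 64)
clean = torch.randn(2, 1, 64, 64)
loss = nn.functional.l1_loss(model(noisy, "denoising_3d"), clean)
loss.backward()
finetune_opt.step()
```

The key design point the paper emphasizes is the split between a large shared backbone, which learns general fluorescence-image features during pretraining, and lightweight task-specific components that are adapted to each degradation condition during fine-tuning.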
A fluorescence microscope equipped with UniFMIR will be a godsend in life science laboratories. Scientists will be able to observe the tiny structures and complex processes inside living cells more clearly, accelerating scientific discoveries and medical innovations in the life sciences, medical research, and disease diagnosis worldwide. Moreover, in semiconductor manufacturing and new-materials development, the results can be used to improve the observation and analysis of microstructures, helping optimize manufacturing processes and raise product quality.
“In the future, scientists in life science labs can continue to enhance the image reconstruction capability of UniFMIR by expanding the training data in quantity and diversity,” said YAN Bo, expressing confidence in the potential of UniFMIR for broader experimental applications.
This research was completed at the Digital Media Laboratory of the School of Computer Science at Fudan University. Dr. MA Chenxi, a postdoctoral research fellow, and Dr. TAN Weimin, a research scientist, are co-first authors of the work, with Prof. YAN Bo as the corresponding author. The other author is HE Rui'an, a doctoral student.
(END)
Presented by Fudan University Media Center
Writer: GONG Jiaxin
Editor: LI Yijie, WANG Mengqi