
Multi-scale generative adversarial network for improved evaluation of cell–cell interactions observed in organ-on-chip experiments

De Ninno A.; Businaro L.
Collaboration Group member
2021

Abstract

Organs On a Chip (OOCs) represent a sophisticated approach for exploring biological mechanisms and developing therapeutic agents. In conjunction with high-quality time-lapse microscopy (TLM), OOCs allow the visualization of reconstituted complex biological processes, such as multi-cell-type migration and cell–cell interactions. In this context, a high frame rate is desirable to accurately reconstruct cell-interaction dynamics. However, a trade-off between resolution and information content is required to keep the overall data volume manageable; moreover, high frame rates increase photobleaching and phototoxicity. As a possible solution to these problems, we report a new hybrid-imaging paradigm that integrates OOC/TLM with a multi-scale Generative Adversarial Network (GAN) predicting interleaved video frames, with the aim of providing high-throughput videos. We tested the predictive capability of the GAN on synthetic videos, as well as on real OOC experiments dealing with tumor–immune cell interactions. The proposed approach makes it possible to acquire a reduced number of high-quality TLM images without any major loss of information on the phenomena under investigation. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
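The interleaved-frame paradigm summarized above can be illustrated with a minimal, hypothetical sketch (not code from the paper): only alternate frames are acquired, the skipped in-between frames become prediction targets, and a trivial pixel-wise averaging function stands in for the multi-scale GAN generator. Function names and shapes are assumptions for illustration only.

```python
import numpy as np

def make_triplets(video):
    """Split a video array of shape (T, H, W) into (frame_t, frame_t+2)
    input pairs and frame_t+1 targets.

    This mimics the interleaved acquisition scheme: only every other
    frame is 'acquired'; the skipped frames are what the predictive
    model must reconstruct.
    """
    inputs = [(video[t], video[t + 2]) for t in range(len(video) - 2)]
    targets = [video[t + 1] for t in range(len(video) - 2)]
    return inputs, targets

def average_interpolate(prev_frame, next_frame):
    """Naive baseline: predict the middle frame as the pixel-wise mean
    of its neighbors. The paper's approach replaces this with a
    multi-scale GAN generator trained on such triplets."""
    return 0.5 * (prev_frame + next_frame)
```

On a synthetic video whose intensity ramps linearly over time, this averaging baseline recovers the skipped frames exactly; on real cell-migration videos the motion is nonlinear, which is where a learned generator is expected to outperform simple interpolation.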
Istituto di fotonica e nanotecnologie - IFN
Biomedical application; Deep learning; Generative adversarial network (GAN); Time-lapse microscopy; Video prediction
Files in this item:
File: Comes_MArtinelli_2021_NComp_and_ Appl_Multi-scale generative adversarial network.pdf (authorized users only)
Type: Editorial Version (PDF)
License: NOT PUBLIC - Private/restricted access
Size: 4.71 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/490303
Citations
  • PMC: ND
  • Scopus: 16
  • Web of Science (ISI): ND