Multi-scale generative adversarial network for improved evaluation of cell–cell interactions observed in organ-on-chip experiments
De Ninno A.; Businaro L. (Collaboration Group member)
2021
Abstract
Organs On a Chip (OOCs) represent a sophisticated approach for exploring biological mechanisms and developing therapeutic agents. In conjunction with high-quality time-lapse microscopy (TLM), OOCs allow for the visualization of reconstituted complex biological processes, such as multi-cell-type migration and cell–cell interactions. In this context, increasing the frame rate is desirable to accurately reconstruct cell-interaction dynamics. However, a trade-off between high resolution and information content is required to keep the overall data volume manageable. Moreover, high frame rates increase photobleaching and phototoxicity. As a possible solution to these problems, we report a new hybrid-imaging paradigm that integrates OOC/TLM with a multi-scale Generative Adversarial Network (GAN) that predicts interleaved video frames, with the aim of providing high-throughput videos. We tested the predictive capability of the GAN on synthetic videos, as well as on real OOC experiments dealing with tumor–immune cell interactions. The proposed approach offers the possibility to acquire a reduced number of high-quality TLM images without any major loss of information on the phenomena under investigation. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
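The acquisition scheme described in the abstract — capture frames at half the desired rate, then insert a predicted frame between each acquired pair — can be sketched as follows. This is a minimal illustration only: `predict_midframe` here is a simple per-pixel average standing in for the paper's multi-scale GAN generator, and all function names are illustrative, not taken from the authors' code.

```python
import numpy as np

def predict_midframe(frame_a, frame_b):
    """Stand-in for the GAN generator: a plain average of the two
    acquired neighbour frames. In the paper's paradigm a trained
    multi-scale GAN would synthesize this intermediate frame."""
    return 0.5 * (frame_a + frame_b)

def interleave(acquired, predictor=predict_midframe):
    """Reconstruct a higher-frame-rate sequence from frames acquired
    at half rate: one predicted frame is inserted between each
    consecutive pair, so N acquired frames yield 2N - 1 frames."""
    out = [acquired[0]]
    for a, b in zip(acquired, acquired[1:]):
        out.append(predictor(a, b))  # synthesized in-between frame
        out.append(b)                # next acquired frame
    return np.stack(out)

# Example: 3 acquired 64x64 frames expand to a 5-frame sequence.
acquired = np.random.rand(3, 64, 64)
full_rate = interleave(acquired)
```

This halves light exposure (mitigating photobleaching and phototoxicity) and on-disk acquisition volume, at the cost of the predicted frames being estimates rather than measurements.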
File | Type | License | Size | Format
---|---|---|---|---
Comes_MArtinelli_2021_NComp_and_ Appl_Multi-scale generative adversarial network.pdf | Editorial version (PDF) | NOT PUBLIC - private/restricted access (authorized users only) | 4.71 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.