(Giga)TIME for the era of AI: virtual multiplex immunofluorescence from routine H&E for large populations

Research Highlight | Open access | Published: 31 March 2026

Shao-Ping Yang (ORCID: orcid.org/0000-0002-4721-7296)1,2 & Dihua Yu (ORCID: orcid.org/0000-0001-6231-9381)1,2

Signal Transduction and Targeted Therapy, volume 11, Article number: 116 (2026)

Subjects: Cancer microenvironment; Computational biology and bioinformatics

In a recent paper published in Cell, Valanarasu, Xu, Usuyama, Kim et al. introduced GigaTIME, a multimodal artificial intelligence (AI) framework that generates photorealistic virtual multiplex immunofluorescence (mIF) images from routine hematoxylin and eosin (H&E)-stained pathology slides.1 The study proposes that H&E-stained images, paired with large-scale multimodal AI training, can serve as a hypothesis-generating route to population-scale modeling of marker-like virtual mIF images that approximate the tumor immune microenvironment (TIME), enabling spatially informed association screening across real-world patient cohorts that would be impractical to profile with experimental multiplex imaging at comparable scale.

The TIME co-evolves with tumor initiation, progression, and metastasis, shaping treatment response and clinical outcomes. Tumor behavior reflects not only malignant cells but also immune infiltrates, stromal compartments, and their spatial organization. Because spatial context (cell states, proximity, and multicellular neighborhoods) can be missed by bulk measurements, multiplex spatial imaging has improved TIME characterization, but broad deployment remains constrained by cost, technical complexity, and throughput. In contrast, H&E staining is inexpensive and ubiquitous in clinical practice, yet it lacks direct information on protein-state readouts and immune phenotypes.
A key unmet need is a scalable, independently verifiable approach that links routine morphology to spatial tumor biology, enabling large-cohort studies with interpretable outputs.

To that end, the authors trained a cross-modal translator on paired H&E and mIF data from lung cancer, totaling approximately 40 million cells and 21 protein markers, to learn mappings between morphological patterns and predicted protein-associated spatial signals (Fig. 1). Model performance was evaluated primarily as virtual mIF channel fidelity on held-out paired H&E-mIF data, comparing predicted versus measured marker maps with channel-wise agreement metrics (e.g., Dice score and Pearson correlation), alongside comparisons to unpaired translation baselines and tests outside the training distribution. The authors then applied the model to a large real-world cohort from Providence Health, encompassing 14,256 patients across 51 hospitals and more than 1000 clinics in seven U.S. states. This produced 299,376 virtual mIF slides spanning 24 cancer types and 306 subtypes, creating a population-scale virtual mIF resource for systematic interrogation of TIME features across disease contexts. In downstream analyses, the study identified 1234 statistically significant associations between predicted protein patterns and clinical variables, including cancer biomarkers, stage, and patient survival, with additional corroboration in 10,200 TCGA patients. The study also reports evaluation in unseen tissue settings (e.g., breast and brain tissue microarrays not included in training), supporting possible transfer beyond the training distribution while underscoring the need for careful calibration across organs and sites.
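The channel-fidelity evaluation described above can be sketched with a minimal, hedged example. This is not the authors' implementation; the `channel_agreement` helper, the positivity threshold, and the toy data are assumptions for illustration. It binarizes a predicted and a measured channel for a Dice score and correlates the continuous intensities for a Pearson r, as one would do per marker channel:

```python
import numpy as np

def channel_agreement(pred, truth, threshold=0.5):
    """Compare one predicted marker channel against its measured counterpart.

    `pred` and `truth` are 2-D float arrays (same shape) of normalized
    marker intensities for a single mIF channel. Returns (dice, pearson_r).
    """
    # Dice score on binarized positivity masks: 2|A ∩ B| / (|A| + |B|)
    p_mask = pred >= threshold
    t_mask = truth >= threshold
    denom = p_mask.sum() + t_mask.sum()
    dice = 2.0 * np.logical_and(p_mask, t_mask).sum() / denom if denom else 1.0
    # Pearson correlation on the continuous intensities
    r = np.corrcoef(pred.ravel(), truth.ravel())[0, 1]
    return dice, r

# Toy example: a predicted channel that closely tracks the measured one
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
pred = np.clip(truth + rng.normal(0, 0.05, truth.shape), 0, 1)
dice, r = channel_agreement(pred, truth)
```

In practice such metrics would be computed per channel on held-out paired tiles, which is what makes the reported nuclear-versus-membrane performance differences visible.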
Generally, this strategy illustrates how established image-to-image translation models, when trained on large paired datasets, can generate marker-like spatial proxy maps that infer spatially resolved tumor-immune features from routine pathology at inference time, enabling population-scale analyses that would be difficult to perform with experimental multiplex imaging at a comparable scale. By bridging standard H&E morphology with multiplex protein information, the study opens the door to hypothesis generation around TIME patterns across diverse cancers in large cohorts, though follow-up orthogonal validation is required for biological and translational interpretation.

Fig. 1: GigaTIME enables population-scale, multiplex-style TIME analyses from routine H&E. Routine H&E histology is inexpensive and widely available but does not directly report protein markers or immune phenotypes, whereas multiplex immunofluorescence (mIF) provides informative protein-state readouts but remains costly and labor-intensive. Trained on paired H&E-mIF sections, GigaTIME predicts virtual mIF marker maps (21 protein channels) from H&E and can be applied to large real-world cohorts to generate a "virtual population" for retrospective spatial association screening and hypothesis prioritization. The figure was created with BioRender.com.

Beyond these exciting findings, a key open question is how strongly H&E morphology constrains protein marker signals, especially functional or activation states that may not couple well with visible structure. The study reports that translation performance varies among protein channels and is generally stronger for nuclear targets than for membrane or cytoplasmic markers, underscoring that some proteins may be intrinsically difficult to infer from morphology alone. Encouragingly, the authors demonstrate robustness through benchmarking against baselines, external corroboration in TCGA, and testing on samples not used in training.
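Association screening at this scale hinges on multiple-testing control. As a hedged illustration only (the paper's exact statistical procedure is not reproduced here), the sketch below runs many feature-versus-label tests and applies a Benjamini-Hochberg correction; `bh_screen`, the toy features, and the planted effect sizes are all assumptions for demonstration:

```python
import numpy as np
from scipy import stats

def bh_screen(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up: boolean mask of discoveries at FDR alpha."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Largest k with p_(k) <= (k/m) * alpha; reject all hypotheses up to it
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    keep = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        keep[order[: k + 1]] = True
    return keep

# Toy screen: test each "marker-derived" feature against a binary clinical label
rng = np.random.default_rng(1)
n_patients, n_tests = 200, 50
label = rng.integers(0, 2, n_patients).astype(bool)
features = rng.normal(size=(n_patients, n_tests))
features[:, :5] += 1.5 * label[:, None]   # five features truly associated
pvals = np.array([
    stats.ttest_ind(features[label, j], features[~label, j]).pvalue
    for j in range(n_tests)
])
hits = bh_screen(pvals)
```

The design point is that screening thousands of marker-clinical pairs without such false-discovery control would swamp any real TIME signal with chance associations, which is why corroboration in an independent cohort such as TCGA matters.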
Even so, virtual mIF outputs are model-derived surrogates and should be validated experimentally before any biological or translational claim is made. Looking ahead, the framework is poised to extend to larger marker panels and potentially to other spatial modalities as more diverse paired datasets become available. Finally, although the virtual-population analysis identifies 1234 significant associations and outcome-stratifying signatures, these remain correlational and require rigorous experimental validation to establish causality.

This work extends a growing line of research that aims to infer underlying molecular and cellular information from routine histology, including early efforts in virtual IF imaging.2 In 2018, Burlingame and colleagues introduced SHIFT, which used a conditional generative adversarial network (GAN) to translate H&E whole-slide images into virtual IF images, based on the premise that clinically relevant IF patterns, including nuclear and tumor marker-associated distributions, are partially encoded in H&E morphological features and can be computationally reconstructed.3 Building on this conceptual foundation, GigaTIME extends prior efforts by leveraging paired, multi-marker supervision and deploying the approach at population scale, shifting the goal from synthesizing a small number of virtual stains toward constructing a cohort-level virtual spatial proteomics resource for interrogating the TIME.1,3 Conceptually, the shift is from "virtual staining" as visualization toward virtual spatial readouts as an association-screening resource, not de novo biomarker invention.

Recent viewpoints argue that combining histology, spatial omics, and machine learning could enable direct inference of molecular and cellular programs from tissue images and accelerate precision oncology, despite recognized persistent hurdles in multimodal integration, benchmarking, and validation of clinical relevance.4 Coleman, Schroeder, and Li further emphasized
that AI may expand the reach of spatial omics by increasing effective coverage and resolution and by enabling cross-modality integration, while stressing that rigorous validation and careful interpretation are essential for biological and translational applications.5

Overall, GigaTIME points toward a pragmatic future for spatial oncology in the era of AI: not replacing multiplex experiments but amplifying their reach by inferring spatially resolved tumor-immune features from routine histology. If the cancer research community can embrace shared benchmarks, robust calibration, and rigorous biological and clinical validation, population-level virtual mIF could bridge real-world pathology to mechanistic insight and to testable, spatially grounded immuno-oncology hypotheses, enabling a faster path to biomarker triage for clinical trials and, ultimately, to more informed targeting of the TIME.

References
1. Valanarasu, J. M. J. et al. Multimodal AI generates virtual population for tumor microenvironment modeling. Cell 189, 386-400.e319 (2026).
2. Latonen, L., Koivukoski, S., Khan, U. & Ruusuvuori, P. Virtual staining for histology by deep learning. Trends Biotechnol. 42, 1177-1191 (2024).
3. Burlingame, E. A., Margolin, A. A., Gray, J. W. & Chang, Y. H. SHIFT: speedy histopathological-to-immunofluorescent translation of whole slide images using conditional generative adversarial networks. Proc. SPIE Int. Soc. Opt. Eng. 10581, 1058105 (2018).
4. Pei, G., Liu, Y. & Wang, L. Spatially resolving cancer: from cell states to therapy. Trends Cancer 12, 20-33 (2026).
5. Coleman, K., Schroeder, A. & Li, M. Unlocking the power of spatial omics with AI. Nat. Methods 21, 1378-1381 (2024).

Acknowledgements
Our work is partially supported by National Institutes of Health grants R01CA231149 (D.Y.), R01CA266099 (D.Y.), R01CA270010 (D.Y.), DoD Breakthrough Award BC231014 (D.Y.), Cancer Prevention and Research Institute of Texas (CPRIT) grant RP240214 (D.Y.), a Robert J. Kleberg, Jr. and Helen C. Kleberg Foundation Award (D.Y.), and the Ting Tsung and Wei Fong Chao Research Fund (D.Y.). S.-P.Y. is a Graduate Scholar in the CPRIT Training Program (RP210028). D.Y. is the Hubert L. & Olive Stringer Distinguished Chair in Basic Science at UT MDACC. The figure accompanying this Research Highlight was created with BioRender.com.

Author information
Affiliations: Department of Molecular Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA (Shao-Ping Yang & Dihua Yu); The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA (Shao-Ping Yang & Dihua Yu).

Contributions: S.-P.Y.: conceptualization; writing - original draft. D.Y.: conceptualization; writing - editing and finalizing. All authors have read and approved the article.

Corresponding author: Dihua Yu.

Ethics declarations
Competing interests: D.Y. is an editorial board member of Signal Transduction and Targeted Therapy but was not involved in the handling of this manuscript. S.-P.Y.
declares no competing interests.