2024 AWholeSlideFoundationModelforDigitalPathologyfromRealWorldData


Subject Headings: Image Data Encoding, Image Data Encoder, Image Transformer Model, LongNet.

Notes

Cited By

Quotes

Abstract

Digital pathology poses unique computational challenges, as a standard gigapixel slide may comprise tens of thousands of image tiles [1, 2, 3]. Prior models have often resorted to subsampling a small portion of tiles for each slide, thus missing the important slide-level context [4]. Here we present Prov-GigaPath, a whole-slide pathology foundation model pretrained on 1.3 billion 256 × 256 pathology image tiles in 171,189 whole slides from Providence, a large US health network comprising 28 cancer centres. The slides originated from more than 30,000 patients covering 31 major tissue types. To pretrain Prov-GigaPath, we propose GigaPath, a novel vision transformer architecture for pretraining gigapixel pathology slides. To scale GigaPath for slide-level learning with tens of thousands of image tiles, GigaPath adapts the newly developed LongNet [5] method to digital pathology. To evaluate Prov-GigaPath, we construct a digital pathology benchmark comprising 9 cancer subtyping tasks and 17 pathomics tasks, using both Providence and TCGA data [6]. With large-scale pretraining and ultra-large-context modelling, Prov-GigaPath attains state-of-the-art performance on 25 out of 26 tasks, with significant improvement over the second-best method on 18 tasks. We further demonstrate the potential of Prov-GigaPath on vision–language pretraining for pathology [7, 8] by incorporating the pathology reports. In sum, Prov-GigaPath is an open-weight foundation model that achieves state-of-the-art performance on various digital pathology tasks, demonstrating the importance of real-world data and whole-slide modelling.
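The two-stage design described in the abstract (a tile-level encoder over 256 × 256 tiles, followed by a LongNet-style slide-level encoder with dilated attention over tens of thousands of tile embeddings) can be sketched roughly as follows. This is a minimal illustrative sketch, not the released Prov-GigaPath code: module names, embedding sizes, depth, and the single (segment length, dilation) configuration are assumptions for illustration, and LongNet's dilated attention actually mixes several such configurations so that every tile position is attended.

```python
# Minimal sketch of the tile-to-slide pipeline described in the abstract.
# NOT the released Prov-GigaPath implementation: dimensions, depth, and the
# single (segment_len, dilation) setting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedSelfAttention(nn.Module):
    """Single-head dilated attention over one (segment_len, dilation) configuration.

    LongNet mixes several such configurations so every position is attended;
    this sketch keeps only one for brevity and leaves non-selected rows at zero.
    """

    def __init__(self, dim: int, segment_len: int = 1024, dilation: int = 4):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.segment_len = segment_len
        self.dilation = dilation

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, n_tiles, dim)
        b, n, d = x.shape
        w, r = self.segment_len, self.dilation
        pad = (w - n % w) % w
        x = F.pad(x, (0, 0, 0, pad))          # pad tile sequence to a multiple of w
        segs = x.view(b, -1, w, d)            # (batch, n_segments, w, dim)
        sparse = segs[:, :, ::r, :]           # keep every r-th tile inside each segment
        q, k, v = self.qkv(sparse).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
        out = torch.zeros_like(segs)
        out[:, :, ::r, :] = attn @ v          # scatter attended rows back into place
        return self.proj(out.view(b, -1, d)[:, :n, :])


class SlideEncoder(nn.Module):
    """Aggregates per-tile embeddings into one slide-level embedding."""

    def __init__(self, dim: int = 384, depth: int = 2):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.ModuleDict({
                "norm1": nn.LayerNorm(dim),
                "attn": DilatedSelfAttention(dim),
                "norm2": nn.LayerNorm(dim),
                "mlp": nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                     nn.Linear(4 * dim, dim)),
            })
            for _ in range(depth)
        ])

    def forward(self, tile_emb: torch.Tensor) -> torch.Tensor:  # (batch, n_tiles, dim)
        x = tile_emb
        for blk in self.blocks:
            x = x + blk["attn"](blk["norm1"](x))
            x = x + blk["mlp"](blk["norm2"](x))
        return x.mean(dim=1)                  # mean-pool tiles into a slide embedding


if __name__ == "__main__":
    # A pretrained tile-level encoder (a vision transformer in the paper) would map
    # each 256x256 tile to an embedding; random embeddings stand in for them here.
    tile_emb = torch.randn(1, 20000, 384)     # one slide with 20,000 tiles
    slide_emb = SlideEncoder()(tile_emb)
    print(slide_emb.shape)                    # torch.Size([1, 384])
```

The point of the sketch is the scaling behaviour: attention is computed only within segments and over dilated subsets of tiles, so the slide encoder can process a whole slide's tile sequence rather than a subsample.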

References

 Author: Hoifung Poon, Furu Wei, Jianfeng Gao, Tristan Naumann, Sheng Wang, Rajesh Rao, Sheng Zhang, Shuming Ma, Wenhui Wang, Chunyuan Li, Hanwen Xu, Naoto Usuyama, Jaspreet Bagga, Cliff Wong, Zelalem Gero, Yu Gu, Yanbo Xu, Mu Wei, Jianwei Yang, Jaylen Rosemon, Tucker Bower, Soohee Lee, Roshanthi Weerasinghe, Bill J. Wright, Ari Robicsek, Brian Piening, Carlo Bifulco, Javier González
 Title: A Whole-slide Foundation Model for Digital Pathology from Real-world Data
 DOI: 10.1038/s41586-024-07441-w
 Year: 2024