The difficulties encountered in AI applications in pathology are diverse and can occur at any stage of the workflow, from the pre-analytical phase in the pathology laboratory to the AI application itself [5]. Artificial intelligence enters the scene at this point to simplify the diagnostic pipeline [4]. As with the microscopic examination of conventional glass slides, the WSIs produced for AI-supported image analysis can be reviewed on computers by pathologists [5]. Artifacts can occur at all stages of the routine pathological process and leave traces on the final tissue slide, and all artifacts that interfere with pathological diagnosis are also a problem for AI applications [1,6,12,16,27].
These include all stages of slide preparation in the preanalytical and analytical process, conversion of the tissue slides into a digital image file, processing of the final digital image file, and its subsequent use in the AI application [28]. Since the tissue slide is the main source of information in pathology, slide quality is the most indispensable factor in diagnosis, and acquisition should be complete and accurate. In the research and development of algorithms, all phases must be carried out with special care and attention [8,27]. Finally, most of the difficulties of using a developed AI product for diagnostic support or research can be resolved with end-user training. Being aware of the challenges and pitfalls of digital pathology and artificial intelligence in daily application or research will facilitate the future daily work of pathologists [9].
CHALLENGES
The challenges of digital pathology in daily routine can be
divided into the pre-acquisition, acquisition, and post-acquisition
periods of WSI. Although the challenges are very
diverse and encountered at different stages of the process,
dividing them into three groups seems appropriate. The
difficulties in each phase also depend on the morphological
characteristics and the content of the tissue, as well as
the technical qualities of the hardware performing the
digitization [28,29], the digitization process, and the
final image file. During the conversion of glass slides
to WSI by scanning, all defects, unwanted tissues, and
artifacts that cause optical changes on the glass slide are
transferred to the digitized image [29]. This is one of the
main factors affecting the performance of AI applications.
In multicenter studies, additional problems arise from the
lack of uniformity in tissue preparation steps. Limitations
such as differing privacy laws across countries make
multinational studies difficult, even before differences in
section preparation methods, scanners, patient populations,
and disease distributions are considered. To eliminate
technical problems, models
are being developed for different centers to prepare and
process their data [30,31].
Pre-WSI Acquisition Process
The design of the AI application determines the technical
sequence and the associated challenges [32,33]. Case
selection is very important and should consider the many
diseases and their histologic subtypes, whether
neoplastic or non-neoplastic [4]. Biopsy samples of the
selected disease may be quantitatively or morphologically
limited or not sufficient for an AI study. For instance, in
the majority of nasopharyngeal carcinoma cases, punch
biopsy is performed in which the sampled tissue is small,
and almost all further treatment of the patient is planned
and carried out based on this biopsy. Nasopharyngeal
biopsies usually consist of small and fragmented tissues.
This reduces the data that can be used in the AI application
and limits the algorithm. Conversely, the tissue on the slide
may be large, but the target tumor or disease may be very
small and/or diffuse. In this case, the annotation that needs
to be learned by the AI is tedious and takes a long time,
and the data containing the basic information is very small
[27]. Selected slides contain not only the targeted tumor
or lesion, but also include normal tissue, necrosis, cystic
spaces, and bleeding areas [33].
Tissue slide quality, collecting slides from slide archives, and selecting the appropriate slides are important prior to WSI acquisition. After slide preparation, many external effects create artifacts that may distort the image. Cracks, breaks, and scratches may occur on the slides due to physical exposure. Scratches form on synthetic coatings during archiving or cleaning, and these adversely impact image quality. Depending on the defects on the slide, remediation may be required, such as re-sectioning from the blocks, re-staining, re-coverslipping, and cleaning dirty slides. As a result of inappropriate storage conditions and duration, the slides may become contaminated and dirty, the applied hematoxylin and eosin (H&E), immunohistochemistry (IHC), and histochemistry (HC) stains may fade, and the mounting medium may dry out. Glass slides with immunofluorescence (IF) and fluorescence in-situ hybridization (FISH) should be digitized without delay. In addition, these slides are technically much more difficult to convert to a final WSI than H&E slides and require higher technical capability and costly equipment. In research designs on cytological material, there will be no successive imaging of the same cell in smears or liquid-based techniques. In algorithms in which information needs to be obtained with different stains or methods, cytological materials other than the cell block will not be suitable.
WSI Acquisition
Tissue slide-dependent challenges
During the scanning of slides for WSI acquisition, artifacts
due to many reasons are included as pixels in the final file
as misleading information [28]. The quality of the tissue
slide is indispensable for the quality of the final digital
WSI [33]. All the problems in the analytical process in the
pathology lab affect the quality of the slide, which creates
difficulties for the pathologist to reach the final diagnosis.
The same losses in tissue slide quality render the final WSI
unusable and worthless in AI applications. The problems of
the tissue slide may be related to the tissue or may belong
to the slide parts other than the tissue. The main problems
include tissue that does not lie in a single plane, loss
of integrity (scratches), folding, sections that are too thin
or of variable thickness, and staining that is too dark or pale. Tissue integrity
losses are transferred to WSI as they are, causing data and
result errors. The focus problem caused by the tissue on the
slide not being in the same plane and coverslip thickness
can be largely eliminated with the Z-stacking capability of
the scanning device [33].
Even if the tissue is ideally processed, sectioned, and stained, there will be difficulties if it is too fragmented, irregularly shaped, too small, or too large. If the tissue is too small or consists of scattered small fragments, annotation and patching will be difficult, laborious, and time-consuming [9,27]. Many slides with wide tissue areas require too much work and time and result in many large files, terabytes of data, in addition to insufficient data transfer rates, storage space, and GPUs. Also, high-resolution images require high-quality, high-processing-capacity hardware [4].
Device-dependent challenges
Acquiring high-resolution WSI is essential for accurate
diagnosis in digital pathology and AI applications [34].
During the acquisition of WSI, there may be problems
with the device due to the optical and hardware parts and
software that digitize the information from the optical
source and sensor. The optical system creates the optical
image of the tissue slide. The sensor digitizes the optical
data, and the hardware transforms the data into a final image
file. The light source and the properties of the transparent
parts (lenses, prisms, and mirrors) that make up the optical
system are very variable and have a direct impact on the
quality of the digitized image. Objective magnification is
one of the most important factors affecting the size of the
digitized image file. Microns per pixel are directly related to
optical magnification and pixel size [28,29]. Also, gathering the magnification information for each WSI is a challenge.
Even TCGA, the largest public dataset, does not include this
information. Correct selection and adjustment of the light
source will eliminate color defects in the digital image and
the need for input normalization before AI application.
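The microns-per-pixel relation mentioned above can be written out as a small sketch; the sensor pixel size and magnification values here are hypothetical, chosen only to illustrate the proportionality:

```python
def microns_per_pixel(sensor_pixel_um: float, magnification: float) -> float:
    """Approximate sampling resolution of a WSI scanner.

    The optical system projects the specimen onto the sensor at the
    objective magnification, so each sensor pixel covers
    sensor_pixel_um / magnification microns of tissue.
    """
    return sensor_pixel_um / magnification

# Hypothetical scanner: 8 um sensor pixels behind 20x and 40x objectives
mpp_20x = microns_per_pixel(8.0, 20.0)
mpp_40x = microns_per_pixel(8.0, 40.0)
print(mpp_20x, mpp_40x)  # 0.4 0.2
```

Doubling the magnification halves the microns per pixel and roughly quadruples the pixel count of the digitized area, which is why objective magnification dominates file size.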
One of the other essential parts that determine the
resolution of the image is the sensor. Sensor characteristics
and pixel size not only determine image resolution but may
result in different types of artifacts during image capture.
CCD, CMOS, and sCMOS sensor types show differences
in artifact production, imaging ability, and visualization of
fluorescent signals. A large sensor provides high resolution.
Although the number, capacity, and quality of WSI
acquisition devices are increasing day by day, high prices
remain an obstacle to their availability [28]
(Figure 1).
Although a high-resolution, big-screen viewing monitor looks like the ultimate device for transferring the high-resolution image to the evaluator, large image files also require high-capacity memory, graphics cards, and larger hard disks. The cost-benefit ratio must be considered regarding hardware, high-speed connections, information technology (IT) infrastructure, and the cost of storing large amounts of data (long-term storage of glass slides plus whole slide images) [14].
Final image file format, image viewing, transmission, and sharing are dominantly dependent on the WSI scanning devices [28]. The glass slides can be converted into various digital file types such as .jpg, .jpeg, .tiff, .raw, .bif, .vms, .vmu, .ndpi, .scn, .isyntax, .mrxs, .svslide, and .svs depending on the scanner [28,33]. These files can be visualized by open-source WSI viewers such as QuPath, Cytomine, Orbit, ASAP, OpenSlide with OpenSeadragon, ImageJ, SlideJs, PMA.start, and caMicroscope. Open-source solutions are cost-effective for labs and researchers, as they allow developers and software engineers to extend and integrate them with their own applications. File types are directly related to the viewers that can open them and to the file size. This, as expected, is associated with time and hardware overhead in the transmission and sharing of image files [9].
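Because the viewer that can open a slide depends on its scanner-specific format, a minimal sketch of an extension lookup may clarify the relationship; the helper function and the extension-to-vendor mapping are illustrative, not a real library API or an exhaustive list:

```python
import os

# Illustrative mapping of scanner-dependent WSI extensions to vendors;
# drawn from the formats listed above, not exhaustive.
WSI_EXTENSIONS = {
    ".svs": "Aperio", ".ndpi": "Hamamatsu", ".vms": "Hamamatsu",
    ".vmu": "Hamamatsu", ".scn": "Leica", ".mrxs": "3DHISTECH",
    ".isyntax": "Philips", ".bif": "Ventana", ".tiff": "generic TIFF",
}

def wsi_format(filename: str) -> str:
    """Return the (assumed) vendor format for a slide file, or 'unknown'."""
    ext = os.path.splitext(filename.lower())[1]
    return WSI_EXTENSIONS.get(ext, "unknown")

print(wsi_format("case_001.SVS"))  # Aperio
print(wsi_format("case_002.png"))  # unknown
```

A pipeline would use such a check to route files to a viewer or decoder that supports the format, or to flag unsupported slides before batch processing begins.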
Post-WSI Acquisition
Once WSIs are digitized, storing them in a structured
database in digital pathology laboratories creates an
additional financial burden. The workload of storage and
digitization will be proportional to the number of patients
at the hospital. Providing storage space for digitized data,
where a single digital slide can range from 1 to 8
GB [35], is another problem, and long-term data protection and maintenance will add further cost. The search
in the database to select the most suitable and relevant
cases will be the next challenge which directly affects the
researcher. Determination of the aim of the study and
selecting the best fitting histomorphological, histochemical,
immunohistochemical, molecular, and oncological data
will highly affect the interpretability and impact of the
results [27,36].
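A back-of-envelope estimate shows how quickly the storage burden scales with caseload. It uses the 1-8 GB per-slide range cited above; the lab figures (patients per year, slides per patient, retention period) are hypothetical:

```python
def storage_tb(patients_per_year: int, slides_per_patient: int,
               gb_per_slide: float, years: int = 1) -> float:
    """Rough archive size in terabytes for a digital pathology lab."""
    total_gb = patients_per_year * slides_per_patient * gb_per_slide * years
    return total_gb / 1024  # 1 TB = 1024 GB

# Hypothetical mid-size lab: 20,000 patients/year, 5 slides each,
# 2 GB per slide (within the cited 1-8 GB range), 10-year retention.
print(round(storage_tb(20_000, 5, 2.0, years=10), 1))  # 1953.1 (TB)
```

Nearly two petabytes under modest assumptions illustrates why structured archiving, tiered storage, and data retention policies are budget items rather than afterthoughts.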
Color Standardization and Normalization
Many standardization difficulties in the slide preparation
steps in the daily pathology practice cause color variance
in the glass slides [37]. Variance in tissue section thickness,
staining materials and methods, illumination conditions,
transparent parts, different cells and extracellular matrix
components in different tissues, scanner types, and final
viewing devices are among the main reasons for the
difficulty of standardization [4,38].
Color differences due to the pre-scan process become
more evident, especially in multi-center studies and for
long-term archived slides, making color calibration and
standardization crucial for robust AI applications [8,33,39].
Apart from the color changes and distortions in the glass
slides, the colors obtained in the final WSI may differ from the original slides due to capture parameters such
as illumination or any other display factors in the digital
systems themselves [38]. For uniformity of terminology,
we recommend the term calibration for correcting
device-dependent color distortions during WSI acquisition,
standardization for correcting color differences between
tissue slides or different datasets, and normalization for
correcting color differences within the same dataset.
Color normalization is essential during the preparation of input and output data in AI applications to obtain high-performing models [40]. However, AI stain normalization applications [39] face challenges, especially for real-time use: the memory and run-time bottlenecks associated with processing high-resolution images, e.g., 40x [34]. Moreover, stain normalization can be sensitive to the quality of the input images, e.g., when they contain stain spots or dirt. In this case, the algorithm may fail to accurately estimate the stain vectors, which causes inevitable artifacts during subsequent stitching [41].
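As a toy illustration of the idea behind normalization, the sketch below performs Reinhard-style mean/variance matching on a single color channel. This is a deliberately simplified stand-in for the stain-vector estimation methods cited above, written in pure Python on a toy pixel list; all names and values are hypothetical:

```python
def channel_stats(pixels):
    """Mean and (population) standard deviation of one color channel."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return mean, var ** 0.5

def match_channel(source, target_mean, target_std):
    """Shift and scale a source channel to match a target's statistics.

    This is the core of Reinhard-style normalization; real pipelines
    operate in a perception- or stain-oriented color space, not raw
    channel values, and estimate targets from a reference slide.
    """
    s_mean, s_std = channel_stats(source)
    scale = target_std / s_std if s_std > 0 else 1.0
    return [(p - s_mean) * scale + target_mean for p in source]

# Toy example: a dark source channel mapped onto a brighter target
src = [40, 60, 80, 100]
out = match_channel(src, target_mean=150.0, target_std=30.0)
m, s = channel_stats(out)
print(round(m, 1), round(s, 1))  # 150.0 30.0
```

The sensitivity noted above is visible even in this sketch: a few dirt or stain-spot pixels would distort the source statistics and push every other pixel in the wrong direction.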
Color standardization is one of the most frequent and complex subjects of digital pathology and AI. Color standardization in digital pathology and AI applications covers the final slide, the scan, and beyond [42]. The necessity and method of color standardization should be decided according to the content of the AI application or study [41]. The main issues that the authorities should be aware of are: Does the end user see the final image with the intended color correction? Does the end user, who sees the final image and decides on the color correction, really see the first and last image colors correctly? To prove this, the user must know the properties of all devices that affect the color from the slide to the final image and must have calibrated or corrected them [43,44]. One factor that increases the workload is the need for pathologists to check the suitability of color-standardized sections before they are used in the AI application. If the stages of the study are carried out in different institutions or device systems, it is almost impossible to correct the color with a single standard [39]. Does the AI realize that the training and cohort slide groups were different? If it does, is it really because the staining standards in different centers are different?
Artificial Intelligence Workflow
The typical WSI processing workflow starts with masking
the tissue and non-tissue regions. The size of the whole
slide images, reaching up to gigapixels, constitutes the biggest challenge [33]. Real-time inference and the training
phase slow down as the per-slide and
overall size of the data increase [33,34]. When it is not possible to
process a whole WSI due to memory and hardware limitations,
the image is divided into patches and the output patches are
tiled back together. Depending on the aim of the study, it
may not be appropriate to work with patches. As the number
of cases, the number of slides, the tissue area per slide, and
the resolution increase, the data to be processed, and thus the
number of patches, increases proportionally. It can become
impossible for GPUs to process the growing amount of data.
If a GPU with high processing capacity is not available, the
solution will be to reduce the resolution of the patches. If
the feature to be found or evaluated is larger than the patch
size, or if related features span more than one patch, this
method will produce false negative or positive results.
This must be considered when defining the targeted evaluation
of the algorithm [27,45,46]. Depending on the algorithm,
the concordance between the output patches may be lost
at the boundary of the merge [15]. This causes a tiling
artifact and may appear as patch borders, color differences
between patches, and repetitive or disappearing areas in
the resulting WSI (Figure 2). Annotation and labeling of
the region of interest (ROI) is a frequently used method that reduces the total amount of data by restricting the
number of patches and the required GPU capacity. Most
of the methods are trained in a supervised manner and
require labels at the slide or ROI level [18,19]. Constructing
a huge dataset with annotation is a labor-intensive process
that is planned and performed by pathologists [27].
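The patching step described above can be sketched as computing a grid of patch coordinates over the slide. The function and dimensions are illustrative, not taken from any specific toolkit:

```python
def patch_grid(slide_w: int, slide_h: int, patch: int, stride=None):
    """Top-left coordinates of square patches tiling a WSI region.

    Using stride < patch yields overlapping patches, a common way
    to soften the tiling artifacts at patch borders discussed above.
    """
    stride = stride or patch
    xs = range(0, slide_w - patch + 1, stride)
    ys = range(0, slide_h - patch + 1, stride)
    return [(x, y) for y in ys for x in xs]

# Hypothetical 2048x1024 px region tiled into 512x512 patches, no overlap
coords = patch_grid(2048, 1024, 512)
print(len(coords))  # 8 patches (4 across x 2 down)
```

Scaling this grid to a full gigapixel slide makes the GPU-capacity problem concrete: a 100,000 x 80,000 px slide at this patch size yields tens of thousands of patches per slide before any augmentation.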
The complexity of the structural/histological texture of
the tissue, the tumor, or non-targeted tissue components
(erythrocytes, histiocytes) also increases the discordance
rate [33]. For instance, the simultaneous presence of one or
more cellular (mitosis, apoptosis, Ki-67 proliferation
index), textural (invasion, presence of in-situ cancer,
different differentiation), or stromal features that are
heterogeneously expressed in the tumor tissue will
complicate tumor type determination.
CONCLUSION and RECOMMENDATIONS
Pathologists, who are already exposed to an intense
daily clinical workload, have little time to devote to research
and to publishing scientific papers in their specialty. Participation
in AI studies brings an additional workload that is
not always tolerable for pathologists whose dedicated
field of study is not AI. Apart from this, the fear of being
replaced by AI applications causes resistance against the
integration of decision support tools into daily practice as
well as participating in AI research. Adapting pathologists
whose daily hospital practice is purely traditional light
microscopic evaluation to digital platforms will require
plenty of time and training. Being aware of the technical
and morphological difficulties and pitfalls that pathologists
may encounter in digital pathology and AI applications
will prevent erroneous results and unnecessary waste of
time and effort. Whether the use of AI applications
by pathologists provides an advantage
depends on the gain in application
efficiency, the associated time, and the context of the work
performed. In this regard, awareness of the difficulties
and pitfalls will facilitate the integration of the product and
contribute to a high level of utilization by the pathologist.
The most fundamental and indispensable requirement in digital pathology and AI applications is the accuracy and quality of the WSI. The challenges for pathologists can be grouped into three phases: before, during, and after WSI acquisition, the last also involving the AI algorithm. The challenges before acquisition are already part of the daily routine of pathology, and their standardization is one of the main problems that has been worked on for many years. A significant portion of the difficulties during acquisition are device-dependent and can be largely overcome with a sufficient budget. Difficulties after WSI acquisition may be caused by the device or the AI algorithm.
Although it is possible to overcome all challenges with device budgets and algorithm modifications, it is critical that there be full cooperation between the pathologist and the AI software developer, and that each understands the other's issues. Finally, ongoing hardware and AI software improvements and more affordable costs will help to overcome the challenges more easily.
Funding
This review did not receive any specific grant from funding agencies
in the public, commercial, or not-for-profit sectors.
Conflict of Interest
Authors have no conflict of interest.
Authorship Contributions
Concept: KB, Design: KB, DD, KBO, Data collection or processing:
KB, DD, KBO, Analysis or Interpretation: KB, DD, KBO, Literature
search: KB, DD, KBO, Writing: KB, DD, KBO, Approval: KB, DD,
KBO.
REFERENCES
1) Milačić VR, Miler A. Artificial intelligence-morphological
approach as a new technological forecasting technique.
International Journal of Production Research. 1986;24:1409-25.
2) Russell SJ, Norvig P. Artificial Intelligence: A Modern
Approach. Upper Saddle River, NJ: Prentice Hall; 2010.
3) Chang HY, Jung CK, Woo JI, Lee S, Cho J, Kim SW, Kwak
TY. Artificial intelligence in pathology. J Pathol Transl Med.
2019;53:1-12.
4) Cui M, Zhang DY. Artificial intelligence and computational
pathology. Lab Invest. 2021;101:412-22.
5) Pantanowitz L. Digital images and the future of digital pathology.
J Pathol Inform. 2010;1:15.
6) Xu C, Jackson SA. Machine learning and complex biological
data. Genome Biol. 2019;20:76.
7) Bacus JV, Bacus JW. Method and apparatus for acquiring and
reconstructing magnified specimen images from a computer-controlled
microscope. US6101265A, August 23, 1996.
8) Niazi MKK, Parwani AV, Gurcan MN. Digital pathology and
artificial intelligence. Lancet Oncol. 2019;20:e253-61.
9) Hartman DJ, Van Der Laak JAWM, Gurcan MN, Pantanowitz L.
Value of public challenges for the development of pathology deep
learning algorithms. J Pathol Inform. 2020;11:7.
10) Liu S, Shah Z, Sav A, Russo C, Berkovsky S, Qian Y, Coiera E,
Di Ieva A. Isocitrate dehydrogenase (IDH) status prediction in
histopathology images of gliomas using deep learning. Sci Rep.
2020;10:7733.
11) Serag A, Ion-Margineanu A, Qureshi H, McMillan R, Saint
Martin MJ, Diamond J, O'Reilly P, Hamilton P. Translational
AI and deep learning in diagnostic pathology. Front Med
(Lausanne). 2019;6:185.
12) Madabhushi A, Lee G. Image analysis and machine learning in
digital pathology: Challenges and opportunities. Med Image
Anal. 2016;33:170-5.
13) Chang HY, Jung CK, Woo JI, Lee S, Cho J, Kim SW, Kwak
TY. Artificial Intelligence in Pathology. J Pathol Transl Med.
2019;53(1):1-12.
14) Rakha EA, Toss M, Shiino S, Gamble P, Jaroensri R, Mermel CH,
Chen PC. Current and future applications of artificial intelligence
in pathology: A clinical perspective. J Clin Pathol. 2021;74:409-14.
15) Lahiani A, Klaman I, Navab N, Albarqouni S, Klaiman E.
Seamless virtual whole slide image synthesis and validation using
perceptual embedding consistency. IEEE J Biomed Health Inform
2021;25:403-11.
16) Fontelo P, Faustorilla J, Gavino A, Marcelo A. Digital pathology
- implementation challenges in low-resource countries. Anal Cell
Pathol (Amst). 2012;35:31-6.
17) Sensu S, Erdogan N, Gurbuz YS. Digital era and artificial
intelligence in pathology: Basic information. Turkiye Klinikleri
J Med Sci. 2020;40:104-12.
18) Ciga O, Xu T, Martel AL. Self supervised contrastive learning for
digital histopathology. Mach Learn Appl. 2022;7:100198.
19) Ciga O, Martel AL. Learning to segment images with classification
labels. Med Image Anal. 2021;68:101912.
20) Tekin E, Yazıcı Ç, Kusetogullari H, Tokat F, Yavariabdi A, Iheme
LO, Çayır S, Bozaba E, Solmaz G, Darbaz B, Özsoy G, Ayaltı S,
Kayhan CK, İnce Ü, Uzel B. Tubule-U-Net: A novel dataset and
deep learning-based tubule segmentation framework in whole
slide images of breast cancer. Sci Rep. 2023;13:128.
21) Ozyoruk KB, Can S, Darbaz B, Başak K, Demir D, Gokceler
GI, Serin G, Hacisalihoglu UP, Kurtuluş E, Lu MY, Chen TY,
Williamson DFK, Yılmaz F, Mahmood F, Turan M. A deep-learning
model for transforming the style of tissue images from
cryosectioned to formalin-fixed and paraffin-embedded. Nat
Biomed Eng. 2022;6:1407-19.
22) Lipkova J, Chen TY, Lu MY, Chen RJ, Shady M, Williams M,
Wang J, Noor Z, Mitchell RN, Turan M, Coskun G, Yilmaz F,
Demir D, Nart D, Basak K, Turhan N, Ozkara S, Banz Y, Odening
KE, Mahmood F. Deep learning-enabled assessment of cardiac
allograft rejection from endomyocardial biopsies. Nat Med.
2022;28:575-82.
23) Ozer E, Bilecen AE, Ozer NB, Yanikoglu B. Intraoperative
cytological diagnosis of brain tumours: A preliminary study
using a deep learning model. Cytopathology. 2023;34:113-9.
24) Erdemoglu E, Serel TA, Karacan E, Köksal OK, Turan İ, Öztürk V,
Bozkurt KK. Artificial intelligence for prediction of endometrial
intraepithelial neoplasia and endometrial cancer risks in pre- and
postmenopausal women. AJOG Glob Rep. 2023;3:100154.
25) Cay N, Mendi BAR, Batur H, Erdogan F. Discrimination of
lipoma from atypical lipomatous tumor/well-differentiated
liposarcoma using magnetic resonance imaging radiomics
combined with machine learning. Jpn J Radiol. 2022;40:951-60.
26) Yılmaz B, Şahin S, Ergül N, Çolakoğlu Y, Baytekin HF, Sökmen
D, Tuğcu V, Taşçı Aİ, Çermik TF. 99mTc-PSMA targeted robot-assisted
radioguided surgery during radical prostatectomy and
extended lymph node dissection of prostate cancer patients. Ann
Nucl Med. 2022;36:597-609.
27) Tizhoosh HR, Pantanowitz L. Artificial intelligence and digital
pathology: Challenges and opportunities. J Pathol Inform.
2018;9:38.
28) Patel A, Balis UGJ, Cheng J, Li Z, Lujan G, McClintock DS,
Pantanowitz L, Parwani A. Contemporary whole slide imaging
devices and their applications within the modern pathology
department: A selected hardware review. J Pathol Inform.
2021;12:50.
29) Sellaro TL, Filkins R, Hoffman C, Fine JL, Ho J, Parwani AV,
Pantanowitz L, Montalto M. Relationship between magnification
and resolution in digital pathology systems. J Pathol Inform.
2013;4:21.
30) Saldanha OL, Quirke P, West NP, James JA, Loughrey MB,
Grabsch HI, Salto-Tellez M, Alwers E, Cifci D, Ghaffari Laleh N,
Seibel T, Gray R, Hutchins GGA, Brenner H, van Treeck M, Yuan
T, Brinker TJ, Chang-Claude J, Khader F, Schuppert A, Luedde
T, Trautwein C, Muti HS, Foersch S, Hoffmeister M, Truhn D,
Kather JN. Swarm learning for decentralized artificial intelligence
in cancer histopathology. Nat Med. 2022;28(6):1232-9.
31) Allen TC. Digital pathology and federalism. Arch Pathol Lab
Med. 2014;138:162-5.
32) Moxley-Wyles B, Colling R, Verrill C. Artificial intelligence in
pathology: An overview. Diagnostic Histopathology. 2020;26:513-20.
33) Smith B, Hermsen M, Lesser E, Ravichandar D, Kremers W.
Developing image analysis pipelines of whole-slide images: Pre- and
post-processing. J Clin Transl Sci. 2020;5:e38.
34) Manuel C, Zehnder P, Kaya S, Sullivan R, Hu F. Impact of
color augmentation and tissue type in deep learning for
hematoxylin and eosin image super resolution. J Pathol Inform.
2022;13:100148.
35) Pantanowitz L, Valenstein PN, Evans AJ, Kaplan KJ, Pfeifer JD,
Wilbur DC, Collins LC, Colgan TJ. Review of the current state of
whole slide imaging in pathology. J Pathol Inform. 2011;2:36.
36) Försch S, Klauschen F, Hufnagl P, Roth W. Artificial intelligence
in pathology. Dtsch Arztebl Int. 2021;118:194-204.
37) Al-Janabi S, Huisman A, Vink A, Leguit RJ, Offerhaus GJ, ten Kate
FJ, van Diest PJ. Whole slide images for primary diagnostics of
gastrointestinal tract pathology: A feasibility study. Hum Pathol.
2012;43(5):702-7.
38) Inoue T, Yagi Y. Color standardization and optimization in whole
slide imaging. Clin Diagn Pathol. 2020;4:10.15761/cdp.1000139.
39) Janowczyk A, Basavanhally A, Madabhushi A. Stain
Normalization using Sparse AutoEncoders (StaNoSA):
Application to digital pathology. Comput Med Imaging Graph.
2017;57:50-61.
40) Poole CJ, Hill DJ, Christie JL, Birch J. Deficient colour vision and
interpretation of histopathology slides: Cross sectional study.
BMJ. 1997;315:1279-81.
41) Mao X, Wang J, Tao X, Wang Y, Li Q, Zhou X, Zhang Y. Single
Generative Networks for Stain Normalization and Quality
Enhancement of Histological Images in Digital Pathology, 2021
14th International Congress on Image and Signal Processing,
BioMedical Engineering and Informatics (CISP-BMEI), Shanghai,
China, 2021:1-5, doi: 10.1109/CISP-BMEI53629.2021.9624221.
42) Zanjani FG, Zinger S, Bejnordi BE, van der Laak JAWM, de With PHN.
Stain normalization of histopathology images using generative
adversarial networks. 2018 IEEE 15th International Symposium
on Biomedical Imaging (ISBI 2018), April 4-7, 2018, Washington,
DC, USA.
43) U.S. Department of Health and Human Services, Food and Drug
Administration, Center for Devices and Radiological Health.
Technical performance assessment of digital pathology whole
slide imaging devices. https://www.regulations.gov/document/
FDA-2015-D-0230-0018, pages 1-27, 2016.
44) Bautista PA, Hashimoto N, Yagi Y. Color standardization in
whole slide imaging using a color calibration slide. J Pathol
Inform. 2014;5:4.