
Identification of Barrett's esophagus in endoscopic images using deep learning



Background

Development of a deep learning method to identify the Barrett's esophagus (BE) scope in endoscopic images.

Methods

443 endoscopic images from 187 patients with BE were included in this study. The gastroesophageal junction (GEJ) and squamous-columnar junction (SCJ) of BE were manually annotated in endoscopic images by experts. Fully convolutional neural networks (FCN) were developed to automatically identify the BE scopes in endoscopic images. The networks were trained and evaluated on two separate image sets. Segmentation performance was evaluated by intersection over union (IOU).

Results

The deep learning method achieved satisfactory performance in the automated identification of BE in endoscopic images. The IOU values were 0.56 (GEJ) and 0.82 (SCJ), respectively.

Conclusions

The deep learning algorithm showed promising concordance with manual expert assessment in segmenting the BE scope in endoscopic images. This automated recognition method may help clinicians locate and recognize the BE scope during endoscopic examinations.



Background

Barrett's esophagus (BE) is a precancerous condition caused by damage to the inner lining of the squamous esophageal mucosa, characterized by a change of the normal stratified squamous epithelium lining the esophagus to a metaplastic columnar epithelium with goblet cells [1]. BE is the only known histological precursor of esophageal adenocarcinoma (EAC) [2]. It has been reported that EAC is associated with high mortality (5-year survival rate < 20%) and increasing incidence [3,4,5,6]. EAC patients with a prior diagnosis of BE normally have better outcomes than patients without one [7]. Therefore, early detection and appropriate treatment of BE are crucial for effective prevention of the development of EAC. At present, the most common screening method for BE is pathological biopsy using samples obtained through esophagoscopy (ESO). However, due to individual variations in the shape, appearance, and texture of BE, accurate identification and localization of the BE scope remain challenging. Moreover, locating BE relies on the individual experience of endoscopists, which may further introduce variation and bias. These problems are time-consuming and can cause misjudgments, possibly leading to delayed identification or misdiagnosis of BE and ultimately affecting follow-up treatment. Therefore, to overcome these difficulties, it is necessary to further improve the efficiency and accuracy of identifying and locating BE during endoscopic examinations.

Recent years have witnessed tremendous development in artificial intelligence (AI); deep learning (DL) in particular has achieved unprecedented successes in various domains, with groundbreaking performance on par with human capabilities [8]. More recently, there has been a trend of applying DL in healthcare and clinical applications [9, 10]. As a subbranch of AI, DL utilizes multiple layers of neurons to extract abstract patterns from data. In image analysis, DL shows encouraging potential in tasks of segmentation, classification, and prediction [11,12,13]. A growing body of literature has developed DL methods for the analysis of medical images such as ultrasound, CT, MRI, and X-ray [14,15,16].

Recently, DL has been gradually applied to endoscopic image analysis of the colon, stomach, and intestine, with encouraging performance in identifying and diagnosing diseases such as tumors, polyps, and ulcers [17,18,19]. Meanwhile, several studies have applied DL to the classification and segmentation of esophageal lesions [20,21,22,23,24,25,26]. However, there is still a lack of reports on DL methods dedicated to BE identification. Mendel et al. adopted a transfer-learning approach to segment endoscopic images including cancer and BE [20]. Wu et al. used a convolutional neural network (CNN) to segment endoscopic images of cancer, BE, and inflammation [21]. In a recently published study, a depth estimator network was used to measure C&M scores including the BEA [27]. Because of the importance of early diagnosis of BE in the prevention of EAC, it is worth further investigating DL in BE diagnosis with a sufficient BE sample size [28]. A previous study suggested that in Asia, including China, short-segment Barrett's esophagus is more common [29]. Moreover, for cases of short-segment BE, it is relatively easier to make an accurate endoscopic diagnosis. Therefore, we focused on patients with BE segments shorter than 7 cm in this work.

The objective of this study was to develop a fully automated DL method for early and accurate segmentation and identification of BE in endoscopic images. We included 443 endoscopic images from 187 BE patients. The DL method could accurately identify and segment the BE scope, which could further facilitate subsequent endoscopic surveillance and treatment of BE.

Materials and methods

The overall workflow of this study is illustrated in Fig. 1. First, patients were included and endoscopic images were obtained. Next, the BE regions were annotated in the endoscopic images by experts. Based on the raw images and annotations, the deep learning segmentation algorithms were trained and evaluated on the training and validation datasets, respectively. Finally, the performance was summarized and reported.

Fig. 1 Overall workflow of this study

Patient characteristics

In this retrospective study, a total of 187 patients who underwent endoscopy at the Hospital of Chengdu Office of People's Government of Tibetan Autonomous Region between January 2015 and June 2019 were included. The BE conditions were confirmed by pathological examinations. All data were anonymized, and ethics approval was granted by the Ethics Committee of the Hospital of Chengdu Office of People's Government of Tibetan Autonomous Region (No. 201920).

Image acquisition

We obtained 443 endoscopic images from a total of 187 clinical cases (Table 1); the instruments used in the examinations were Olympus GIF-HQ290 and GIF-Q260 gastroscopes (Olympus Company, Japan). The esophagus was cleaned and examined with white light, narrow band imaging, and chromoendoscopy. The BE scope was recorded according to the Prague classification system. The endoscope was positioned proximally to the GEJ, and the endoscopic image was taken. Meanwhile, biopsy samples were obtained using biopsy forceps, and the final diagnoses were confirmed by pathologists.
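The Prague classification records a BE segment as a C&M pair: the circumferential (C) and maximal (M) extent of columnar mucosa above the GEJ, in centimeters. As a minimal illustration of that notation (a sketch, not the authors' code; the function name and signature are assumptions for this example):

```python
def prague_cm(circumferential_cm: float, maximal_cm: float) -> str:
    """Format a Prague C&M label, e.g. C2M4 for a BE segment with 2 cm of
    circumferential and 4 cm of maximal extent above the GEJ."""
    if maximal_cm < circumferential_cm:
        # the maximal extent always includes the circumferential extent
        raise ValueError("maximal extent cannot be shorter than circumferential extent")
    return f"C{circumferential_cm:g}M{maximal_cm:g}"

print(prague_cm(2, 4))  # → C2M4
```

A segment labeled C0M3.5, for instance, has a 3.5 cm tongue of columnar mucosa with no circumferential involvement.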

Table 1 Patient characteristics (training set and test set)

Image annotation

To obtain the ground truth of the BE scopes in the images, we invited two senior endoscopists with over 15 years of experience to manually draw the outlines of the scopes using in-house developed software. More specifically, the rims of the GEJ and SCJ were delineated to define the BE scope. The experts were trained to follow the same quality standard before conducting the task. The first expert annotated all images, and the results were confirmed by the second expert. For any disagreement, the two experts discussed and made new annotations as necessary until consensus was reached. The annotation information was later extracted to generate segmentations as ground truth for DL algorithm training and evaluation.

Deep learning algorithm

In this study, we developed a DL algorithm based on the neural network structure of fully convolutional networks (FCN) [30]. As shown in Fig. 2, the network used several convolutional layers to extract abstract feature maps from an input image. After this downsampling, deconvolutional (transposed convolution) layers were appended to upsample the feature maps and generate an output image of the same size as the input. Skip connections fused deep and shallow layers to achieve semantic segmentation at the pixel level. Furthermore, FCN can process images of any size, which makes it well suited to medical images of various sizes. In the training stage, each image was fed into the FCN, and a corresponding mask was generated to indicate the segmentation. The segmentation was compared against the ground truth obtained by the experts, and the resulting loss was used to update the FCN. After all images in the training set had been used to update the network, the trained FCN was obtained and could then be used to generate segmentations for any input.
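The requirement that the output mask match the input size follows from the standard size arithmetic of convolution and transposed convolution. A small sketch (illustrative only; the kernel size, stride, and padding below are assumptions, not the paper's hyperparameters) shows that a stack of stride-2 convolutions is exactly undone by a matching stack of stride-2 deconvolutions:

```python
def conv_out(n, k, s=2, p=1):
    # convolution output size: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k, s=2, p=1):
    # transposed-convolution output size: (n - 1) * s - 2p + k
    return (n - 1) * s - 2 * p + k

n = 480  # hypothetical input height in pixels
down = n
for _ in range(3):   # three stride-2 convolutions (k=4, p=1) for downsampling
    down = conv_out(down, k=4)
up = down
for _ in range(3):   # three matching stride-2 deconvolutions for upsampling
    up = deconv_out(up, k=4)
print(down, up)  # → 60 480
```

With k = 4, s = 2, p = 1 each stage halves or doubles the size exactly, so the restored output (480) equals the input, as the FCN design requires.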

Fig. 2 Schema of the FCN algorithm structure. Multiple convolutional layers with ReLU activation functions were used, followed by deconvolution layers with skip connections. Images were input into the FCN, and the segmentations were obtained as output masks of the same size

Since the BE scope is usually bounded by two rims, namely the GEJ and the SCJ, we considered two approaches. First, we trained two FCN networks independently to segment the GEJ and the SCJ; in other words, two independent FCN networks were trained and evaluated using the annotations to obtain the rims of the GEJ and SCJ. Second, we segmented the GEJ and SCJ using one single trained network. We report and compare the performance of the two approaches. The obtained segmentations were further visualized for examination.

To train and test the developed DL algorithm, we randomly divided the 443 collected images from 187 patients into two independent subsets at the patient level. This ensured that no images from any individual patient appeared in both the training and test sets. As a result, we obtained two subsets: a training set (n = 150 patients, 354 images, 80%) and a test set (n = 37 patients, 89 images, 20%) (Table 2). According to the Prague classification, we divided the test set into 16 groups for analysis. The DL algorithm was first trained using the annotated images in the training set. Afterward, the trained DL algorithm was evaluated on the test set.
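The patient-level split described above can be sketched as follows (an illustration with hypothetical image identifiers, not the authors' code): shuffle patient IDs, assign 20% of patients to the test set, and then route every image according to its patient.

```python
import random
from collections import defaultdict

# Hypothetical records: (patient_id, image_id). 187 patients mirrors the paper;
# two images per patient is an assumption for this sketch.
images = [(p, f"img_{p}_{i}") for p in range(187) for i in range(2)]

def split_by_patient(images, test_frac=0.2, seed=0):
    """Split image records so that all images of a patient fall in one subset."""
    by_patient = defaultdict(list)
    for pid, img in images:
        by_patient[pid].append(img)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    n_test = int(len(patients) * test_frac)
    test_ids = set(patients[:n_test])
    train = [(p, i) for p, i in images if p not in test_ids]
    test = [(p, i) for p, i in images if p in test_ids]
    return train, test

train, test = split_by_patient(images)
# No patient contributes images to both subsets:
assert not {p for p, _ in train} & {p for p, _ in test}
```

Splitting at the image level instead would leak near-duplicate frames of the same patient across the subsets and inflate the reported IOU, which is why the grouping step matters.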

Table 2 Patients were randomly divided into one training set (80%) and one test set (20%) at the patient level

The FCN was implemented in Python (3.7.3) using the publicly available libraries PyTorch (1.1.0), CUDA (10.1), and NumPy (1.16.2). The algorithm was trained and evaluated on a DL server equipped with a Tesla P40 graphics processing unit (GPU) running CentOS Linux (7.6.1810). Although a DL server was utilized in this study, a conventional workstation could likely deploy the trained DL algorithm and generate segmentations within an acceptable time.

Statistical analysis

In line with previous studies of image segmentation, the intersection over union (IOU) metric was used to measure the performance of the DL algorithms. Intuitively, the IOU indicates how well the predicted segmentation overlaps with the ground truth: a value close to one indicates favorable segmentation performance. We also report the Dice similarity coefficient (DSC), which is similar to the IOU and widely used in the literature [31]. However, we used the IOU as the main measurement for its simplicity and wide acceptance. The overall performance was therefore reported as the average IOU and DSC over the test set.
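Both metrics can be computed directly from set sizes: IOU = |A∩B| / |A∪B| and DSC = 2|A∩B| / (|A| + |B|). A minimal sketch (illustrative, not the authors' evaluation code), representing a binary mask as a set of pixel coordinates:

```python
def iou_dsc(pred, truth):
    """Compute IOU and Dice for two binary masks given as sets of (row, col)."""
    inter = len(pred & truth)
    union = len(pred | truth)
    iou = inter / union if union else 1.0
    dsc = 2 * inter / (len(pred) + len(truth)) if (pred or truth) else 1.0
    return iou, dsc

pred = {(r, c) for r in range(4) for c in range(4)}         # 4x4 predicted mask
truth = {(r, c) for r in range(2, 6) for c in range(2, 6)}  # 4x4 shifted ground truth
iou, dsc = iou_dsc(pred, truth)
print(round(iou, 3), round(dsc, 3))  # overlap 4, union 28 → 0.143 0.25
```

The two metrics are monotonically related (DSC = 2·IOU / (1 + IOU)), so they rank segmentations identically; DSC is always at least as large as IOU, which is worth remembering when comparing numbers across papers.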



Results

As described above, the DL algorithms were developed to obtain the GEJ and SCJ separately in two approaches. In Table 3, we report the segmentation results achieved by the FCN for the GEJ and SCJ in the test set. We found that segmenting the GEJ and SCJ separately with two FCN networks outperformed using one single network. To illustrate the results of DL in the identification of BE scopes, we visualized the segmentations of both DL and experts for representative samples in Fig. 3. As shown, DL was capable of accurately identifying the GEJ and SCJ of BE scopes. Experts examined the DL results for all images in the test set and concluded that the agreement between the DL results and the expert annotations was satisfactory. For cases with smaller IOU values, the overall shapes of the GEJ and SCJ obtained by DL were still acceptable. Comparing the average IOU values for each subset and the whole set, we found no significant difference among the subsets and the whole set. In the subsets, average IOU values ranged from 0.32 to 0.68 for the GEJ and from 0.60 to 0.94 for the SCJ, respectively.

Table 3 Performance of DL algorithm achieved in the test set in the tasks of identifying the GEJ and the SCJ of the BE scopes
Fig. 3 Examples of results obtained by the DL algorithms versus expert annotations for four patients. Each column belongs to one patient. The upper and lower rows show the GEJ and SCJ, respectively. The first two columns were taken using white light imaging, and the last two using narrow band imaging. Expert annotations are marked in white. The DL-obtained GEJ is marked in blue (upper row); the IOUs for the GEJ were 0.79 (A), 0.76 (B), 0.66 (C), and 0.66 (D). The DL-obtained SCJ is marked in green (lower row); the IOUs for the SCJ were 0.91 (E), 0.88 (F), 0.91 (G), and 0.94 (H)


Discussion

EAC is the main histological type of esophageal cancer in the West [32], and BE is the only known histological precursor of EAC. Recently, several reports have indicated that the incidences of BE and Barrett's esophageal adenocarcinoma (BEA) are rising in Asia [33, 34]. Previous studies have shown that prior diagnosis, surveillance [7, 35,36,37], and appropriate treatment practices [38, 39] of BE can reduce the risk of EAC progression and improve survival. Endoscopic biopsy is the most commonly used method for the diagnosis and monitoring of BE [40], and endotherapy such as endoscopic resection and esophageal ablation has become the standard of care for BE [41]. These measures all require endoscopists to accurately identify the BE scope under endoscopic examination [42, 43]. This process relies on individual experience, with inevitable misjudgments, variation, and time consumption. Moreover, the diversity in the shapes, appearances, and textures of BE adds to the difficulty of accurately segmenting BE scopes.

Therefore, in this study, we proposed and developed a DL method to automatically segment the BE scopes in endoscopy, which could further improve early-accurate diagnosis and treatments of BE. We collected 443 images from 187 patients and invited experts to manually annotate BE scopes. We constructed one training set and one test set to develop and evaluate the DL methods.

Mendel et al. included 100 endoscopic images, comprising 50 cancer cases and 50 BE cases from 39 patients [20]. Using a transfer-learning approach, they reported a sensitivity of 0.94 and a specificity of 0.88; however, the number of BE cases was relatively small. Wu et al. developed neural networks to segment 797 endoscopic images of cancer, BE, and inflammation cases [21]. Sharib et al. used a depth estimator network to measure C&M values in 194 high-definition videos from 131 BE patients [27]. Compared to the above works, our study was dedicated to the automated identification and localization of BE scopes in endoscopic images using DL. Additionally, we developed DL methods to accurately identify both the GEJ and the SCJ. It is worth mentioning that our approach of separately segmenting the GEJ and SCJ with two DL networks outperformed the approach of using one single DL network, which may be inspiring for similar medical image analysis tasks. These efforts could assist endoscopists in diagnosing BE efficiently and improve the accuracy of BE diagnosis.

However, there are still several limitations in this study. Firstly, we focused only on developing DL to automatically segment the BE scope under esophagoscopic examination and did not differentiate BE from other esophageal lesions. In the future, we intend to include other esophageal lesions and extend the present DL framework to classify and diagnose types of BE. Secondly, this is a retrospective study from a single center; the results should be further validated in prospective studies using external cohorts. Thirdly, DL methods are still evolving rapidly, with more advanced algorithms emerging; evaluating new DL algorithms for diagnosing BE in endoscopic images could further improve performance. Current guidelines recommend that the diagnosis of BE be based on the presence of the SCJ at least 1 cm proximal to the GEJ, with biopsy results consistent with intestinal metaplasia [44,45,46]. AI could accurately quantify the elevation of the SCJ and objectively evaluate BE, thereby avoiding over-diagnosis and over-surveillance. The application of AI to determine whether the SCJ extends more than 1 cm is worth further investigation.

Conclusions

In this study, we carried out the recognition and segmentation of the BE scope in endoscopic images using DL. Specifically, FCN neural networks were developed and evaluated. The DL methods achieved satisfactory performance in the segmentation of the GEJ and SCJ, indicating promising potential in clinical BE evaluations.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.



Abbreviations

BE: Barrett's esophagus
EAC: Esophageal adenocarcinoma
GEJ: Gastroesophageal junction
SCJ: Squamous-columnar junction
AI: Artificial intelligence
DL: Deep learning
FCN: Fully convolutional networks
CNN: Convolutional neural network
IOU: Intersection over union
DSC: Dice similarity coefficient
BEA: Barrett's esophageal adenocarcinoma


References

  1. Iyer PG, Kaul V. Barrett Esophagus. Mayo Clin Proc. 2019;94(9):1888–901.


  2. Peters Y, Al-Kaabi A, Shaheen NJ, Chak A, Blum A, Souza RF, Di Pietro M, Iyer PG, Pech O, Fitzgerald RC, et al. Barrett oesophagus. Nat Rev Dis Primers. 2019;5(1):35.


  3. Launoy G, Bossard N, Castro C, Manfredi S. Group GE-W: Trends in net survival from esophageal cancer in six European Latin countries: results from the SUDCAN population-based study. Eur J Cancer Prev. 2017.


  4. Njei B, McCarty TR, Birk JW. Trends in esophageal cancer survival in United States adults from 1973 to 2009: A SEER database analysis. J Gastroenterol Hepatol. 2016;31(6):1141–6.


  5. Anderson LA, Tavilla A, Brenner H, Luttmann S, Navarro C, Gavin AT, Holleczek B, Johnston BT, Cook MB, Bannon F, et al. Survival for oesophageal, stomach and small intestine cancers in Europe 1999–2007: results from EUROCARE-5. Eur J Cancer. 2015;51(15):2144–57.


  6. Thrift AP. Global burden and epidemiology of Barrett oesophagus and oesophageal cancer. Nat Rev Gastroenterol Hepatol. 2021.


  7. Verbeek RE, Leenders M, Kate FJWT, van Hillegersberg R, Vleggaar FP, vanBaal JWPM, van Oijen MGH, Siersema PD. Surveillance of Barrett’s esophagus and mortality from esophageal adenocarcinoma: a population-based cohort study. Off J Am Coll Gastroenterol. 2014;109(8):1215–22.


  8. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44.


  9. Faust O, Hagiwara Y, Hong TJ, Lih OS, Acharya UR. Deep learning for healthcare applications based on physiological signals: a review. Comput Methods Program Biomed. 2018;161:1–13.


  10. Purushotham S, Meng C, Che Z, Liu Y. Benchmarking deep learning models on large healthcare datasets. J Biomed Inform. 2018;83:112–34.


  11. Aoki T, Yamada A, Aoyama K, Saito H, Tsuboi A, Nakada A, Niikura R, Fujishiro M, Oka S, Ishihara S, et al. Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest Endosc. 2019;89(2):357-363.e352.


  12. Coudray N, Ocampo PS, Sakellaropoulos T, Narula N, Snuderl M, Fenyö D, Moreira AL, Razavian N, Tsirigos A. Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nat Med. 2018;24(10):1559–67.


  13. Jiang F, Grigorev A, Rho S, Tian Z, Fu Y, Jifara W, Adil K, Liu S. Medical image semantic segmentation based on deep learning. Neural Comput Appl. 2018;29(5):1257–65.


  14. Ait Skourt B, El Hassani A, Majda A. Lung CT Image segmentation using deep neural networks. Procedia Comput Sci. 2018;127:109–13.


  15. Cheng J-Z, Ni D, Chou Y-H, Qin J, Tiu C-M, Chang Y-C, Huang C-S, Shen D, Chen C-M. Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans. Sci Rep. 2016;6(1):24454.


  16. Mittal M, Goyal LM, Kaur S, Kaur I, Verma A, Jude Hemanth D. Deep learning based enhanced tumor segmentation approach for MR brain images. Appl Soft Comput. 2019;78:346–54.


  17. Ling T, Wu L, Fu Y, Xu Q, An P, Zhang J, Hu S, Chen Y, He X, Wang J, et al. A deep learning-based system for identifying differentiation status and delineating the margins of early gastric cancer in magnifying narrow-band imaging endoscopy. Endoscopy. 2021;53(5):469–77.


  18. Zheng W, Zhang X, Kim JJ, Zhu X, Ye G, Ye B, Wang J, Luo S, Li J, Yu T et al. High accuracy of convolutional neural network for evaluation of helicobacter pylori infection based on endoscopic images: preliminary experience. Clin Transl Gastroenterol 2019, 10(12):e00109.

  19. Urban G, Tripathi P, Alkayali T, Mittal M, Jalali F, Karnes W, Baldi P. Deep learning localizes and identifies polyps in real time With 96% accuracy in screening colonoscopy. Gastroenterology. 2018;155(4):1069–78.


  20. Mendel R, Ebigbo A, Probst A, Messmann H, Palm C. Barrett’s esophagus analysis using convolutional neural networks. Bildverarbeitung für die Medizin 2017:80–85

  21. Wu Z, Ge R, Wen M, Liu G, Chen Y, Zhang P, He X, Hua J, Luo L. ELNet: Automatic classification and segmentation for esophageal lesions using convolutional neural network. Med Image Anal. 2021;67:101838.


  22. de Groof AJ, Struyvenberg MR, Fockens KN, van der Putten J, van der Sommen F, Boers TG, Zinger S, Bisschops R, de With PH, Pouw RE et al. Deep learning algorithm detection of Barrett's neoplasia with high accuracy during live endoscopic procedures: a pilot study (with video). Gastrointest Endosc 2020, 91(6):1242–1250.

  23. Liu G, Hua J, Wu Z, Meng T, Sun M, Huang P, He X, Sun W, Li X, Chen Y: Automatic classification of esophageal lesions in endoscopic images using a convolutional neural network. Ann Transl Med 2020, 8(7):486.

  24. de Groof J, van der Sommen F, van der Putten J, Struyvenberg MR, Zinger S, Curvers WL, Pech O, Meining A, Neuhaus H, Bisschops R, et al. The Argos project: The development of a computer-aided detection system to improve detection of Barrett’s neoplasia on white light endoscopy. United Eur Gastroenterol J. 2019;7(4):538–47.


  25. Hong J, Park BY, Park H. Convolutional neural network classifier for distinguishing Barrett's esophagus and neoplasia endomicroscopy images. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society IEEE Engineering in Medicine and Biology Society Annual International Conference 2017, 2017:2892–2895.

  26. Gehrung M, Crispin-Ortuzar M, Berman AG, O’Donovan M, Fitzgerald RC, Markowetz F. Triage-driven diagnosis of Barrett’s esophagus for early detection of esophageal adenocarcinoma using deep learning. Nat Med. 2021.


  27. Ali S, Bailey A, Ash S, Haghighat M, Leedham SJ, Lu X, East JE, Rittscher J, Braden B. A pilot study on automatic three-dimensional quantification of Barrett’s esophagus for risk stratification and therapy monitoring. Gastroenterology. 2021;161(3):865–78.


  28. Kinjo T, Kusano C, Oda I, Gotoda T. Prague C&M and Japanese criteria: shades of Barrett’s esophagus endoscopic diagnosis. J Gastroenterol. 2010;45(10):1039–44.


  29. Shiota S, Singh S, Anshasi A, El-Serag HB. Prevalence of Barrett’s esophagus in asian countries: a systematic review and meta-analysis. Clin Gastroenterol Hepatol. 2015;13(11):1907–18.


  30. Long J, Shelhamer E, Darrell T: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition: 2015; 2015: 3431–3440.

  31. Milletari F, Navab N, Ahmadi S-A: V-net:fully convolutional neural networks for volumetric medical image segmentation. In: 2016 fourth international conference on 3D vision (3DV): 2016: IEEE; 2016: 565–571.

  32. Coleman HG, Xie SH, Lagergren J. The epidemiology of esophageal adenocarcinoma. Gastroenterology. 2018;154(2):390–405.


  33. Wu JC. Gastroesophageal reflux disease: an Asian perspective. J Gastroenterol Hepatol. 2008;23(12):1785–93.


  34. Hongo M, Nagasaki Y, Shoji T. Epidemiology of esophageal cancer: Orient to occident. Effects of chronology, geography and ethnicity. J Gastroenterol Hepatol. 2009;24(5):729–35.


  35. Bhat SK, McManus DT, Coleman HG, Johnston BT, Cardwell CR, McMenamin U, Bannon F, Hicks B, Kennedy G, Gavin AT, et al. Oesophageal adenocarcinoma and prior diagnosis of Barrett’s oesophagus: a population-based study. Gut. 2015;64(1):20–5.


  36. Cooper GS, Yuan Z, Chak A, Rimm AA. Association of prediagnosis endoscopy with stage and survival in adenocarcinoma of the esophagus and gastric cardia. Cancer. 2002;95(1):32–8.


  37. Cooper GS, Kou TD, Chak A. Receipt of previous diagnoses and endoscopy and outcome from esophageal adenocarcinoma: a population-based study with temporal trends. Am J Gastroenterol. 2009;104(6):1356–62.


  38. Phoa KN, van Vilsteren FG, Weusten BL, Bisschops R, Schoon EJ, Ragunath K, Fullarton G, Di Pietro M, Ravi N, Visser M, et al. Radiofrequency ablation vs endoscopic surveillance for patients with Barrett esophagus and low-grade dysplasia: a randomized clinical trial. JAMA. 2014;311(12):1209–17.


  39. van Munster S, Nieuwenhuis E, Weusten B, Alvarez Herrero L, Bogte A, Alkhalaf A, Schenk B, Schoon E, Curvers W, Koch A, et al. Long-term outcomes after endoscopic treatment for Barrett’s neoplasia with radiofrequency ablation ± endoscopic resection: results from the national Dutch database in a 10-year period. Gut. 2021.


  40. Eluri S, Shaheen NJ. Barrett’s esophagus: diagnosis and management. Gastrointest Endosc. 2017;85(5):889–903.


  41. Prasad GA, Wu TT, Wigle DA, Buttar NS, Wongkeesong LM, Dunagan KT, Lutzke LS, Borkenhagen LS, Wang KK. Endoscopic and surgical treatment of mucosal (T1a) esophageal adenocarcinoma in Barrett’s esophagus. Gastroenterology. 2009;137(3):815–23.


  42. Fitzgerald RC, Saeed IT, Khoo D, Farthing MJ, Burnham WR. Rigorous surveillance protocol increases detection of curable cancers associated with Barrett’s esophagus. Dig Dis Sci. 2001;46(9):1892–8.


  43. Abela JE, Going JJ, Mackenzie JF, McKernan M, O’Mahoney S, Stuart RC. Systematic four-quadrant biopsy detects Barrett’s dysplasia in more patients than nonsystematic biopsy. Am J Gastroenterol. 2008;103(4):850–5.


  44. Shaheen NJ, Falk GW, Iyer PG, Gerson LB. ACG clinical guideline: diagnosis and management of Barrett’s Esophagus. Am J Gastroenterol. 2016;111(1):30–50.


  45. Fitzgerald RC, di Pietro M, Ragunath K, Ang Y, Kang JY, Watson P, Trudgill N, Patel P, Kaye PV, Sanders S, et al. British Society of Gastroenterology guidelines on the diagnosis and management of Barrett’s oesophagus. Gut. 2014;63(1):7–42.


  46. Weusten B, Bisschops R, Coron E, Dinis-Ribeiro M, Dumonceau JM, Esteban JM, Hassan C, Pech O, Repici A, Bergman J, et al. Endoscopic management of Barrett’s esophagus: European Society of Gastrointestinal Endoscopy (ESGE) Position Statement. Endoscopy. 2017;49(2):191–8.




Funding

This study was supported by the Innovation Method Program of the Ministry of Science and Technology of the People's Republic of China (M112017IM010700), the Key Research and Development Project of the Science & Technology Department of Sichuan Province (2020YFS0324), the Natural Science Foundation of Tibet Autonomous Region (XZ2019ZR G-130), and the Applied Basic Research Project of the Science & Technology Department of Luzhou city (2018-JYJ-45).

Author information

Authors and Affiliations



WP, XL, WW, LZ, JW, TR, CL, ML, SS and YT conceived and designed the study, and were responsible for the final decision to submit for publication. All authors were involved in the development, review, and approval of the manuscript. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Chao Liu, Muhan Lv, Song Su or Yong Tang.

Ethics declarations

Ethics approval and consent to participate

This retrospective study conforms to the ethical guiding principles of the Declaration of Helsinki and was approved by the Ethics Committee of the Hospital of Chengdu Office of People's Government of Tibetan Autonomous Region (No. 201920). In accordance with national legislation and institutional requirements, informed consent was waived by the Ethics Committee because of the retrospective nature of this study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.




Cite this article

Pan, W., Li, X., Wang, W. et al. Identification of Barrett's esophagus in endoscopic images using deep learning. BMC Gastroenterol 21, 479 (2021).
