Introduction

Allergic rhinitis (AR) is an IgE-mediated inflammatory response in the nasal mucosa following exposure to certain antigens, triggering sneezing, nasal congestion, and runny nose1. Recently, the incidence of allergic rhinitis has been rising with increasing concentrations of PM10 and PM2.5 particulate matter caused by air pollution2. According to the Korea National Health Insurance Service, the number of allergic rhinitis patients and the related medical expenses continue to increase, imposing a significant burden3.

Diagnosing allergic rhinitis typically involves taking a patient's medical history, skin prick tests, and measuring specific IgE antibodies. Taking a medical history is straightforward and cost-effective but can be imprecise because it relies on subjective symptoms and family health background. Skin prick tests, while quick, may not be suitable for all patients because of age, skin conditions, or certain medications4. Specific IgE antibody tests are not limited by the subject's condition but are expensive and time-consuming5,6. Skin prick tests and specific IgE tests can identify the causative antigens of allergic rhinitis and are qualitative diagnostic methods, but they are invasive and expensive. Moreover, for objective measurement methods, the relationship with the patient's symptoms remains ambiguous. Consequently, diagnoses that rely on patient history are subjective and lack the precision and objectivity required for accurate diagnosis.

Guidelines for diagnosing allergic rhinitis include nasal examinations7. A typical sign of allergic rhinitis, observed through nasal endoscopy, is the appearance of pale and swollen inferior turbinates8. Chronic edema and pale mucosa may occur in patients who have suffered long-term AR, but this distinction does not provide a quantitative diagnostic value. Nevertheless, to the best of our knowledge, no studies have quantitatively assessed diagnosis using nasal endoscopy. Although previous studies have applied deep learning to the diagnosis of allergic rhinitis using CT or MRI, such approaches have not been reported with nasal endoscopy.

Studies using RGB color analysis have found that the inferior turbinates of affected patients display increased levels of green and blue hues9. Additionally, HSL color analysis indicates that these turbinates are paler and smoother than those of non-affected individuals10. Despite these various approaches, the diagnosis of allergic rhinitis remains challenging because of the subjective nature of clinical histories, the invasiveness and limitations of skin prick and IgE antibody tests, and the qualitative nature of current nasal endoscopy evaluations. Thus, a substantial research gap exists in developing standardized, non-invasive, and quantitative diagnostic methods for allergic rhinitis that remain cost-effective and efficient.

This study aims to quantify nasal endoscopy images through optical analysis and to compare the measurements between AR patients and non-affected individuals. Our approach involves two main stages: feature extraction and classification. First, we analyze the color distribution of the inferior turbinates in the Lab color space, comparing allergic rhinitis patients to non-affected individuals. To represent the endoscopy images and effectively recognize the turbinate, we extract meaningful feature vectors in two ways: one uses a histogram, and the other uses a pre-trained convolutional neural network (CNN).
These features are then evaluated using support vector machines (SVM) and fully connected (FC) layers as classifiers.

Method

Acquisition of nasal endoscopy images

The data used in this study were approved by the Institutional Review Board of Soonchunhyang University Gumi Hospital. Images of the left and right nasal passages were collected from patients who underwent nasal endoscopy between March 2019 and March 2020. Written informed consent was received from all patients. The subjects then underwent the Multiple Allergen Simultaneous Test (MAST), a specific IgE antibody test, to determine the presence of allergic rhinitis. Exclusion criteria were patients who had undergone surgery, had a history of sinusitis, or had a nasal tumor.

To keep the images consistent, all images were acquired with the same white balance settings, and the endoscope distance was equalized to minimize image distortion. Only images from a single physician were analyzed to reduce inter-physician variation.

The nasal endoscopic images have a resolution of 640 × 480 and show a circular view of the endoscopic area on a black background. The left and right nasal cavities are vertically symmetrical, and each image includes the nasal septum and the inferior turbinate (IT). The nasal septum is the middle membrane that divides the two nasal cavities and appears as a smooth wall on endonasal imaging, while the inferior turbinate is curved (Fig. 1).

Fig. 1 Nasal structures in a nasal endoscopy image.

Histogram features from the inferior turbinate area using the Lab color space

RGB colors have the disadvantage of being inconsistent: they can be affected by ambient light and the environment. Alternatives include the HSL and Lab color spaces. The International Commission on Illumination (CIE) defines the Lab color space, where L represents lightness, a represents the red-green axis, and b represents the yellow-blue axis. The Lab color model remains consistent regardless of monitor changes or printer color variations and is similar to the human visual system.

This study compares the color distribution of the inferior turbinates of the normal and allergic rhinitis groups in the Lab color space. To this end, we implemented a simple software tool to extract histograms of the inferior turbinate region, as shown in Fig. 2.

Fig. 2 Screenshot of the Lab histogram analysis software.

Fig. 3 Flowchart of histogram extraction in the Lab color model.

This software consists of four main steps (Fig. 3). First, we set up a grid of 6 × 5 cells on the inferior turbinate area. Then, we exclude cells that do not contain the inferior turbinates. Because the structure of the nasal cavity differs between individuals and the endoscope position varies between acquisitions, the shape and area of the inferior turbinates differ considerably across images. As a result, the inferior turbinate region is often not fully covered by the 30 cells of the initial grid, so cells that do not contain the inferior turbinate region must be excluded. To do this, the inferior turbinate region was manually masked as a polygon, and its bounding box was divided into a 6 × 5 grid; only cells whose centers fell within the polygon were included in the histogram calculation, ensuring that only pixels from the inferior turbinate region were analyzed automatically. We then calculate the average pixel value of each Lab channel within each included cell. Finally, we compute a histogram from these average values. The histogram is computed only for the a and b channels, which represent color. We determined that the number of histogram bins should be 5, with the bin ranges as follows: 5 or less, 6 to 10, 11 to 15, 16 to 20, and 21 or more for channel a; and −5 or less, −4 to 0, 1 to 5, 6 to 10, and 11 or more for channel b.
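To make the four steps concrete, the sketch below reimplements the pipeline under stated assumptions: OpenCV's 8-bit Lab conversion (where a and b are stored with an offset of 128), a manually labeled polygon supplied as an int32 contour, and bin edges at half-integers to reproduce the integer ranges above. All function and variable names are ours, not taken from the original software.

```python
import cv2
import numpy as np

# Bin edges reproducing the stated ranges (half-integer edges so that,
# e.g., mean values in [5.5, 10.5) fall into the "6 to 10" bin).
A_EDGES = [-np.inf, 5.5, 10.5, 15.5, 20.5, np.inf]   # <=5, 6-10, 11-15, 16-20, >=21
B_EDGES = [-np.inf, -4.5, 0.5, 5.5, 10.5, np.inf]    # <=-5, -4-0, 1-5, 6-10, >=11

def lab_histogram_features(image_bgr, polygon, grid=(6, 5)):
    """Mean a/b per grid cell inside the turbinate polygon, then 5-bin histograms."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    lab[..., 1:] -= 128.0                       # recover signed a and b channels
    x, y, w, h = cv2.boundingRect(polygon)      # bounding box of the labeled region
    cols, rows = grid
    cell_w, cell_h = w / cols, h / rows
    a_means, b_means = [], []
    for r in range(rows):
        for c in range(cols):
            cx = float(x + (c + 0.5) * cell_w)  # cell center
            cy = float(y + (r + 0.5) * cell_h)
            # Keep only cells whose center lies inside the polygon.
            if cv2.pointPolygonTest(polygon, (cx, cy), False) < 0:
                continue
            cell = lab[int(y + r * cell_h):int(y + (r + 1) * cell_h),
                       int(x + c * cell_w):int(x + (c + 1) * cell_w)]
            a_means.append(cell[..., 1].mean())
            b_means.append(cell[..., 2].mean())
    hist_a, _ = np.histogram(a_means, bins=A_EDGES)
    hist_b, _ = np.histogram(b_means, bins=B_EDGES)
    return np.concatenate([hist_a, hist_b]).astype(np.float32)  # 10-d feature
```

Each image thus yields a 10-dimensional feature vector (five bins for channel a followed by five for channel b), which is what the SVM classifier described below consumes.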
CNN features from the inferior turbinate area using the RGB color space

We extract CNN features for a richer representation. We employ the Inception v3 model4 pre-trained on ImageNet. We selected Inception v3 because of its efficient handling of multi-scale features: given the variability in the nasal endoscopy images (scale variations due to differing distances between the camera and the turbinate regions), Inception v3's multi-kernel architecture effectively captures features at multiple scales, making it well suited to our study. Because the Inception model was originally trained on RGB color images, we used the RGB color model for consistency. Figure 4 illustrates the process of acquiring image patches for the model inputs.

Fig. 4 (a) Inferior turbinate area (green) and random boxes (red) for patch extraction; (b) the resulting patch images.

First, we apply a median filter to focus feature extraction on the color information of the inferior turbinates (Fig. 4a). We use a median filter as a blurring technique because nasal endoscopy images often contain artifacts such as nasal secretions and light reflections from the endoscope illumination. Notably, the blurring does not result in a loss of information, since we use color information. The filter size was set to 21, a value determined through simple experimentation, as it is sufficiently larger than the block size. Subsequently, the inferior turbinate region is labeled with polygons to crop patches for the Inception model. Next, we collect 100 patches of size 32 × 32 from random locations within the labeled region (Fig. 4b). The patches are normalized to values between 0 and 1 by dividing by the maximum value of the color space range and are passed through the Inception model to extract 2048-dimensional features.

This feature extraction method is simpler than the histogram-based method, as it involves only basic polygon labeling without requiring meticulous bin and range settings.

Classification models

In this paper, we analyze the binary classification performance for the presence or absence of allergic rhinitis using SVMs11 and fully connected classifiers. We use the SVM classifier with the Radial Basis Function (RBF) kernel12. For our fully connected classifier, we designed a simple fully connected network, as shown in Fig. 5, leveraging a pre-trained model for feature extraction. It comprises a fully connected layer with dropout13 for regularization, followed by a softmax layer for the final classification. It includes two dense layers and a dropout layer, a widely used regularization technique, forming a typical and basic classifier design.

Fig. 5 A simple fully connected classifier.
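The patch-based feature extraction and the classifier of Fig. 5 can be sketched as follows, assuming TensorFlow/Keras. The patch count, patch size, median filter size, 0-1 normalization, 2048-dimensional output, learning rate, loss, and optimizer follow the text; the upscaling of patches to Inception v3's minimum input size, the hidden layer width, and the dropout rate are our assumptions.

```python
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3

rng = np.random.default_rng(0)

# Frozen Inception v3 trunk; global average pooling yields 2048-d features.
backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def extract_patch_features(image_bgr, polygon, n_patches=100, size=32):
    """Median-blur the image, sample random patches inside the labeled
    polygon, and embed each patch with the pre-trained Inception v3 trunk."""
    blurred = cv2.medianBlur(image_bgr, 21)        # filter size 21, as in the text
    x, y, w, h = cv2.boundingRect(polygon)
    patches = []
    while len(patches) < n_patches:
        px = int(rng.integers(x, x + w - size))
        py = int(rng.integers(y, y + h - size))
        # Keep patches whose center falls inside the turbinate polygon.
        if cv2.pointPolygonTest(polygon, (px + size / 2, py + size / 2), False) < 0:
            continue
        patch = blurred[py:py + size, px:px + size]
        patch = cv2.cvtColor(patch, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
        # Inception v3 needs inputs of at least 75 x 75, so we upscale each
        # 32 x 32 patch (this resize step is our assumption).
        patches.append(cv2.resize(patch, (299, 299)))
    return backbone.predict(np.stack(patches), verbose=0)   # (n_patches, 2048)

def build_fc_classifier(input_dim=2048):
    """The simple classifier of Fig. 5: dense -> dropout -> softmax."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(256, activation="relu"),   # hidden width illustrative
        tf.keras.layers.Dropout(0.5),                    # dropout rate illustrative
        tf.keras.layers.Dense(2, activation="softmax"),  # AR vs. normal
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```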
Results

Nasal endoscopy image dataset

After conducting endoscopic imaging on 150 patients, we selected good-quality images from 46 patients: 18 images from 18 individuals in the normal group and 74 images from 28 individuals in the allergic rhinitis group, totaling 92 images. Figure 6 presents the images in the dataset. Notably, our dataset has a significant class imbalance problem. To address this issue, we employed careful techniques during image preprocessing and analysis to ensure the reliability and validity of our findings despite the imbalance. The varying imaging distances caused scale differences in the nasal cavity images, which the Inception model handled effectively through its multiple kernel sizes.

Fig. 6 Nasal endoscopy images in our dataset.

Comparison of Lab color distributions between the normal and allergic rhinitis groups

We compared the color distributions in the Lab color space between the normal and allergic rhinitis groups using the 92 images, as detailed in the Methods section. Figure 7 clearly demonstrates the differences between the two distributions.

Fig. 7 Color distribution of the normal and allergic rhinitis groups in the Lab color space.

In channel a, the normal group exhibits a higher proportion of histogram values of 16 or higher than the allergic rhinitis group. Conversely, in channel b, the allergic rhinitis group shows a higher proportion of histogram values of 0 or lower than the normal group. These observations confirm that the color distribution of the inferior turbinates differs between the normal and allergic rhinitis groups.

Experimental strategies for addressing the class imbalance problem

As previously mentioned, the dataset suffers from severe class imbalance. To address this issue, we implemented three strategies in our experiments. First, we repeated experiments with the following cross-validation scheme: randomly dividing the data in ratios of 7:3, 6:4, and 5:5 to create training and validation datasets. The proportion and class weight of the data for each split are detailed in Table 1. Second, we conducted additional experiments employing class weights. Each class weight \(w_i\) is calculated as

$$w_i = \frac{T}{C \times C_i} \qquad (1)$$

where \(T\) is the total number of images, \(C\) is the total number of classes, and \(C_i\) is the number of images in the \(i\)-th class of the training dataset. Finally, we adopted the F1-score as well as accuracy as performance metrics; the F1-score is a statistical measure for binary classification models when the class distribution is uneven.

Table 1 Number of images and class weight per class by data split ratio.

Experiments with the SVM classifier are conducted with the parameter C set to 1, 10, 100, 1000, and 10,000, and gamma set to 0.1, 0.01, 0.001, 0.0001, and 0.00001. For the fully connected classifier, we use a learning rate of 0.00001, cross-entropy as the loss function, and the Adam optimizer.

Experimental results on classification

First, we compared the performance of histogram features and CNN features using the SVM classifier. Tables 2 and 3 present the experimental results for the SVM classifier applied to histogram features and CNN features, respectively. Additionally, Table 3 includes the performance across three different color spaces.
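As an illustration, the class-weight formula of Eq. (1) and the SVM parameter sweep just described can be written as below, assuming scikit-learn; X_train, y_train, X_test, and y_test are placeholders for the extracted features and the MAST-derived labels.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
from sklearn.svm import SVC

def class_weights(y_train, n_classes=2):
    """Equation (1): w_i = T / (C * C_i), computed on the training split."""
    T = len(y_train)
    counts = np.bincount(y_train, minlength=n_classes)
    return {i: T / (n_classes * counts[i]) for i in range(n_classes)}

# Parameter grid stated above.
C_GRID = [1, 10, 100, 1000, 10000]
GAMMA_GRID = [0.1, 0.01, 0.001, 0.0001, 0.00001]

def sweep_svm(X_train, y_train, X_test, y_test, use_class_weight=False):
    """Train an RBF-kernel SVM for every (C, gamma) pair and report
    accuracy and F1-score on the held-out split."""
    weights = class_weights(y_train) if use_class_weight else None
    results = {}
    for C in C_GRID:
        for gamma in GAMMA_GRID:
            clf = SVC(C=C, gamma=gamma, kernel="rbf", class_weight=weights)
            clf.fit(X_train, y_train)
            pred = clf.predict(X_test)
            results[(C, gamma)] = (accuracy_score(y_test, pred),
                                   f1_score(y_test, pred))
    return results   # pick the (C, gamma) with the best accuracy / F1
```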
For each image, we extracted 100 patches and performed classification on each patch individually. The final classification result for each image was obtained through majority voting among the patches, ensuring robust classification by aggregating patch-level predictions.

Table 2 Performance of histogram features in the ab color space using the SVM classifier.

Table 3 Performance of CNN features in the RGB, Lab, and ab color spaces using the SVM classifier.

Tables 2 and 3 demonstrate that performance improves as the proportion of training data increases. Notably, both tables indicate that applying class weights had no significant effect. Furthermore, histogram-based features consistently outperformed CNN-extracted features when classified by SVM, likely because they directly and specifically represent the color distribution differences that are critical for identifying allergic rhinitis from limited training samples. All models with class weights showed a slight decrease in performance, except the model using CNN features in the 7:3 split on the RGB color space. Regarding color spaces, we can compare the ab color space, which contains only color components, against Lab and RGB, both of which include color and brightness components. Table 3 shows that it is difficult to determine whether ab or Lab is superior; however, in the case of CNN features, the RGB color space was clearly superior to both.

Table 2 was produced by selecting the highest values obtained while varying the SVM parameters, as shown in Fig. 8. The highest accuracy of 89.66% and an F1-score of 0.9388 were achieved with C = 100 and gamma = 0.001.

Fig. 8 Heatmaps of test accuracy and F1-score over the SVM parameters C and gamma for histogram features in the 7:3 split without class weights. The boxes highlighted in red indicate the highest values.

Second, we compared the performance of CNN features across the three color spaces using fully connected networks, with the data divided in ratios of 7:3, 6:4, and 5:5, as shown in Table 4. Multiple data splits were employed to ensure the robustness and consistency of the results under varying training and testing conditions.

Table 4 Performance of CNN features in the RGB, Lab, and ab color spaces using fully connected networks.

The highest accuracy of 93.1% and F1-score of 0.9545 were significantly better than those achieved with the SVM classifier. Unlike the SVM case, we observed significant performance improvements with class weights in all cases but one. This outcome demonstrates that class weights in neural networks effectively address data imbalance problems. As in the previous experiments, the RGB color space clearly exhibited the highest performance among the color spaces.
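For completeness, the patch-to-image majority voting used throughout these experiments can be sketched as follows, assuming NumPy; patch_probs stands for the per-patch outputs of either classifier.

```python
import numpy as np

def image_label_by_voting(patch_probs):
    """Majority vote over patch-level predictions.

    patch_probs: array of shape (n_patches, n_classes), e.g. the softmax
    outputs of the fully connected classifier for the 100 patches of one image.
    """
    patch_labels = patch_probs.argmax(axis=1)               # per-patch decision
    votes = np.bincount(patch_labels, minlength=patch_probs.shape[1])
    return int(votes.argmax())                              # most-voted class wins
```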
Discussion

Allergic rhinitis, a chronic condition that significantly impacts a patient's quality of life, is currently diagnosed using either non-invasive or invasive methods, both of which are time-consuming and costly. This paper presents a novel approach to a deep learning-based allergic rhinitis diagnosis model that utilizes nasal endoscopic images. This method offers a less time-consuming, cost-effective, non-invasive, and quantitative alternative to the existing diagnostic methods.

Allergic rhinitis affects 10-20% of the population, with annual growth rates of 4-8%. Despite a surge in patent applications since 2015, advances in diagnostic technologies have been relatively limited, especially in the AR and nasal obstruction diagnosis/decision category (based on outsourced patent research and patent attorney analysis).

The HSV color space is an alternative representation of the RGB (red, green, blue) color model, designed in the 1970s by computer graphics researchers to align more closely with how human vision perceives color-making attributes14. The colors of each hue are arranged in a radial slice; saturation resembles various tints of brightly colored paint, and value resembles the mixture of those paints with varying amounts of black or white paint. Lab color spaces can be closer to human vision than HSV and remain consistent regardless of monitor changes or printer color variations.

By analyzing the inferior turbinate color distribution in the Lab color space, we found that the color distributions of the normal and allergic rhinitis groups differed. Based on this, we extracted histogram features and CNN features, the latter obtained by passing image patches through an Inception v3 model pre-trained on ImageNet4,15. We conducted comparative experiments using SVM and fully connected classifiers to analyze which feature extraction method is more efficient and which learning technique performs better. SVM is widely regarded as an effective binary classifier: it can determine good decision boundaries even when sample sizes are limited, ensuring reliable generalization from limited training data. Accordingly, we selected SVM as the representative machine learning classifier.

The experimental results show that the best accuracy of histogram features with the SVM classifier is 0.8966 with an F1-score of 0.9388, while the best accuracy of CNN features with the SVM classifier is 0.8821 with an F1-score of 0.928, indicating that histogram features perform better with SVM. Using a fully connected classifier on the CNN features achieved the best overall results, with an accuracy of 0.9310 and an F1-score of 0.9545. Although the histogram features performed better than features extracted from a CNN trained for general purposes, they require manual effort in the inferior turbinate detection process. And although CNN features perform worse than histogram features under the SVM classifier, better performance can be achieved by applying a fully connected classifier with class weights15.

It is important to acknowledge the limitations of this study, which include factors such as the nasal endoscopy angle, color adjustment, and the analysis program. These limitations highlight the need for continued research and innovation in these areas, underscoring the potential for further advancements in allergic rhinitis diagnosis and treatment. We acknowledge that this study does not empirically validate an optimal method for AR diagnosis. However, our findings demonstrate that pre-trained ImageNet features can outperform histogram-based features in the context of allergic rhinitis diagnosis.

To minimize evaluation bias as much as possible, we conducted multiple experiments with different random split ratios (7:3, 6:4, and 5:5) and ensured patient-wise separation to prevent overlap between training and test sets. Although this is not a traditional n-fold cross-validation method, it represents the most appropriate strategy given the characteristics of our dataset.
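A sketch of how such patient-wise random splits can be produced, assuming scikit-learn's GroupShuffleSplit; the patient_ids argument is a placeholder for per-image patient identifiers.

```python
from sklearn.model_selection import GroupShuffleSplit

def patient_wise_split(X, y, patient_ids, test_size=0.3, seed=0):
    """Random split (e.g., 7:3 for test_size=0.3) that keeps all images
    from the same patient on the same side of the split."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))
    return train_idx, test_idx
```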
Furthermore, this study demonstrates that our deep learning-based feature extraction and classifier architecture outperforms conventional hand-crafted features and machine learning-based classifiers. In this study, the small dataset size (92 images) limited the reliability of interpretability results from explainable AI (XAI) techniques such as Grad-CAM, LIME, and SHAP. In future studies with large-scale and multicenter datasets, the analysis should be supplemented with sensitivity, specificity, ROC AUC (with 95% confidence intervals), and appropriate statistical tests.

In general, when the amount of data is limited, confidence intervals are appropriate. However, in the case of neural networks, it is common not to report them when the number of test samples is sufficiently large. In our CNN + FC feature extraction and classification approach, we extract 100 patches per image (as described in the Methods, Fig. 4). Therefore, testing on just 10 images effectively results in classification being performed on 1,000 patches, and the final performance is evaluated by aggregating the classification results of these patches through a voting mechanism. In other words, since the classifier operates at the patch level, the number of test samples is sufficiently large to evaluate the model's performance reliably. In addition, our results rely primarily on accuracy and F1-score because of our initial experimental scope: we chose accuracy as a clear, easily interpretable measure of overall model performance, and the F1-score because it balances precision and recall, providing valuable insight for imbalanced datasets such as ours.

Nasal endoscopy is likely to vary slightly from doctor to doctor, which can lead to small angular variations. Maintaining a certain depth or distance is also important because the amount of light can vary. To reduce these errors, a single ENT surgeon performed all examinations, and white balance and brightness were adjusted under the same conditions before the images were acquired. Images were also acquired at a controlled depth to regulate the amount of light entering the image. Nevertheless, some variation is inevitable and requires further study. One possible solution is recent artificial intelligence techniques for image adjustment, which could be used to correct small angular differences; additional research is needed.

Image analysis, particularly in medical diagnostics, is advancing rapidly. Our study utilized SVM with various parameter settings, but other image analysis models, including VGG, Inception, Xception, ResNet, DenseNet, and Inception-ResNet, could also be explored.

The images collected in this study were not captured from a fixed distance. Instead, the distance to the target varied depending on the shape of the patient's nasal cavity and the conditions at the time of capture. This led to scale variations of the nasal cavity within the images.
Inception, characterized by its use of multiple kernel sizes, is particularly effective in handling such scale variations of target objects. Various models, such as VGG, Xception, ResNet, and Inception, can be considered for CNN-based feature extraction in transfer learning. These models differ in usability, number of parameters, and the characteristics of the extracted features, and their performance may vary depending on the specific image classification task.

In this study, we used features extracted directly from a pre-trained CNN rather than fine-tuning the CNN on our dataset. We aimed to demonstrate that general-purpose image features learned from ImageNet can outperform even carefully crafted histogram-based features; the objective of this study was therefore to obtain general image features.

Despite these results, newer pre-trained models might yield better performance and more consistent results. For future investigations into mucous membrane analysis, such as those conducted in this study, alternative models might be more effective, or new pre-trained models specifically tailored to this application may need to be developed. In future studies, the imbalance problem could be addressed more effectively by applying GAN-based augmentation, SMOTE variants, stratified K-fold cross-validation, and bootstrapping after securing a large-scale dataset.

In summary, as demonstrated in this study, the optical analysis of nasal endoscopy images could be a valuable adjunct to non-invasive measurement methods for allergic rhinitis. Currently, the diagnosis of allergic rhinitis is primarily subjective; this study may help provide an objective diagnostic method. The potential benefits of this new method include improved efficiency, reduced costs, and enhanced patient comfort. However, further studies are necessary to fully evaluate and validate these methods.

Conclusion

This study introduced a novel approach to diagnosing allergic rhinitis using nasal endoscopy images. Our approach analyzed the color distribution of the inferior turbinates within the Lab color space, extracted features from endoscopy images using both CNN feature extraction and histograms, and performed classification with SVM and fully connected classifiers. Our findings indicated that while histogram features combined with SVM classifiers showed high accuracy and F1-scores, the best results were obtained using CNN features with a fully connected classifier, achieving 93.1% diagnostic accuracy. This suggests that deep learning frameworks can enhance diagnostic accuracy and efficiency when properly tuned and applied to specific medical imaging tasks. However, the study also recognized limitations due to the inherent variability in nasal endoscopy procedures, such as differences in angle and lighting conditions, which can affect image analysis.
Future work will address these challenges by refining image capture consistency and exploring advanced image processing technologies.

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AR: allergic rhinitis; SVM: support vector machine; CNN: convolutional neural network; SPT: skin prick test; MAST: Multiple Allergen Simultaneous Test; IT: inferior turbinate; RBF: Radial Basis Function; RGB: red, green, blue; HSV: hue, saturation, value; Lab: CIELAB; FC: fully connected layers; VGG: Visual Geometry Group.

References

1. Pawankar, R., Bunnag, C., Khaltaev, N. & Bousquet, J. Allergic rhinitis and its impact on asthma in Asia Pacific and the ARIA update 2008. World Allergy Organ. J. 5 (Suppl 3), S212-S217 (2012).
2. Lee, K-I., Chung, Y-J. & Mo, J-H. The impact of air pollution on allergic rhinitis. Allergy Asthma Respir. Dis. 9, 3. https://doi.org/10.4168/aard.2021.9.1.3 (2021).
3. Yoo, K. H. et al. Burden of respiratory disease in Korea: an observational study on allergic rhinitis, asthma, COPD, and rhinosinusitis. Allergy Asthma Immunol. Res. 8 (6), 527-534 (2016).
4. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the Inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2818-2826 (2016).
5. Yang, S. I. et al. KAAACI allergic rhinitis guidelines: part 1. Update in pharmacotherapy. Allergy Asthma Immunol. Res. 15 (1), 19-31 (2023).
6. Kim, Y. H. & Kim, K-S. Diagnosis and treatment of allergic rhinitis. J. Korean Med. Assoc. 53 (9), 780-790 (2010).
7. Brozek, J. L. et al. Allergic rhinitis and its impact on asthma (ARIA) guidelines: 2016 revision. J. Allergy Clin. Immunol. 140 (4), 950-958 (2017).
8. Seidman, M. D. et al. Clinical practice guideline: allergic rhinitis. Otolaryngol. Head Neck Surg. 152 (1 Suppl), S1-S43 (2015).
9. Joko, H., Hyodo, M., Gyo, K. & Yumoto, E. Chromametric assessment of nasal mucosal color and its application in patients with nasal allergy. Am. J. Rhinol. 16 (1), 11-16 (2002).
10. Bae, S. & Jun, Y. J. Optical analysis of nasal endoscopic images from a patient with severe acute respiratory syndrome coronavirus 2. J. Rhinology 29 (2), 96-100 (2022).
11. Scholkopf, B. & Smola, A. J. Learning with Kernels: Support Vector Machines (MIT Press, 2001).
12. Machine Learning and Its Applications (2001).
13. Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15, 1929-1958 (2014).
14. Ibraheem, N. A., Hasan, M. M., Khan, R. Z. & Mishra, P. K. Understanding color models: a review. ARPN J. Sci. Technol. 2 (3), 265-275 (2012).
15. O'Shea, K. & Nash, R. An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458 (2015).
Acknowledgements

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2019R1I1A3A01063980).

Author information

Authors and Affiliations

Kumoh National Institute of Technology, Gumi, South Korea: Jaepil Ko & MinHye Kang
Uijeongbu Eulji Medical Center, Otorhinolaryngology-Head and Neck Surgery, Eulji University, Uijeongbu-si, South Korea: Young Joon Jun

Contributions

Conceptualization: Young Joon Jun. Data curation: MinHye Kang, JaePil Ko, Young Joon Jun. Funding acquisition: Young Joon Jun. Methodology (clinical): Young Joon Jun. Methodology (computing): MinHye Kang, JaePil Ko. Project administration: JaePil Ko, Young Joon Jun. Visualization: MinHye Kang, JaePil Ko, Young Joon Jun. Writing (original draft): MinHye Kang, Young Joon Jun. Writing (review & editing): JaePil Ko, Young Joon Jun.

Corresponding author

Correspondence to Young Joon Jun.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethics committee approval and subject consent

The study was approved by the Institutional Review Boards (IRB) of Soonchunhyang University Gumi Hospital and Uijeongbu Eulji University Hospital. Written informed consent was obtained from all patients. All research was performed in accordance with the Declaration of Helsinki.

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access

This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.