1 Introduction

The World Health Organization (WHO) declared COVID-19 a Public Health Emergency of International Concern (PHEIC). The disease is primarily a severe acute respiratory illness, related to Severe Acute Respiratory Syndrome (SARS-CoV), and can progress to respiratory failure and death. Other common lung disorders, such as viral and bacterial pneumonia, also cause many deaths. These pneumonia infections produce a contagious illness in one or both lungs by filling the air sacs with discharge and other fluids. The symptoms of viral pneumonia appear gradually and tend to be mild, whereas bacterial pneumonia is more dangerous, particularly in children, and can affect several lung lobes. Pneumonia is classified into more than 20 types; the open-source repository used here contains images of 21 kinds of pneumonia infection, and these images are deployed to train the system to classify and differentiate pneumonia infections from COVID-19 [1, 2].

The reverse transcription polymerase chain reaction (RT-PCR) assay of sputum is the gold standard for diagnosing common pneumonia illnesses and coronaviruses. Nevertheless, RT-PCR tests have shown high false-negative rates when confirming positive COVID-19 cases. Conversely, radiological assessments using chest radiographs together with computed tomography (CT) scans are currently used to assess the health status of infected patients such as children and pregnant women, despite the potential side effects of ionizing radiation exposure. CT imaging offers a powerful methodology for the screening, analysis, and progress evaluation of patients with COVID-19. According to clinical studies conducted during this pandemic, a positive chest radiograph may obviate the need for a CT scan and reduce the clinical burden on CT suites. To limit the risk of coronavirus contamination, the American College of Radiology (ACR) recommended the use of portable chest radiography, since decontamination of CT rooms after scanning COVID-19 patients may interrupt radiological services.

The COVID-19 phenomenon, and the response of the public and governments to it, has particular relevance for people living with an eating disorder and for those who care for them. People with an eating disorder have a complex, problematic relationship with food that is currently being aggravated by food insecurity and panic buying. There is no doubt that, in the coming period, plenty of research will document the effect of COVID-19 on the eating disorder community from both the clinician and the patient perspective. Imaging findings can reveal the presence of eating disorders, and recognizing those findings allows the radiologist to contribute to the diagnosis of this insidious condition and alert the referring caregiver.

Moreover, CT screening exposes patients to a high radiation dose and incurs relatively high clinic bills. Conventional radiograph machines are accessible and convenient in clinics, and they quickly provide two-dimensional (2D) images of the patient's lungs. Chest radiograph examinations therefore serve as the primary tool for clinicians when confirming positive COVID-19 cases. Here, the aim is to enhance the performance of chest radiograph examinations in confirming patients highly suspected of COVID-19 or of other pneumonia illnesses, specifically viral (non-COVID-19) or bacterial infections. Nevertheless, because of the low exposure dose delivered to the patient, radiograph images remain contrast limited, which makes it difficult to diagnose soft tissues or infected regions in the patient's chest. Computer-aided diagnosis (CAD) frameworks offer a practical solution to overcome these limitations of chest radiographs and assist radiologists in identifying potential infections in low-contrast images.

CAD frameworks combine advanced computer technologies with recent image-processing algorithms to perform interventional tasks such as tumor segmentation and 3D visualization of vital organs. Artificial intelligence (AI) has been widely applied to improve the diagnostic performance of many CAD frameworks in clinical applications such as brain tumor classification or segmentation, minimally invasive aortic valve implantation, and the detection of pulmonary diseases. Within AI, deep learning (DL) approaches have emerged as the most developed strategies. Building on prior work such as human emotion classification and computer-vision applications in medical procedures, they can learn patterns and features from labeled data in order to perform specific tasks automatically. Convolutional neural networks (CNNs), a major branch of DL methods in recent years, have been applied in many computer-vision tasks and in sensitive clinical applications, and in several radiology applications they have been deployed to analyze single- and multi-modal clinical images. Here, a graphical user interface (GUI) was developed in Python using the Tkinter library so that the application is easy to operate.

2 Literature Survey

DL techniques, which can self-learn hidden patterns within data to make predictions, have benefited from accelerating computational power. DL architectures are heavily deployed by the research community in implementations for medical image processing and computer vision. Shanjiang Tang et al. presented EDL-COVID, an ensemble DL system that combines DL and ensemble learning. The common training flow for EDL-COVID comprises two stages, snapshot modeling and model ensembling. Nevertheless, the model training required a long time and a larger-than-average amount of computing resources, making DL ensembling a long and costly computation [3]. Tabik et al. proposed the COVID Smart Data based Network (COVID-SDNet) to augment the generalization capacity of COVID-19 classification models [4].

Ahammed et al. developed early detection of COVID-19 cases from chest X-ray images by deploying machine learning (ML) and DL approaches, attaining 94.03% accuracy [5]. Karen Panetta et al. explored an ML system in which shape-dependent Fibonacci-p patterns were designed as textural feature descriptors; Kaggle and COVIDGR were the two data sets deployed [6].

Several authors have worked on transfer learning (TL) for settings where images are scarce; by employing TL, superior performance can be attained. Elene et al. achieved 98% accuracy on the automatic detection of COVID-19 infection. Hoo-Chang Shin et al. presented deep CNNs for CAD applications and evaluated CNN classification performance with transfer learning on two different CAD applications. Majeed et al. and Misra et al. implemented transfer-learning methodologies for COVID-19 classification from chest X-ray images [7,8,9]. Afshar Shamsi et al. applied a deep uncertainty-aware transfer-learning framework for COVID-19 detection; CT-scan and X-ray images were chosen, and four CNN architectures were implemented to extract deep features from the clinical images [10]. Mingdong Zhang et al. proposed a Residual Learning Diagnosis Detection (RLDD) method for COVID-19, which can distinguish positive COVID-19 cases from heterogeneous lung images. To enhance the efficiency of COVID-19 detection, RLDD is integrated into an application programming interface (API) and embedded into the instrument [11].

Most approaches implement systems grounded in CNN models combined with ML, DL, or TL. Boran Sekeroglu et al. and Pillalamarry Mahesh et al. developed CNN-based systems and attained enhanced accuracy in the detection of COVID-19 from chest radiograph images [12, 13].

DL can be regarded as a subset of ML. Several authors presented DL approaches that achieve enhanced accuracy in the COVID-19 classification task. Chalapathiraju et al. provided a detailed review of DL approaches implemented for other clinical data [14]. Sakib et al. explored a fresh DL-centric system for COVID-19 classification from radiograph images. Bhawna Nigam et al. presented a DL-based multi-class classification and attained an accuracy of 79–93%, depending on the architecture, with three classes, namely pneumonia, COVID, and others. Dandi Yang et al. applied DL to enhance a pre-trained CNN model and attained 96%, deploying both X-ray and CT-scan images [15,16,17]. Several authors applied their approach to CT-scan images: Guangyu Jia et al. proposed a dynamic CNN architecture and Yu-Huan Wu et al. proposed joint classification and segmentation, both centered on CT-scan images [18, 19]. Thiyagarajan Padmapriya deployed X-ray and CT-scan images and attained excellent accuracy; in model performance, the parameters and hyper-parameters are significant [20]. Shreeraj Jadhav et al. applied their system to CT scans along with X-ray images [21]. Navchetan Awasthi et al. developed a lightweight, versatile, efficient DL model to detect COVID-19 using lung ultrasound images. Three entirely different classes, COVID-19, pneumonia, and healthy, were enclosed in that task. The best accuracy of 83.2% was acquired, requiring a training time of only 24 min. When compared with the next best-performing network, the proposed Mini-COVIDNet has 4.39 times fewer parameters [22].

Rathnamma V Mydukuri presented Deming least-squares regressed feature selection with a Gaussian neuro-fuzzy multi-layered data classifier for early COVID prediction, which attained an accuracy of about 56% [23]. Emre Avuçlu et al. applied a fresh system, using the COVID-19 data set and ML algorithms, for the most accurate diagnosis possible in medical diagnostics [24]. Gomes et al. developed an intelligent tool to support COVID-19 diagnosis through texture analysis of X-ray images [25]. Similarly, several approaches and tools have been presented to acquire accuracy in the classification of COVID-19 from X-ray and CT-scan images, which are typically categorized into two or three classes such as normal, pneumonia, and COVID-19.

3 Methodology

The proposed system uses transfer learning with a CNN to categorize radiograph images as belonging to COVID-19 patients or to patients with other pneumonia types. First, the data set of images employed for training and testing this classification task is described.

The required data set is collected from the open-source repository Kaggle.com. The data set is then preprocessed to remove duplicated data. Feature extraction is performed based on transfer learning. Once this task is completed, the batch size, the number of epochs, and the transforms are defined, and the training process is performed. To appraise the outcome, evaluation metrics are computed for comparison with other approaches.
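
As a rough illustration of this workflow, the following sketch shows how the radiographs could be loaded and rescaled, assuming a Keras/TensorFlow implementation; the directory layout, the 224 × 224 image size, and the 80/20 validation split are illustrative assumptions rather than details of the original implementation.

```python
# Illustrative data-loading sketch (assumed Keras/TensorFlow implementation;
# "dataset/" and the image size are placeholders, not the authors' settings).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH_SIZE = 16         # batch size reported in the training details

# Transforms: rescale pixel intensities from [0, 255] to [0, 1] and hold out
# a validation split (the 80/20 split is inferred from the reported counts).
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

# Class sub-folders supply the pneumonia/COVID-19 labels.
train_data = datagen.flow_from_directory(
    "dataset/", target_size=IMG_SIZE, color_mode="grayscale",
    batch_size=BATCH_SIZE, class_mode="categorical", subset="training")
val_data = datagen.flow_from_directory(
    "dataset/", target_size=IMG_SIZE, color_mode="grayscale",
    batch_size=BATCH_SIZE, class_mode="categorical", subset="validation")
```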

Figure 1 depicts the flow of the proposed technique; the operations of the model occur in the order represented. This flow corresponds to the user-interface-based execution.

Fig. 1

Flow chart of the proposed model

Data set Here, a radiograph-image data set is deployed. The data set encompasses 793 radiograph images of COVID-19 and other pneumonia types and 1340 radiograph images of healthy patients. The radiograph images are collected from Kaggle.com, an open-source platform freely available to the general public and to the community doing research in the respective field [26]. The data set encompasses images related to 21 different types of viral, bacterial, fungal, and other pneumonia infections. The radiograph images are merged, and noisy and duplicate images are removed from the data set during preprocessing of the model. A sample of the data set is depicted in Fig. 2.

Fig. 2

Sample images of proposed model

After the data set is collected from the open sources, the images undergo preprocessing, in which duplicate and blurred images are removed by their index values. After preprocessing, the images are converted to grayscale and resized. Here, a CNN trained on ImageNet is wielded as the feature extractor for the radiograph images. The proposed CNN encompasses convolution 2D, max-pooling, dense, and flatten layers. In total, 2133 training samples are included, of which 793 are CXR images affected by COVID-19 and other pneumonia and 1340 are healthy CXR images. Resizing and normalization are performed during image processing: all images are resized, since their scales vary, and then normalized to rescale the pixels from [0, 255] to [0, 1].
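
A minimal preprocessing sketch along these lines is shown below; the duplicate check here uses file hashes (rather than the index values mentioned above), and the OpenCV pipeline, file pattern, and target resolution are assumptions for illustration only.

```python
# Preprocessing sketch (not the authors' exact code): drop duplicate files,
# convert to grayscale, resize, and rescale intensities from [0, 255] to [0, 1].
import glob
import hashlib

import cv2
import numpy as np

IMG_SIZE = (224, 224)   # assumed target resolution

def load_and_preprocess(image_dir):
    seen, images = set(), []
    for path in sorted(glob.glob(f"{image_dir}/*.png")):
        with open(path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        if digest in seen:                             # duplicate image: skip
            continue
        seen.add(digest)
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # grayscale conversion
        img = cv2.resize(img, IMG_SIZE)                # uniform input size
        images.append(img.astype(np.float32) / 255.0)  # [0, 255] -> [0, 1]
    return np.expand_dims(np.array(images), axis=-1)   # add channel axis
```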

Training and implementation details Training the model plays an important role in the system design. Adaptive moment estimation (Adam), a method for stochastic optimization, is the optimizer used to adapt the learning rates, and categorical cross-entropy is wielded as the loss function. Activation functions and hyper-parameters such as the learning rate and batch size are required for training the system. The activation function chosen for the network is ReLU. The learning rate is 0.001, the batch size is 16, and the learning-rate parameter beta is 0.9. With these settings, the network is trained for 10 epochs and attains an accuracy of 92%.
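
The sketch below assembles the CNN and training configuration described above, assuming a Keras/TensorFlow implementation. Only the hyperparameters (ReLU, Adam with learning rate 0.001 and beta 0.9, categorical cross-entropy, batch size 16, 10 epochs) and the layer types are taken from the text; the filter counts, layer depths, input shape, and the x_train/y_train arrays (preprocessed images and one-hot labels) are illustrative assumptions.

```python
# Sketch of the proposed CNN and its training setup (layer sizes are assumed).
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.optimizers import Adam

NUM_CLASSES = 21  # pneumonia/COVID-19 categories reported for the data set

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(NUM_CLASSES, activation="softmax"),          # prediction layer
])

model.compile(
    optimizer=Adam(learning_rate=0.001, beta_1=0.9),   # values from the text
    loss="categorical_crossentropy",
    metrics=["accuracy"])

# x_train/y_train and x_test/y_test are assumed to come from the
# preprocessing step described above.
history = model.fit(x_train, y_train,
                    batch_size=16, epochs=10,
                    validation_data=(x_test, y_test))
```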

4 Results

The proposed system's GUI representation is shown in Fig. 3. The model is implemented as a GUI application by employing the Tkinter library in Python. Tkinter is the standard GUI library for Python; combined with Tkinter, Python provides a fast and easy way to generate GUI applications. The respective plots are generated with matplotlib, a plotting library for Python that offers an object-oriented API for embedding plots into applications built with Tkinter. Moreover, the method used a radiograph-image data set containing 793 radiograph images of COVID-19 and other pneumonia types and 1340 radiograph images of healthy patients. For training, 1706 images (634 of COVID-19 and other pneumonia types and 1072 of healthy patients) were wielded, and 426 images were used for testing.
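
A minimal Tkinter skeleton with the buttons described in this section is given below; the callback bodies are placeholders for illustration and are not the authors' implementation.

```python
# Tkinter GUI skeleton (callbacks are placeholders, not the authors' code).
import tkinter as tk
from tkinter import filedialog

def upload_training_samples():
    folder = filedialog.askdirectory(title="Select training samples")
    print("Training folder:", folder)

def build_model():
    print("Building and training the CNN ...")   # would call model.fit()

def upload_test_samples():
    path = filedialog.askopenfilename(title="Select a test radiograph")
    print("Test image:", path)

root = tk.Tk()
root.title("COVID-19 / Pneumonia Classifier")
for label, command in [("Upload Training Samples", upload_training_samples),
                       ("Build CNN COVID-19 Model", build_model),
                       ("Upload Test Samples", upload_test_samples),
                       ("Graph", lambda: print("Plotting accuracy/loss ...")),
                       ("Close", root.destroy)]:
    tk.Button(root, text=label, command=command, width=30).pack(pady=4)
root.mainloop()
```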

Fig. 3

GUI representation of proposed model

The proposed model’s GUI is depicted in Fig. 3, in which all the code is embedded behind the buttons. Using the "Upload Training Samples" button, the available training images can be selected and the system trained. The proposed system is trained on 21 different types of pneumonia infections.

As depicted in Fig. 4, after the training images are uploaded they go through preprocessing, where duplicate and blurred images are removed. After preprocessing, the proposed CNN is applied to the preprocessed images by clicking "Build CNN COVID-19 Model". The CNN model takes the images present in the training samples and applies filters to these input images to select the important features, while the less important features are removed. The max-pooling layers collect these selected features and pass them from one CNN layer to another, and the dense layer performs further required filtration. The prediction layer at the output classifies and predicts the disease images among the 21 classes. Once the model has been applied to the training images, the system can process the testing.

Fig. 4

Representation of uploading training samples

As depicted in Fig. 5, the testing images can be uploaded by clicking "Upload Test Samples". The test sample is then provided to the model. As shown in Fig. 6, the model predicts the disease infecting the chest in the X-ray images.
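
The prediction step on an uploaded test image could look like the sketch below; the preprocessing calls mirror the earlier sketches, and the class names, image size, and example file name are illustrative assumptions.

```python
# Sketch of the test-image prediction step (paths and class names assumed).
import cv2
import numpy as np

def predict_image(model, image_path, class_names):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0
    img = img.reshape(1, 224, 224, 1)                 # batch of one sample
    probabilities = model.predict(img)[0]
    return class_names[int(np.argmax(probabilities))]

# e.g. predict_image(model, "test_xray.png", class_names)
# might return a label such as "pneumonia_viral_COVID-19" (cf. Fig. 6).
```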

Fig. 5

Representation of uploading testing samples

Fig. 6

Predicting test image as pneumonia_viral_COVID-19

The predicted result is displayed on the screen, as shown in Figs. 6 and 7. The "Graph" button can be clicked to obtain the accuracy and loss plot shown in Fig. 8. At the end of the operation, the graph of accuracy and loss is provided for the given test sample. This accuracy and loss graph is taken as feedback to improve the predictions.

Fig. 7

Predicting test image as pneumonia

Fig. 8

Representation graph for accuracy and loss

In Fig. 8, accuracy is depicted by the green line and loss is signified by the blue line; the x-axis delineates the epoch/iteration and the y-axis the accuracy and loss values. Ten iterations are deployed for building the CNN; as the iterations increase, the accuracy increases and the loss decreases. Once the execution is completed, the application can be closed by clicking the "Close" button.
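
A sketch of the plot behind the "Graph" button is shown below, assuming the Keras History object returned by model.fit in the earlier training sketch; the line colors follow the description above, while the function name is illustrative.

```python
# Sketch of the accuracy/loss plot (assumes a Keras History object "history").
import matplotlib.pyplot as plt

def plot_history(history):
    epochs = range(1, len(history.history["accuracy"]) + 1)
    plt.plot(epochs, history.history["accuracy"], "g-", label="Accuracy")
    plt.plot(epochs, history.history["loss"], "b-", label="Loss")
    plt.xlabel("Epoch / iteration")
    plt.ylabel("Accuracy and loss")
    plt.legend()
    plt.show()
```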

5 Conclusion

The goal is to implement a GUI-based model for classifying and distinguishing COVID-19 from other pneumonia in radiograph images at an early stage. By investigating different types of open-source radiograph images related to pneumonia infections, a DL-centric approach is proposed to predict COVID-19 early and automatically. The CNN attained 87% accuracy, whereas the proposed system attains 92% accuracy. Similarly, other DL methods did not present enhanced efficiency, and most of them classify up to three classes, whereas this system classifies about 21 classes with excellent efficiency. The proposed GUI model is quite helpful to the community for the early detection of COVID-19 from radiograph images and could help control and prevent community transmission.