J Cancer Prev 2022; 27(3): 192-198
Published online September 30, 2022
https://doi.org/10.15430/JCP.2022.27.3.192
© Korean Society of Cancer Prevention
John Nisha Anita1 , Sujatha Kumaran2
1Department of Electronics and Communication Engineering, 2Department of Electrical and Electronics Engineering, Sathyabama Institute of Science and Technology, Chennai, India
Correspondence to :
John Nisha Anita, E-mail: nishusuban@gmail.com, https://orcid.org/0000-0003-4777-2123
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Detection and segmentation of meningioma brain tumors is a complex process because of the tumor's low intensity pixel profile. In this article, meningioma brain tumor images were detected and the tumor regions were segmented using a convolutional neural network (CNN) classification approach. The source brain MRI images were decomposed using the discrete wavelet transform, and the decomposed sub bands were fused using an arithmetic fusion technique. The fused images were data augmented in order to increase the sample size. The data augmented images were classified as either healthy or malignant using a CNN classifier. Then, the tumor region in each classified meningioma brain image was segmented using a connected component analysis algorithm. The segmented meningioma brain image was compressed using a lossless compression technique. The proposed method was experimentally tested on sets of meningioma brain images from an open access dataset, and the experimental results were compared with existing methods in terms of sensitivity, specificity and tumor segmentation accuracy.
Keywords: Meningioma, Tumor, Brain image, Sub bands
Abnormal cell development in the human brain produces tumors, which are categorized into many types based on their location, size and other properties. Brain tumors are mostly categorized as glioma, glioblastoma and meningioma. Meningioma brain tumors are a less aggressive type of cancer than gliomas and glioblastomas. Every year, on average 2,700 people in the United States are affected by meningioma tumors, and the survival rate of these patients is about 63.8%, as reported by the National Cancer Institute. Meningiomas are primary central nervous system tumors; they form in the meninges, the membranes that connect the human spinal cord and brain. A meningioma can spread to other parts of the brain, to nerves and to blood vessels in the brain. Its rate of development and spread is slow compared with other types of tumors, and it can develop in the human brain for many years without generating any symptoms. Symptoms appear in the moderate and severe stages, depending on the patient's overall health condition. Meningioma brain tumors occur mostly in women and in elderly patients. Blurred vision, headache, memory loss and hearing loss are the common symptoms of meningioma tumors [1].
Meningioma brain tumors are classified into three grades (Grade I, Grade II, and Grade III) based on the location and size of the tumor. Slow-growing tumors are categorized as Grade I and are the most commonly occurring type in patients. Mid-grade meningioma tumors are categorized as Grade II and have a high chance of recurrence after being surgically removed from the brain. Fast-growing tumors are categorized as Grade III and occur rarely in patients. In this article, Grade I meningioma tumors were detected and segmented using a deep learning technique.
Ragupathy et al. [2] proposed a meningioma brain tumor detection method using both machine learning and deep learning classification models. Fuzzy logic was designed and applied to the source brain image to detect the edges in the image, and the detected edges were enhanced using fuzzy rules. Then, a machine learning classifier (co-adaptive neuro fuzzy inference system) and a deep learning model (convolutional neural network, CNN) were applied to the enhanced brain image to classify it as either a meningioma or a non-meningioma brain image. The authors obtained 98.9% sensitivity (SEN), 99.4% specificity (SPE), and 99.3% tumor segmentation accuracy (TSA). Irmak et al. [3] developed a fully optimized framework for the detection of tumor regions in brain MRI images. The authors used a deep CNN architecture to classify the brain images as either normal or abnormal based on the intrinsic feature maps produced by the CNN during the training phase of the classification process. The authors obtained 97.0% SEN, 98.7% SPE and 98.1% TSA on a set of meningioma brain MRI images from the Nanfang dataset. Sajjad et al. [4] applied an extensive data augmentation method to increase the number of training brain image samples in order to improve learning. The authors then used a deep CNN architecture for the classification of multi-grade meningioma brain tumors. The activation function of this method produced non-linear responses in each convolutional layer. The authors obtained 88.4% SEN, 96.1% SPE and 94.5% TSA on a set of meningioma brain MRI images from the Nanfang dataset.
Bhavani et al. [5] used a support vector machine (SVM) classification algorithm for the classification of tumor-affected brain images. The authors tested their tumor detection process with various SVM kernels in order to obtain a high classification rate, and the method was applied to brain images from several datasets in order to validate the effectiveness of the developed brain tumor detection framework. The authors obtained 93.1% SEN, 94.2% SPE and 95.8% TSA. Thillaikkarasi et al. [6] constructed an efficient brain tumor detection framework using a kernel-based deep CNN architecture together with a multi-class SVM. The authors classified the source brain images as either normal or tumor-affected using the developed approach and obtained 96.2% SEN, 97.1% SPE and 97.5% TSA.
Tumor regions were effectively detected and segmented using various classification approaches, as reported by Mengqiao et al. [7], Mohsen et al. [8], Mlynarski et al. [9], and Toğaçar et al. [10]. The implementation of CNNs for brain tumor detection was studied and analyzed by Abiwinanda et al. [11], Deepak et al. [12], and Seetha et al. [13].
The meningioma and non-meningioma (healthy) brain images were obtained from the Nanfang dataset, which was constructed and is maintained by Nanfang General and Medical Research Hospital in China [14]. The brain MRI images were acquired with an MRI scanner at a size of 512 × 512 pixels. From this Nanfang dataset, 571 meningioma brain images and 650 non-meningioma brain images were used to simulate the proposed system. The dataset was split into training and testing modules. The training module consists of 300 meningioma brain images and 300 non-meningioma brain images; the testing module consists of 271 meningioma brain images and 350 non-meningioma brain images. All of these brain images were cross-checked and manually verified by two independent experts in the field.
This study did not involve any personal medical images. All brain images used in this article were obtained from a license-free, open access dataset; therefore, no ethics approval was required. Most researchers in this field use this open access dataset for their studies.
The meningioma brain tumor images were detected and the tumor regions were segmented using a CNN classification approach. The source brain MRI images were decomposed using the discrete wavelet transform (DWT), and the decomposed sub bands were fused using an arithmetic fusion technique. The fused image was data augmented in order to increase the sample size. The data augmented images were classified as either healthy or malignant using a CNN classifier. Then, the tumor region in the classified meningioma brain image was segmented using a connected component analysis algorithm. The segmented meningioma brain image was compressed using a lossless compression technique. The entire processing methodology is illustrated in Figure 1.
Most of the brain MRI images in open access brain image datasets have low resolution. The pixel difference between the tumor region and its surrounding regions is low, which makes detection of the tumor region in the source brain image complex. To overcome this limitation of conventional methods, an image fusion approach was used in this article. The pixel resolution of the source brain MRI images was improved using a region based image fusion approach. The source brain images were decomposed using the DWT with the Daubechies wavelet (db4) in 'Symlet' mode; this mode was selected in order to obtain good fusion results. The DWT decomposes each source brain image M1 and M2 into a low frequency sub band (LL) and high frequency sub bands (LH, HL and HH).
The low pass sub bands from the source brain images M1 and M2 were fused using a weighted arithmetic rule, where LL1 and LL2 are the low pass sub bands of M1 and M2, respectively, and k1 and k2 are the fusion indices. The level of fusion of the low pass sub bands depends on the fusion index values, which lie between 0 and 1. After several iterations of testing, k1 and k2 were chosen as 0.5 and 0.7, respectively, in order to obtain a high level of fusion in the coefficients.
The high pass sub bands from the source brain images M1 and M2 were fused using the following steps:
1) The Eigen values of the high pass sub bands LH1, HL1 and HH1 of M1 were determined.
2) The Eigen values of the high pass sub bands LH2, HL2 and HH2 of M2 were determined.
3) The maximum high pass sub band belonging to source brain image M1 was obtained by selecting the sub band with the maximum Eigen value.
4) The maximum high pass sub band belonging to source brain image M2 was obtained by selecting the sub band with the maximum Eigen value.
5) The selected high pass sub bands from M1 and M2 were combined to form the fused high frequency sub band HH.
Finally, the inverse DWT (IDWT) was applied to the fused low frequency sub band LL and the fused high frequency sub band HH to reconstruct the fused brain image.
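For readers who want to prototype this fusion step, the following Python sketch (using NumPy and PyWavelets) illustrates one plausible reading of the procedure described above. The weighted sum used for the low pass sub bands, the per-orientation Eigen value comparison used for the high pass sub bands, and all function and parameter names are our assumptions rather than the authors' exact implementation.

```python
import numpy as np
import pywt


def max_abs_eigenvalue(band):
    # Largest absolute eigenvalue of a coefficient sub band
    # (sub bands of a square input image are square).
    return np.max(np.abs(np.linalg.eigvals(band)))


def fuse_mri_pair(m1, m2, k1=0.5, k2=0.7, wavelet="db4"):
    """Fuse two registered brain MRI slices (2-D arrays of the same size)."""
    ll1, (lh1, hl1, hh1) = pywt.dwt2(m1.astype(float), wavelet)
    ll2, (lh2, hl2, hh2) = pywt.dwt2(m2.astype(float), wavelet)

    # Arithmetic fusion of the low frequency sub bands (weighted sum with k1, k2).
    ll = k1 * ll1 + k2 * ll2

    # Eigenvalue-guided selection of the high frequency sub bands: for each
    # detail orientation, keep the sub band with the larger maximum eigenvalue.
    details = tuple(
        b1 if max_abs_eigenvalue(b1) >= max_abs_eigenvalue(b2) else b2
        for b1, b2 in zip((lh1, hl1, hh1), (lh2, hl2, hh2))
    )

    # Inverse DWT reconstructs the fused image from LL and the selected detail bands.
    return pywt.idwt2((ll, details), wavelet)
```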
A deep learning classifier requires a large number of meningioma and non-meningioma brain images to obtain a high classification rate. Hence, a data augmentation method was used in this article to increase the number of samples for the deep learning process. Time-scale left and time-scale right functions were used as the data augmentation methods, which significantly increased the number of meningioma and non-meningioma brain images. In total, 300 meningioma and 300 non-meningioma brain images were used for training the CNN architecture. The time-scale left function produced 600 images and the time-scale right function produced 600 images; hence, the total number of training images was about 1,800, including the source images and the data augmented images.
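Interpreting the time-scale left and time-scale right functions as horizontal pixel translations (an assumption on our part, as the article does not define them further), a minimal augmentation sketch in Python could look as follows; the shift amount of 10 pixels is also a hypothetical choice.

```python
import numpy as np


def shift_left(img, pixels=10):
    # Translate the image to the left and zero-fill the vacated columns.
    out = np.zeros_like(img)
    out[:, :-pixels] = img[:, pixels:]
    return out


def shift_right(img, pixels=10):
    # Translate the image to the right and zero-fill the vacated columns.
    out = np.zeros_like(img)
    out[:, pixels:] = img[:, :-pixels]
    return out


def augment(images, pixels=10):
    """Return the original images plus left- and right-shifted copies (3x the data)."""
    augmented = []
    for img in images:
        augmented.extend([img, shift_left(img, pixels), shift_right(img, pixels)])
    return augmented
```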
The fused image was classified as either 'non-meningioma' or 'meningioma' in the classification process. Over the past two decades, many researchers have used machine learning approaches such as SVM, neural networks and the adaptive neuro fuzzy inference system classifier to classify brain images. The meningioma and non-meningioma identification rates of these machine learning approaches were not optimal for the subsequent tumor diagnosis process, these approaches required a large number of training images, and designing such algorithms for meningioma brain tumor detection is complex. To overcome these limitations of conventional machine learning approaches, a CNN classifier was used in this work to classify the source brain image as either 'non-meningioma' or 'meningioma'. In this article, the VGG-16 CNN architecture was used; it builds on the AlexNet CNN architecture by increasing the network depth and the number of filters in each layer while using smaller convolution kernels.
The conventional VGG-16 architecture was designed with 13 convolutional (C) layers, 5 pooling (P) layers and 3 dense layers, as illustrated in Figure 2A. It consists of 6 modules with different numbers of filters. The first module consists of two C layers with 64 filters each, the second module of two C layers with 128 filters each, the third module of three C layers with 256 filters each, and the fourth and fifth modules of three C layers with 512 filters each. The sixth module consists of three dense layers: the first and second dense layers have 4,096 neurons each, and the third dense layer has 1,000 neurons with a Softmax activation function.
The modified VGG-16 architecture was derived from the conventional VGG-16 architecture by reducing the number of filters and the number of neurons in the dense layers. It was designed with 12 C layers, 6 P layers and 4 dense layers, as illustrated in Figure 2B, and consists of 7 modules with different numbers of filters. The first module consists of two C layers with 32 filters each, the second module of two C layers with 64 filters each, the third and fourth modules of two C layers with 128 filters each, the fifth module of two C layers with 256 filters each, and the sixth module of two C layers with 512 filters each. The seventh module consists of 4 dense layers: the first, second and third dense layers have 1,024 neurons each, and the fourth dense layer has two neurons with a Softmax activation function. The first neuron in the fourth dense layer represents the non-meningioma class and the second neuron represents the meningioma class. Figure 3A and 3B show the non-meningioma and meningioma brain image classification results of the modified VGG-16 architecture, respectively.
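A minimal Keras sketch of this modified VGG-16 layout is given below, assuming 3 × 3 convolution kernels, 2 × 2 max pooling after each convolutional module, and a single-channel 512 × 512 input; these details, like the function name, are our assumptions and are not specified in the article.

```python
from tensorflow.keras import layers, models


def modified_vgg16(input_shape=(512, 512, 1)):
    """Modified VGG-16 sketch: 12 conv layers, 6 pooling layers, 4 dense layers."""
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Six convolutional modules (two conv layers each), each followed by 2x2 max pooling.
    for filters in (32, 64, 128, 128, 256, 512):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    # Three fully connected layers with 1,024 neurons each.
    for _ in range(3):
        model.add(layers.Dense(1024, activation="relu"))
    # Output layer: neuron 0 = non-meningioma, neuron 1 = meningioma.
    model.add(layers.Dense(2, activation="softmax"))
    return model
```

With a 512 × 512 input and six pooling stages, the feature map entering the dense layers is 8 × 8 × 512.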
The tumor regions in the classified meningioma brain image were segmented using a morphological segmentation approach. This approach consists of two functional modules, morphological dilation and morphological erosion. The dilation was designed with a 'disk' structuring element of 1 mm radius and the erosion with a 'disk' structuring element of 2 mm radius. The dilation and erosion functions were applied to the classified meningioma brain image and the tumor regions were detected using the following steps:
1) The dilation of the classified meningioma brain image was computed using the 1 mm disk structuring element.
2) The erosion of the classified meningioma brain image was computed using the 2 mm disk structuring element.
3) The segmented tumor image (S) was then obtained from the dilated and eroded images.
4) The tumor pixels were then identified in the segmented tumor image (S) using a threshold function (t). Each pixel in S was compared with the value of t: if the pixel value was greater than t, it was set to zero; otherwise, the pixel was considered a tumor pixel. In this paper, the value of t was chosen as 120 after several iterations.
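A minimal Python sketch of these steps, using scikit-image, is shown below. Reading step 3 as the difference between the dilated and eroded images is our interpretation, since the article does not reproduce the equation, and the disk radii are given here in pixels rather than millimetres.

```python
import numpy as np
from skimage.morphology import disk, dilation, erosion


def segment_tumor(classified_img, dil_radius=1, ero_radius=2, t=120):
    """Morphological tumor segmentation sketch for a classified meningioma slice."""
    dilated = dilation(classified_img, disk(dil_radius))
    eroded = erosion(classified_img, disk(ero_radius))

    # Candidate tumor image: difference between the dilated and eroded slices
    # (our reading of step 3).
    s = dilated.astype(int) - eroded.astype(int)

    # Step 4: pixels above the threshold t are suppressed; the remaining
    # non-zero pixels are kept as tumor pixels (the zero exclusion is our addition).
    tumor_mask = (s <= t) & (s > 0)
    return tumor_mask
```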
The source meningioma brain image and segmented tumor image are depicted in Figure 4A and 4B, respectively.
In a telemedicine setting, the segmented tumor images must be compressed and transferred to distant locations, where medical experts examine the segmented tumor regions to plan further surgery and save the patient's life. In practice, hospitals have a large number of images for transmission, which consumes a large amount of memory. To overcome this limitation, the segmented tumor images were compressed using a lossless compression algorithm and the compressed images were transferred to distant locations. Figure 5A shows the uncompressed meningioma brain image and Figure 5B shows the image compressed by the proposed method.
The proposed method uses a simple, fast and adaptive lossless image compression algorithm; the lossless compression approach of Starosolski et al. [1] was used to compress the tumor segmented images. The compression ratio (CR), defined as the ratio of the size of the compressed image to the size of the uncompressed image, was used to analyze the performance of the lossless compression algorithm. In this work, the average size of the uncompressed image was about 17 KB (Fig. 5A) and the average size of the compressed image was about 7 KB (Fig. 5B). Hence, the average CR of the proposed method was about 41%.
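As an illustration only, the snippet below computes a compression ratio using PNG as a stand-in lossless codec; it is not the algorithm of Starosolski et al. [1], and the function name is hypothetical. With the average sizes reported above, CR ≈ 7 KB / 17 KB ≈ 0.41, i.e., about 41%.

```python
import io

import numpy as np
from PIL import Image


def compression_ratio(img_array):
    """Losslessly compress a segmented tumor slice (PNG as a stand-in codec)
    and return compressed size / uncompressed size."""
    raw_size = img_array.nbytes                      # uncompressed size in bytes
    buffer = io.BytesIO()
    Image.fromarray(img_array.astype(np.uint8)).save(buffer, format="PNG")
    return len(buffer.getvalue()) / raw_size         # ~0.41 for the sizes reported above
```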
The proposed meningioma brain tumor detection method was simulated using MATLAB R2020 (MathWorks, Natick, MA, USA). An Intel Core i5 processor (Intel, Santa Clara, CA, USA) with 4 GB of RAM and a 1 TB hard disk was used as the hardware for the simulations. The performance of the proposed method was analyzed using the meningioma identification rate (MIR) and the non-meningioma identification rate (NMIR). The MIR is defined as the ratio of correctly identified meningioma images to the total number of meningioma images, and the NMIR as the ratio of correctly identified non-meningioma images to the total number of non-meningioma images. Both MIR and NMIR are expressed as percentages and take values between 0 and 100. The performance of the proposed brain tumor detection methodology is high if both MIR and NMIR are high.
The proposed system achieves an MIR of 99.2% by correctly identifying 269 of 271 meningioma brain images, and an NMIR of 98.2% by correctly identifying 344 of 350 non-meningioma brain images. The accuracy (Acc) of the proposed methodology is then computed as the average of MIR and NMIR, i.e., Acc = (MIR + NMIR) / 2.
The accuracy of the proposed meningioma brain tumor classification system stated in this paper is about 98.7%.
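The following short Python snippet reproduces these figures from the reported counts; the helper name is ours.

```python
def identification_rates(correct_mening, total_mening, correct_non, total_non):
    """Return MIR, NMIR and their average (Acc), all in percent."""
    mir = 100.0 * correct_mening / total_mening
    nmir = 100.0 * correct_non / total_non
    return mir, nmir, (mir + nmir) / 2.0


# Counts reported above: 269/271 meningioma, 344/350 non-meningioma images.
print(identification_rates(269, 271, 344, 350))
# ≈ (99.26, 98.29, 98.77), i.e., about 99.2%, 98.2% and 98.7%
```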
The tumor segmentation performance was evaluated at the pixel level using SEN, SPE and TSA, computed as SEN = GTP / (GTP + GFN), SPE = GTN / (GTN + GFP) and TSA = (GTP + GTN) / (GTP + GTN + GFP + GFN), where GTP denotes the pixels correctly segmented as tumor, GTN the pixels correctly segmented as non-tumor, GFP the pixels incorrectly segmented as tumor and GFN the pixels incorrectly segmented as non-tumor.
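Assuming these standard pixel-level definitions, the metrics can be computed from a predicted tumor mask and a ground truth mask as in the sketch below (the function name is ours).

```python
import numpy as np


def segmentation_metrics(pred_mask, gt_mask):
    """Pixel-level SEN, SPE and TSA (in percent) from boolean masks."""
    gtp = np.sum(pred_mask & gt_mask)        # tumor pixels segmented correctly
    gtn = np.sum(~pred_mask & ~gt_mask)      # non-tumor pixels segmented correctly
    gfp = np.sum(pred_mask & ~gt_mask)       # non-tumor pixels labeled as tumor
    gfn = np.sum(~pred_mask & gt_mask)       # tumor pixels labeled as non-tumor
    sen = 100.0 * gtp / (gtp + gfn)
    spe = 100.0 * gtn / (gtn + gfp)
    tsa = 100.0 * (gtp + gtn) / (gtp + gtn + gfp + gfn)
    return sen, spe, tsa
```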
The proposed meningioma tumor detection method was tested on the set of brain MRI images from the open access dataset, and the experimental results are tabulated in Table 1 in terms of SEN, SPE and TSA. The proposed method obtained 99.17% SEN, 99.55% SPE and 99.44% TSA on a set of 10 meningioma brain images from the open access dataset, and similar results were obtained for the 271 test meningioma brain images from the same dataset. These experimental results were obtained with respect to ground truth images produced by an expert radiologist.
Table 1. Experimental results of the proposed meningioma image detection
Meningioma image number | SEN (%) | SPE (%) | TSA (%) |
---|---|---|---|
1 | 99.4 | 99.6 | 99.5 |
2 | 99.5 | 99.5 | 99.3 |
3 | 98.7 | 99.5 | 99.1 |
4 | 99.8 | 99.7 | 99.6 |
5 | 99.6 | 99.8 | 99.7 |
6 | 99.4 | 99.6 | 99.3 |
7 | 99.5 | 99.1 | 99.5 |
8 | 98.4 | 99.4 | 99.5 |
9 | 98.3 | 99.6 | 99.6 |
10 | 99.1 | 99.7 | 99.3 |
Average | 99.1 | 99.5 | 99.4 |
SEN, sensitivity; SPE, specificity; TSA, tumor segmentation accuracy.
Table 1 shows the simulation results for the first 10 of the 571 meningioma brain images. Similar simulation results were obtained by applying the proposed methodology to all 571 meningioma brain images.
Table 2 shows the comparison of the proposed meningioma tumor segmentation method with existing methods. The proposed method was compared with the methods of Ragupathy et al. [2], Irmak et al. [3] and Sajjad et al. [4] in terms of SEN, SPE and TSA. Ragupathy et al. [2] obtained 98.9% SEN, 99.4% SPE and 99.3% TSA; Irmak et al. [3] obtained 97.0% SEN, 98.7% SPE and 98.1% TSA; and Sajjad et al. [4] obtained 88.4% SEN, 96.1% SPE and 94.5% TSA. As shown in Table 2, the proposed meningioma brain tumor detection method using the modified VGG-16 CNN architecture segments the tumor regions in meningioma brain images more accurately than the existing tumor segmentation methods.
Table 2. Comparison of the proposed meningioma tumor segmentation method with existing methods
Methodology | SEN (%) | SPE (%) | TSA (%) |
---|---|---|---|
Proposed work (in this article) | 99.1 | 99.5 | 99.4 |
Ragupathy et al. [2] | 98.9 | 99.4 | 99.3 |
Irmak et al. [3] | 97.0 | 98.7 | 98.1 |
Sajjad et al. [4] | 88.4 | 96.1 | 94.5 |
The receiver operating characteristic (ROC) curve, which relates sensitivity to 1 − specificity, was used to analyze the exactness of the proposed meningioma tumor detection system. The average accuracy of the proposed meningioma tumor detection system is about 99.2%, and the ROC analysis yields a value of about 99.2%, consistent with the experimental results obtained in this article; hence, the results are validated.

In this article, meningioma brain tumors were detected and segmented using a modified VGG-16 architecture. The low pixel profile brain MRI images were enhanced using the DWT based image fusion method, and the fused images were data augmented in order to increase the number of sample images. These data augmented brain images were classified by the modified VGG-16 CNN architecture as either meningioma or non-meningioma brain images. The tumor regions in the classified meningioma brain images were segmented using a morphological approach, and the segmented tumor regions were compressed with a lossless compression method for telemedicine applications. The proposed meningioma brain tumor detection method stated in this article obtained 99.17% SEN, 99.55% SPE and 99.44% TSA.
None.
No potential conflicts of interest were disclosed.