TY - JOUR
T1 - Segmentation of lung cancer-caused metastatic lesions in bone scan images using self-defined model with deep supervision
AU - Cao, Yongchun
AU - Liu, Liangxia
AU - Chen, Xiaoyan
AU - Man, Zhengxing
AU - Lin, Qiang
AU - Zeng, Xianwu
AU - Huang, Xiaodi
N1 - Funding Information:
This work was supported by the Youth Ph.D. Foundation of the Education Department of Gansu Province (2021QB-063), the Key R&D Plan of Gansu Province (21YF5GA063), the Fundamental Research Funds for the Central Universities (31920220020, 31920220054, 31920210013), the Natural Science Foundation of Gansu Province (20JR5RA511), the National Natural Science Foundation of China (61562075), the Gansu Provincial First-class Discipline Program of Northwest Minzu University (11080305), and the Program for Innovative Research Team of SEAC ([2018] 98).
Publisher Copyright:
© 2022 The Author(s)
PY - 2023/1
Y1 - 2023/1
N2 - To automatically identify and delineate metastatic lesions in low-resolution bone scan images, we propose a deep learning-based segmentation method in this paper. In particular, the view aggregation in this method uses pixel-wise addition to enhance regions with high uptake of the radiopharmaceutical; this aggregation also augments the images available for the lesion segmentation task. Following an encoder-decoder structure with deep supervision, our model is an end-to-end segmentation network that consists of two sub-networks: feature extraction and pixel classification. The feature extraction sub-network learns the hierarchical features of bone scan images, and the pixel classification sub-network then identifies and delineates the pixels of metastatic areas within a feature map. Experiments on clinical bone scan images show that the proposed model performs well in segmenting metastatic lesions automatically, achieving a mean Dice Similarity Coefficient (DSC) of 0.6556. More bone scan images would, however, enable our model to learn more representative features of metastatic lesions and further improve the performance of deep learning-based lesion segmentation.
AB - To automatically identify and delineate metastatic lesions in low-resolution bone scan images, we propose a deep learning-based segmentation method in this paper. In particular, the view aggregation in this method uses pixel-wise addition to enhance regions with high uptake of the radiopharmaceutical; this aggregation also augments the images available for the lesion segmentation task. Following an encoder-decoder structure with deep supervision, our model is an end-to-end segmentation network that consists of two sub-networks: feature extraction and pixel classification. The feature extraction sub-network learns the hierarchical features of bone scan images, and the pixel classification sub-network then identifies and delineates the pixels of metastatic areas within a feature map. Experiments on clinical bone scan images show that the proposed model performs well in segmenting metastatic lesions automatically, achieving a mean Dice Similarity Coefficient (DSC) of 0.6556. More bone scan images would, however, enable our model to learn more representative features of metastatic lesions and further improve the performance of deep learning-based lesion segmentation.
KW - Bone scan
KW - Skeletal metastasis
KW - Lung cancer
KW - Image segmentation
KW - Convolutional neural network
UR - http://www.scopus.com/inward/record.url?scp=85136492043&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85136492043&partnerID=8YFLogxK
U2 - 10.1016/j.bspc.2022.104068
DO - 10.1016/j.bspc.2022.104068
M3 - Article
SN - 1746-8094
VL - 79
JO - Biomedical Signal Processing and Control
JF - Biomedical Signal Processing and Control
M1 - 104068
ER -
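
Reading note (outside the bibliographic record above): the abstract names two concrete operations, pixel-wise view aggregation of bone scan views and the Dice Similarity Coefficient behind the reported mean of 0.6556. The sketch below is a minimal illustration of both, assuming the anterior and posterior scans are available as equally sized arrays; the posterior flip, the rescaling, the array sizes, and all function names are illustrative assumptions, not details taken from the paper or its implementation.

import numpy as np

def aggregate_views(anterior: np.ndarray, posterior: np.ndarray) -> np.ndarray:
    """Pixel-wise addition of anterior and posterior views (illustrative sketch).

    Summing the two views amplifies regions with high tracer uptake, as the
    abstract describes. The mirroring of the posterior view and the rescaling
    to [0, 1] are assumptions, not taken from the paper.
    """
    posterior = np.fliplr(posterior)                 # assumed: mirror posterior to match anterior orientation
    fused = anterior.astype(np.float32) + posterior.astype(np.float32)
    return fused / (fused.max() + 1e-8)              # assumed rescaling for display/training

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice Similarity Coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

if __name__ == "__main__":
    # Toy example with random arrays; real inputs would be bone scan views
    # and their predicted/ground-truth lesion masks.
    rng = np.random.default_rng(0)
    anterior = rng.random((1024, 256))               # hypothetical whole-body view size
    posterior = rng.random((1024, 256))
    fused = aggregate_views(anterior, posterior)
    print("fused range:", fused.min(), fused.max())
    print("DSC:", dice_coefficient(fused > 0.7, fused > 0.6))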