TY - JOUR
T1 - Vision-based robust lane detection and tracking in challenging conditions
AU - Sultana, Samia
AU - Ahmed, Boshir
AU - Paul, Manoranjan
AU - Islam, Muhammad Rafiqul
AU - Ahmad, Shamim
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Lane marking detection is fundamental to both advanced driver assistance systems and traffic surveillance systems. However, detecting lanes is highly challenging when the visibility of road lane markings is low, obscured, or lost altogether in real-life challenging environments and adverse weather. Most lane detection methods suffer from four types of challenges: (i) light effects, i.e., shadows, glare, and reflections created by light sources such as streetlamps, tunnel lights, the sun, and wet roads; (ii) obscured visibility of eroded, blurred, dashed, colored, and cracked lane markings caused by natural disasters and adverse weather (rain, snow, etc.); (iii) occlusion of lane markings by surrounding objects (wipers, vehicles, etc.); and (iv) the presence of confusing lane-like lines inside the lane view, e.g., guardrails, pavement markings, and road dividers. In this paper, we propose a simple, real-time, and robust lane detection and tracking method that handles the abovementioned challenging conditions. The method introduces three key techniques. First, we introduce a comprehensive intensity threshold range (CITR) to improve the performance of the Canny operator in detecting different types of lane edges, e.g., clear, low-intensity, cracked, colored, eroded, or blurred edges. Second, we propose a two-step lane verification technique, the angle-based geometric constraint (AGC) and the length-based geometric constraint (LGC), applied after the Hough transform, to verify the characteristics of lane markings and prevent incorrect detections. Finally, we propose a novel lane tracking technique that predicts the lane position in the next frame by defining a range of horizontal lane position (RHLP) along the x-axis, which is updated with respect to the lane position in the previous frame. This keeps track of the lane position when the left, right, or both lane markings are partially or fully invisible. 
To evaluate the performance of the proposed method, we used the DSDLDE (Lee and Moon, 2018) and SLD (Borkar et al., 2009) datasets, with 1080 × 1920 and 480 × 720 resolutions at 24 and 25 frames/sec, respectively, where the video frames contain different challenging scenarios. Experimental results show an average detection rate of 97.55% and an average processing time of 22.33 msec/frame, outperforming the state-of-the-art methods.
AB - Lane marking detection is fundamental to both advanced driver assistance systems and traffic surveillance systems. However, detecting lanes is highly challenging when the visibility of road lane markings is low, obscured, or lost altogether in real-life challenging environments and adverse weather. Most lane detection methods suffer from four types of challenges: (i) light effects, i.e., shadows, glare, and reflections created by light sources such as streetlamps, tunnel lights, the sun, and wet roads; (ii) obscured visibility of eroded, blurred, dashed, colored, and cracked lane markings caused by natural disasters and adverse weather (rain, snow, etc.); (iii) occlusion of lane markings by surrounding objects (wipers, vehicles, etc.); and (iv) the presence of confusing lane-like lines inside the lane view, e.g., guardrails, pavement markings, and road dividers. In this paper, we propose a simple, real-time, and robust lane detection and tracking method that handles the abovementioned challenging conditions. The method introduces three key techniques. First, we introduce a comprehensive intensity threshold range (CITR) to improve the performance of the Canny operator in detecting different types of lane edges, e.g., clear, low-intensity, cracked, colored, eroded, or blurred edges. Second, we propose a two-step lane verification technique, the angle-based geometric constraint (AGC) and the length-based geometric constraint (LGC), applied after the Hough transform, to verify the characteristics of lane markings and prevent incorrect detections. Finally, we propose a novel lane tracking technique that predicts the lane position in the next frame by defining a range of horizontal lane position (RHLP) along the x-axis, which is updated with respect to the lane position in the previous frame. This keeps track of the lane position when the left, right, or both lane markings are partially or fully invisible. 
To evaluate the performance of the proposed method, we used the DSDLDE (Lee and Moon, 2018) and SLD (Borkar et al., 2009) datasets, with 1080 × 1920 and 480 × 720 resolutions at 24 and 25 frames/sec, respectively, where the video frames contain different challenging scenarios. Experimental results show an average detection rate of 97.55% and an average processing time of 22.33 msec/frame, outperforming the state-of-the-art methods.
KW - angle-based geometric constraint (AGC)
KW - Canny edge detector
KW - comprehensive intensity threshold range (CITR)
KW - intelligent vehicles
KW - lane detection and tracking
KW - length-based geometric constraint (LGC)
KW - range of horizontal lane position (RHLP)
KW - ROI
UR - http://www.scopus.com/inward/record.url?scp=85164424141&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85164424141&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2023.3292128
DO - 10.1109/ACCESS.2023.3292128
M3 - Article
AN - SCOPUS:85164424141
SN - 2169-3536
VL - 11
SP - 67938
EP - 67955
JO - IEEE Access
JF - IEEE Access
ER -