Video rain-streaks removal by combining data-driven and feature-based models

Muhammad Rafiqul Islam, Manoranjan Paul

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)
1 Downloads (Pure)


Video analytics and computer vision applications face challenges when using video sequences with low visibility. The visibility of a video sequence is degraded when the sequence is affected by atmospheric interference such as rain. Many approaches have been proposed to remove rain streaks from video sequences. Some are based on physical features, and some are based on data-driven (i.e., deep-learning) models. Although physical-feature-based approaches offer better rain interpretability, extracting appropriate features and fusing them for meaningful rain removal is challenging, as rain streaks and moving objects have dynamic physical characteristics and are difficult to distinguish. Additionally, the outcome of data-driven models depends largely on the variations represented in the training dataset, and it is difficult to include datasets covering all possible variations in model training. This paper addresses both issues and proposes a novel hybrid technique in which we extract novel physical features and data-driven features and then combine them into an effective rain-streak removal strategy. The performance of the proposed algorithm has been tested against several relevant and contemporary methods on benchmark datasets. Experimental results show that the proposed method outperforms the other methods in terms of subjective, objective, and object-detection comparisons for both synthetic and real rain scenarios, removing rain streaks while retaining moving objects more effectively.
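The abstract does not specify the exact fusion mechanism, so the following is only a minimal sketch of the general idea it describes: combining a physical rain cue with a data-driven one. Here the temporal median over frames serves as a simple physical background model (a common hand-crafted cue for rain candidates), and `learned_conf` is a hypothetical stand-in for a network's per-pixel rain-confidence output; the fusion weight `alpha` is likewise an assumption.

```python
import numpy as np

def remove_rain(frames, learned_conf, alpha=0.5):
    """Illustrative hybrid rain removal (not the paper's actual method).

    frames:       (T, H, W) float array in [0, 1]
    learned_conf: (T, H, W) stand-in for a data-driven rain-confidence map
    """
    # Physical cue: temporal median approximates the rain-free background,
    # since rain streaks are transient and usually brighter than the scene.
    background = np.median(frames, axis=0)
    phys = np.clip(frames - background, 0.0, 1.0)  # rain-candidate map

    # Fuse the physical map with the (hypothetical) learned confidence map.
    mask = alpha * phys + (1.0 - alpha) * learned_conf

    # Pull masked pixels back toward the background estimate.
    return frames - mask * (frames - background)
```

In a real pipeline the confidence map would come from a trained model, and the fusion would be learned rather than a fixed convex combination; this sketch only shows how the two feature families can be blended per pixel.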

Original language: English
Article number: 6856
Pages (from-to): 1-19
Number of pages: 19
Issue number: 20
Publication status: Published - 15 Oct 2021


