Lane Detection in Low-light Conditions Using an Efficient Data Enhancement: Light Conditions Style Transfer

by Tong Liu, Zhaowei Chen, Yi Yang, Zehao Wu, Haowei Li

Released as an article.

2020  

Abstract

Deep learning techniques are now widely used for lane detection, but application in low-light conditions remains a challenge. Although multi-task learning and contextual-information-based methods have been proposed to address the problem, they either require additional manual annotations or introduce extra inference computation. In this paper, we propose a style-transfer-based data enhancement method that uses Generative Adversarial Networks (GANs) to generate images in low-light conditions, increasing the environmental adaptability of the lane detector. Our solution consists of three parts: the proposed Better-CycleGAN, a light conditions style transfer network, and a lane detection network. It requires neither additional manual annotations nor extra inference computation. We validated our method on the lane detection benchmark CULane using ERFNet. Empirically, a lane detection model trained with our method demonstrates adaptability in low-light conditions and robustness in complex scenarios. Our code for this paper will be publicly available.
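The core idea of the abstract — translating daytime training images into a low-light style while reusing the existing lane annotations, so no new labeling is needed — can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: `fake_low_light_generator` is a simple darkening stand-in for the trained Better-CycleGAN generator, and `augment_dataset` is an assumed helper name.

```python
# Hypothetical sketch of style-transfer-based data enhancement:
# a trained day->low-light generator produces stylized copies of
# training images, while the original lane labels are reused
# unchanged (the transfer preserves lane geometry).

import random

def fake_low_light_generator(image):
    """Stand-in for the trained Better-CycleGAN generator:
    here it just darkens pixel intensities (0-255)."""
    return [[max(0, int(p * 0.3)) for p in row] for row in image]

def augment_dataset(samples, ratio=0.5, seed=0):
    """Return the original (image, label) pairs plus low-light
    copies of a random subset; labels are shared, so no extra
    manual annotation is required."""
    rng = random.Random(seed)
    augmented = list(samples)
    for image, label in rng.sample(samples, int(len(samples) * ratio)):
        augmented.append((fake_low_light_generator(image), label))
    return augmented

# Toy 2x2 "images" with dummy lane labels.
day_set = [([[200, 180], [190, 170]], "lanes_0"),
           ([[220, 210], [205, 195]], "lanes_1")]
enhanced = augment_dataset(day_set, ratio=1.0)
print(len(enhanced))  # 4: two originals plus two low-light copies
```

At training time the detector (ERFNet in the paper) would simply be trained on the enlarged set, so no extra computation is added at inference.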

Archived Files and Locations

application/pdf  3.3 MB
file_bmnqlkyxz5cmphfcikjb7bbll4
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-02-04
Version   v1
Language   en
arXiv  2002.01177v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 09977a11-d9fa-4f12-bd33-08e4a415c381