Lane line detection is a critical element of Advanced Driver Assistance Systems (ADAS). Although a significant amount of research has been dedicated to the detection and localization of lane lines over the past decade, a gap remains in the robustness of implemented systems. A major challenge for existing lane line detection algorithms is coping with bad weather conditions (e.g., rain, snow, fog, and haze). Snow presents an especially challenging environment, in which lane marks and road boundaries can be completely covered; in these scenarios, on-board sensors such as cameras, LiDAR, and radar are of very limited benefit. This research focuses on improving the robustness of lane line detection in adverse weather conditions, especially snow. A framework is proposed that relies on Vehicle-to-Infrastructure (V2I) communication to access reference images stored in the cloud. These reference images were captured at approximately the same geographical location under clear visibility and good weather conditions, and they are used to detect and localize the lane lines. The proposed framework then uses image registration techniques to align the sensed image (adverse weather) with the reference image. Once the two images are aligned, the lane line information from the reference image is superimposed on the local map built by the ADAS or autonomous driving system. A real-world experiment is designed to evaluate the error in localizing the lane lines using the proposed framework against ground truth data. The measurements and evaluations are based on data gathered from a test vehicle equipped with a monocular camera, forward-looking radar, LiDAR, and GPS/IMU. Initial results show good potential for improving upon the state-of-the-art approaches used in today's automotive industry.
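The abstract does not specify which registration technique is used; the sketch below illustrates one common feature-based approach, ORB keypoints matched with a brute-force matcher followed by a RANSAC homography in OpenCV, for aligning the reference (clear-weather) image with the sensed (adverse-weather) image and projecting lane line pixel coordinates into the sensed frame. The function and variable names (register_and_transfer_lanes, ref_lane_points) are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def register_and_transfer_lanes(sensed_img, reference_img, ref_lane_points):
    """Align the reference image to the sensed image and map lane line
    points from the reference frame into the sensed frame.

    ref_lane_points: (N, 2) array of (x, y) lane line pixel coordinates
    detected in the reference (clear-weather) image.
    """
    # Detect and describe keypoints in both images. ORB is one common
    # choice; the paper does not name a specific feature detector.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_ref, des_ref = orb.detectAndCompute(reference_img, None)
    kp_sen, des_sen = orb.detectAndCompute(sensed_img, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_sen), key=lambda m: m.distance)
    good = matches[:200]

    # Estimate a reference-to-sensed homography with RANSAC to reject
    # outlier matches (e.g., features on snow or other vehicles).
    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_sen[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project the reference lane line points into the sensed image frame,
    # where they can be superimposed on the local map.
    pts = np.asarray(ref_lane_points, dtype=np.float32).reshape(-1, 1, 2)
    lanes_in_sensed = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    return lanes_in_sensed, H
```

A planar homography is only a rough model here, since the road scene is not a single plane; in practice the alignment could be restricted to the road surface or replaced by a registration method suited to the sensor setup.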