07 Jan 2024

DriverSense: A Multi-Modal Framework for Advanced Driver Assistance System


Authors :- DA Vyas, M Chaturvedi
Publication :- 16th International Conference on COMmunication Systems & NETworkS (COMSNETS)

Driver distraction, lack of concentration, and increased stress levels are primary causes of road accidents. The Advanced Driver Assistance System (ADAS) is used in high-end cars to improve driver safety. These systems use an in-vehicle camera to track the eye movement of a driver and analyze facial expressions to classify the driver's attention level, mood, and/or drowsiness. However, these computer vision-based solutions do not classify the stress and distraction level of a driver, which are major causes of fatality. Further, to the best of our knowledge, no ADAS is available for two-wheeler (2W) and three-wheeler (3W) vehicle drivers, who represent 80% of the driver population in India. These drivers are exposed to more distraction, stress, and risk due to the open driving environment. Accordingly, this paper proposes DriverSense, a wearable-device-based, vehicle-independent framework for ADAS that uses mobility sensors (e.g., gyroscope, accelerometer, GPS) and physiological sensors (e.g., EEG, PPG, ECG, SpO2) to detect the aggressiveness and stress level of drivers in real time. The framework employs edge computing to reduce communication delays, and machine learning models to compute inferences about driver behavior. The results of preliminary experiments, conducted using existing datasets and mobility-sensor data collected in the city of Ahmedabad, India, are encouraging. The aggressiveness of drivers could be classified with 86.05% accuracy, and the stress level was identified with 88.24% accuracy using existing machine learning algorithms.
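The pipeline the abstract describes (mobility-sensor readings windowed into features, then classified for aggressiveness) can be sketched in miniature. This is purely illustrative and not the paper's implementation: the feature set, the threshold rule standing in for the trained ML model, and the sample data are all assumptions made for the sketch.

```python
# Illustrative sketch only (NOT the DriverSense implementation):
# window accelerometer samples into simple features, then apply a
# placeholder rule where the paper would use a trained ML classifier.
import math

def window_features(samples):
    """Mean and standard deviation of acceleration magnitude over a window
    of (x, y, z) accelerometer samples."""
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
    n = len(mags)
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / n
    return mean, math.sqrt(var)

def classify_aggressiveness(samples, std_threshold=2.5):
    """Label a window 'aggressive' when acceleration variability is high.
    The threshold is a made-up placeholder; the paper trains ML models on
    real mobility-sensor data instead of hand-picking a cutoff."""
    _, std = window_features(samples)
    return "aggressive" if std > std_threshold else "normal"

# Smooth cruising: near-constant, gravity-only readings (m/s^2).
smooth = [(0.0, 0.0, 9.8)] * 20
# Harsh stop-and-go: large swings in longitudinal acceleration.
harsh = [((15.0 if i % 2 else 0.0), 0.0, 9.8) for i in range(20)]

print(classify_aggressiveness(smooth))  # normal
print(classify_aggressiveness(harsh))   # aggressive
```

In the actual framework, such features would be computed at the edge to cut communication delay, and the rule above would be replaced by one of the existing machine learning algorithms the authors evaluate.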

DOI Link :- https://doi.org/10.1109/COMSNETS59351.2024.10427459