Feature-based detection and classification of sleep disorders using multispectral imaging
Sleep apnea syndrome (SAS) is a sleep-related breathing disorder (SRBD) characterized by repetitive breathing interruptions during sleep, resulting in daytime drowsiness, concentration difficulties, and an increased risk of cardiovascular diseases. At a moderate severity level, it affects approximately 30-50 % of the male and 15-25 % of the female population. SAS is diagnosed in specialized sleep laboratories via polysomnography (PSG). A PSG involves a large number of contact-based sensors, which may cause patient discomfort and measurement bias. Contactless alternatives to PSG are therefore a promising way to overcome these drawbacks.

This work introduces novel methods for diagnosing and characterizing SAS with a contactless optical approach. The diagnosis and characterization of SAS are based on the detection and classification of nocturnal respiratory and oxygen desaturation events. Respiratory events are classified according to their amplitude (apnea or hypopnea) and their physiological origin (obstructive or central). The classified events are then used to estimate important sleep metrics such as the apnea-hypopnea index (AHI), obstructive apnea index (oAI), central apnea index (cAI), oxygen desaturation index (ODI), and SAS severity.

The methods are built on the analysis of multispectral images in the near-infrared (NIR) and far-infrared (FIR) spectra. The NIR spectrum is used to extract remote photoplethysmography (rPPG) signals at 780 and 940 nm from a region of interest (ROI) on the forehead, while the FIR spectrum is used to extract a respiratory airflow signal induced by breathing-related temperature variations in the subnasal region. Using the extracted signals, feature-based and deep learning-based methods are designed for event detection and classification. The feature-based method relies on a set of manually engineered features derived from expected physiological processes, biosignal morphology, and demographic patient data. In contrast, the deep learning-based method employs state-of-the-art deep neural networks to classify the events directly from the three input signals.

Furthermore, a patient study is conducted to evaluate the proposed methods. The study yielded 23 measurements of symptomatic SAS patients. The feature-based methods outperform the deep learning-based methods for both respiratory event and oxygen desaturation event detection. The results of the feature-based methods are as follows. Under leave-one-patient-out cross-validation (LOPOCV), the classification accuracy between normal breathing, hypopneas, and apneas is 99.5 %, and between obstructive and central apneas it is 98.8 %. The estimates of the AHI, oAI, and cAI result in a mean absolute error (MAE) of 1.5, 0.7, and 0.3 events per hour and a Pearson correlation of 0.9981, 0.9989, and 0.9950, respectively. The detection accuracy of oxygen desaturation events is 95.4 %. The ODI is estimated with an MAE of 2.9 events per hour and a Pearson correlation of 0.9900.

In conclusion, the combination of multimodal data analysis and feature engineering enables the diagnosis and characterization of SAS with a camera-based system. The results suggest that the presented method may be used as a PSG substitute for diagnosing SAS and characterizing it based on respiratory and oxygen desaturation events.
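To illustrate the signal-extraction step described above, the sketch below spatially averages a fixed ROI in each frame and band-pass filters the result: applied to the forehead ROI in the NIR frames it yields an rPPG signal, and applied to the subnasal ROI in the FIR frames it yields an airflow signal. This is a minimal sketch under stated assumptions, not the work's implementation; the frame arrays, ROI bounds, sampling rate, and pass bands are placeholders for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def _roi_mean_series(frames, roi):
    """Spatial mean of a rectangular ROI per frame (roi = (r0, r1, c0, c1),
    hypothetical pixel bounds)."""
    r0, r1, c0, c1 = roi
    return np.array([f[r0:r1, c0:c1].mean() for f in frames])

def _bandpass(x, fs, band, order=3):
    """Zero-phase Butterworth band-pass; band in Hz, fs is the frame rate."""
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, x - x.mean())

def extract_rppg(nir_frames, forehead_roi, fs):
    """Cardiac band (~0.7-3 Hz, i.e. ~42-180 bpm) from the forehead ROI."""
    return _bandpass(_roi_mean_series(nir_frames, forehead_roi), fs, (0.7, 3.0))

def extract_airflow(fir_frames, subnasal_roi, fs):
    """Respiratory band (~0.1-0.7 Hz): breathing modulates the subnasal
    temperature, so the ROI mean oscillates at the respiratory rate."""
    return _bandpass(_roi_mean_series(fir_frames, subnasal_roi), fs, (0.1, 0.7))
```

The same two-step recipe (spatial pooling, then temporal filtering) covers both modalities; only the ROI and the pass band differ.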
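The sleep metrics mentioned in the abstract are event-rate indices: each one counts its respective detected events per hour of sleep. The sketch below also maps the AHI to a severity class using the commonly used clinical cut-offs (below 5 normal, 5-15 mild, 15-30 moderate, 30 and above severe); the abstract does not state which cut-offs the work applies, so these values are an assumption.

```python
def events_per_hour(n_events, total_sleep_time_s):
    """Generic index: detected events normalized to events per hour of
    sleep (used alike for AHI, oAI, cAI, and ODI)."""
    return n_events / (total_sleep_time_s / 3600.0)

def sas_severity(ahi):
    """Commonly used clinical AHI cut-offs (assumed, not from the abstract)."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

# Example: 42 apneas + 18 hypopneas over 7.5 h of sleep
ahi = events_per_hour(42 + 18, 7.5 * 3600)  # 8.0 events/h
print(ahi, sas_severity(ahi))               # -> 8.0 mild
```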
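Finally, the reported evaluation combines LOPOCV classification accuracy with per-patient MAE and Pearson correlation for the index estimates. A hedged sketch of that protocol follows; the abstract does not name the classifier, so the random forest here is a placeholder, and the feature matrix, labels, and patient IDs are assumed inputs.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut

def lopocv_accuracy(X, y, patient_ids):
    """Leave-one-patient-out CV: each fold holds out all events of one
    patient, so accuracy reflects generalization to unseen subjects."""
    correct = 0
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=patient_ids):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        correct += (clf.predict(X[test_idx]) == y[test_idx]).sum()
    return correct / len(y)

def index_agreement(estimated, reference):
    """MAE and Pearson correlation between estimated and PSG-derived
    per-patient indices (e.g. the AHI)."""
    estimated, reference = np.asarray(estimated), np.asarray(reference)
    mae = np.abs(estimated - reference).mean()
    r, _ = pearsonr(estimated, reference)
    return mae, r
```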