Datasets from Youngjun’s research group
To foster work in the areas of Computational Physiology, Machine Learning for Psychophysiology and Healthcare, Affective Computing, Brain-Machine Interfaces, and Thermal Imaging,
we open our collected datasets to the research community.
Please contact us ( youngjun[dot]cho[at]ucl[dot]ac[dot]uk ) to obtain permission and the password for each dataset. Please send us a separate email for each dataset you want to use, with the following information: simply copy and paste the template below and add your answers.
* Title: [“Dataset name”] Database Request
* Your name / Your research institute (name, city, country, web address if available)
* Some words on your research and how the database would be used in it
* How you heard of the database (colleague, papers, etc.)
* Please state that you agree to the terms of use outlined below:
We ask that you do not share the database access information, so that we can keep track of who uses each dataset, and that you cite us in any publications that stem from its use.
1. Robust Tracking of Respiratory Rate (Collection Period: July 2016 – April 2017)
Datasets: [Dataset 1], [Dataset 2], [Dataset 3]
If you use this database in a publication, please cite the following paper:
– Youngjun Cho, Simon J. Julier, Nicolai Marquardt, and Nadia Bianchi-Berthouze, “Robust tracking of respiratory rate in high-dynamic range scenes using mobile thermal imaging,” Biomedical Optics Express 8(10), 4480-4503 (2017)
Description:
1. Dataset 1: controlled respiration in environments with non-constant temperature
The aim of the first experiment was to carry out a systematic evaluation of our approach in environments with different temperature values and dynamics. Five healthy adults (2 female; aged 29-38 years, M=31.4, SD=3.78) were recruited from the university subject pool. Following the protocol used in Gastel et al. (2016), participants were asked to maintain a stable posture and breathe according to a set of breathing patterns presented to them on a screen; Figure 5 in Cho et al. (2017) shows the design of this experiment. All participants were given a thermal camera attached to an Android smartphone to record their face, and an additional smartphone that provided the breathing patterns. The three guiding breathing patterns comprised slow (10 breaths/min), normal (15 breaths/min) and fast (30 breaths/min) breathing; each pattern lasted 30 seconds and was displayed dynamically on the screen. Participants were given a 60-second training period. Taking advantage of mobile thermal imaging, participants were able to monitor themselves by aiming the camera at their face; the distance between the face and the device ranged from 35 cm to 55 cm. The recordings were repeated in four different places in winter: a controlled room (“Place A”), the entrance of the building, with wind from outside and heat from inside (“Place B”), a windy street corner (“Place C”), and a park (“Place D”). Participants were asked to remain as still as possible. The collected dataset consists of approximately 80 minutes of recordings (5 participants × 4 places × 4 minutes).
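To make the recording protocol concrete, below is a minimal, illustrative Python sketch of how a breathing rate could be recovered from a one-dimensional nostril-ROI temperature signal via its dominant spectral peak. This is not the method of Cho et al. (2017), which additionally handles nostril tracking and adaptive quantization in high-dynamic-range scenes; the sampling rate, band limits, and function name here are illustrative assumptions.

```python
import numpy as np

def respiratory_rate_bpm(roi_temp, fs):
    """Estimate respiratory rate (breaths/min) from a 1-D nostril-ROI
    temperature signal sampled at fs Hz, using the dominant FFT peak
    within a plausible breathing band (0.1-0.85 Hz, i.e. 6-51 bpm)."""
    x = np.asarray(roi_temp, dtype=float)
    x = x - x.mean()                              # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.85)       # restrict to breathing band
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq

# Synthetic check: a noisy 15 breaths/min signal, 4 minutes at an assumed 9 Hz
fs = 9.0
t = np.arange(0, 240, 1.0 / fs)
signal = np.sin(2 * np.pi * (15 / 60.0) * t) + 0.1 * np.random.randn(t.size)
print(respiratory_rate_bpm(signal, fs))           # prints approximately 15.0
```

On the three guiding patterns above, such a spectral estimator would be expected to report roughly 10, 15 and 30 breaths/min for the slow, normal and fast segments respectively.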
2. Dataset 2: unconstrained respiration during desk activity with natural motion artefacts
The aim of the second experiment was to test our approach in more realistic, unconstrained sedentary desk activities. Ten healthy adults (6 female; aged 24-31 years, M=28.4, SD=2.17) from a variety of ethnic backgrounds (skin colour: from pale white to black) were recruited from the university subject pool, which includes people from outside the university. The experiment was conducted in a quiet laboratory room in summer and simulated desk activity, consisting of three phases lasting 2 minutes each: i) sitting and conversing, ii) reading a news article on the screen, and iii) surfing the internet with the keyboard and mouse. As described in Figure 6 in Cho et al. (2017), the mobile thermal camera was installed near each participant’s face using a shoulder rig, and the distance from the face ranged from 35 cm to 50 cm to account for the spatial resolution (160×120) of the camera. To ensure natural motion artefacts, participants were told to act naturally, i.e. no movement constraints were imposed. The collected dataset therefore includes a variety of movement situations, such as head rotations (e.g., when people walked behind the participant) during which the nostrils temporarily disappeared from the thermal camera view. The change in a participant’s position from phase i) to ii) ensured changes in global temperature variance around the nostril ROI. The experiment resulted in 60 minutes (10 participants × 6 minutes) of thermal video recordings of spontaneous breathing patterns, natural movements in sedentary contexts, and changes in ambient temperature.
* An example clip showing the performance of the proposed method (from Dataset 2 – unconstrained respiration with natural motion artefacts)
3. Dataset 3: unconstrained respiration in fully mobile context and varying thermal dynamic range scenes
The last experiment aimed to measure the respiration patterns of people undertaking natural, unrestricted actions. To enable mobility, the thermal camera was attached to a headset-microphone-shaped rig whose distance from the face ranged between 20 cm and 30 cm, as shown in Figure 6 in Cho et al. (2017). We recruited 8 healthy adults (5 female; aged 23-31 years, M=27.0, SD=2.93) from various ethnic backgrounds. To simulate a variety of fully unconstrained situations, the experiment had two main sessions: i) indoor physical activity and ii) outdoor physical activity. The first session consisted of three tasks of 2 minutes each: walking through a corridor, standing in a dark room while making small movements, and climbing and descending stairs. The second session was carried out outdoors, on a street pavement and in a windy park, to involve varying thermal-dynamic-range scenes. During this session, participants were guided to walk slowly, walk fast, and stroll at a natural pace; each walking pattern lasted 2 minutes. All sessions were run in summer. The final dataset includes approximately 96 minutes of thermal imaging sequences (8 participants × 2 sessions × 3 activities × 2 minutes).
References:
[1] Youngjun Cho, Simon J. Julier, Nicolai Marquardt, and Nadia Bianchi-Berthouze, “Robust tracking of respiratory rate in high-dynamic range scenes using mobile thermal imaging,” Biomedical Optics Express 8(10), 4480-4503 (2017) – DOI: 10.1364/BOE.8.004480.
2. DeepBreath (Collection Period: March 2017 – May 2017)
Datasets: [Dataset A (main): Respiration Variability Spectrogram] [Dataset B (supplementary): Breathing Raw Signal – recovered through mobile thermal imaging]
If you use this database in a publication, please cite the following papers:
– Youngjun Cho, Nadia Bianchi-Berthouze, and Simon J. Julier, “DeepBreath: Deep Learning of Breathing Patterns for Automatic Stress Recognition using Low-Cost Thermal Imaging in Unconstrained Settings,” in Proceedings of the 7th International Conference on Affective Computing and Intelligent Interaction (ACII 2017)
– Youngjun Cho, Simon J. Julier, Nicolai Marquardt, and Nadia Bianchi-Berthouze, “Robust tracking of respiratory rate in high-dynamic range scenes using mobile thermal imaging,” Biomedical Optics Express 8(10), 4480-4503 (2017)
Description:
Two widely used tasks for inducing mental stress (cognitive-load-induced stress) were selected for our purpose: the Stroop colour-word test (denoted Stroop) and the mathematics test (denoted Math). Each task has both an easy and a difficult session to ensure a good spread of induced stress levels within each task; this was important as the tasks differed in the amount of verbal output and behaviour they required. In the Stroop task, all participants had to name the font colour of a word. In the easy session, the meaning of the word and its font colour were congruent (e.g., the word red written in red); in the difficult session, they were incongruent (e.g., the word yellow written in red). The Math task required participants to repeatedly subtract (mentally) a given number from a four-digit number (e.g., 5000): in the difficult session the subtracted number was a two-digit number (e.g., 13), while in the easy session it was set to 1, turning the subtraction test into an easy counting-down test. It was expected that the difficult sessions of the Math and Stroop tests would lead to higher cognitive load than the easy sessions. After each answer, participants received sound feedback informing them whether the answer was correct. Before and after each task session, participants were asked to fill in a short questionnaire (denoted Q in Figure 4 of [1]) to report their stress level. All tests were programmed and run in MATLAB (2015b, The MathWorks). The program sources are publicly released at http://youngjuncho.com/2017/ACII2017-open-sources/.
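For illustration only, the sketch below shows one plausible way to generate the congruent/incongruent Stroop trials and the easy/difficult subtraction steps described above. The authoritative implementation is the released MATLAB code at the URL above; the names and choices here (colour set, two-digit range) are assumptions.

```python
import random

COLOURS = ["red", "green", "blue", "yellow"]

def make_stroop_trial(difficult):
    """One Stroop trial: a colour word and the font colour it is shown in.
    Easy session: word and font colour are congruent; difficult session:
    they are deliberately incongruent."""
    word = random.choice(COLOURS)
    font = random.choice([c for c in COLOURS if c != word]) if difficult else word
    return word, font

def make_math_step(difficult):
    """The serial-subtraction step: a two-digit number (e.g., 13) in the
    difficult session, or 1 (simple counting down) in the easy session."""
    return random.randint(10, 19) if difficult else 1

word, font = make_stroop_trial(difficult=True)
print(f"Name the font colour: '{word}' shown in {font}")
print(f"Subtract {make_math_step(difficult=True)} repeatedly from 5000")
```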
* The current dataset: N=8 (aged 18-53 years, M=30.75, SD=10.22). We are currently extending this experiment (December 2017 – May 2018).
Detailed information about each dataset is provided in its zip file (please read README.md).
References:
[1] Youngjun Cho, Nadia Bianchi-Berthouze, and Simon J. Julier, “DeepBreath: Deep Learning of Breathing Patterns for Automatic Stress Recognition using Low-Cost Thermal Imaging in Unconstrained Settings,” in Proceedings of the 7th International Conference on Affective Computing and Intelligent Interaction (ACII 2017) – Preprint: https://arxiv.org/abs/1708.06026
[2] Youngjun Cho, Simon J. Julier, Nicolai Marquardt, and Nadia Bianchi-Berthouze, “Robust tracking of respiratory rate in high-dynamic range scenes using mobile thermal imaging,” Biomedical Optics Express 8(10), 4480-4503 (2017) – DOI: 10.1364/BOE.8.004480.
3. Deep Thermal Imaging (Collection Period: April / August 2017)
Datasets: [DeepTherm I: Indoor materials] [DeepTherm II: Outdoor materials]
If you use this database in a publication, please cite the following paper:
– Youngjun Cho, Nadia Bianchi-Berthouze, Nicolai Marquardt, and Simon J. Julier, “Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, ACM, 2018 (in press)
Additionally, if you refer to “mobile thermal imaging” or “quantization for thermal imaging”, the journal article below should be highly informative.
– Youngjun Cho, Simon J. Julier, Nicolai Marquardt, and Nadia Bianchi-Berthouze, “Robust tracking of respiratory rate in high-dynamic range scenes using mobile thermal imaging,” Biomedical Optics Express 8(10), 4480-4503 (2017)
Description:
[Full descriptions will be available once the CHI paper is formally published]
The API for Deep Thermal Imaging is now fully available for download at http://youngjuncho.com/2018/deepthermalimagingapi/.
Detailed information about each dataset is provided in its zip file (please read README.md).
References:
[1] Youngjun Cho, Nadia Bianchi-Berthouze, Nicolai Marquardt, and Simon J. Julier, “Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, ACM, 2018 (in press)
[2] Youngjun Cho, Simon J. Julier, Nicolai Marquardt, and Nadia Bianchi-Berthouze, “Robust tracking of respiratory rate in high-dynamic range scenes using mobile thermal imaging,” Biomedical Optics Express 8(10), 4480-4503 (2017) – DOI: 10.1364/BOE.8.004480.
4. iBVP Dataset: RGB-Thermal Videos of Human Faces with High-Resolution Signal-Quality Labels (Collection Period: 2022 – 2023)
Access to this Dataset: Please email the principal investigator (youngjun[dot]cho[at]ucl[dot]ac[dot]uk).
Description:
The iBVP dataset is a collection of synchronized RGB and thermal infrared videos with PPG ground-truth signals acquired from the ear. The PPG signals carry manual signal-quality labels as well as labels from the SQA-PhysMD model, which was trained and validated to perform dense (per-sample) signal-quality assessment. The data acquisition was designed to induce real-world variations in psycho-physiological states as well as head movement. Each participant experienced four conditions: (a) rhythmic slow breathing and rest (“A”), (b) an easy math task (“B”), (c) a difficult math task (“C”), and (d) a guided head-movement task (“D”). Cognitively challenging math tasks with varying difficulty levels were chosen, as these have been reported to alter physiological responses. To randomize the sequence of conditions, “A”–“D” and “B”–“C” were interchanged. The study protocol was approved by the University College London Interaction Centre ethics committee. Data were collected from 33 healthy participants (23 female) representing multiple ethnic groups and skin types, with ages ranging from 18 to 45 years (27 ± 5.8).
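As a hypothetical example of how the dense quality labels might be used once access is granted, the sketch below masks PPG samples that are labelled noisy. The file names, delimiter, and label encoding (1 = good, 0 = noisy) are assumptions; the dataset's actual layout is documented in its README.

```python
import numpy as np

# Hypothetical file names and format; see the dataset README for the real layout.
ppg = np.loadtxt("p01_a_bvp.csv", delimiter=",")      # ear-PPG waveform
quality = np.loadtxt("p01_a_sqa.csv", delimiter=",")  # per-sample quality labels

# Keep only samples judged acceptable, assuming labels 1 = good, 0 = noisy
clean = np.where(quality > 0.5, ppg, np.nan)
print(f"{np.mean(quality > 0.5):.1%} of samples labelled good")
```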
The RGB and thermal cameras were positioned in front of the participant at a distance of around 1 m. A Logitech BRIO 4K UHD webcam was used to capture RGB video frames at 640 × 480 resolution and 30 frames per second (FPS), while thermal infrared frames were captured at 30 FPS using a thermal camera (A65SC, FLIR Systems) with 640 × 512 resolution. Customized software was developed in C++, using the FLIR-provided Spinnaker library to acquire the thermal infrared frames and OpenCV library functions for the RGB frames. With a total of 127 sessions, each lasting 3 minutes, the dataset comprises 381 minutes of RGB-Thermal video recording in total.
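Since the RGB and thermal streams come from two separate devices, a common post-hoc step is pairing frames by timestamp. The sketch below shows simple nearest-neighbour pairing, assuming per-frame timestamps in seconds are available; it is illustrative only and not part of the authors' C++ acquisition software.

```python
import numpy as np

def pair_frames(rgb_ts, thermal_ts, max_gap=1.0 / 60):
    """Pair each RGB frame with the nearest thermal frame by timestamp,
    dropping pairs further apart than half a 30 FPS frame period."""
    thermal_ts = np.asarray(thermal_ts)
    pairs = []
    for i, t in enumerate(rgb_ts):
        j = int(np.argmin(np.abs(thermal_ts - t)))
        if abs(thermal_ts[j] - t) <= max_gap:
            pairs.append((i, j))
    return pairs

# Two simulated 30 FPS streams with a 10 ms clock offset pair up completely
rgb_ts = np.arange(0, 5, 1 / 30)
thermal_ts = np.arange(0.01, 5, 1 / 30)
print(len(pair_frames(rgb_ts, thermal_ts)))  # 150 matched frame pairs
```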
For citation, please cite the following paper:
– Joshi, J. N., & Cho, Y. (2024). iBVP Dataset: RGB-Thermal rPPG Dataset with High-Resolution Signal-Quality Labels. Electronics (in press)