Citation Information: International Journal on Smart Sensing and Intelligent Systems, Volume 14, Issue 1, Pages 1-10. DOI: https://doi.org/10.21307/ijssis-2021-015
License: CC BY-NC-ND 4.0
Received Date: 15-April-2021 / Published Online: 28-July-2021
Many people suffer from movement disabilities and would benefit from an assistive mobility device with practical control. This paper demonstrates a face-machine interface system that uses motion artifacts in electroencephalogram (EEG) signals for mobility enhancement in people with quadriplegia. We employed an Emotiv EPOC X neuroheadset to acquire the EEG signals. With the proposed system, we verified the preprocessing approach, feature extraction algorithms, and control modalities. Incorporating eye winks and jaw movements, an average accuracy of 96.9% across four commands was achieved. Moreover, the online control results of a simulated power wheelchair showed high efficiency based on the time criterion. The combination of winking and jaw chewing yields a steering time on the same order of magnitude as that of joystick-based control, but still about twice as long. We will further improve the efficiency and implement the proposed face-machine interface system on a real power wheelchair.
Brain-computer interfaces (BCIs) have been markedly developed to serve patients with paralysis via assistive technology (Brunner et al., 2014). BCIs can be categorized as invasive or noninvasive, and most BCIs are noninvasive systems. Electroencephalography (EEG) measures the field potentials produced by neurons from the scalp, and it has been widely used in clinical applications and BCI systems (Abdulkader et al., 2015; Nicolas-Alonso and Gomez-Gil, 2012). Currently, brain signal acquisition technology is developing rapidly. Neuroheadsets (Chamola et al., 2020) based on dry electrodes can acquire EEG signals along with other relevant signals, such as electrooculogram and facial electromyogram (EMG) signals (Jang et al., 2016; Šumak et al., 2019; Yulianto et al., 2020). The Emotiv and NeuroSky companies have presented dry electrode systems for entertainment and other applications (Brunner et al., 2014; Yulianto et al., 2020). BCI devices and applications have mainly been used for smart homes; for the control of prosthetic devices such as arm and hand exoskeletons, artificial arms, and power wheelchairs; and for assistive and rehabilitation devices (Ben Taher et al., 2015; Long et al., 2012). In addition, BCIs can be beneficial for people with quadriplegia (severe disabilities). For people with hemiplegia or paraplegia, a Myo gesture armband (Chu et al., 2020) and video-based human action recognition (Sarabu and Santra, 2021) can be suitable to extend their activity.
Currently, hybrid BCIs can yield high efficiency in practical devices and systems that serve people with severe disabilities. An improvement over the conventional BCI has been proposed by combining it with other modalities. Electrooculography (EOG) measures potential changes during eye movements such as winks and blinks, and it is widely utilized in cooperation with EEG-based BCI systems (He et al., 2020; Punsawad et al., 2010; Yang et al., 2016). A facial EMG signal measures changes in electrical potential that occur when facial, jaw, and tongue movements are executed. BCI-based assistive technology has been developed to serve disabled patients who have lost movement ability in their upper or lower limbs. A wheelchair is an assistive mobility device that can increase the level of interaction between patient abilities and the external environment. Paralysis is the most common neural disorder that causes the loss of control of one or more muscles in the body. Because the different types of paralysis are a challenge in BCI development, we have tried to create a BCI-based assistive technology strategy for tetraplegia, especially in terms of mobility enhancement. Previous research has demonstrated many techniques and modalities that can be employed to build assistive mobility devices for patients with all paralysis types. Artifacts are other internal biomedical signals and external signals that interfere with EEG signals within the same frequency range (Brunner et al., 2014). For example, facial and head movements are among the most common sources of artifacts, appearing when people blink or move their eyeballs or eyelids. A hybrid BCI (Amiri et al., 2013; Richard et al., 2015) is a prominent technique that improves the interaction performance of a system by combining multiple or different input channels with BCI channels. The modalities of hybrid BCIs consist of (i) hybrid BCIs that combine multiple brain signals; (ii) combinations of brain activity with other physiological signals such as EMG, EOG, and electrocardiogram (ECG) signals; and (iii) combinations of two BCI channels or of a BCI with special assistive input devices (e.g., joysticks, smart wheelchair systems, etc.) (Hernandez-Ossa et al., 2017; Richard et al., 2015; Tang et al., 2018; Yang et al., 2016).
At present, there are few assistive devices on the market for patients with quadriplegia. Nevertheless, biomedical signal acquisition techniques and devices have been continuously developed for medical applications, such as biosignal-based wearable devices with a wireless biomedical sensor network (WBSN) for home healthcare. Therefore, we aim to develop a BCI system that can integrate with a WBSN and serve a patient with quadriplegia in daily activities. In this paper, we develop a practical BCI system that uses EEG motion artifacts from a neuroheadset for assistive mobility device control by patients with quadriplegia. We propose an electric wheelchair simulator controlled by these EEG artifacts. We design a command creation and translation strategy for EEG artifacts and motor imagery for a user-friendly BCI-controlled electric wheelchair simulator. The efficiency of the system and of the user is verified. To evaluate the EEG headset, it is compared with previous work that used surface electrodes placed directly on the skin.
The paper is divided into four main sections, of which the first is the introduction. The second section describes the research methods and includes four parts: (i) the proposed system, (ii) signal acquisition and preprocessing, (iii) feature extraction and algorithms, and (iv) command translation. The third section presents the experimental results and discussion to demonstrate the efficiency of the proposed system and algorithms from the second section during online testing. The last section presents the outcome and outlook of the proposed system as a conclusion and suggests future work.
In this work, we propose a human-machine interface system that uses EEG artifacts obtained from an Emotiv EPOC X neuroheadset. The main idea is to use the EEG artifacts generated by eye winking and jaw chewing to control the direction of a wheelchair. Four direction-control commands, consisting of going forward, turning left, turning right, and reversing, were created by employing the EEG artifact-based face-machine interface with two proposed command strategies. In the first command modality, the forward command is generated by chewing on both sides of the jaw, turning left by chewing on the left side, turning right by chewing on the right side, and reversing by winking both eyes. In the second modality, the forward command is generated by jaw chewing (left, right, or both sides), turning left by a left eye wink, turning right by a right eye wink, and the backward command by winking both eyes. In the idle state, the wheelchair is stopped. However, in an emergency, the user winks both eyes three times to toggle off the wheelchair controller system, and the wheelchair stops immediately; winking three times again reenables the system. An overview of the proposed system for real-time simulated wheelchair control is shown in Figure 1. The process consists of preprocessing, algorithms, and command translation. A simple method is utilized for EEG feature extraction and classification. The details of each part are presented in the second section (Table 1).
In this paper, we used the Emotiv EPOC X neuroheadset to acquire EEG signals. Channels AF3, F7, F3, and FC5 exhibit strong features when the left eye is winked. For right eye winks, the EEG signals from channels AF4, F8, F4, and FC6 also exhibit strong features. Winking with both eyes generates EEG signal patterns in channels AF3, F7, F3, FC5, AF4, F8, F4, and FC6, as shown in Figure 3. Moreover, in this study, we utilized another EEG artifact: the signals induced by jaw movements. During chewing, including chewing on the left and right sides of the jaw, the EEG channels that exhibited patterns during eye winking again showed clear but different patterns, as shown in Figures 3 and 4. The process of the proposed face-machine interface system is shown in Figure 5.
Following the determination of the EEG features, we selected four of the eight EEG channels for each participant during preprocessing based on the channel amplitudes. The EEG signals from F7 and F8 were employed for left and right eye wink detection, and the EEG signals from FC5 and FC6 were used to capture the signals produced by jaw chewing. In real-time processing, the EEG signals were used to detect these actions, and commands were issued every second for the direction control of the virtual wheelchair.
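As an illustration of this per-second processing, the following sketch segments the four selected channels into consecutive one-second windows; the sampling rate, array layout, and function name are assumptions made for illustration rather than details of the original implementation.

    import numpy as np

    FS = 128  # assumed sampling rate (Hz); the actual headset rate may differ
    CHANNELS = ["F7", "F8", "FC5", "FC6"]  # channels selected for wink and jaw-chew detection

    def one_second_windows(eeg, fs=FS):
        # eeg: array of shape (n_samples, 4) with columns ordered as CHANNELS;
        # yields consecutive 1-s windows of shape (fs, 4)
        n_windows = eeg.shape[0] // fs
        for w in range(n_windows):
            yield eeg[w * fs:(w + 1) * fs, :]

    # Example with synthetic data standing in for the real acquisition stream.
    dummy = np.random.randn(10 * FS, len(CHANNELS))
    for window in one_second_windows(dummy):
        pass  # per-window features and decision rules are applied here (see below)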
Before executing the proposed face-machine interface system, threshold parameters must be acquired during left and right eye winks as follows:
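One plausible calibration rule, stated here only as an illustrative assumption rather than the exact formula used, sets each threshold to a fixed fraction of the largest feature value observed while the user repeats the corresponding movement (eye winks, and analogously jaw chewing) during calibration:

    T_WL = k * max(W_L), T_WR = k * max(W_R), T_JL = k * max(J_L), T_JR = k * max(J_R),

where the maxima are taken over the calibration windows and 0 < k < 1 is a per-user sensitivity factor.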
The features acquired during facial movements, J_L and J_R, are defined as the feature parameters of left and right jaw chewing, respectively, and W_L and W_R are defined as the feature parameters of left and right eye winking, respectively; these four parameters are calculated as follows:
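One simple amplitude-based formulation, given here only as an illustrative assumption, computes each feature as the peak-to-peak amplitude of the associated channel within the current one-second window:

    W_L = max(F7) - min(F7), W_R = max(F8) - min(F8),
    J_L = max(FC5) - min(FC5), J_R = max(FC6) - min(FC6),

where the maxima and minima are taken over the samples in the window.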
Once the feature parameters were acquired, simple decision rules were used to compare the real-time features with the obtained threshold parameters. The classification decisions for the seven commands were produced as follows:
If J_L > T_JL and J_R > T_JR, the decision is "Com#1".
If J_L > J_R and J_L > T_JL, the decision is "Com#2".
If J_R > J_L and J_R > T_JR, the decision is "Com#3".
If W_L > T_WL and W_R > T_WR, the decision is "Com#4".
If W_L > W_R and W_L > T_WL, the decision is "Com#5".
If W_R > W_L and W_R > T_WR, the decision is "Com#6".
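To make the rule set concrete, the sketch below implements the same comparisons in code; the feature and threshold values are assumed to come from the calibration and feature-extraction steps described above, and the function itself is illustrative rather than the authors' implementation.

    def classify(j_l, j_r, w_l, w_r, t_jl, t_jr, t_wl, t_wr):
        # Map the four per-window features to a command label using the
        # threshold rules above; returns None when no rule fires (idle).
        if j_l > t_jl and j_r > t_jr:
            return "Com#1"   # chewing detected on both sides of the jaw
        if j_l > j_r and j_l > t_jl:
            return "Com#2"   # left-side jaw chewing
        if j_r > j_l and j_r > t_jr:
            return "Com#3"   # right-side jaw chewing
        if w_l > t_wl and w_r > t_wr:
            return "Com#4"   # both-eye wink
        if w_l > w_r and w_l > t_wl:
            return "Com#5"   # left eye wink
        if w_r > w_l and w_r > t_wr:
            return "Com#6"   # right eye wink
        return None          # idle: no command issued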
Figure 5 presents the flowchart of the classification algorithm for command creation. Conditional statements (if-statements) and iterative statements (while-loops) were used to check the conditions by comparing the feature and threshold parameters. The resulting command is then translated to control the direction of the simulated wheelchair, as shown in Figure 6.
For command translation in the first proposed modality, we controlled the forward movement by jaw chewing (turning left by chewing on the left side of the jaw and turning right by chewing on the right side) and the backward movement by winking with both eyes. Moreover, we created a second modality that uses both the eyes and the jaw; this modality is similar to the first, but the command activities were changed. We used normal jaw chewing to control the forward movement and used eye winking for turning left and right. For the backward command, we used the winking of both eyes, as shown in Figure 6.
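A minimal sketch of this translation step is given below, assuming the Com#1-Com#6 labels produced by the classification rules and the direction assignments described for the two modalities; the table layout is illustrative only, not the authors' implementation.

    # Translation tables from classification labels to wheelchair directions.
    # Modality 1: directions are driven mainly by jaw chewing.
    MODALITY_1 = {
        "Com#1": "forward",     # chewing on both sides
        "Com#2": "turn_left",   # left-side chewing
        "Com#3": "turn_right",  # right-side chewing
        "Com#4": "backward",    # both-eye wink
    }

    # Modality 2: forward by any chewing, turning by single-eye winks.
    MODALITY_2 = {
        "Com#1": "forward",
        "Com#2": "forward",
        "Com#3": "forward",
        "Com#4": "backward",
        "Com#5": "turn_left",   # left eye wink
        "Com#6": "turn_right",  # right eye wink
    }

    def translate(label, modality):
        # Return a direction command, or "stop" when the label is unmapped (idle).
        table = MODALITY_1 if modality == 1 else MODALITY_2
        return table.get(label, "stop")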
Eight healthy participants (five men and three women; mean age, 29 ± 5.3 years), all without any BCI experience, participated in the experiments. We used the proposed algorithms to automatically generate commands and calculate the resulting accuracy rates. In total, each participant performed two trials (24 commands). Before testing, each participant completed a 15-min training session and then performed the experiment. The command sequence was defined as in Table 2.
Table 3 shows that the maximum accuracy achieved using the first proposed modality was 95.8%, while the maximum accuracy achieved using the second control modality was 100%. The average accuracy of the first control modality was 92.2%, and that of the second control modality was 96.9%; thus, the second control modality yields a higher accuracy rate than the first. The lower accuracy may have occurred because some participants could not separate left and right chewing. The performance of the EEG neuroheadset-based face-machine interface system was comparable to that of a similar system in previous work, which used EMG signals measured from the facial muscles with directly placed surface electrodes and achieved 99.3% in an algorithm evaluation (Jang et al., 2016). Therefore, the EEG artifacts from the Emotiv neuroheadset can be extracted by the proposed algorithm for simulated wheelchair control.
Normally, the user’s confidence level is related to the result. Before starting Experiment II, we therefore controlled for participant confidence by requiring an accuracy greater than 85% in Experiment I and allotting 20 min for a training session. We also recorded the time each participant took to steer the simulated power wheelchair using a joystick for the user and system evaluations.
Each participant was tested with three modalities to freely control a virtual wheelchair, as shown in Figure 7a. Each route was performed three times for each modality. The times taken from start to stop were recorded to evaluate the proposed control modalities and the resulting user performances. An example of the experiment is illustrated in Figure 7b.
Figures 8 and 9 present efficiency comparisons between the proposed control modalities and a joystick based on the time required to steer the simulated power wheelchair along route 1 and route 2, respectively. For route 1, the average time required with the joystick (the conventional control modality) was 55 sec, the average time required with the first control modality was 156 sec, and the average time required with the second control modality was 122 sec. The shortest time taken with the first control modality was 118 sec, that with the second control modality was 107 sec, and that with joystick control was only 47 sec. For route 2, the average time taken with the joystick was 57 sec, the average time required with the first control modality was 160 sec, and that with the second control modality was 127 sec. The shortest time taken with the first control modality was 102 sec, that with the second control modality was 63 sec, and that with joystick control was only 47 sec.
Comparing all modalities, we found that the second control modality achieved a higher efficiency than the first control modality on all tested routes, but a lower efficiency than the joystick. The difference between the average times taken by the second control modality and the joystick was 67 sec on route 1 and 70 sec on route 2. Participants 1 and 2, who had BCI experience, demonstrated high efficiency when using the second control modality; their efficiency was close to that achieved using joystick control. However, some participants may have difficulty performing the movements and may need more training time. Efficiency comparisons with previous works on real-time discontinuous control (Jang et al., 2016) showed that the proposed system can produce an elapsed time and command transfer rate similar to those of previous works. According to these results, the proposed face-machine interface can be further implemented on a real power wheelchair.
In this work, we proposed utilizing EEG artifacts obtained from an Emotiv neuroheadset for a practical human-machine interface system for machine control. The advantages of the EEG neuroheadset are that it is flexible and easy to set up for signal acquisition. For the proposed control modalities, we employed eye winking and jaw chewing to create seven command channels. The two control modalities were demonstrated via simulated wheelchair control. Incorporating eye winking and jaw chewing into the system can yield high efficiency, and this approach can be developed further until its efficiency approaches that of joystick control. Nevertheless, the proposed real-time face-machine interface system for controlling a simulated wheelchair has the following limitations. (i) The system required training for some users who had difficulty controlling only the left or right side of the eye and jaw movements to generate clear features for achieving high user and system performance. (ii) Over long periods of use, the system requires adaptive threshold calibration and detection of fatigue periods to avoid a high error rate. (iii) Having initially verified the proposed system with only directional control of the simulated wheelchair, we aim to further enable speed control. We expect that the face-machine interface system can achieve performance equivalent to that of a joystick while remaining hands-free. For future applications, we will employ the proposed system to control real power wheelchairs or electric devices to serve people with quadriplegia.