The rapid technological evolution of urban air mobility has led to the exponential growth of unmanned aerial vehicles (UAVs), typically referred to as drones. Drones are used commercially for a variety of tasks, such as transporting goods, shooting footage for movies, and monitoring activity on the ground. A critical application for UAVs is search-and-rescue missions in hard-to-reach and high-risk locations, where the drone operator controls the mission from a safe and secure location. Brain-computer interface (BCI) technologies are gaining popularity in the UAV sector because they apply to a wide variety of situations in which the drone is controlled remotely through mental commands. Connecting our thoughts directly to a drone is accomplished by recording and analyzing human brain-wave activity, identifying the patterns responsible for generating our cognitive ideas, and converting them into actionable data.

The human brain is a complex, constantly active organ that produces large amounts of data that we organically interpret with ease. When we try to replicate this process computationally, the emphasis is on producing the same accurate outcomes. This includes gathering data across multiple frequencies and channels from electrodes, which is then used to compute metrics for evaluating brain-wave activity. Our project utilized an Emotiv Epoc+ 14-channel electroencephalogram (EEG) whole-brain sensing headset. (See Figure 1.)

Figure 1. The brain-computer interface: Emotiv Epoc+ model
Source: Capgemini Engineering

When controlling a drone with a BCI, the human operator should state the desired command confidently but cautiously. That's because the signals captured by the headset are directly linked to the operator's emotional and cognitive state, and anxiety or distraction can influence the control and stability of the drone. The operator's mental state could lead them to send the drone the wrong instructions, potentially causing a crash or a collision with other objects that disrupts the flight's mission.

Building an intelligent decision-making BCI system

This blog proposes a decision-making system that considers the operator's emotional state and decides whether the mental command formulated by the operator should be sent to the drone. This is accomplished by developing a digital copy of the operator using digital-twin (DT) technology that predicts the operator's emotional state both visually, through video recordings, and cognitively, through a subscription to the Emotiv Epoc+ BCI data stream.

The DT can accurately identify whether the operator is in a stable state of mind to send commands to the drone. As soon as the operator's behavior is deemed appropriate, the DT calculates the information corresponding to the desired command and sends it to the drone. Communication between the DT and the drone is established through a ROS 2 client node that connects to a server node responsible for managing one or more drones.
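As a rough illustration of this client-server link, the sketch below shows what a minimal ROS 2 (rclpy) client node forwarding an approved command might look like. The SendCommand service type, the drone_interfaces package, and the /drone/send_command service name are hypothetical placeholders, not the project's actual interfaces.

```python
# Minimal sketch of a ROS 2 client node that forwards an approved command
# to the drone-management server node. The service type and names below are
# hypothetical placeholders; the project's real interfaces are not described here.
import rclpy
from rclpy.node import Node

from drone_interfaces.srv import SendCommand  # hypothetical: string command -> bool accepted


class CommandClient(Node):
    def __init__(self):
        super().__init__('dt_command_client')
        self.client = self.create_client(SendCommand, '/drone/send_command')
        while not self.client.wait_for_service(timeout_sec=1.0):
            self.get_logger().info('Waiting for the drone server node...')

    def send(self, command: str):
        request = SendCommand.Request()
        request.command = command
        future = self.client.call_async(request)
        rclpy.spin_until_future_complete(self, future)
        return future.result()


def main():
    rclpy.init()
    node = CommandClient()
    # Called only after the digital twin has approved the operator's state.
    result = node.send('take_off')
    node.get_logger().info(f'Server accepted command: {result.accepted}')
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```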

The DT validation comprises two sessions: isolated-environment validation and the execution of a free scenario. The goal of the first session is to evaluate whether the DT can discriminate between cognitive emotions while the operator reproduces certain states in isolation. In the second session, validation is performed during the simulation of a two-minute mission in which the operator commands the drone freely, regardless of their emotional state. This determines whether the DT detects rapid emotional changes in response to external events. To perform this two-step validation process, we created an arena for the controlled execution of experiments with a Crazyflie drone. (See Figures 2 and 3.)

Figure 2. The arena used for the drone operator's test cases
Source: Capgemini Engineering

Figure 3. The Crazyflie 2.1 drone model
Source: Capgemini Engineering

To provide feedback on the operator's emotional state, we used four categories: calm, focused, stressed, and distracted. Each time point at which data is gathered and analyzed by the DT spans about two seconds. The goal was to classify the operator's emotional state from this data into one of the four behavioral categories, which determines whether or not the command is forwarded to the drone. We also had to define acceptable (i.e., positive) and unacceptable (i.e., negative) operator behavior for sending or blocking a command. Positive classes (i.e., calm and focused) reflect a stable cognitive state of mind and allow the broadcast of information to the drone. Negative classes (i.e., distracted and stressed) indicate the operator is not mentally capable of sending commands.
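A minimal sketch of this gating rule, assuming the classifier returns one of the four category labels for each roughly two-second window (label names and function are illustrative only):

```python
# Gating rule sketch: positive states forward the command, negative states block it.
POSITIVE = {'calm', 'focused'}          # stable states: forward the command
NEGATIVE = {'stressed', 'distracted'}   # unstable states: block the command


def should_forward(emotion: str) -> bool:
    """Return True if the operator's classified state allows sending the command."""
    if emotion not in POSITIVE | NEGATIVE:
        raise ValueError(f'Unknown emotion label: {emotion}')
    return emotion in POSITIVE


# Example: a 'focused' window forwards the command, a 'stressed' one blocks it.
assert should_forward('focused')
assert not should_forward('stressed')
```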

Based on about 80,000 observations from eight-second data sets, 20,000 per emotion, we computed how reliably the cognitive DT recognizes each emotion. When the operator was in the calm state, the cognitive DT reached an accuracy of 87.5%; in the focused state, 98.8%; in the distracted state, 93.5%; and in the stressed state, 100%. Based on these results, although the DT is not flawless, we achieved high accuracy in classifying the operator's emotional state, and the results were replicated in other experiments.
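For illustration only, per-emotion accuracy of this kind boils down to a simple count over labeled windows. The sketch below shows that arithmetic with hypothetical variable names; it is not the project's evaluation code, and the figures above come from the project's own measurements.

```python
# Illustrative arithmetic: fraction of windows of each true class that the
# cognitive twin classified correctly, computed from paired label lists.
from collections import Counter


def per_class_accuracy(true_labels, predicted_labels):
    """Return {emotion: accuracy} over the labeled windows."""
    totals, correct = Counter(), Counter()
    for truth, prediction in zip(true_labels, predicted_labels):
        totals[truth] += 1
        if truth == prediction:
            correct[truth] += 1
    return {label: correct[label] / totals[label] for label in totals}
```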

When the cognitive digital twin confuses observations belonging to the same class group (e.g., two positive classes), the misclassification does not influence the final decision. However, in situations where the operator experiences negative emotions and the cognitive digital twin outputs a positive response, such as incorrectly classifying a distracted operator as focused, the error could potentially destabilize the drone's flight. When the visual digital twin is overlaid, however, errors from cognitive classifications are not propagated to the physical drone, because the commands are rechecked and any errors are caught.
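One possible fusion rule consistent with this behavior is sketched below, under the assumption that both twins must report a positive state before a command is released; the project's exact decision logic may differ.

```python
# Hedged sketch of a conservative fusion rule: a command reaches the drone only
# when BOTH the cognitive and the visual digital twins classify the operator's
# state as positive (calm or focused).
POSITIVE = {'calm', 'focused'}


def final_decision(cognitive_label: str, visual_label: str) -> bool:
    """Both twins must agree on a positive state for the command to be sent."""
    return cognitive_label in POSITIVE and visual_label in POSITIVE


# A cognitive misclassification ('focused' instead of 'distracted') is caught
# when the visual twin still reports a negative state.
assert final_decision('focused', 'calm')            # command forwarded
assert not final_decision('focused', 'distracted')  # cognitive error vetoed
```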

In addition, we tested the DT during a free flight with the drone, where the operator was free to feel any emotion. During a two-minute mission, with alarms set to trigger at specific timestamps, the goal was to distract the operator while sending commands to the drone. As a result, one of the two occurrences led to the classification of a positive mental emotion and a negative visual emotion, due to the operator's surprised facial expression. This is an example of how the visual DT, which provides complementary feedback, allows classification errors from the mental DT to be discovered and the decision to send a command to be reversed while the operator is distracted or stressed.

To a Drone and Beyond

Our experiment demonstrated that the digital twin can accurately identify the operator's mental state and handle commands efficiently by computing coordinates and providing a communications channel to the physical drone. It could also identify rapid mood changes, adjusting to various scenarios when deciding what action to take.

Overall, the system is a reliable and safe platform for controlling drones using mental commands. Furthermore, since we showed that one ROS 2 client node can control a drone through a client-server architecture, the system can easily compose a swarm of drones by adding as many client nodes as needed.
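A hedged sketch of that extension is shown below: one client node per drone, each targeting a per-drone service on the managing server. The node names, service names, and SendCommand interface are hypothetical placeholders, not the project's actual design.

```python
# Sketch of scaling the client-server pattern to a swarm: adding a drone means
# adding another client node. All names and interfaces below are hypothetical.
import rclpy
from rclpy.node import Node

from drone_interfaces.srv import SendCommand  # hypothetical service type


def make_client(drone_id: str) -> Node:
    """Create a dedicated client node for one drone in the swarm."""
    node = Node(f'dt_client_{drone_id}')
    node.command_client = node.create_client(
        SendCommand, f'/{drone_id}/send_command')
    return node


def main():
    rclpy.init()
    # One client node per drone; extend the tuple to grow the swarm.
    clients = [make_client(drone_id) for drone_id in ('cf_1', 'cf_2', 'cf_3')]
    for client in clients:
        client.get_logger().info('Client ready for its drone.')
    for client in clients:
        client.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```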

Figure 4. Live demonstration with emotion detection
Source: Capgemini Engineering

This project is managed by the Capgemini Engineering Research & Innovation (R&I) department as an industrial use case. It creates the possibility of using a digital-twin-based system to control safety-critical systems.


Author:
Diana Ramos, Engineer, Capgemini Engineering
Diana received her bachelor's degree in computer science engineering in 2019 from the Polytechnic Institute of Engineering of Porto, Portugal. She finished her master's degree in software engineering at the University of Porto in July 2019. Her master's thesis was integrated into the Capgemini Engineering Research and Innovation Zeus project, featuring the visual and cognitive emotion recognition of drone operators using a brain-computer interface and machine learning. At Capgemini Engineering, she works on software engineering, data analysis, and machine-learning models for drone use cases.

