Field of teaching: Theoretical Computer Science and Artificial Intelligence
Office: 01.214
Lab: 04.105
Phone: +49 208 88254-806
E-Mail:
Ioannis Iossifidis studied physics (focus: theoretical particle physics) at the University of Dortmund and received his doctorate in 2006 from the Faculty of Physics and Astronomy at Ruhr-Universität Bochum.
At the Institut für Neuroinformatik, Prof. Dr. Iossifidis headed the Autonomous Robotics group and, with his research team, successfully participated in numerous research projects on artificial intelligence funded by the BMBF and the EU. Since 1 October 2010 he has been working at HRW in the Institut Informatik, where he holds the professorship in Theoretical Computer Science – Artificial Intelligence.
For more than 20 years, Prof. Dr. Ioannis Iossifidis has been developing biologically inspired, anthropomorphic, autonomous robotic systems that are both part and result of his research in computational neuroscience. In this context he has developed models of information processing in the human brain and applied them to technical systems.
The main focuses of his scientific work in recent years are the modelling of human arm movements, the design of so-called "simulated realities" for simulating and evaluating the interaction between humans, machines and the environment, and the development of cortical exoprosthetic components. The development of the theory and the application of machine-learning algorithms based on deep neural architectures form the cross-cutting theme of his research.
Ioannis Iossifidis' research has been supported, among others, by funding within major programmes of the BMBF (NEUROS, MORPHA, LOKI, DESIRE, Bernstein Focus: Neural Basis of Learning, etc.), the DFG ("Motor-parietal cortical neuroprosthesis with somatosensory feedback for restoring hand and arm functions in tetraplegic patients") and the EU (Neural Dynamics (STREP), EUCogII, EUCogIII), and he is among the winners of the NRW lead market competitions Gesundheit.NRW and IKT.NRW 2019.
MAIN AREAS OF WORK AND RESEARCH
- Computational Neuroscience
- Brain Computer Interfaces
- Development of cortical exoprosthetic components
- Theory of neural networks
- Modelling of human arm movements
- Simulated reality
SCIENTIFIC FACILITIES
- Lab (with link)
- ???
- ???
COURSES
- ???
- ???
- ???
PROJECTS
- Project (with link)
- ???
- ???
RESEARCH STAFF
Felix Grün
Office: 02.216 (Campus Bottrop)
Marie Schmidt
Office: 02.216 (Campus Bottrop)
Aline Xavier Fidencio
Visiting researcher
Muhammad Ayaz Hussain
Doctoral candidate
Tim Sziburis
Doctoral candidate
Farhad Rahmat
Student assistant
SELECTED PUBLICATIONS
2024
152. Fidêncio, Aline Xavier; Klaes, Christian; Iossifidis, Ioannis
A Generic Error-Related Potential Classifier Based on Simulated Subjects Article
In: Frontiers in Human Neuroscience, Vol. 18, 2024, ISSN: 1662-5161.
@article{xavierfidencioGenericErrorrelatedPotential2024,
title = {A Generic Error-Related Potential Classifier Based on Simulated Subjects},
author = {Aline Xavier Fidêncio and Christian Klaes and Ioannis Iossifidis},
url = {https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2024.1390714/full},
doi = {10.3389/fnhum.2024.1390714},
issn = {1662-5161},
year = {2024},
date = {2024-07-17},
urldate = {2024-07-17},
volume = {18},
publisher = {Frontiers},
abstract = {Error-related potentials (ErrPs) are brain signals known to be generated as a reaction to erroneous events. Several works have shown that not only self-made errors but also mistakes generated by external agents can elicit such event-related potentials. The possibility of reliably measuring ErrPs through non-invasive techniques has increased the interest in the brain-computer interface (BCI) community in using such signals to improve performance, for example, by performing error correction. Extensive calibration sessions are typically necessary to gather sufficient trials for training subject-specific ErrP classifiers. This procedure is not only time-consuming but also boresome for participants. In this paper, we explore the effectiveness of ErrPs in closed-loop systems, emphasizing their dependency on precise single-trial classification. To guarantee the presence of an ErrPs signal in the data we employ and to ensure that the parameters defining ErrPs are systematically varied, we utilize the open-source toolbox SEREEGA for data simulation. We generated training instances and evaluated the performance of the generic classifier on both simulated and real-world datasets, proposing a promising alternative to conventional calibration techniques. Results show that a generic support vector machine classifier reaches balanced accuracies of 72.9%, 62.7%, 71.0%, and 70.8% on each validation dataset. While performing similarly to a leave-one-subject-out approach for error class detection, the proposed classifier shows promising generalization across different datasets and subjects without further adaptation. Moreover, by utilizing SEREEGA, we can systematically adjust parameters to accommodate the variability in the ErrP, facilitating the systematic validation of closed-loop setups. Furthermore, our objective is to develop a universal ErrP classifier that captures the signal's variability, enabling it to determine the presence or absence of an ErrP in real EEG data.},
keywords = {adaptive brain-machine (computer) interface, BCI, EEG, Error-related potential (ErrP), ErrP classifier, Generic decoder, Machine Learning, SEREEGA, Simulation},
pubstate = {published},
tppubtype = {article}
}
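To illustrate the approach summarised in the abstract above (a generic, subject-independent ErrP classifier trained purely on simulated epochs), the following Python sketch is hypothetical and not the authors' published code; the array shapes, the linear-SVM choice and the placeholder data standing in for SEREEGA-simulated and recorded epochs are assumptions.

# Hypothetical sketch: train a generic ErrP classifier on simulated epochs only,
# then evaluate its balanced accuracy on real EEG epochs (calibration-free use).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
X_sim = rng.standard_normal((400, 64 * 128))    # placeholder for SEREEGA-simulated, flattened epochs
y_sim = rng.integers(0, 2, 400)                 # 1 = error, 0 = correct
X_real = rng.standard_normal((100, 64 * 128))   # placeholder for a recorded validation dataset
y_real = rng.integers(0, 2, 100)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", class_weight="balanced"))
clf.fit(X_sim, y_sim)                            # trained on simulation only, no subject calibration
print("balanced accuracy:", balanced_accuracy_score(y_real, clf.predict(X_real)))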
151. Ali, Omair; Saif-ur-Rehman, Muhammad; Metzler, Marita; Glasmachers, Tobias; Iossifidis, Ioannis; Klaes, Christian
GET: A Generative EEG Transformer for Continuous Context-Based Neural Signals Article
In: arXiv:2406.03115 [q-bio], 2024.
@article{aliGETGenerativeEEG2024,
title = {GET: A Generative EEG Transformer for Continuous Context-Based Neural Signals},
author = {Omair Ali and Muhammad Saif-ur-Rehman and Marita Metzler and Tobias Glasmachers and Ioannis Iossifidis and Christian Klaes},
url = {http://arxiv.org/abs/2406.03115},
doi = {10.48550/arXiv.2406.03115},
year = {2024},
date = {2024-06-09},
urldate = {2024-06-09},
journal = {arXiv:2406.03115 [q-bio]},
abstract = {Generating continuous electroencephalography (EEG) signals through advanced artificial neural networks presents a novel opportunity to enhance brain-computer interface (BCI) technology. This capability has the potential to significantly enhance applications ranging from simulating dynamic brain activity and data augmentation to improving real-time epilepsy detection and BCI inference. By harnessing generative transformer neural networks, specifically designed for EEG signal generation, we can revolutionize the interpretation and interaction with neural data. Generative AI has demonstrated significant success across various domains, from natural language processing (NLP) and computer vision to content creation in visual arts and music. It distinguishes itself by using large-scale datasets to construct context windows during pre-training, a technique that has proven particularly effective in NLP, where models are fine-tuned for specific downstream tasks after extensive foundational training. However, the application of generative AI in the field of BCIs, particularly through the development of continuous, context-rich neural signal generators, has been limited. To address this, we introduce the Generative EEG Transformer (GET), a model leveraging transformer architecture tailored for EEG data. The GET model is pre-trained on diverse EEG datasets, including motor imagery and alpha wave datasets, enabling it to produce high-fidelity neural signals that maintain contextual integrity. Our empirical findings indicate that GET not only faithfully reproduces the frequency spectrum of the training data and input prompts but also robustly generates continuous neural signals. By adopting the successful training strategies of the NLP domain for BCIs, the GET sets a new standard for the development and application of neural signal generation technologies.},
keywords = {BCI, EEG, Machine Learning, Quantitative Biology - Neurons and Cognition},
pubstate = {published},
tppubtype = {article}
}
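As a rough illustration of the idea behind GET (autoregressive transformer pre-training on EEG sequences), the sketch below is not the published architecture; the amplitude quantisation into tokens, the layer sizes and the toy data are assumptions.

# Hypothetical sketch: causal transformer trained to predict the next EEG sample
# (as a discrete token) from its left context.
import torch
import torch.nn as nn

class TinyEEGTransformer(nn.Module):
    def __init__(self, n_tokens=256, d_model=64, n_layers=2, context=128):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, d_model)
        self.pos = nn.Embedding(context, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_tokens)

    def forward(self, tokens):                       # tokens: (batch, time) integer codes
        t = tokens.shape[1]
        h = self.embed(tokens) + self.pos(torch.arange(t, device=tokens.device))
        causal = torch.triu(torch.full((t, t), float("-inf"), device=tokens.device), diagonal=1)
        return self.head(self.encoder(h, mask=causal))  # next-token logits at every position

# Assumed preprocessing: quantise a continuous EEG channel into 256 amplitude bins,
# then train the model to predict each sample from the preceding ones.
eeg = torch.randn(8, 128)                            # placeholder signals (batch, time)
tokens = torch.bucketize(eeg, torch.linspace(-3, 3, 255)).clamp(0, 255)
model = TinyEEGTransformer()
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, 256), tokens[:, 1:].reshape(-1))
loss.backward()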
2023
150. Ali, Omair; Saif-ur-Rehman, Muhammad; Glasmachers, Tobias; Iossifidis, Ioannis; Klaes, Christian
ConTraNet: A Hybrid Network for Improving the Classification of EEG and EMG Signals with Limited Training Data Article
In: Computers in Biology and Medicine, pp. 107649, 2023, ISSN: 0010-4825.
@article{aliConTraNetHybridNetwork2023,
title = {ConTraNet: A Hybrid Network for Improving the Classification of EEG and EMG Signals with Limited Training Data},
author = {Omair Ali and Muhammad Saif-ur-Rehman and Tobias Glasmachers and Ioannis Iossifidis and Christian Klaes},
url = {https://www.sciencedirect.com/science/article/pii/S0010482523011149},
doi = {10.1016/j.compbiomed.2023.107649},
issn = {0010-4825},
year = {2023},
date = {2023-11-02},
urldate = {2023-11-02},
journal = {Computers in Biology and Medicine},
pages = {107649},
abstract = {Objective Bio-Signals such as electroencephalography (EEG) and electromyography (EMG) are widely used for the rehabilitation of physically disabled people and for the characterization of cognitive impairments. Successful decoding of these bio-signals is however non-trivial because of the time-varying and non-stationary characteristics. Furthermore, existence of short- and long-range dependencies in these time-series signal makes the decoding even more challenging. State-of-the-art studies proposed Convolutional Neural Networks (CNNs) based architectures for the classification of these bio-signals, which are proven useful to learn spatial representations. However, CNNs because of the fixed size convolutional kernels and shared weights pay only uniform attention and are also suboptimal in learning short-long term dependencies, simultaneously, which could be pivotal in decoding EEG and EMG signals. Therefore, it is important to address these limitations of CNNs. To learn short- and long-range dependencies simultaneously and to pay more attention to more relevant part of the input signal, Transformer neural network-based architectures can play a significant role. Nonetheless, it requires a large corpus of training data. However, EEG and EMG decoding studies produce limited amount of the data. Therefore, using standalone transformers neural networks produce ordinary results. In this study, we ask a question whether we can fix the limitations of CNN and transformer neural networks and provide a robust and generalized model that can simultaneously learn spatial patterns, long-short term dependencies, pay variable amount of attention to time-varying non-stationary input signal with limited training data. Approach In this work, we introduce a novel single hybrid model called ConTraNet, which is based on CNN and Transformer architectures that contains the strengths of both CNN and Transformer neural networks. ConTraNet uses a CNN block to introduce inductive bias in the model and learn local dependencies, whereas the Transformer block uses the self-attention mechanism to learn the short- and long-range or global dependencies in the signal and learn to pay different attention to different parts of the signals. Main results We evaluated and compared the ConTraNet with state-of-the-art methods on four publicly available datasets (BCI Competition IV dataset 2b, Physionet MI-EEG dataset, Mendeley sEMG dataset, Mendeley sEMG V1 dataset) which belong to EEG-HMI and EMG-HMI paradigms. ConTraNet outperformed its counterparts in all the different category tasks (2-class, 3-class, 4-class, 7-class, and 10-class decoding tasks). Significance With limited training data ConTraNet significantly improves classification performance on four publicly available datasets for 2, 3, 4, 7, and 10-classes compared to its counterparts.},
keywords = {BCI, Brain computer interface, Deep learning, EEG decoding, EMG decoding, Machine Learning},
pubstate = {published},
tppubtype = {article}
}
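The hybrid CNN-plus-Transformer idea described in the abstract can be sketched as below; this is illustrative only, with assumed layer sizes and toy input shapes, and is not the published ConTraNet.

# Hypothetical sketch: CNN front-end (local temporal features, inductive bias)
# followed by a Transformer encoder (global self-attention) for trial classification.
import torch
import torch.nn as nn

class HybridCnnTransformer(nn.Module):
    def __init__(self, n_channels=22, n_classes=4, d_model=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=25, stride=4, padding=12),
            nn.BatchNorm1d(d_model),
            nn.ELU(),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)   # -> (batch, reduced time, d_model)
        h = self.transformer(h)           # self-attention over the reduced time axis
        return self.head(h.mean(dim=1))   # average pooling, then class logits

logits = HybridCnnTransformer()(torch.randn(8, 22, 500))
print(logits.shape)                       # torch.Size([8, 4])

The convolutional stride reduces the sequence length before attention, which keeps the Transformer block small enough to train on the limited trial counts typical of EEG/EMG studies.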
149. Fidencio, Aline Xavier; Klaes, Christian; Iossifidis, Ioannis
Exploring Error-related Potentials in Adaptive Brain-Machine Interfaces: Challenges and Investigation of Occurrence and Detection Ratios Proceedings Article
In: BC23 : Computational Neuroscience & Neurotechnology Bernstein Conference 2022, BCCN Bernstein Network Computational Network, 2023.
@inproceedings{xavierfidencioExploringErrorrelatedPotentials2023,
title = {Exploring Error-related Potentials in Adaptive Brain-Machine Interfaces: Challenges and Investigation of Occurrence and Detection Ratios},
author = {Aline Xavier Fidencio and Christian Klaes and Ioannis Iossifidis},
year = {2023},
date = {2023-09-15},
urldate = {2023-09-15},
booktitle = {BC23 : Computational Neuroscience & Neurotechnology Bernstein Conference 2022},
publisher = {BCCN Bernstein Network Computational Network},
abstract = {Non-invasive techniques like EEG can record error-related potentials (ErrPs), neural signals associated with error processing and awareness. ErrPs are generated in response to self-made and external errors, including those produced by the BMI. Since ErrPs are implicitly elicited and don’t add extra workload for the subject, they serve as a natural and intrinsic feedback source for developing adaptive BMIs. In our study, we assess the occurrence of interaction ErrPs in an adaptive BMI that combines ErrPs and reinforcement learning. We intentionally provoke ErrPs when the BMI misinterprets the user’s intention and performs an incorrect action. Subjects participated in a game controlled by a keyboard and/or motor imagery (imagining hand movements), and EEG data were recorded using an eight-electrode gel-based EEG system. Results reveal that obtaining a distinct ErrPs signal for each subject is more challenging than anticipated. Current practices report the ErrP in terms of over all subjects and trials difference grand average (error minus correct). This approach has, however, the limitation of masking the inter-trial and subject variability, which are relevant for the online single-trial detection of such signals. Moreover, the reported ErrPs waveshape exhibit differences in terms of components observed, as well as their respective latencies, even when very similar tasks are used. Consequently, we conducted additional individualized data analysis to gain deeper insights into the single-trial nature of the ErrPs. As a result, we determined the need for a better understanding and further investigation of how effectively the ErrPs waveforms generalize across subjects, tasks, experimental protocols, and feedback modalities. Given the challenges in obtaining a clear signal for all subjects and the limitations found in existing literature (Xavier Fidêncio et al., 2022), we hypothesize whether an error signal measurable at the scalp level is consistently generated when subjects encounter erroneous conditions. To address this question, we will assess the occurrence-to-detection ratio of ErrPs using invasive and non-invasive recording techniques, examining how uncertainties regarding error generation in the brain impact the learning pipeline.},
keywords = {BCI, EEG, Machine Learning},
pubstate = {published},
tppubtype = {inproceedings}
}
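The grand-average difference wave (error minus correct) that, as the abstract notes, can mask inter-trial and inter-subject variability, can be computed as in this minimal sketch; the epoch arrays and the single-channel view are placeholder assumptions.

# Hypothetical sketch: grand-average ErrP difference wave over subjects,
# plus the per-subject spread that the average hides.
import numpy as np

rng = np.random.default_rng(0)
# epochs[subject] has shape (n_trials, n_samples) for one channel, e.g. FCz
error_epochs = [rng.standard_normal((40, 256)) for _ in range(10)]
correct_epochs = [rng.standard_normal((120, 256)) for _ in range(10)]

subject_diffs = np.array([e.mean(axis=0) - c.mean(axis=0)
                          for e, c in zip(error_epochs, correct_epochs)])
grand_average = subject_diffs.mean(axis=0)       # (n_samples,) difference waveform
per_subject_sd = subject_diffs.std(axis=0)       # inter-subject variability per sample
print(grand_average.shape, per_subject_sd.max())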
148. Grün, Felix; Iossifidis, Ioannis
Investigation of the Interplay of Model-Based and Model-Free Learning Using Reinforcement Learning Proceedings Article
In: BC23 : Computational Neuroscience & Neurotechnology Bernstein Conference 2022, BCCN Bernstein Network Computational Network, 2023.
@inproceedings{grunInvestigationInterplayModelBased2023,
title = {Investigation of the Interplay of Model-Based and Model-Free Learning Using Reinforcement Learning},
author = {Felix Grün and Ioannis Iossifidis},
year = {2023},
date = {2023-09-15},
urldate = {2023-09-15},
booktitle = {BC23 : Computational Neuroscience & Neurotechnology Bernstein Conference 2022},
publisher = {BCCN Bernstein Network Computational Network},
abstract = {The reward prediction error hypothesis of dopamine in the brain states that activity of dopaminergic neurons in certain brain regions correlates with the reward prediction error that corresponds to the temporal difference error, often used as a learning signal in model free reinforcement learning (RL). This suggests that some form of reinforcement learning is used in animal and human brains when learning a task. On the other hand, it is clear that humans are capable of building an internal model of a task, or environment, and using it for planning, especially in sequential tasks. In RL, these two learning approaches, model-driven and reward-driven, are known as model based and model-free RL approaches. Both systems were previously thought to exist in parallel, with some higher process choosing which to use. A decade ago, research suggested both could be used concurrently, with some subject-specific weight assigned to each [1]. Still, the prevalent belief appeared to be that model-free learning is the default mechanism used, replaced or assisted by model-based planning only when the task demands it, i.e. higher rewards justify the additional cognitive effort. Recently, Feher da Silva et al. [2] questioned this belief, presenting data and analyses that indicate model-based learning may be used on its own and can even be computationally more efficient. We take a RL perspective, consider different ways to combine model-based and model-free approaches for modeling and for performance, and discuss how to further study this interplay in human behavioral experiments.},
keywords = {Machine Learning, Reinforcement learning},
pubstate = {published},
tppubtype = {inproceedings}
}
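As a compact illustration of the model-free versus model-based distinction discussed above, the sketch below computes a temporal-difference (reward prediction) error and a weighted combination of model-based and model-free action values; the tabular setting and the fixed weight w are assumptions, not the authors' model.

# Hypothetical sketch: model-free TD error plus a weighted hybrid with
# one-step model-based values, in the spirit of the weighting account cited above.
import numpy as np

n_states, n_actions, gamma, alpha, w = 5, 2, 0.95, 0.1, 0.6
Q_mf = np.zeros((n_states, n_actions))                          # model-free action values
T = np.full((n_states, n_actions, n_states), 1.0 / n_states)    # learned transition model
R = np.zeros((n_states, n_actions))                             # learned reward model

def hybrid_values(s):
    # Model-based values from one-step lookahead, mixed with the model-free estimate.
    Q_mb = R[s] + gamma * T[s] @ Q_mf.max(axis=1)
    return w * Q_mb + (1 - w) * Q_mf[s]

def td_update(s, a, r, s_next):
    # Reward prediction error (temporal-difference error), the putative dopamine signal.
    delta = r + gamma * Q_mf[s_next].max() - Q_mf[s, a]
    Q_mf[s, a] += alpha * delta
    return delta

print(td_update(0, 1, 1.0, 2), hybrid_values(0))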
147. Schmidt, Marie Dominique; Iossifidis, Ioannis
The Link between Muscle Activity and Upper Limb Kinematics Proceedings Article
In: BC23 : Computational Neuroscience & Neurotechnology Bernstein Conference 2022, BCCN Bernstein Network Computational Network, 2023.
@inproceedings{schmidtLinkMuscleActivity2023,
title = {The Link between Muscle Activity and Upper Limb Kinematics},
author = {Marie Dominique Schmidt and Ioannis Iossifidis},
year = {2023},
date = {2023-09-15},
urldate = {2023-09-15},
booktitle = {BC23 : Computational Neuroscience & Neurotechnology Bernstein Conference 2022},
publisher = {BCCN Bernstein Network Computational Network},
abstract = {The upper limbs are crucial in performing daily tasks that require strength, a wide range of motion, and precision. To achieve coordinated motion, planning and timing are critical. Sensory information about the target and the current body state is essential, as well as integrating past experiences, represented by pre-learned inverse dynamics that generate associated muscle activity. We propose a generative model that predicts upper limb muscle activity from a variety of simple and complex everyday motions by means of a recurrent neural network. The model shows promising results, with a good fit for different subjects and abstracts well for new motions. We handle the high inter-subject variation in muscle activity using a transfer learning approach, resulting in a good fit for new subjects. Our approach has implications for fundamental movement control understanding and the rehabilitation of neuromuscular diseases using myoelectric prostheses and functional electrical stimulation. Our model can efficiently predict both muscle activity and motion trajectory, which can assist in developing more effective rehabilitation techniques.},
keywords = {BCI, Machine Learning},
pubstate = {published},
tppubtype = {inproceedings}
}
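A minimal version of a recurrent network that maps upper-limb kinematics to muscle-activity envelopes, as described in the abstract, could look like the sketch below; the input and output dimensions and the GRU choice are assumptions rather than the published model.

# Hypothetical sketch: recurrent network predicting muscle activity envelopes
# from upper-limb kinematics (joint angles, velocities, accelerations).
import torch
import torch.nn as nn

class KinematicsToEMG(nn.Module):
    def __init__(self, n_kinematics=21, n_muscles=8, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_kinematics, hidden, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden, n_muscles)

    def forward(self, x):                  # x: (batch, time, n_kinematics)
        h, _ = self.rnn(x)
        return torch.relu(self.out(h))     # non-negative activity envelopes

model = KinematicsToEMG()
kin = torch.randn(4, 200, 21)              # placeholder motion segments
emg_true = torch.rand(4, 200, 8)           # placeholder rectified, smoothed EMG
loss = nn.functional.mse_loss(model(kin), emg_true)
loss.backward()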
146. Sziburis, Tim; Blex, Susanne; Iossifidis, Ioannis
Variability Study of Human Hand Motion during 3D Center-out Tasks Captured for the Diagnosis of Movement Disorders Proceedings Article
In: BC23 : Computational Neuroscience & Neurotechnology Bernstein Conference 2022, BCCN Bernstein Network Computational Network, 2023.
@inproceedings{sziburisVariabilityStudyHuman2023,
title = {Variability Study of Human Hand Motion during 3D Center-out Tasks Captured for the Diagnosis of Movement Disorders},
author = {Tim Sziburis and Susanne Blex and Ioannis Iossifidis},
year = {2023},
date = {2023-09-15},
urldate = {2023-09-15},
booktitle = {BC23 : Computational Neuroscience & Neurotechnology Bernstein Conference 2022},
publisher = {BCCN Bernstein Network Computational Network},
abstract = {Variability analysis bears the potential to differentiate between healthy and pathological human movements [1]. Our study is conducted in the context of developing a portable glove for the diagnosis of movement disorders. This proposal has methodical as well as technical requirements. Generally, the identification of movement disorders via an analysis of motion data needs to be confirmed within the given setup. Typically, rhythmic movements like gait or posture control are examined for their variability, but here, the characteristic pathological traits of arm movement like tremors are under observation. In addition, the usability of a portable sensor instead of a stationary tracking system has to be validated. In this part of the project, human motion data are recorded redundantly by both an optical tracking system and an IMU. In our setup, a small cylinder is transported in three-dimensional space from a unified start position to one of nine target positions, which are equidistantly aligned on a semicircle. 10 trials are performed per target and hand, resulting in 180 trials per participant in total. 31 participants (11 female and 20 male) without known movement disorders, aged between 21 and 78 years, took part in the study. In addition, the 10-item EHI is used. The purpose of the analysis is to compare different variability measures to uncover differences between trials (intra-subject variability) and participants (inter-subject variability), especially in terms of age and handedness effects. Particularly, a novel variability measure is introduced which makes use of the characteristic planarity of the examined hand paths [2]. For this, the angle of the plane which best fits the travel phase of the trajectory is determined. In addition to neurological motivation, the advantage of this measure is that it allows the comparison of trials of different time spans and to different target directions without depending on trajectory warping. In the future, measurements of the same experimental setup with patients experiencing movement disorders are planned. For the subsequent pathological analysis, this study provides a basis in terms of methodological considerations and ground truth data of healthy participants. In parallel, the captured motion data are modelled utilizing dynamical systems (extended attractor dynamics approach). For this approach, the recorded and modelled data can be compared by the variability measures examined in this study.},
keywords = {movement model},
pubstate = {published},
tppubtype = {inproceedings}
}
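The planarity-based variability measure mentioned in the abstract relies on fitting a plane to the travel phase of the hand path; the sketch below does this with an SVD. The placeholder trajectory and the choice of reporting the normal's angle to the vertical axis are illustrative assumptions.

# Hypothetical sketch: fit a plane to a 3D hand trajectory via SVD and report
# the orientation of its normal as a planarity-based variability measure.
import numpy as np

rng = np.random.default_rng(0)
trajectory = rng.standard_normal((150, 3))           # placeholder (time, xyz) hand path

centered = trajectory - trajectory.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
normal = vt[-1]                                       # direction of least variance = plane normal
rmse = np.sqrt(np.mean((centered @ normal) ** 2))     # out-of-plane deviation

# Illustrative angle between the plane normal and the vertical (z) axis.
angle_deg = np.degrees(np.arccos(np.clip(abs(normal[2]), 0.0, 1.0)))
print(angle_deg, rmse)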
145. Hussain, Muhammad Ayaz; Iossifidis, Ioannis
Advancements in Upper Body Exoskeleton: Implementing Active Gravity Compensation with a Feedforward Controller Article
In: arXiv:2309.04698 [cs.RO], 2023.
@article{ayazhussainAdvancementsUpperBody2023,
title = {Advancements in Upper Body Exoskeleton: Implementing Active Gravity Compensation with a Feedforward Controller},
author = {Muhammad Ayaz Hussain and Ioannis Iossifidis},
url = {https://doi.org/10.48550/arXiv.2309.04698},
doi = {10.48550/arXiv.2309.04698},
year = {2023},
date = {2023-09-09},
urldate = {2023-09-09},
journal = {arXiv:2309.04698 [cs.RO]},
abstract = {In this study, we present a feedforward control system designed for active gravity compensation on an upper body exoskeleton. The system utilizes only positional data from internal motor sensors to calculate torque, employing analytical control equations based on Newton-Euler Inverse Dynamics. Compared to feedback control systems, the feedforward approach offers several advantages. It eliminates the need for external torque sensors, resulting in reduced hardware complexity and weight. Moreover, the feedforward control exhibits a more proactive response, leading to enhanced performance. The exoskeleton used in the experiments is lightweight and comprises 4 Degrees of Freedom, closely mimicking human upper body kinematics and three-dimensional range of motion. We conducted tests on both hardware and simulations of the exoskeleton, demonstrating stable performance. The system maintained its position over an extended period, exhibiting minimal friction and avoiding undesired slewing.},
keywords = {Autonomous robotics, BCI, Computer Science - Artificial Intelligence, Computer Science - Information Theory, Computer Science - Machine Learning, Exoskeleton},
pubstate = {published},
tppubtype = {article}
}
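For a feel of feedforward gravity compensation from joint positions alone, the sketch below computes the compensating torques for a planar two-link arm; the link masses and lengths are assumptions, and the exoskeleton described above uses full Newton-Euler inverse dynamics over 4 degrees of freedom rather than this reduced example.

# Hypothetical sketch: feedforward gravity-compensation torques for a planar
# two-link arm, computed from joint positions alone (no torque sensing).
import numpy as np

g = 9.81
m1, m2 = 2.0, 1.5          # assumed link masses [kg]
lc1, lc2 = 0.15, 0.20      # assumed distances to link centres of mass [m]
l1 = 0.30                  # assumed length of the first link [m]

def gravity_torque(q1, q2):
    # Joint torques that exactly cancel gravity in the current configuration
    # (angles measured from the horizontal, q2 relative to the first link).
    tau2 = m2 * g * lc2 * np.cos(q1 + q2)
    tau1 = (m1 * lc1 + m2 * l1) * g * np.cos(q1) + tau2
    return np.array([tau1, tau2])

print(gravity_torque(np.deg2rad(30), np.deg2rad(45)))

Because the compensation depends only on the measured joint angles and a kinematic/mass model, no external torque sensors are needed, which is the advantage highlighted in the abstract.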
144. Schmidt, Marie D.; Glasmachers, Tobias; Iossifidis, Ioannis
The Concepts of Muscle Activity Generation Driven by Upper Limb Kinematics Article
In: BioMedical Engineering OnLine, Vol. 22, No. 1, pp. 63, 2023, ISSN: 1475-925X.
@article{schmidtConceptsMuscleActivity2023,
title = {The Concepts of Muscle Activity Generation Driven by Upper Limb Kinematics},
author = {Marie D. Schmidt and Tobias Glasmachers and Ioannis Iossifidis},
url = {https://doi.org/10.1186/s12938-023-01116-9},
doi = {10.1186/s12938-023-01116-9},
issn = {1475-925X},
year = {2023},
date = {2023-06-24},
urldate = {2023-06-24},
journal = {BioMedical Engineering OnLine},
volume = {22},
number = {1},
pages = {63},
abstract = {The underlying motivation of this work is to demonstrate that artificial muscle activity of known and unknown motion can be generated based on motion parameters, such as angular position, acceleration, and velocity of each joint (or the end-effector instead), which are similarly represented in our brains. This model is motivated by the known motion planning process in the central nervous system. That process incorporates the current body state from sensory systems and previous experiences, which might be represented as pre-learned inverse dynamics that generate associated muscle activity.},
keywords = {Artificial generated signal, BCI, Electromyography (EMG), Generative model, Inertial measurement unit (IMU), Machine Learning, Motion parameters, Muscle activity, Neural networks, transfer learning, Voluntary movement},
pubstate = {published},
tppubtype = {article}
}
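The transfer learning referenced in the keywords above (adapting a pre-trained generative model to a new subject) can be sketched by freezing the shared layers and fine-tuning only the output layer; the small network and the data here are hypothetical stand-ins, not the published pipeline.

# Hypothetical sketch: subject transfer by freezing pre-trained layers and
# fine-tuning only the output layer on a small amount of new-subject data.
import torch
import torch.nn as nn

class KinToEMG(nn.Module):                          # stand-in for the pre-trained network
    def __init__(self, n_in=21, n_out=8, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_in, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_out)
    def forward(self, x):
        h, _ = self.rnn(x)
        return torch.relu(self.head(h))

model = KinToEMG()                                  # assume weights loaded from pre-training
for p in model.rnn.parameters():                    # freeze the shared representation
    p.requires_grad = False
model.head = nn.Linear(64, 8)                       # fresh output layer for the new subject

optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
x_new, y_new = torch.randn(4, 200, 21), torch.rand(4, 200, 8)   # small new-subject set
loss = nn.functional.mse_loss(model(x_new), y_new)
loss.backward()
optimizer.step()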
143. Saif-ur-Rehman, Muhammad; Ali, Omair; Klaes, Christian; Iossifidis, Ioannis
Adaptive SpikeDeep-Classifier: Self-organizing and self-supervised machine learning algorithm for online spike sorting Article
In: arXiv:2304.01355 [cs, math, q-bio], 2023.
@article{saifurrehman2023adaptive,
title = {Adaptive SpikeDeep-Classifier: Self-organizing and self-supervised machine learning algorithm for online spike sorting},
author = {Muhammad Saif-ur-Rehman and Omair Ali and Christian Klaes and Ioannis Iossifidis},
doi = {10.48550/arXiv.2304.01355},
year = {2023},
date = {2023-05-02},
urldate = {2023-05-02},
journal = {arXiv:2304.01355 [cs, math, q-bio]},
keywords = {BCI, Machine Learning, Spike Sorting},
pubstate = {published},
tppubtype = {article}
}