Teaching area: Theoretical Computer Science and Artificial Intelligence
Office: 01.214
Lab: 04.105
Phone: +49 208 88254-806
E-mail:
Ioannis Iossifidis studied physics (focus: theoretical particle physics) at the University of Dortmund and received his doctorate in 2006 from the Faculty of Physics and Astronomy of Ruhr University Bochum.
At the Institut für Neuroinformatik, Prof. Dr. Iossifidis headed the Autonomous Robotics research group and, with his group, successfully took part in numerous artificial-intelligence research projects funded by the BMBF and the EU. Since 1 October 2010 he has been working at the HRW in the Institute of Computer Science, where he holds the chair of Theoretical Computer Science – Artificial Intelligence.
For more than 20 years, Prof. Dr. Ioannis Iossifidis has been developing biologically inspired anthropomorphic autonomous robot systems, which are both part and product of his research in computational neuroscience. In this context he has developed models of information processing in the human brain and applied them to technical systems.
Established focal points of his scientific work in recent years include the modeling of human arm movements, the design of so-called "simulated realities" for simulating and evaluating the interaction between humans, machines, and the environment, and the development of cortical exoprosthetic components. The development of the theory of machine-learning algorithms based on deep neural architectures, and their application, forms the cross-cutting theme of his research.
Ioannis Iossifidis' research has been funded, among others, within major programs of the BMBF (NEUROS, MORPHA, LOKI, DESIRE, Bernstein Focus: Neural Foundations of Learning, etc.), the DFG ("Motor-parietal cortical neuroprosthesis with somatosensory feedback for restoring hand and arm functions in tetraplegic patients"), and the EU (Neural Dynamics – EU (STREP), EUCogII, EUCogIII), and he is among the winners of the Gesundheit.NRW and IKT.NRW 2019 lead-market competitions.
AREAS OF WORK AND RESEARCH
- Computational Neuroscience
- Brain-Computer Interfaces
- Development of cortical exoprosthetic components
- Theory of neural networks
- Modeling of human arm movements
- Simulated reality
SCIENTIFIC FACILITIES
- Lab with link
- ???
- ???
COURSES
- ???
- ???
- ???
PROJECTS
- Project with link
- ???
- ???
RESEARCH STAFF
Felix Grün
Office: 02.216 (Campus Bottrop)
Marie Schmidt
Office: 02.216 (Campus Bottrop)
Aline Xavier Fidencio
Visiting researcher
Muhammad Ayaz Hussain
Doctoral researcher
Tim Sziburis
Doctoral researcher
Farhad Rahmat
Student assistant
SELECTED PUBLICATIONS
2024
160. Sziburis, Tim; Blex, Susanne; Glasmachers, Tobias; Iossifidis, Ioannis
Deep-learning-based identification of individual motion characteristics from upper-limb trajectories towards disorder stage evaluation Proceedings Article
In: Pons, Jose L.; Tornero, Jesus; Akay, Metin (Eds.): Converging Clinical and Engineering Research on Neurorehabilitation V - Proceedings of the 6th International Conference on Neurorehabilitation (ICNR2024), Springer International Publishing, La Granja, Spain, 2024.
@inproceedings{icnr2024,
title = {Deep-learning-based identification of individual motion characteristics from upper-limb trajectories towards disorder stage evaluation},
author = {Tim Sziburis and Susanne Blex and Tobias Glasmachers and Ioannis Iossifidis},
editor = {Jose L. Pons and Jesus Tornero and Metin Akay},
year = {2024},
date = {2024-11-30},
urldate = {2024-11-01},
booktitle = {Converging Clinical and Engineering Research on Neurorehabilitation V - Proceedings of the 6th International Conference on Neurorehabilitation (ICNR2024)},
publisher = {Springer International Publishing},
address = {La Granja, Spain},
series = {Biosystems and Biorobotics},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
159. Lehmler, Stephan Johann; Saif-ur-Rehman, Muhammad; Glasmachers, Tobias; Iossifidis, Ioannis
Understanding Activation Patterns in Artificial Neural Networks by Exploring Stochastic Processes: Discriminating Generalization from Memorization Journal Article
In: Neurocomputing, pp. 128473, 2024, ISSN: 0925-2312.
@article{lehmlerUnderstandingActivationPatterns2024,
title = {Understanding Activation Patterns in Artificial Neural Networks by Exploring Stochastic Processes: Discriminating Generalization from Memorization},
author = {Stephan Johann Lehmler and Muhammad Saif-ur-Rehman and Tobias Glasmachers and Ioannis Iossifidis},
editor = {Elsevier},
url = {https://www.sciencedirect.com/science/article/pii/S092523122401244X},
doi = {10.1016/j.neucom.2024.128473},
issn = {0925-2312},
year = {2024},
date = {2024-09-19},
urldate = {2024-09-19},
journal = {Neurocomputing},
pages = {128473},
abstract = {To gain a deeper understanding of the behavior and learning dynamics of artificial neural networks, mathematical abstractions and models are valuable. They provide a simplified perspective and facilitate systematic investigations. In this paper, we propose to analyze dynamics of artificial neural activation using stochastic processes, which have not been utilized for this purpose thus far. Our approach involves modeling the activation patterns of nodes in artificial neural networks as stochastic processes. By focusing on the activation frequency, we can leverage techniques used in neuroscience to study neural spike trains. Specifically, we extract the activity of individual artificial neurons during a classification task and model their activation frequency. The underlying process model is an arrival process following a Poisson distribution. We examine the theoretical fit of the observed data generated by various artificial neural networks in image recognition tasks to the proposed model’s key assumptions. Through the stochastic process model, we derive measures describing activation patterns of each network. We analyze randomly initialized, generalizing, and memorizing networks, allowing us to identify consistent differences in learning methods across multiple architectures and training sets. We calculate features describing the distribution of Activation Rate and Fano Factor, which prove to be stable indicators of memorization during learning. These calculated features offer valuable insights into network behavior. The proposed model demonstrates promising results in describing activation patterns and could serve as a general framework for future investigations. It has potential applications in theoretical simulation studies as well as practical areas such as pruning or transfer learning.},
keywords = {Artificial neural networks, Generalization, Machine Learning, Memorization, Poisson process, Stochastic modeling},
pubstate = {published},
tppubtype = {article}
}
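The Poisson-process view of unit activations described in this abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it treats threshold crossings of a hidden layer's outputs as arrival events and computes a per-unit activation rate and a Fano factor (variance over mean of event counts), the two indicator families named above. The function name and the simple thresholding rule are assumptions for illustration.

```python
import numpy as np

def activation_stats(activations, threshold=0.0):
    """Treat each unit's outputs over a batch as an arrival process:
    an 'event' occurs when the output exceeds the threshold. Returns
    the per-unit event rate and the layer's Fano factor
    (variance/mean of per-sample event counts)."""
    # activations: (n_samples, n_units) hidden-layer outputs
    events = (activations > threshold).astype(int)   # spike-train analogue
    rate = events.sum(axis=0) / events.shape[0]      # mean "firing" rate per unit
    layer_counts = events.sum(axis=1)                # events per sample across the layer
    mean = layer_counts.mean()
    fano = layer_counts.var() / mean if mean > 0 else np.nan
    return rate, fano
```

Rates near 0 or 1, or a Fano factor far from that of a Poisson process, would flag unusual activation statistics; the abstract reports such features as stable indicators of memorization.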
158. Lehmler, Stephan Johann; Iossifidis, Ioannis
3D Movement Analysis of the Ruhr Hand Motion Catalog of Human Center-Out Transport Trajectories Proceedings Article
In: BC24 : Computational Neuroscience & Neurotechnology Bernstein Conference 2024, BCCN Bernstein Network Computational Neuroscience, 2024.
@inproceedings{3DMovementAnalysis2024,
title = {3D Movement Analysis of the Ruhr Hand Motion Catalog of Human Center-Out Transport Trajectories},
author = {Stephan Johann Lehmler and Ioannis Iossifidis},
url = {https://abstracts.g-node.org/conference/BC24/abstracts#/uuid/719bca6f-9fb9-4e53-96a5-a7b36b67c012},
year = {2024},
date = {2024-09-18},
urldate = {2024-09-18},
booktitle = {BC24 : Computational Neuroscience & Neurotechnology Bernstein Conference 2024},
publisher = {BCCN Bernstein Network Computational Neuroscience},
abstract = {The Ruhr Hand Motion Catalog of Human Center-Out Transport Trajectories [1] is a compilation of three-dimensional task-space motion data simultaneously measured by two motion tracking systems. The first one, an optical motion capture system, provided robust reference data. The second recording system consisted of a single state-of-the-art IMU to demonstrate the feasibility of portable applications. The transport object was moved in 3D space from a unified start position to one of nine target positions, equidistantly aligned on a semicircle. Ten trials were performed per target and hand, resulting in 180 trials per participant in total. 31 participants (11 female, 20 male, age 21-78) without known movement disorders took part in the experiment. Based on those experimental data, we analyze several characteristics of upper-limb trajectories. All data are rotated so that the straight connection of the defined start and target positions composes the y-axis. By doing so, we explore properties which are independent of the directly measured target location for each task and focus on common properties shared between all target movements. Particularly, we investigate how individual or target-dependent differences can still be quantified after rotation. Furthermore, we model the measured movements by means of dynamical systems (extended attractor dynamics). Differences between the transportation movements to different targets would result in varying parameter sets. The investigated motion characteristics include the symmetry of velocity peaks and the polynomial target dependence of planarity attributes. To compare the diversity of trajectories in time and space, we introduce a novel variability measure for the planarity of hand paths regarding plane angles and path amplitudes within the plane. 
These aspects can expose differences between trials (intra-subject) and participants (inter-subject), explored in the modelling process and applied as a methodological framework for pathological analysis. For this, further measurements with patients experiencing movement disorders are planned for future examination. The separability can also be evaluated by machine learning of task classification and user identification. This can provide information on the potential of data-driven pathological analysis to extend the model-based approach since the described experiment and study are conducted in the context of developing a portable glove for the diagnosis of movement disorders.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
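The rotation step described in this abstract (aligning the straight start-to-target connection with the y-axis) can be sketched as below. The helper-vector construction of the remaining two axes is an assumption for illustration, since any orthonormal completion of the target axis works; the function name is hypothetical.

```python
import numpy as np

def rotate_to_target_frame(traj, start, target):
    """Rotate a 3D trajectory so the straight start->target line
    becomes the y-axis and the start maps to the origin."""
    d = np.asarray(target, float) - np.asarray(start, float)
    y = d / np.linalg.norm(d)                 # new y-axis along start->target
    helper = np.array([1.0, 0.0, 0.0])        # any vector not parallel to y
    if abs(np.dot(helper, y)) > 0.9:
        helper = np.array([0.0, 0.0, 1.0])
    x = np.cross(helper, y)
    x /= np.linalg.norm(x)
    z = np.cross(x, y)                        # completes the right-handed frame
    R = np.stack([x, y, z])                   # rows are the new basis vectors
    return (np.asarray(traj, float) - start) @ R.T
```

After this transform the target lies on the positive y-axis for every trial, so properties independent of target direction (e.g. planarity, velocity-peak symmetry) can be compared across movements.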
157. Grün, Felix; Iossifidis, Ioannis
Controversial Opinions on Model Based and Model Free Reinforcement Learning in the Brain Proceedings Article
In: BCCN Bernstein Network Computational Neuroscience, 2024.
@inproceedings{ControversialOpinionsModel2024,
title = {Controversial Opinions on Model Based and Model Free Reinforcement Learning in the Brain},
author = {Felix Grün and Ioannis Iossifidis},
url = {https://abstracts.g-node.org/conference/BC24/abstracts#/uuid/18e92e07-e4b1-43af-b2ac-ea282f4e81e7},
year = {2024},
date = {2024-09-18},
urldate = {2024-09-24},
publisher = {BCCN Bernstein Network Computational Neuroscience},
abstract = {Dopaminergic Reward Prediction Errors (RPEs) are a key motivation and inspiration for model free, temporal difference reinforcement learning methods. Originally, the correlation of RPEs with model free temporal difference errors was seen as a strong indicator for model free reinforcement learning in brains. The standard view was that model free learning is the norm and more computationally expensive model based decision-making is only used when it leads to outcomes that are good enough to justify the additional effort. Nowadays, the landscape of opinions, models and experimental evidence, both electrophysiological and behavioral, paints a more complex picture, including but not limited to mechanisms of arbitration between the two systems. Model based learning or hybrid models better capture experimental behavioral data, and model based signatures are found in RPEs that were previously thought to be model free or hybrid [1]. The evidence for clearly model free learning is scarce [2]. On the other hand, multiple approaches show how model based behavior and RPEs can be produced with fundamentally model free reinforcement learning methods [3, 4, 5]. We point out findings that seem to contradict each other, others that complement each other, speculate which ideas are compatible with each other and give our opinions on ways forward, towards understanding if and how model based and model free learning from rewards coexist and interact in the brain.},
keywords = {Machine Learning, Reinforcement learning},
pubstate = {published},
tppubtype = {inproceedings}
}
156. Schmidt, Marie Dominique; Iossifidis, Ioannis
Decoding Upper Limb Movements Proceedings Article
In: BCCN Bernstein Network Computational Neuroscience, 2024.
@inproceedings{DecodingUpperLimb2024,
title = {Decoding Upper Limb Movements},
author = {Marie Dominique Schmidt and Ioannis Iossifidis},
url = {https://abstracts.g-node.org/conference/BC24/abstracts#/uuid/4725140f-ce7c-4ac5-b694-c627ceeb8d98},
year = {2024},
date = {2024-09-18},
urldate = {2024-09-24},
publisher = {BCCN Bernstein Network Computational Neuroscience},
abstract = {The upper limbs are essential for performing everyday tasks that require a wide range of motion and precise coordination. Planning and timing are crucial to achieve coordinated movement. Sensory information about the target and current body state is critical, as is the integration of prior experience represented by prelearned inverse dynamics that generate the associated muscle activity. We propose a generative model that uses a recurrent neural network to predict upper limb muscle activity during various simple and complex everyday movements. By identifying movement primitives within the signal, our model enables the decomposition of these movements into a fundamental set, facilitating the reconstruction of muscle activity patterns. Our approach has implications for the fundamental understanding of movement control and the rehabilitation of neuromuscular disorders with myoelectric prosthetics and functional electrical stimulation.},
keywords = {BCI, Machine Learning, Muscle activity},
pubstate = {published},
tppubtype = {inproceedings}
}
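As a rough illustration of the kind of recurrent mapping this abstract describes, the sketch below runs a minimal hand-written recurrent step from a kinematic sequence to muscle activations in (0, 1). The architecture, weight shapes, and nonlinearities are illustrative assumptions, not the model from the paper.

```python
import numpy as np

def recurrent_emg_sketch(kinematics, Wx, Wh, Wo):
    """Map a sequence of per-timestep movement parameters to predicted
    muscle activations via a single tanh recurrent layer and a sigmoid
    readout (activations bounded in (0, 1))."""
    h = np.zeros(Wh.shape[0])                 # hidden state
    outputs = []
    for x in kinematics:                      # one time step at a time
        h = np.tanh(Wx @ x + Wh @ h)          # recurrent state update
        outputs.append(1.0 / (1.0 + np.exp(-(Wo @ h))))  # muscle channels
    return np.array(outputs)
```

In the paper's framing, a trained network of this general kind stands in for the prelearned inverse dynamics; decomposing its predictions into movement primitives is a separate analysis step not shown here.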
155. Lehmler, Stephan Johann; Iossifidis, Ioannis
Stochastic Process Model Derived Indicators of Overfitting for Deep Architectures: Applicability to Small Sample Recalibration of sEMG Decoders Proceedings Article
In: BC24 : Computational Neuroscience & Neurotechnology Bernstein Conference 2024, BCCN Bernstein Network Computational Neuroscience, 2024.
@inproceedings{StochasticProcessModel2024,
title = {Stochastic Process Model Derived Indicators of Overfitting for Deep Architectures: Applicability to Small Sample Recalibration of sEMG Decoders},
author = {Stephan Johann Lehmler and Ioannis Iossifidis},
url = {https://abstracts.g-node.org/conference/BC24/abstracts#/uuid/72f03ff1-61dc-443c-92c2-b623d672ce15},
year = {2024},
date = {2024-09-18},
urldate = {2024-09-24},
booktitle = {BC24 : Computational Neuroscience & Neurotechnology Bernstein Conference 2024},
publisher = {BCCN Bernstein Network Computational Neuroscience},
abstract = {Our recent work presents a stochastic process model of the activations within an ANN and shows a promising indicator to distinguish memorizing from generalizing ANNs. The average λ, or mean firing rate (MFR), of a hidden layer, shows stable differences between memorizing and generalizing networks, comparatively independent of the underlying data used for evaluation. We first show the performance of this indicator during training on benchmark computer vision datasets such as MNIST and CIFAR-10. In a second step, we extend the work to the real-life use case of calibrating a pre-trained model to a new user. We focus on decoding surface electromyographic (sEMG) signals, which are highly variable within and between users, and therefore necessitate frequent user calibration. Especially in situations when user calibration has to rely only on a small number of samples, degradation in performance over time due to memorization and overfitting is not an unlikely outcome. In those cases, traditional regularization methods that function by observing the performance on a validation set, such as early stopping, don’t necessarily work, because they are evaluated on data from the same subject and set of movements, whose features are being memorized. Our new indicators of memorization could help as stable indicators for model performance and give live insights during model calibration when more samples from the new users would be necessary. We evaluate the usefulness of the MFR-indicator for identifying the moment a pre-trained sEMG decoder starts to memorize given inputs.},
keywords = {Machine Learning},
pubstate = {published},
tppubtype = {inproceedings}
}
154. Fidencio, Aline Xavier; Klaes, Christian; Iossifidis, Ioannis
Adaptive Brain-Computer Interfaces Based on Error-Related Potentials and Reinforcement Learning Proceedings Article
In: BC24 : Computational Neuroscience & Neurotechnology Bernstein Conference 2024, BCCN Bernstein Network Computational Neuroscience, 2024.
@inproceedings{AdaptiveBraincomputerInterfaces2024,
title = {Adaptive Brain-Computer Interfaces Based on Error-Related Potentials and Reinforcement Learning},
author = {Aline Xavier Fidencio and Christian Klaes and Ioannis Iossifidis},
url = {https://abstracts.g-node.org/conference/BC24/abstracts#/uuid/03d3dd16-4c50-43d8-b878-abcfa7857386},
year = {2024},
date = {2024-09-18},
urldate = {2024-09-24},
booktitle = {BC24 : Computational Neuroscience & Neurotechnology Bernstein Conference 2024},
publisher = {BCCN Bernstein Network Computational Neuroscience},
abstract = {Error-related potentials (ErrPs) represent the neural signature of error processing in the brain and numerous studies have demonstrated their reliable detection using non-invasive techniques such as electroencephalography (EEG). Over recent decades, the brain-computer interface (BCI) community has shown growing interest in leveraging these intrinsic feedback signals to enhance system performance. However, the effective use of ErrPs in a closed-loop setup crucially depends on accurate single-trial detection, which is typically achieved using a subject-specific classifier (or decoder) trained on samples recorded during extensive calibration sessions before the BCI system can be deployed. In our research, we explore the potential of simulated EEG data for training a truly generic ErrP classifier. Utilizing the SEREEGA simulator, we demonstrate that EEG data can be generated in a cost-effective manner, allowing for controlled and systematic variations in data distribution to accommodate uncertainties in ErrP generation. A classifier trained solely on the generated data exhibits promising generalization capabilities across different datasets and performs comparably to a leave-one-subject-out approach trained on real data (Xavier Fidêncio et al., 2024). In our experiments, we deliberately provoked ErrPs when the BCI misinterpreted the user's intention, resulting in incorrect actions. Subjects engaged in a game controlled via keyboard and/or motor imagery (imagining hand movements), with EEG data recorded using various EEG systems for comparison. Considering the challenges in obtaining clear ErrP signals for all subjects and the limitations identified in existing literature (Xavier Fidêncio et al., 2022), we hypothesize whether a measurable error signal is consistently generated at the scalp level when subjects encounter erroneous conditions, and how this influences closed-loop setups that incorporate ErrPs for improved BCI performance. 
To address these questions, we assess the effects of the occurrence-to-detection ratio of ErrPs in the classification pipeline using simulated data and explore the impact of error misclassification rates in an ErrP-based learning framework, which employs reinforcement learning to enhance BCI performance.},
keywords = {BCI, Machine Learning},
pubstate = {published},
tppubtype = {inproceedings}
}
153. Pilacinski, Artur; Christ, Lukas; Boshoff, Marius; Iossifidis, Ioannis; Adler, Patrick; Miro, Michael; Kuhlenkötter, Bernd; Klaes, Christian
Human in the Collaborative Loop: A Strategy for Integrating Human Activity Recognition and Non-Invasive Brain-Machine Interfaces to Control Collaborative Robots Journal Article
In: Frontiers in Neurorobotics, Vol. 18, 2024, ISSN: 1662-5218.
@article{pilacinskiHumanCollaborativeLoop2024,
title = {Human in the Collaborative Loop: A Strategy for Integrating Human Activity Recognition and Non-Invasive Brain-Machine Interfaces to Control Collaborative Robots},
author = {Artur Pilacinski and Lukas Christ and Marius Boshoff and Ioannis Iossifidis and Patrick Adler and Michael Miro and Bernd Kuhlenkötter and Christian Klaes},
url = {https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2024.1383089/full},
doi = {10.3389/fnbot.2024.1383089},
issn = {1662-5218},
year = {2024},
date = {2024-09-18},
urldate = {2024-09-18},
journal = {Frontiers in Neurorobotics},
volume = {18},
publisher = {Frontiers},
keywords = {brain-machine interfaces, EEG, Human action recognition, human-robot collaboration, Sensor Fusion},
pubstate = {published},
tppubtype = {article}
}
152.Fidêncio, Aline Xavier; Klaes, Christian; Iossifidis, Ioannis
A Generic Error-Related Potential Classifier Based on Simulated Subjects Article
In: Frontiers in Human Neuroscience, Vol. 18, pp. 1390714, 2024, ISSN: 1662-5161.
Abstract | Links | BibTeX | Keywords: adaptive brain-machine (computer) interface, BCI, EEG, Error-related potential (ErrP), ErrP classifier, Generic decoder, Machine Learning, SEREEGA, Simulation
@article{xavierfidencioGenericErrorrelatedPotential2024,
title = {A Generic Error-Related Potential Classifier Based on Simulated Subjects},
author = {Aline Xavier Fidêncio and Christian Klaes and Ioannis Iossifidis},
editor = {Frontiers Media SA},
url = {https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2024.1390714/full},
doi = {10.3389/fnhum.2024.1390714},
issn = {1662-5161},
year = {2024},
date = {2024-07-19},
urldate = {2024-07-17},
journal = {Frontiers in Human Neuroscience},
volume = {18},
pages = {1390714},
publisher = {Frontiers},
abstract = {Error-related potentials (ErrPs) are brain signals known to be generated as a reaction to erroneous events. Several works have shown that not only self-made errors but also mistakes generated by external agents can elicit such event-related potentials. The possibility of reliably measuring ErrPs through non-invasive techniques has increased the interest in the brain-computer interface (BCI) community in using such signals to improve performance, for example, by performing error correction. Extensive calibration sessions are typically necessary to gather sufficient trials for training subject-specific ErrP classifiers. This procedure is not only time-consuming but also tedious for participants. In this paper, we explore the effectiveness of ErrPs in closed-loop systems, emphasizing their dependency on precise single-trial classification. To guarantee the presence of an ErrP signal in the data we employ and to ensure that the parameters defining ErrPs are systematically varied, we utilize the open-source toolbox SEREEGA for data simulation. We generated training instances and evaluated the performance of the generic classifier on both simulated and real-world datasets, proposing a promising alternative to conventional calibration techniques. Results show that a generic support vector machine classifier reaches balanced accuracies of 72.9%, 62.7%, 71.0%, and 70.8% on each validation dataset. While performing similarly to a leave-one-subject-out approach for error class detection, the proposed classifier shows promising generalization across different datasets and subjects without further adaptation. Moreover, by utilizing SEREEGA, we can systematically adjust parameters to accommodate the variability in the ErrP, facilitating the systematic validation of closed-loop setups. Furthermore, our objective is to develop a universal ErrP classifier that captures the signal's variability, enabling it to determine the presence or absence of an ErrP in real EEG data.},
keywords = {adaptive brain-machine (computer) interface, BCI, EEG, Error-related potential (ErrP), ErrP classifier, Generic decoder, Machine Learning, SEREEGA, Simulation},
pubstate = {published},
tppubtype = {article}
}
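As a rough illustration of the approach described in this entry — training a generic support vector machine on simulated error/non-error EEG epochs — the following sketch uses NumPy and scikit-learn. It is a simplified stand-in, not the paper's pipeline: the synthetic single-channel epochs below only mimic an ErrP-like deflection, whereas the actual work uses the MATLAB-based SEREEGA toolbox with realistic source projections.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)

def simulate_epochs(n, error=False):
    """Toy single-channel epochs, 64 samples each; error trials get an
    ErrP-like negative-then-positive deflection added to Gaussian noise."""
    t = np.linspace(0, 1, 64)
    epochs = rng.normal(0.0, 1.0, size=(n, 64))
    if error:
        bump = (-1.5 * np.exp(-((t - 0.3) ** 2) / 0.005)
                + 1.0 * np.exp(-((t - 0.5) ** 2) / 0.008))
        epochs += bump
    return epochs

# balanced synthetic training set: label 1 = error, 0 = correct
X = np.vstack([simulate_epochs(200, error=True), simulate_epochs(200, error=False)])
y = np.array([1] * 200 + [0] * 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="linear", class_weight="balanced").fit(X_train, y_train)
acc = balanced_accuracy_score(y_test, clf.predict(X_test))
print(f"balanced accuracy: {acc:.3f}")
```

In a real setting, held-out evaluation would use recorded EEG from unseen subjects rather than a split of the simulated data, which is exactly the generalization test the paper reports.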
151.Ali, Omair; Saif-ur-Rehman, Muhammad; Metzler, Marita; Glasmachers, Tobias; Iossifidis, Ioannis; Klaes, Christian
GET: A Generative EEG Transformer for Continuous Context-Based Neural Signals Article
In: arXiv:2406.03115 [q-bio], 2024.
Abstract | Links | BibTeX | Keywords: BCI, EEG, Machine Learning, Quantitative Biology - Neurons and Cognition
@article{aliGETGenerativeEEG2024,
title = {GET: A Generative EEG Transformer for Continuous Context-Based Neural Signals},
author = {Omair Ali and Muhammad Saif-ur-Rehman and Marita Metzler and Tobias Glasmachers and Ioannis Iossifidis and Christian Klaes},
url = {http://arxiv.org/abs/2406.03115},
doi = {10.48550/arXiv.2406.03115},
year = {2024},
date = {2024-06-09},
urldate = {2024-06-09},
journal = {arXiv:2406.03115 [q-bio]},
abstract = {Generating continuous electroencephalography (EEG) signals through advanced artificial neural networks presents a novel opportunity to enhance brain-computer interface (BCI) technology. This capability has the potential to significantly enhance applications ranging from simulating dynamic brain activity and data augmentation to improving real-time epilepsy detection and BCI inference. By harnessing generative transformer neural networks, specifically designed for EEG signal generation, we can revolutionize the interpretation and interaction with neural data. Generative AI has demonstrated significant success across various domains, from natural language processing (NLP) and computer vision to content creation in visual arts and music. It distinguishes itself by using large-scale datasets to construct context windows during pre-training, a technique that has proven particularly effective in NLP, where models are fine-tuned for specific downstream tasks after extensive foundational training. However, the application of generative AI in the field of BCIs, particularly through the development of continuous, context-rich neural signal generators, has been limited. To address this, we introduce the Generative EEG Transformer (GET), a model leveraging transformer architecture tailored for EEG data. The GET model is pre-trained on diverse EEG datasets, including motor imagery and alpha wave datasets, enabling it to produce high-fidelity neural signals that maintain contextual integrity. Our empirical findings indicate that GET not only faithfully reproduces the frequency spectrum of the training data and input prompts but also robustly generates continuous neural signals. By adopting the successful training strategies of the NLP domain for BCIs, the GET sets a new standard for the development and application of neural signal generation technologies.},
keywords = {BCI, EEG, Machine Learning, Quantitative Biology - Neurons and Cognition},
pubstate = {published},
tppubtype = {article}
}
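The GET abstract describes autoregressive pre-training on context windows of continuous EEG, in the style of NLP language models. Without reproducing the paper's transformer architecture, the data-shaping step this implies — discretizing a continuous signal into tokens and slicing it into (context window, next token) training pairs — can be sketched as follows; the bin count and window length are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "EEG": a 10 Hz alpha-like oscillation plus noise, 4 s at 256 Hz
fs = 256
t = np.arange(4 * fs) / fs
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

# quantize amplitudes into a small discrete token vocabulary
n_bins = 32
edges = np.quantile(signal, np.linspace(0, 1, n_bins + 1)[1:-1])
tokens = np.digitize(signal, edges)  # integer tokens in [0, n_bins - 1]

# slice into (context window, next token) pairs for autoregressive training
ctx = 64
X = np.stack([tokens[i:i + ctx] for i in range(tokens.size - ctx)])
y = tokens[ctx:]
print(X.shape, y.shape)  # one training pair per signal position
```

A transformer trained on such pairs predicts the next token from its context; sampling from it step by step then yields a continuous generated signal whose statistics, as the abstract notes, should track the frequency content of the training data.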