Teaching area: Theoretical Computer Science and Artificial Intelligence
Office: 01.214
Lab: 04.105
Phone: +49 208 88254-806
E-mail:
Ioannis Iossifidis studied physics (specializing in theoretical particle physics) at the University of Dortmund and received his doctorate in 2006 from the Faculty of Physics and Astronomy at Ruhr University Bochum.
At the Institut für Neuroinformatik, Prof. Dr. Iossifidis headed the Autonomous Robotics group and, together with his research group, successfully participated in numerous artificial intelligence research projects funded by the BMBF and the EU. Since 1 October 2010 he has worked at the HRW's Institute of Computer Science, where he holds the Chair of Theoretical Computer Science – Artificial Intelligence.
For more than 20 years, Prof. Dr. Ioannis Iossifidis has been developing biologically inspired, anthropomorphic, autonomous robotic systems that are both part and product of his research in computational neuroscience. In this context he has developed models of information processing in the human brain and applied them to technical systems.
The distinguishing focal points of his scientific work in recent years are the modeling of human arm movements, the design of so-called "simulated realities" for simulating and evaluating interactions between humans, machines, and the environment, and the development of cortical exoprosthetic components. Developing the theory and applications of machine learning algorithms based on deep neural architectures forms the cross-cutting theme of his research.
Ioannis Iossifidis' research has been supported by, among others, major funding programs of the BMBF (NEUROS, MORPHA, LOKI, DESIRE, Bernstein Focus: Neural Foundations of Learning, etc.), the DFG ("Motor-parietal cortical neuroprosthesis with somatosensory feedback for restoring hand and arm functions in tetraplegic patients"), and the EU (Neural Dynamics (STREP), EUCogII, EUCogIII), and he is among the winners of the 2019 NRW lead-market competitions Gesundheit.NRW and IKT.NRW.
MAIN AREAS OF WORK AND RESEARCH
- Computational Neuroscience
- Brain-Computer Interfaces
- Development of cortical exoprosthetic components
- Theory of neural networks
- Modeling of human arm movements
- Simulated reality
SCIENTIFIC FACILITIES
- Lab (with link)
- ???
- ???
COURSES
- ???
- ???
- ???
PROJECTS
- Project (with link)
- ???
- ???
RESEARCH STAFF
Felix Grün
Office: 02.216 (Campus Bottrop)
Marie Schmidt
Office: 02.216 (Campus Bottrop)
Aline Xavier Fidencio
Visiting researcher
Muhammad Ayaz Hussain
Doctoral candidate
Tim Sziburis
Doctoral candidate
Farhad Rahmat
Student assistant
SELECTED PUBLICATIONS
2022
4. Ali, Omair; Saif-ur-Rehman, Muhammad; Glasmachers, Tobias; Iossifidis, Ioannis; Klaes, Christian
ConTraNet: A Single End-to-End Hybrid Network for EEG-based and EMG-based Human Machine Interfaces (Article)
In: arXiv:2206.10677, 2022.
Keywords: BCI, Machine Learning, neural processing, signal processing
@article{aliConTraNetSingleEndtoend2022,
title = {ConTraNet: A Single End-to-End Hybrid Network for EEG-based and EMG-based Human Machine Interfaces},
author = {Omair Ali and Muhammad Saif-ur-Rehman and Tobias Glasmachers and Ioannis Iossifidis and Christian Klaes},
url = {http://arxiv.org/abs/2206.10677},
doi = {10.48550/arXiv.2206.10677},
year = {2022},
date = {2022-06-21},
urldate = {2022-06-21},
abstract = {Objective: Electroencephalography (EEG) and electromyography (EMG) are two non-invasive bio-signals, which are widely used in human machine interface (HMI) technologies (EEG-HMI and EMG-HMI paradigm) for the rehabilitation of physically disabled people. Successful decoding of EEG and EMG signals into respective control command is a pivotal step in the rehabilitation process. Recently, several Convolutional neural networks (CNNs) based architectures are proposed that directly map the raw time-series signal into decision space and the process of meaningful features extraction and classification are performed simultaneously. However, these networks are tailored to the learn the expected characteristics of the given bio-signal and are limited to single paradigm. In this work, we addressed the question that can we build a single architecture which is able to learn distinct features from different HMI paradigms and still successfully classify them. Approach: In this work, we introduce a single hybrid model called ConTraNet, which is based on CNN and Transformer architectures that is equally useful for EEG-HMI and EMG-HMI paradigms. ConTraNet uses CNN block to introduce inductive bias in the model and learn local dependencies, whereas the Transformer block uses the self-attention mechanism to learn the long-range dependencies in the signal, which are crucial for the classification of EEG and EMG signals. Main results: We evaluated and compared the ConTraNet with state-of-the-art methods on three publicly available datasets which belong to EEG-HMI and EMG-HMI paradigms. ConTraNet outperformed its counterparts in all the different category tasks (2-class, 3-class, 4-class, and 10-class decoding tasks). Significance: The results suggest that ConTraNet is robust to learn distinct features from different HMI paradigms and generalizes well as compared to the current state of the art algorithms.},
keywords = {BCI, Machine Learning, neural processing, signal processing},
pubstate = {published},
tppubtype = {article}
}
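The ConTraNet abstract above attributes the handling of long-range dependencies in EEG/EMG time series to the Transformer block's self-attention mechanism. As a minimal, illustrative sketch of that core operation (not the ConTraNet implementation; shapes and random weights are arbitrary assumptions):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a (time, features) sequence.

    Every time step attends to every other time step, so dependencies of
    arbitrary temporal range can contribute to each output vector.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = (q @ k.T) / np.sqrt(k.shape[-1])         # pairwise similarities
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)          # softmax over time steps
    return attn @ v, attn

rng = np.random.default_rng(0)
T, D = 8, 4                                           # 8 time steps, 4 features
x = rng.standard_normal((T, D))
wq, wk, wv = (rng.standard_normal((D, D)) for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
print(out.shape, attn.shape)                          # (8, 4) (8, 8)
```

In the hybrid design described in the abstract, such a block would operate on feature maps produced by a preceding CNN stage, which contributes the local inductive bias.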
2011
3. Zibner, S K U; Faubel, Christian; Iossifidis, Ioannis; Schöner, G
Dynamic neural fields as building blocks of a cortex-inspired architecture for robotic scene representation (Article)
In: IEEE Transactions on Autonomous Mental Development, Bd. 3, Nr. 1, 2011, ISSN: 19430604.
Keywords: Autonomous robotics, dynamic field theory (DFT), Dynamical systems, embodied cognition, neural processing
@article{Zibner2011,
title = {Dynamic neural fields as building blocks of a cortex-inspired architecture for robotic scene representation},
author = {S K U Zibner and Christian Faubel and Ioannis Iossifidis and G Schöner},
doi = {10.1109/TAMD.2011.2109714},
issn = {19430604},
year = {2011},
date = {2011-01-01},
urldate = {2011-01-01},
journal = {IEEE Transactions on Autonomous Mental Development},
volume = {3},
number = {1},
abstract = {Based on the concepts of dynamic field theory (DFT), we present an architecture that autonomously generates scene representations by controlling gaze and attention, creating visual objects in the foreground, tracking objects, reading them into working memory, and taking into account their visibility. At the core of this architecture are three-dimensional dynamic neural fields (DNFs) that link feature to spatial information. These three-dimensional fields couple into lower dimensional fields, which provide the links to the sensory surface and to the motor systems. We discuss how DNFs can be used as building blocks for cognitive architectures, characterize the critical bifurcations in DNFs, as well as the possible coupling structures among DNFs. In a series of robotic experiments, we demonstrate how the DNF architecture provides the core functionalities of a scene representation. © 2011 IEEE.},
keywords = {Autonomous robotics, dynamic field theory (DFT), Dynamical systems, embodied cognition, neural processing},
pubstate = {published},
tppubtype = {article}
}
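The dynamic neural fields used as building blocks in the Zibner et al. architectures are governed by an Amari-type field equation: a field of activation u(x) relaxes under a resting level, localized input, and lateral interaction with local excitation and surround inhibition, so that a self-stabilized peak forms at a stimulated location. A minimal, illustrative NumPy sketch of a one-dimensional field (my own toy parameters, not the papers' implementation):

```python
import numpy as np

def simulate_field(n=100, steps=300, dt=1.0, tau=10.0, h=-2.0, stim_pos=50):
    """Euler-integrate a 1D Amari dynamic neural field:
    tau * du/dt = -u + h + s(x) + sum_x' w(x - x') * f(u(x'))
    """
    x = np.arange(n)
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, n - d)                              # circular distance
    # lateral interaction kernel: local excitation, surround inhibition
    w = 1.5 * np.exp(-d**2 / (2 * 3.0**2)) - 0.5 * np.exp(-d**2 / (2 * 9.0**2))
    s = 4.0 * np.exp(-(x - stim_pos)**2 / (2 * 4.0**2))   # localized input
    u = np.full(n, h, dtype=float)                        # start at resting level
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))                      # sigmoid output rate
        u += (dt / tau) * (-u + h + s + w @ f)
    return u

u = simulate_field()
print(int(np.argmax(u)))  # suprathreshold peak forms at the stimulus location
```

The detection decision described in the papers corresponds to the bifurcation in which this peak becomes self-sustaining; the three-dimensional space-feature fields of the architecture are higher-dimensional versions of the same dynamics.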
2010
2. Zibner, S K U; Faubel, Christian; Iossifidis, Ioannis; Schöner, G; Spencer, J P
Scenes and tracking with dynamic neural fields: How to update a robotic scene representation (Proceedings Article)
In: 2010 IEEE 9th International Conference on Development and Learning, ICDL-2010 - Conference Program, 2010, ISBN: 9781424469024.
Keywords: Autonomous robotics, dynamic field theory (DFT), Dynamical systems, embodied cognition, neural processing
@inproceedings{Zibner2010,
title = {Scenes and tracking with dynamic neural fields: How to update a robotic scene representation},
author = {S K U Zibner and Christian Faubel and Ioannis Iossifidis and G Schöner and J P Spencer},
doi = {10.1109/DEVLRN.2010.5578837},
isbn = {9781424469024},
year = {2010},
date = {2010-01-01},
urldate = {2010-01-01},
booktitle = {2010 IEEE 9th International Conference on Development and Learning, ICDL-2010 - Conference Program},
abstract = {We present an architecture based on the Dynamic Field Theory for the problem of scene representation. At the core of this architecture are three-dimensional neural fields linking feature to spatial information. These three-dimensional fields are coupled to lower-dimensional fields that provide both a close link to the sensory surface and a close link to motor behavior. We highlight the updating mechanism of this architecture, both when a single object is selected and followed by the robot's head in smooth pursuit and in multi-item tracking when several items move simultaneously. © 2010 IEEE.},
keywords = {Autonomous robotics, dynamic field theory (DFT), Dynamical systems, embodied cognition, neural processing},
pubstate = {published},
tppubtype = {inproceedings}
}
1. Zibner, S K U; Faubel, Christian; Iossifidis, Ioannis; Schöner, G
Scene representation for anthropomorphic robots: A dynamic neural field approach (Proceedings Article)
In: Joint 41st International Symposium on Robotics and 6th German Conference on Robotics 2010, ISR/ROBOTIK 2010, 2010, ISBN: 9781617387197.
Keywords: Autonomous robotics, dynamic field theory (DFT), Dynamical systems, embodied cognition, neural processing
@inproceedings{Zibner2010b,
title = {Scene representation for anthropomorphic robots: A dynamic neural field approach},
author = {S K U Zibner and Christian Faubel and Ioannis Iossifidis and G Schöner},
isbn = {9781617387197},
year = {2010},
date = {2010-01-01},
urldate = {2010-01-01},
booktitle = {Joint 41st International Symposium on Robotics and 6th German Conference on Robotics 2010, ISR/ROBOTIK 2010},
volume = {2},
abstract = {For autonomous robotic systems, the ability to represent a scene, to memorize and track objects and their associated features is a prerequisite for reasonable interactive behavior. In this paper, we present a biologically inspired architecture for scene representation that is based on Dynamic Field Theory. At the core of the architecture we make use of three-dimensional Dynamic Neural Fields for representing space-feature associations. These associations are built up autonomously in a sequential way and they are maintained and continuously updated. We demonstrate these capabilities in two experiments on an anthropomorphic robotic platform. In the first experiment we show the sequential scanning of a scene. The second experiment demonstrates the maintenance of associations for objects, which get out of view, and the correct update of the scene representation, if such objects are removed.},
keywords = {Autonomous robotics, dynamic field theory (DFT), Dynamical systems, embodied cognition, neural processing},
pubstate = {published},
tppubtype = {inproceedings}
}