Subject area: Theoretical Computer Science and Artificial Intelligence
Office: 01.214
Lab: 04.105
Phone: +49 208 88254-806
E-Mail:
Ioannis Iossifidis studied physics (focus: theoretical particle physics) at the University of Dortmund and received his doctorate from the Faculty of Physics and Astronomy at the Ruhr University Bochum in 2006.
At the Institut für Neuroinformatik, Prof. Dr. Iossifidis headed the Autonomous Robotics group, and his research group successfully took part in numerous BMBF- and EU-funded research projects in the field of artificial intelligence. Since October 1, 2010, he has worked at the HRW Institute of Computer Science, where he holds the chair of Theoretical Computer Science – Artificial Intelligence.
Prof. Dr. Ioannis Iossifidis has been developing biologically inspired, anthropomorphic, autonomous robot systems for more than 20 years; these systems are both part and product of his research in computational neuroscience. In this context he has developed models of information processing in the human brain and applied them to technical systems.
The focal points of his recent scientific work are the modeling of human arm movements, the design of so-called "simulated realities" for simulating and evaluating interactions between humans, machines, and the environment, and the development of cortical exoprosthetic components. Developing the theory of machine learning algorithms based on deep neural architectures, and applying them, forms the cross-cutting theme of his research.
Ioannis Iossifidis' research has been supported by major funding programs of the BMBF (NEUROS, MORPHA, LOKI, DESIRE, Bernstein Focus: Neural Foundations of Learning, etc.), the DFG ("Motor-parietal cortical neuroprosthesis with somatosensory feedback for restoring hand and arm functions in tetraplegic patients"), and the EU (Neural Dynamics (STREP), EUCogII, EUCogIII), and he is among the winners of the 2019 NRW lead market competitions Gesundheit.NRW and IKT.NRW.
WORK AND RESEARCH FOCUS
- Computational Neuroscience
- Brain-Computer Interfaces
- Development of cortical exoprosthetic components
- Theory of neural networks
- Modeling of human arm movements
- Simulated reality
SCIENTIFIC FACILITIES
- Lab with link
- ???
- ???
COURSES
- ???
- ???
- ???
PROJECTS
- Project with link
- ???
- ???
RESEARCH STAFF
Felix Grün
Office: 02.216 (Campus Bottrop)
Marie Schmidt
Office: 02.216 (Campus Bottrop)
Aline Xavier Fidencio
Visiting researcher
Muhammad Ayaz Hussain
Doctoral student
Tim Sziburis
Doctoral student
Farhad Rahmat
Student assistant
SELECTED PUBLICATIONS
2010
6. Zibner, Stephan K U; Faubel, Christian; Spencer, John P; Iossifidis, Ioannis; Schöner, Gregor
Scenes and Tracking with Dynamic Neural Fields: How to Update a Robotic Scene Representation Proceedings Article
In: Proc. Int. Conf. on Development and Learning (ICDL10), 2010.
@inproceedings{Zibner2010c,
title = {Scenes and Tracking with Dynamic Neural Fields: How to Update a Robotic Scene Representation},
author = {Stephan K U Zibner and Christian Faubel and John P Spencer and Ioannis Iossifidis and Gregor Schöner},
year = {2010},
date = {2010-01-01},
booktitle = {Proc. Int. Conf. on Development and Learning (ICDL10)},
keywords = {Autonomous robotics, dynamic neural field, Dynamical systems, man machine interaction, scene representation, speech recognition},
pubstate = {published},
tppubtype = {inproceedings}
}
5. Zibner, Stephan K U; Faubel, Christian; Iossifidis, Ioannis; Schöner, Gregor
Scene Representation with Dynamic Neural Fields: An Example of Complex Cognitive Architectures Based on Dynamic Neural Field Theory Proceedings Article
In: Proc. Int. Conf. on Development and Learning (ICDL10), 2010.
@inproceedings{Zibnersubmittedb,
title = {Scene Representation with Dynamic Neural Fields: An Example of Complex Cognitive Architectures Based on Dynamic Neural Field Theory},
author = {Stephan K U Zibner and Christian Faubel and Ioannis Iossifidis and Gregor Schöner},
year = {2010},
date = {2010-01-01},
booktitle = {Proc. Int. Conf. on Development and Learning (ICDL10)},
keywords = {Autonomous robotics, dynamic neural field, Dynamical systems, man machine interaction, scene representation, speech recognition},
pubstate = {published},
tppubtype = {inproceedings}
}
4. Zibner, Stephan; Faubel, Christian; Iossifidis, Ioannis; Schöner, Gregor; Spencer, John P
Scene and Tracking with Dynamic Neural Field Approach Proceedings Article
In: ISR / ROBOTIK 2010, Munich, Germany, 2010.
@inproceedings{Zibneri,
title = {Scene and Tracking with Dynamic Neural Field Approach},
author = {Stephan Zibner and Christian Faubel and Ioannis Iossifidis and Gregor Schöner and John P Spencer},
year = {2010},
date = {2010-01-01},
booktitle = {ISR / ROBOTIK 2010},
address = {Munich, Germany},
abstract = {An internal representation of a scene is essential to generate actions on scene objects. A stabilized storage of object location and features offers the flexibility to process queries phrased in human-based terms relating to objects, which may not be in the current camera view. Scene representation is therefore an internal representation of the surrounding world that is stabilized against head and body movement. It contains associated information about location and features of objects. Because objects and bodies move, scene representation is not a one-time process, but a constantly scene- adapting mechanism of scanning for, storing, updating, and deleting information.
Our novel architecture incorporates the generation of autonomous scanning sequences on real-time camera images. The head can then be oriented towards a selected object and the color feature can be extracted. Object location and feature information are associatively stored in a three-dimensional Dynamic Neural Field. Changes in the scene, even for multiple objects, can be tracked simultaneously. The stored information is used to generate behavior for cued recall. Cues can be table regions, features, or object labels. The robot demonstrates a successful recall by centering its gaze on the stated object.},
keywords = {Autonomous robotics, dynamic neural field, Dynamical systems, man machine interaction, scene representation, speech recognition},
pubstate = {published},
tppubtype = {inproceedings}
}
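The entry above stores object locations and features in a Dynamic Neural Field. As a rough illustration of the mechanism (not the authors' implementation; the kernel shape, parameter values, and stimulus position below are assumptions chosen for a minimal 1-D sketch), a localized input drives a self-stabilized activation peak in an Amari-type field:

```python
import numpy as np

def simulate_dnf(steps=400, n=101, dt=0.05, tau=1.0, h=-2.0):
    """Relax a 1-D Amari-type dynamic neural field under a localized input."""
    x = np.arange(n, dtype=float)
    u = np.full(n, h)                      # field activation, starts at resting level h
    d = x[:, None] - x[None, :]
    # interaction kernel: local excitation minus broader inhibition (difference of Gaussians)
    w = 1.5 * np.exp(-d**2 / (2 * 3.0**2)) - 0.6 * np.exp(-d**2 / (2 * 10.0**2))
    s = 4.0 * np.exp(-(x - 50.0)**2 / (2 * 3.0**2))   # stimulus centered at x = 50
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))       # sigmoidal output nonlinearity
        u += (dt / tau) * (-u + h + s + w @ f)
    return u

u = simulate_dnf()
print(int(np.argmax(u)))   # activation peak forms near the stimulus location
```

In the papers above, such peaks are what make the stored scene information stable against camera and body movement; the three-dimensional field mentioned in the abstract additionally binds location to a feature dimension.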
2005
3. Iossifidis, Ioannis; Bruckhoff, Carsten; Theis, C; Grote, Claudia; Faubel, Christian; Schöner, G
A cooperative robotic assistant for human environments Book
In: Springer Tracts in Advanced Robotics, Vol. 14, 2005, ISSN: 1610-7438.
@book{Iossifidis2005a,
title = {A cooperative robotic assistant for human environments},
author = {Ioannis Iossifidis and Carsten Bruckhoff and C Theis and Claudia Grote and Christian Faubel and G Schöner},
issn = {1610-7438},
year = {2005},
date = {2005-01-01},
urldate = {2005-01-01},
booktitle = {Springer Tracts in Advanced Robotics},
volume = {14},
abstract = {CoRA is a robotic assistant whose task is to collaborate with a human operator on simple manipulation or handling tasks. Its sensory channels, comprising vision, audition, haptics, and force sensing, are used to extract perceptual information about speech, gestures, and gaze of the operator, and for object recognition. The anthropomorphic robot arm makes goal-directed movements to pick up and hand over objects. The human operator may mechanically interact with the arm by pushing it away (haptics) or by taking an object out of the robot's gripper (force sensing). The design objective has been to exploit the human operator's intuition by modeling the mechanical structure, the senses, and the behaviors of the assistant on human anatomy, human perception, and human motor behavior. © Springer-Verlag Berlin Heidelberg 2005.},
keywords = {Autonomous robotics, direct physical interaction, haptic interface, human robot collaboration, man machine interaction},
pubstate = {published},
tppubtype = {book}
}
2004
2. Iossifidis, Ioannis; Schöner, Gregor
Autonomous reaching and obstacle avoidance with the anthropomorphic arm of a robotic assistant using the attractor dynamics approach Proceedings Article
In: Proc. IEEE International Conference on Robotics and Automation ICRA '04, pp. 4295–4300, Vol. 5, 2004, ISSN: 1050-4729.
@inproceedings{Iossifidis2004b,
title = {Autonomous reaching and obstacle avoidance with the anthropomorphic arm of a robotic assistant using the attractor dynamics approach},
author = {Ioannis Iossifidis and Gregor Schöner},
doi = {10.1109/ROBOT.2004.1302393},
issn = {1050-4729},
year = {2004},
date = {2004-01-01},
booktitle = {Proc. IEEE International Conference on Robotics and Automation ICRA '04},
volume = {5},
pages = {4295--4300},
abstract = {To enable a robotic assistant to autonomously reach for and transport objects while avoiding obstacles we have generalized the attractor dynamics approach established for vehicles to trajectory formation in robot arms. This approach is able to deal with the time-varying environments that occur when a human operator moves in a shared workspace. Stable fixed points (attractors) for the heading direction of the end-effector shift during movement and are being tracked by the system. This enables the attractor dynamics approach to avoid the spurious states that hamper potential field methods. Separating planning and control computationally, the approach is also simpler to implement. The stability properties of the movement plan make it possible to deal with fluctuating and imprecise sensory information. We implement this approach on a seven degree of freedom anthropomorphic arm reaching for objects on a working surface. We use an exact solution of the inverse kinematics, which enables us to steer the spatial position of the elbow clear of obstacles. The straight-line trajectories of the end-effector that emerge as long as the arm is far from obstacles make the movement goals of the robotic assistant predictable for the human operator, improving man-machine interaction.},
keywords = {anthropomorphic arm, attractor dynamics, autonomous reaching, collision avoidance, end effector shift, end effectors, man machine interaction, manipulator dynamics, obstacle avoidance, robotic assistant, time varying environment, time-varying systems},
pubstate = {published},
tppubtype = {inproceedings}
}
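The attractor dynamics described in the abstract above can be sketched numerically. In this minimal illustration (the functional forms follow the general attractor dynamics scheme for heading direction; the specific parameter values lam_tar, lam_obs, and sigma are illustrative assumptions, not values from the paper), the heading is attracted to the target direction and repelled, within a limited angular range, from the obstacle direction:

```python
import math

def heading_rate(phi, psi_tar, psi_obs, lam_tar=1.0, lam_obs=2.0, sigma=0.4):
    """Rate of change of the heading direction phi (radians): an attractor
    at the target bearing plus a range-limited repellor at the obstacle bearing."""
    f_tar = -lam_tar * math.sin(phi - psi_tar)               # attractor at psi_tar
    d = phi - psi_obs
    f_obs = lam_obs * d * math.exp(-d**2 / (2 * sigma**2))   # repellor, decays with angular distance
    return f_tar + f_obs

# Euler-integrate the heading: it settles near the target direction,
# deflected slightly by the obstacle contribution.
phi, dt = 0.0, 0.01
psi_tar, psi_obs = 1.2, 0.4    # target and obstacle bearings (radians)
for _ in range(5000):
    phi += dt * heading_rate(phi, psi_tar, psi_obs)
print(round(phi, 2))
```

Because the attractor shifts continuously and is tracked by the state, the system avoids the spurious local minima that the abstract attributes to potential field methods.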
1. Iossifidis, Ioannis; Lawitzky, Gisbert; Knoop, Stephan; Zöllner, Raoul
Towards Benchmarking of Domestic Robotic Assistants Book Chapter
In: Prassler, Erwin; Lawitzky, Gisbert; Stopp, Andreas; Grunwald, Gerhard; Hägele, Martin; Dillmann, Rüdiger; Iossifidis, Ioannis (Eds.): Advances in Human Robot Interaction, Vol. 14/2004, ISBN 3-540-23211-7, pp. 403–414, Springer Press, 2004.
@incollection{Iossifidis2004c,
title = {Towards Benchmarking of Domestic Robotic Assistants},
author = {Ioannis Iossifidis and Gisbert Lawitzky and Stephan Knoop and Raoul Zöllner},
editor = {Erwin Prassler and Gisbert Lawitzky and Andreas Stopp and Gerhard Grunwald and Martin Hägele and Rüdiger Dillmann and Ioannis Iossifidis},
url = {http://www.springerlink.com/index/AB4F63B9DADFE299},
year = {2004},
date = {2004-01-01},
booktitle = {Advances in Human Robot Interaction},
volume = {14/2004},
number = {ISBN: 3-540-23211-7},
pages = {403--414},
publisher = {Springer Press},
chapter = {7},
series = {Springer Tracts in Advanced Robotics STAR},
abstract = {As service robotics research advances rapidly, the availability of objective, reproducible test specifications, evaluation criteria, and benchmarks is increasingly felt to be desirable in the community. As a first step towards benchmarking, in this paper we propose a formalization of tests, exemplified for domestic grasp-and-place tasks. The underlying philosophy of our approach is to confront the robot system in a black-box manner with the requirements of a ``rational customer'' and to characterize the performance of the system in an objective way by the outcomes of a test suite tailored to this scenario. A formalized single test description consists of a clear and reproducible specification of the robot's task and its full context on the one hand, and a number of figures which objectively characterize the test result on the other hand. We illustrate this methodology for the domestic assistance scenario.},
keywords = {Autonomous robotics, benchmarking, human robot collaboration, man machine interaction},
pubstate = {published},
tppubtype = {incollection}
}