Scope:
Transportation is undergoing a rapid transformation driven by emerging technologies and systems. On the one hand, connected and automated vehicles (CAVs) have been developed and tested and are getting ready for real-world deployment. On the other hand, roadway infrastructure (such as traffic lights and intersections) is increasingly equipped with sensing and data collection systems (video cameras, lidars, edge computing devices, etc.), enabling robust and fast data collection and sharing as well as optimization of traffic flow. Naturally, infrastructure and CAVs should cooperate for better sensing, data collection, and joint traffic and vehicle optimization and control, which is essential to improve safety, mobility, and related performance goals in future urban areas.
Supported by the DFG program for the initiation of international collaborations, this two-day exploratory workshop is centred on the cooperation of CAVs and roadway infrastructure, as well as related issues such as cybersecurity and AI applications. The workshop will feature podium presentations and panel discussions on recent achievements and current challenges related to the workshop theme. Researchers from academia and industry around the world will present state-of-the-art methods and emerging techniques. Workshop participants will also discuss and brainstorm with funding agencies on pressing issues related to the workshop theme and on how academia, industry, and agencies can work together to address them and help accelerate the development and deployment of CAVs in the real world.
Download: Flyer
Organizers:
- Prof. Dr. Anne Stockem Novo (Ruhr West University of Applied Sciences)
- Prof. Dr. Xuegang (Jeff) Ban (University of Washington)
Topics:
- Connected, automated vehicles (CAVs)
- Short-term CAV trajectory prediction
- Long-term traffic flow prediction
- AI for CAV and traffic safety, cybersecurity, and efficiency
- Simulator frameworks
- Real-world sensing and testing
- Industry and government perspectives
Time | July 2nd | Time | July 3rd |
09:00 | Welcome: Vera Stadelmann (DFG) | 09:00 | Explainable AI and Testing: Anselm Haselhoff, Robin Baumann, Martin Washausen |
09:30 | Sensing and Perception: Richard Altendorfer, Sam Vadidar | | |
10:30 | Coffee break | 10:30 | Coffee break |
11:00 | Traffic modeling (Part I): Kun Zhao, Franz Albers | 11:00 | Traffic modeling (Part II): Marvin Glomsda, Timo Osterburg |
12:00 | Lunch break | 12:00 | Lunch break |
13:00 | CAV cybersecurity: Yiheng Feng, Kaidi Yang | 13:00 | Working groups |
14:00 | Coffee break | 14:00 | Presentation of working group results |
14:30 | Panel discussion | | |
15:30 | Wrap up day 1 | 15:00 | Wrap up day 2 |
Speaker: Prof. Dr. Anselm Haselhoff
Title: Transparent Visual Counterfactual Explanations with a Self-Explainable Model (GdVAE)
Abstract:
Visual counterfactual explanations help answer "what-if" questions by altering key image features to change a model's prediction. While existing methods often rely on post-hoc optimization or lack transparency, our approach — GdVAE — integrates counterfactual generation directly into a self-explainable model. GdVAE combines a conditional variational autoencoder with a Gaussian discriminant classifier, enabling closed-form counterfactuals in the latent space and model transparency. Class-specific prototypes guide both prediction and explanation, grounding the model in interpretable representations. By aligning the latent structure with the explanation objective, GdVAE produces consistent, high-quality counterfactuals that match the performance of existing methods—while preserving transparency.
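As a rough illustration of why a Gaussian discriminant classifier over latent codes admits closed-form counterfactuals, consider the minimal sketch below. It is not the GdVAE formulation itself: the shared covariance, equal class priors, latent dimensionality, and margin are all illustrative assumptions; with them, the decision function is linear in the latent space and the counterfactual is a simple projection.

```python
import numpy as np

# Minimal sketch (NOT the GdVAE model): with a shared-covariance Gaussian
# discriminant (LDA-style) classifier on latent codes z, the two-class decision
# function is linear, f(z) = w^T z + b, so a counterfactual can be written in
# closed form by shifting z across the decision boundary along w.
rng = np.random.default_rng(0)
d = 8                                   # latent dimensionality (illustrative)
mu0, mu1 = rng.normal(size=d), rng.normal(size=d) + 2.0   # class prototypes
Sigma = np.eye(d)                       # shared latent covariance (assumption)

Sigma_inv = np.linalg.inv(Sigma)
w = Sigma_inv @ (mu1 - mu0)             # normal vector of the linear boundary
b = -0.5 * (mu1 + mu0) @ Sigma_inv @ (mu1 - mu0)   # equal class priors assumed

def counterfactual(z, margin=0.5):
    """Smallest Euclidean shift of z that flips the decision by `margin`."""
    f = w @ z + b                       # signed score: > 0 means class 1
    target = margin if f < 0 else -margin
    return z + (target - f) / (w @ w) * w

z = rng.normal(size=d)                  # a latent code, e.g. from the encoder
z_cf = counterfactual(z)
print("score before:", w @ z + b, "  score after:", w @ z_cf + b)
```

In the full model the counterfactual latent code would then be decoded back to image space; the sketch only shows why the latent-space step has a closed form.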
Short bio:
Anselm Haselhoff is Professor of Vehicle Information Technology at the Computer Science Institute of Ruhr West University of Applied Sciences. He earned his degrees in Information Technology from the University of Wuppertal, with international experience at Queensland University of Technology. His PhD focused on machine learning and computer vision for vehicle perception. He currently leads the Trustworthy AI Laboratory, with research spanning information and signal processing, machine learning, and computer vision. His work emphasizes explainable AI, probabilistic and generative models, sensor data fusion, and perception for autonomous driving. His international collaborations include a visiting researcher position at the Sydney AI Center, and he previously worked on computer vision systems for autonomous vehicles at Delphi (now Aptiv).
Speaker: Robin Baumann
Title: Towards Explainability in Self-Supervised Vision Models: Insights into Model Behavior
Abstract:
Vision-based self-supervised learning (SSL) has emerged as a powerful approach for learning image representations without manual labels, achieving performance that is on a par with or even better than that of its supervised counterparts. However, understanding what these SSL models learn and making their decision-making processes transparent remains an ongoing challenge. Explainable AI (XAI) aims to shed light on the 'black box' nature of deep learning (DL) models, enhancing trust and insight. The intersection of XAI and vision-based SSL focuses on explaining how unlabelled data training encodes visual features and captures semantic knowledge, and why models make certain decisions or recognise similarities without explicit labels. This task poses a unique challenge to SSL models, as they cannot rely on explicit class targets or human-defined labels to ground their interpretations, necessitating novel interpretive approaches.
Short bio:
Robin Baumann, M.Sc., is a PhD candidate at the Graduate School for Applied Research in North Rhine-Westphalia and a research associate at the University of Applied Sciences Bochum-Ruhr West. He earned his M.Sc. in Electrical Engineering from the Bochum University of Applied Sciences, having previously completed a B.Eng. in Medical Engineering. As part of his doctoral thesis, he is investigating how self-supervised learned embeddings can be understood and leveraged towards improved and robust computer vision tasks. Alongside his PhD work, he has worked on various applied deep learning projects, including motion prediction for autonomous vehicles as well as precipitation radar forecasting.
Speaker: Yiheng Feng
Title: Resembling Cyber Attacks for Transportation System Evaluation with Traffic Informed Symbolic Regression
Abstract:
Cyber-attacks on vehicles and infrastructure are a major threat to the next-generation transportation system and have direct impacts on safety and mobility. Many existing cyber-attack modeling approaches are difficult to apply directly in transportation system evaluation due to high reproduction and execution costs. This study introduces a novel cyber-attack resembling framework that can mimic both trajectory-level and traffic-level attack behavior using symbolic regression (SR). Moreover, safety metrics (i.e., crash rate) are integrated into the SR training process (i.e., traffic informed) to improve the traffic-level performance of the generated mathematical formulations. A representative GPS spoofing attack targeting the multi-sensor fusion (MSF) module of autonomous vehicles is selected as an attack example. Using driving features derived from the original attack vehicle trajectories and attack parameters, the proposed model finds the optimal mathematical formulation representing the original attack behavior, with strong interpretability that reveals the relationship between formulation parameters and the attack outcome.
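To make the "traffic-informed" idea concrete, the toy sketch below scores candidate attack formulas not only by how well they reproduce the recorded spoofing offsets but also by how closely a surrogate traffic-level metric matches that of the original attack. Everything in it (the synthetic data, the formula templates, the surrogate metric, and the weighting) is an illustrative assumption, not the authors' framework or their SR implementation.

```python
import numpy as np

# Toy sketch of a "traffic-informed" fitness for attack resembling:
#   loss = trajectory MSE + lambda * |traffic_metric(candidate) - traffic_metric(original)|
# All data, templates, and weights below are synthetic placeholders.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
offset_true = 0.8 * t + 0.5 * np.sin(1.3 * t)         # "recorded" spoofing offset
metric_true = np.mean(np.abs(np.diff(offset_true)))   # surrogate traffic-level metric

# A tiny library of symbolic templates with free parameters (a, b).
templates = {
    "a*t":            lambda t, a, b: a * t,
    "a*t + b*sin(t)": lambda t, a, b: a * t + b * np.sin(t),
    "a*sqrt(t) + b":  lambda t, a, b: a * np.sqrt(t) + b,
}

def fitness(pred, lam=5.0):
    traj_err = np.mean((pred - offset_true) ** 2)
    metric_err = abs(np.mean(np.abs(np.diff(pred))) - metric_true)
    return traj_err + lam * metric_err

best = None
for name, f in templates.items():
    for _ in range(2000):                              # crude random parameter search
        a, b = rng.uniform(-2.0, 2.0, size=2)
        score = fitness(f(t, a, b))
        if best is None or score < best[0]:
            best = (score, name, a, b)

print("best formula: %s with a=%.2f, b=%.2f (loss %.4f)"
      % (best[1], best[2], best[3], best[0]))
```

A genuine SR engine would search over expression trees rather than fixed templates; the point here is only how a safety-related term can be folded into the fitness function.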
Short bio:
Dr. Yiheng Feng is an assistant professor at the Lyles School of Civil and Construction Engineering, Purdue University. He received his Ph.D. from the Department of Systems and Industrial Engineering at the University of Arizona. His research areas include connected and automated vehicles (CAVs) and smart transportation infrastructure, with a focus on cooperative driving automation and transportation system cybersecurity. He has served as PI and Co-PI on many research projects funded by NSF, USDOT, USDOE, state DOTs, and industrial companies. His work has appeared in a number of top transportation journals and security conferences. He is a member of the Traffic Signal Systems Committee (ACP25) at TRB and co-chair of its Simulation Subcommittee. He is a recipient of the NSF CAREER award and of best paper and dissertation awards from multiple organizations.
Speaker: Sam Vadidar
Title: Cooperative RGB-Thermal Sensor Fusion for Enhanced Object Detection in CAVs and Smart Infrastructure Systems
Abstract:
Reliable object detection is a cornerstone of connected and automated vehicle (CAV) systems, particularly in complex urban environments where sensing conditions can vary drastically. In this presentation, we introduce a robust, cooperative perception framework that fuses visual (RGB) and thermal (long-wave infrared, LWIR) imaging data to enhance object detection performance under diverse lighting and weather conditions. By synchronizing and (cross-)labeling the FLIR dataset, we train a convolutional neural network (CNN) equipped with a novel Entropy-Block Attention Module (EBAM) to detect pedestrians, cyclists, and vehicles with high accuracy. Our RGB-thermal fusion network achieves a mean average precision (mAP) of 82.9%, outperforming state-of-the-art models by 10%. Importantly, the proposed architecture is well-suited not only for onboard CAV deployment but also for integration with intelligent roadway infrastructure equipped with fixed sensors. Such cooperative sensing systems enable shared situational awareness between vehicles and infrastructure, facilitating real-time data sharing and fusion, improved safety, and optimized traffic flow. This work contributes toward the vision of interconnected urban mobility systems in which CAVs and smart infrastructure collaborate to achieve reliable perception and control.
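The abstract does not spell out the internals of the EBAM, so the sketch below only illustrates the general idea of entropy-guided RGB-thermal feature fusion: the modality whose channel activations are more peaked (lower entropy) at a location gets a larger fusion weight. The module name, channel sizes, and residual projection are assumptions for the sketch, not the module presented in the talk.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntropyGuidedFusion(nn.Module):
    """Illustrative sketch (not the EBAM from the talk): weight RGB vs. thermal
    feature maps by the entropy of their channel activations per location."""
    def __init__(self, channels: int):
        super().__init__()
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    @staticmethod
    def _entropy(feat: torch.Tensor) -> torch.Tensor:
        # Per-pixel entropy over the channel dimension, shape (B, 1, H, W).
        p = F.softmax(feat, dim=1)
        return -(p * (p + 1e-8).log()).sum(dim=1, keepdim=True)

    def forward(self, rgb_feat: torch.Tensor, thermal_feat: torch.Tensor) -> torch.Tensor:
        h_rgb, h_th = self._entropy(rgb_feat), self._entropy(thermal_feat)
        # Lower entropy -> higher confidence -> larger fusion weight.
        weights = F.softmax(torch.cat([-h_rgb, -h_th], dim=1), dim=1)
        fused = weights[:, :1] * rgb_feat + weights[:, 1:] * thermal_feat
        # Learned projection of the concatenated features as a residual path.
        return fused + self.project(torch.cat([rgb_feat, thermal_feat], dim=1))

# Example with dummy backbone features (batch 2, 64 channels, 40x40 maps).
fusion = EntropyGuidedFusion(channels=64)
out = fusion(torch.randn(2, 64, 40, 40), torch.randn(2, 64, 40, 40))
print(out.shape)  # torch.Size([2, 64, 40, 40])
```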
Short bio:
Sam Vadidar has held various roles, including Research and Development Engineer, Academic Supervisor for university students, and Software Integration Engineer at Techhub by efs. He specializes in multi-modal sensor systems and artificial intelligence for intelligent transportation systems. His research focuses on improving the perception capabilities of connected and automated vehicles (CAVs) and smart infrastructure through innovative sensor fusion techniques and deep learning models. He has contributed to the development of advanced object detection frameworks that combine visual and thermal imaging data to enhance safety and efficiency in urban mobility. He has published in peer-reviewed conferences and collaborates with international research communities to advance the field of cooperative perception in transportation systems.
Speaker: Dr. Richard Altendorfer
Title: Deep Neural Network-based Perception For Automated Driving
Abstract:
Major advances in deep neural networks (DNNs) have boosted their use in environment perception for automated driving. We demonstrate that DNNs offer significant advantages for radar object detection and tracking by comparing their metrics with those of a probabilistic, model-based method. We also explore methods for fusing radar point clouds with objects from computer vision – a hybrid fusion mode that is particularly relevant for modular perception systems.
Short bio:
Richard Altendorfer earned an M.Sc. in physics from Durham University in 1993, a Physik Diplom from Ludwig-Maximilians-University Munich in 1995, and a Ph.D. in physics from Johns Hopkins University, Baltimore, in 2000. He was a research fellow in Prof. Koditschek's group at the Department of Electrical Engineering and Computer Science of the University of Michigan in Ann Arbor from 2000 to 2003, with research interests in dynamical systems and legged locomotion. From 2003 to 2008 he worked as an engineer in research and advanced development on environment perception at AUDI AG. Since 2008 he has been employed by TRW Automotive (now ZF Group), where he is currently Engineering Manager in the area of algorithm development for automated driving.
Speaker: M.Sc. Marvin Glomsda
Title: Comparison of Hybrid Methods for Vehicle State Estimation
Abstract:
Hybrid methods of state estimation, which combine data-based and knowledge-based models, have gained popularity in recent years. Many different terminologies exist for these methods, including hybrid state estimation, physics-informed neural networks, neural ordinary differential equations, and gray-box models, among others. This presentation develops a systematic approach to classifying these models and presents selected exemplary models.
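As a minimal illustration of one family in this taxonomy, the sketch below shows a gray-box propagator: a known kinematic bicycle step (knowledge-based part) plus a small learned residual (data-based part). The specific physics model, network size, and state layout are assumptions for the sketch, not one of the models compared in the talk.

```python
import torch
import torch.nn as nn

class GrayBoxVehicleModel(nn.Module):
    """Sketch of a gray-box (hybrid) state propagator: a kinematic bicycle step
    plus a small learned residual for unmodeled dynamics.
    State x = [X, Y, yaw, v]; input u = [acceleration, steering angle]."""
    def __init__(self, wheelbase: float = 2.7, dt: float = 0.05):
        super().__init__()
        self.L, self.dt = wheelbase, dt
        self.residual = nn.Sequential(            # data-based correction term
            nn.Linear(6, 32), nn.Tanh(), nn.Linear(32, 4)
        )

    def physics_step(self, x: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        X, Y, yaw, v = x.unbind(-1)
        a, delta = u.unbind(-1)
        return torch.stack([
            X + v * torch.cos(yaw) * self.dt,
            Y + v * torch.sin(yaw) * self.dt,
            yaw + v / self.L * torch.tan(delta) * self.dt,
            v + a * self.dt,
        ], dim=-1)

    def forward(self, x: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        # Knowledge-based prediction plus learned residual (trained on data).
        return self.physics_step(x, u) + self.residual(torch.cat([x, u], dim=-1))

model = GrayBoxVehicleModel()
x0 = torch.tensor([[0.0, 0.0, 0.0, 10.0]])        # start at the origin at 10 m/s
u0 = torch.tensor([[0.5, 0.02]])                  # mild acceleration and steering
print(model(x0, u0))
```

Other families in the classification (e.g., physics-informed networks or neural ODEs) embed the physical knowledge in the loss or in the dynamics function instead of adding a residual term.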
Short bio:
Marvin Glomsda received a B.Sc. degree in mechanical engineering from the University of Duisburg-Essen, Duisburg, Germany, in 2021 and an M.Sc. degree in mechanical engineering from the same university in 2023. Since 2023 he has been a Research Associate at the Chair of Mechatronics, University of Duisburg-Essen. His research interests include state estimation and control with a focus on vehicle dynamics, as well as traffic flow simulation and inland vessel automation. Mr. Glomsda is a member of the VDI, the Association of German Engineers.
Speaker: M.Sc. Franz Albers
Title: Data-driven Evaluation of Localization Algorithms in Rural Environments
Abstract:
Accurate and reliable ego-vehicle localization is essential for automated driving, particularly in challenging rural environments characterized by sparse infrastructure and limited features. State-of-the-art localization methods typically utilize either Global Navigation Satellite Systems (GNSS) or onboard sensors such as lidar and cameras to localize the ego-vehicle within a pre-mapped environment. This study evaluates GNSS-based, camera-based, and lidar-based localization techniques using a novel dataset collected over an extended period in rural South Westphalia. The dataset captures diverse conditions, including seasonal variations, different weather scenarios, and varying lighting conditions along a defined measurement track. The performance of each localization approach is systematically analyzed, emphasizing infrastructure-related factors to identify road segments where existing algorithms demonstrate degrading performance or failures. These insights can help inform improvements and enhance the robustness of localization systems in rural automated driving.
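The kind of segment-wise analysis described above can be sketched as follows; the synthetic trajectories, the 100 m segment length, the RMSE metric, and the degradation threshold are all assumptions for illustration, not the evaluation protocol of the study.

```python
import numpy as np

# Sketch of segment-wise localization evaluation: compare an estimated
# trajectory against ground truth and report RMSE per road segment so that
# problematic stretches (e.g., with sparse infrastructure) stand out.
rng = np.random.default_rng(42)
n = 5000                                         # poses along the measurement track
s = np.cumsum(np.full(n, 0.2))                   # travelled distance [m], 0.2 m spacing
gt = np.stack([s, 5.0 * np.sin(s / 200.0)], axis=1)           # ground-truth XY
est = gt + rng.normal(scale=0.15, size=gt.shape)              # localization estimate
est[2000:2400] += rng.normal(scale=1.5, size=(400, 2))        # simulated degradation

segment_len = 100.0                              # evaluate per 100 m segment
seg_id = (s // segment_len).astype(int)
errors = np.linalg.norm(est - gt, axis=1)

for k in np.unique(seg_id):
    rmse = np.sqrt(np.mean(errors[seg_id == k] ** 2))
    flag = "  <-- degraded" if rmse > 0.5 else ""
    print(f"segment {k:3d} ({k * segment_len:6.0f}-{(k + 1) * segment_len:6.0f} m): "
          f"RMSE {rmse:.2f} m{flag}")
```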
Short bio:
2010 – 2015 | Bachelor studies in Electrical Engineering and Information Technology (Elektro- und Informationstechnik), TU Dortmund |
2015 – 2017 | Master studies in Electrical Engineering and Information Technology (Elektro- und Informationstechnik), TU Dortmund |
2017 – today | Scientific employee, Lehrstuhl für Regelungssystemtechnik, TU Dortmund; focus: automated driving in rural environments |
Speaker: M.Sc. Timo Osterburg
Title: A Principled Approach to Short-Term Trajectory Prediction
Abstract:
Tracking other road users is essential for autonomous vehicles, as motion history is used to predict future positions and enable collision-free trajectory planning. Tracking-by-detection systems depend on accurate motion prediction to associate detections over time. Classical constant velocity (CV) models remain the de facto short-term predictor [1, 2] because they are simple and robust for short horizons. However, they cannot account for the nonlinear dynamics of real traffic. Purely data-driven neural predictors capture these complexities, but they must learn the dynamics from scratch, limiting interpretability and potentially hindering the learning process. We present a hybrid predictor that augments predictions from a CV model with a selective state space model (SSM) built from modular Mamba [3] blocks. Given an actor's recent trajectory, the SSM outputs a residual offset to the CV estimate, learning only the nonlinear component while preserving the physics-based prior. We propose three variants of the hybrid prediction approach and achieve up to a 36% reduction in the final displacement error (3 cm absolute) and reduce large error outliers compared to both the CV baseline and a full neural predictor. However, when integrated with Poly-MOT [1], the improved ADE and FDE do not translate into higher multi-object tracking scores, indicating a distribution gap between curated prediction data and noisy tracker histories. The results highlight the potential of residual, principled motion models and motivate joint training of prediction and tracking within the MOT pipeline.
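The residual-over-CV structure described above can be sketched in a few lines. Note the hedges: a GRU is used here merely as a stand-in for the Mamba/SSM blocks of the talk, the layer sizes and single-step horizon are illustrative, and the model would of course need training on real track histories.

```python
import torch
import torch.nn as nn

class ResidualCVPredictor(nn.Module):
    """Sketch of the residual idea: a constant-velocity (CV) extrapolation is the
    physics-based prior, and a small sequence model predicts only an offset to it.
    A GRU stands in for the Mamba/SSM blocks; all sizes are illustrative."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # residual offset (dx, dy)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        """history: (B, T, 2) past XY positions at a fixed time step.
        Returns the predicted next XY position, shape (B, 2)."""
        # Constant-velocity prior: last position plus last displacement.
        cv_pred = history[:, -1] + (history[:, -1] - history[:, -2])
        _, h = self.encoder(history)               # summarize the track history
        return cv_pred + self.head(h[-1])          # physics prior + learned residual

model = ResidualCVPredictor()
track = torch.cumsum(torch.randn(4, 10, 2) * 0.1 + 0.5, dim=1)  # 4 dummy tracks
print(model(track).shape)                          # torch.Size([4, 2])
```

Because the network only has to learn the deviation from the CV estimate, the physics-based behaviour is preserved whenever the residual is small, which is the property the abstract exploits.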
Short bio:
- B.Sc. 2017 at TU Dortmund University; thesis: "Bewertung von SDR-basierten LTE Jamming-Verfahren sowie geeigneten Gegenmaßnahmen" (Evaluation of SDR-based LTE jamming techniques and suitable countermeasures)
- M.Sc. 2020 at TU Dortmund University; thesis: "Dual Control in Model Predictive Control"
- Since 2021: Ph.D. candidate at the Institute of Control Theory and Systems Engineering, TU Dortmund University, studying the application of transformer-based neural networks to multi-object tracking for autonomous vehicles.
Speaker: Dipl.-Math. Martin Washausen
Title: Vehicle Safety Testing of AI-based ADAS
Abstract:
Testing the safety functions of modern ADAS (Advanced Driver Assistance Systems) has changed the requirements for vehicle safety evaluation compared to the classic crash test. Non-destructive testing allows a larger number of tests and scenario variations, which in turn require intelligent analysis methods. In addition, modern AI-based ADAS are increasingly coming into focus as a means to increase safety for all road users. The challenge is to find assessment methods and reference measurement techniques that enable an efficient and objective evaluation independent of the training data.
Short bio:
Martin Washausen is a specialist for data management and data analysis methods for vehicle safety tests at Volkswagen. In addition to working on methods for passive safety tests, his focus in recent years has shifted increasingly to active safety tests with ADAS. He studied mathematics and has several years of experience as a consultant for simulation data management and optimization workflows in the automotive industry.
Speaker: Kun Zhao
Title: From Prediction to Planning, from Research to Deployment in the Vehicle
Abstract:
Machine learning (deep learning) has been widely adopted in ADAS and AD applications for perception tasks across sensors such as camera, lidar, and even radar. As further essential components of the ADAS software stack, prediction and planning have become the focus of much recent work. In this short presentation, building on the current state of the art in perception, I will discuss a few topics from an industry perspective:
- What do prediction and planning with machine learning look like?
- How is academic research deployed in industry?
- End-to-end driving has become the new trend and is claimed by many – are we really there? What is missing?
Short bio:
Kun Zhao graduated from the University of Duisburg-Essen with a Diplom in Applied Computer Science (Angewandte Informatik). He began his 14 years of experience at Aptiv (formerly Delphi) as a student and became a full-time employee after one year. In the first half of his time at Aptiv, he worked as a vision engineer focusing on vision-based perception systems for ADAS applications. He then became a technical lead and, together with a group of engineers, designed, developed, and delivered vision-based perception solutions into production vehicles. In the second half of his Aptiv career, he switched to machine learning (ML) based prediction; he currently manages a team fully focused on developing and delivering ML-based prediction and planning solutions and working towards end-to-end ADAS solutions.
Participant: Dr. Alessandro Becciu
Short bio:
Dr. Alessandro Becciu has more than 10 years of experience as an ADAS lead engineer. In 2022 he founded Nuraxys GmbH, a startup in the field of ADAS and in-cabin sensing.
Participant: Prof. Dr. Edwin N. Kamau
Short bio:
Prof. Dr. Edwin N. Kamau is a Professor at the University of Applied Sciences (UAS) Cologne and Executive Director of the Institute of Automotive Engineering. He specializes in engineering and information technology, with research focusing on autonomous driving, XiL simulation, sensor and data fusion, and AI algorithms for automotive applications.
Participant: Justyna Sedkowska
Short bio:
Justyna Sedkowska is a PhD candidate at the Hochschule Ruhr West, researching human-machine interaction in autonomous public transport vehicles. Within the competence network innocam.NRW, her work focuses on multi-stakeholder acceptance of connected and automated mobility.
Participant: Dr. Philipp Sieberg
Short bio:
Philipp M. Sieberg received a Dr.-Ing. degree in mechanical engineering from the University of Duisburg-Essen, Germany in 2021. He is a researcher and lecturer at the University of Duisburg-Essen, Germany, and General Manager of Schotte Automotive GmbH & Co. KG. His research interests are primarily in the field of applied artificial intelligence and intelligent transportation systems. Dr. Sieberg is an associate editor of several conferences and is a member of the Executive Committee of IEEE Germany and the IEEE ITSS German Chapter.
Confirmed contributors from:
ZF Automotive Germany GmbH, Volkswagen, Schotte Automotive, INGgreen, University of Duisburg-Essen, TU Dortmund, University of Washington, Ruhr West University of Applied Sciences
Format of the workshop:
The exploratory workshop brings together a selected group of experts from different countries with the aim of identifying open problems and relevant research questions. The workshop will be organized to allocate dedicated sessions for deep discussions.
Time and location:
- July 2-3, 2025
- Hochschule Ruhr West University of Applied Sciences (Bottrop, Germany)
Accommodation:
Funding acknowledgement:
This workshop is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – project number 558307204.