Generative Artificial Intelligence (GenAI) is transforming the landscape of digital content creation, from software and code generation to text, images, and synthetic data. These technologies enable automation at scale and open new opportunities across sectors such as software engineering, cybersecurity, healthcare, and education. However, as GenAI systems are increasingly deployed in mission-critical and sensitive domains, their inherent vulnerabilities raise pressing concerns regarding security, dependability, and ethical use.
DSGenAI-2025 is an international workshop dedicated to exploring the challenges and advancements in building dependable and secure GenAI systems. The workshop will bring together researchers, practitioners, and policymakers from diverse disciplines to examine the threats and risks posed by GenAI technologies and develop strategies to improve their robustness, reliability, and trustworthiness.
Through peer-reviewed paper presentations and interactive sessions, the workshop will provide a platform to share knowledge, foster collaboration, and shape the future of secure GenAI research and development.
We invite original research papers, position papers, tool demonstrations, and case studies on topics including, but not limited to:
Secure training and fine-tuning of generative AI models to prevent adversarial manipulation and backdoor attacks
Adversarial attacks and defenses against GenAI models and outputs, including evasion, poisoning, and prompt injection techniques
Dependability and fault tolerance in GenAI pipelines, focusing on robust model performance in dynamic or degraded environments
Explainability and interpretability of AI-generated content to support human oversight and trust
Secure prompt engineering, mitigation of prompt injection, prompt leakage, and malicious output risks
Formal methods for verification and validation of AI-generated artifacts, especially code and scripts
Privacy-preserving GenAI techniques, including federated learning, data minimization, and synthetic data generation
Ethical, legal, and regulatory compliance in GenAI system development and deployment
Benchmarking and evaluation metrics for assessing GenAI system security, safety, and dependability
Prospective authors are invited to submit original, high-quality papers.
Papers must be submitted through the following EasyChair submission link: DSGenAI-2025
All submissions must be in English.
All accepted papers will be included in the companion volume published by IEEE. All submissions must follow the IEEE conference template in MS Word format and consist of a full paper of 4-6 pages. Download the IEEE conference templates: IEEE Templates
Please note that IEEE has a strict no-show policy. If your paper is accepted, one of the authors or their representative MUST PRESENT the paper at the conference, either in person or online. Papers with no-show authors who lack a valid reason will not be submitted to IEEE Xplore, and no refund of the paid fees may be claimed by the no-show author.
August 30, 2025
September 20, 2025
September 30, 2025
October 21, 2025
All papers will be peer reviewed.
Accepted papers will be presented at the workshop.
Director, System Security Lab (SSL)
Associate Professor, Bishop's University, Canada
Dr. Malik’s research focuses on applying machine learning and deep learning techniques to software engineering problems, particularly enhancing the security and resilience of distributed and mobile systems. His current work explores how deep learning can optimize software analysis for vulnerability detection, aiming to develop predictive security controls for real-time protection.
Chairholder, Institutional Research Chair on Cyberdefense and Personal Data Protection (CybPro)
Associate Professor, Université du Québec à Chicoutimi (UQAC), Canada
Dr. Jaafar is an Associate Professor in the Department of Computer Sciences and Mathematics at UQAC. His research focuses on cybersecurity, software security, and applying machine learning techniques to secure software systems and networks. He has contributed significantly to AI-based solutions for vulnerability detection and threat mitigation, especially in mobile, IoT, and critical infrastructure environments.
Université du Québec à Chicoutimi (UQAC), Canada
University of Roehampton, UK
This is a half-day workshop designed to host a focused, high-quality program with 6–8 paper presentations and interactive discussions.
Stay tuned for the detailed program. We will be updating this section with the latest information soon.
Stay tuned for the detailed registration information. We will be updating this section with the latest information soon.
Email: ymalik@ubishops.ca
Email: fjaafar@uqac.ca