Monthly 228, August 01, 2022
- SIGLOG MATTERS
- JOB ANNOUNCEMENTS
- AAAI-23: Aug 08, 2022 (Abstract), Aug 15, 2022 (Paper)
- AiML 2022: Aug 10, 2022 (Registration deadline)
- The ALP Alain Colmerauer Prolog Heritage Prize: Sep 02, 2022 (Deadline for nominations)
- CPP 2023: Sep 14, 2022 (Abstract), Sep 21, 2022 (Paper)
- BEWARE: Sep 23, 2022 (Submission deadline)
- OVERLAY 2022: Sep 30, 2022 (Paper)
- FSEN 23: Oct 07, 2022 (Abstract), Oct 14, 2022 (Paper)
- PODS 2023: Nov 28, 2022 (Second cycle abstract), Dec 05, 2022 (Full paper)
- Registration for FLoC 2022 is still possible: https://www.floc2022.org/registration
- Reminder of the program: https://easychair.org/smart-program/FLoC2022/index.html
- We are pleased to announce that AAAI-23 will have a new special track on Safe and Robust AI, covering research on creating safe and robust AI systems, as well as using AI to create other safe and robust systems. We invite you to submit your contributions to this special track at AAAI-23.
- AIMS AND SCOPE
This special track focuses on the theory and practice of safety and robustness in AI-based systems. AI systems are increasingly being deployed throughout society in domains such as data science, robotics and autonomous systems, medicine, economics, and safety-critical systems. Despite their growing use, AI systems have fundamental limitations and practical shortcomings that can result in catastrophic failures. In particular, many AI algorithms deployed today fail to guarantee safety and success, and lack robustness in the face of uncertainty.
To be reliable, AI systems need to be robust to disturbance, failure, and novel circumstances. Furthermore, this technology needs to offer assurance that it will reasonably avoid unsafe and irrecoverable situations. To push the boundaries of AI systems' reliability, this special track at AAAI-23 will focus on cutting-edge research on both the theory and practice of developing safe and robust AI systems. Specifically, the goal of this special track is to promote research that studies 1) the safety and robustness of AI systems, 2) AI algorithms that are able to analyze and guarantee their own safety and robustness, and 3) AI algorithms that can analyze the safety and robustness of other systems. For acceptance into this track, we expect papers to make fundamental contributions to safe and robust AI and to demonstrate applicability to the complexity and uncertainty inherent in real-world applications.
In short, the special track covers topics related to the safety and robustness of AI-based systems, and to using AI-based technologies to enhance the safety and robustness both of themselves and of other critical systems, including but not limited to:
- Safe and Robust AI Systems
- Safe Learning and Control
- Quantification of Uncertainty and Risk
- Safe Decision Making Under Uncertainty and Limited Information
- Robustness Against Perturbations and Distribution Shifts
- Detection and Explanation of Anomalies and Model Misspecification
- Formal Methods for AI Systems
- Online Verification of AI Systems
- Safe Human-Machine Interaction
- SUBMISSION INSTRUCTIONS
Submissions to this special track will follow the regular AAAI technical paper submission procedure, but the authors need to select the Safe and Robust AI special track (SRAI).
- IMPORTANT DATES (AoE)
Abstract submission: Aug 08, 2022
Paper submission: Aug 15, 2022
- The BEWARE workshop invites submissions from computer scientists, philosophers, economists, and sociologists, with contributions ranging from the formulation of epistemic and normative principles for AI and their conceptual representation in formal models to their development in formal design procedures and their translation into computational implementations.
Topics of interest include, but are not limited to:
- Conceptual and formal definitions of bias, risk and opacity in AI
- Epistemological and normative principles for fair and trustworthy AI
- Ethical AI and the challenges brought by AI to Ethics
- Explainable AI
- Uncertainty in AI
- Ontological modelling of trustworthy as opposed to biased AI systems
- Defining trust and its determinants for implementation in AI systems
- Methods for evaluating and comparing the performances of AI systems
- Approaches to verification of ethical behaviour
- Logic Programming Applications in Machine Ethics
- Integrating Logic Programming with methods for Machine Ethics and Explainable AI
Manuscripts must be formatted using the 1-column CEUR-ART Style. For more information, please see the CEUR website http://ceur-ws.org/HOWTOSUBMIT.html. Papers must be submitted through EasyChair https://easychair.org/conferences/?conf=beware22.
- IMPORTANT DATES
Submission deadline: Sep 23, 2022
Notification: Oct 21, 2022
Camera ready: Nov 18, 2022
- ORGANIZATION AND PROGRAMME COMMITTEE
- The group of André Platzer, the Alexander von Humboldt Professor for Logic of Autonomous Dynamical Systems in the Department of Informatics at KIT, is recruiting a PhD student or postdoc (TVL E13, full-time). Our research develops the logical foundations of cyber-physical systems and practical theorem proving tools for analyzing and correctly building such systems, including the theorem prover KeYmaera X, verified runtime monitoring with ModelPlex, verified compilation, and verified safe machine learning techniques. Our techniques are used to analyze the safety of autonomous cars, of airplanes and collision avoidance protocols in aerospace applications, of robotics, and of train control.
Key requirements for successful applications:
- Strong demonstrable commitment to research.
- Strong background in logic, formal methods, theorem proving, or programming language theory.
- Strong background in mathematics, physics, or engineering.
- Excellent M.Sc. degree in computer science, mathematics or related subjects.
- Proficiency in English, excellent speaking and writing skills.
- Experience in software development projects is a plus.
- FACULTY / DIVISION:
Alexander von Humboldt Professor on Logic of Autonomous Dynamical Systems
Institute of Information Security and Dependability (KASTEL)
- STARTING DATE:
- CONTACT PERSON:
André Platzer https://lfcps.org/pub/job-ad.html